chore: add markdownlint to lint commands (#9353)

* add markdownlint config

* update make lint commands

* update markdownlint config

* run make lint-fix

* fix empty link

* reuse docker container

* run lint-fix

* do not echo commands

Co-authored-by: ryanchrypto <12519942+ryanchrypto@users.noreply.github.com>
Ryan Christoffersen 2021-05-27 08:31:04 -07:00 committed by GitHub
parent b56e1a164b
commit cb66c99eab
147 changed files with 806 additions and 771 deletions

.markdownlint.json Normal file

@ -0,0 +1,17 @@
{
"default": true,
"MD001": false,
"MD004": false,
"MD007": { "indent": 4 },
"MD013": false,
"MD024": { "siblings_only": true },
"MD025": false,
"MD026": { "punctuation": ".,;:" },
"MD029": false,
"MD033": false,
"MD034": false,
"MD036": false,
"MD040": false,
"MD041": false,
"no-hard-tabs": false
}

.markdownlintignore Normal file

@ -0,0 +1,3 @@
CHANGELOG.md
docs/core/proto-docs.md
docs/node_modules


@ -1,21 +1,21 @@
# Contributing
- [Contributing](#contributing)
- [Architecture Decision Records (ADR)](#architecture-decision-records-adr)
- [Pull Requests](#pull-requests)
- [Process for reviewing PRs](#process-for-reviewing-prs)
- [Updating Documentation](#updating-documentation)
- [Forking](#forking)
- [Dependencies](#dependencies)
- [Protobuf](#protobuf)
- [Testing](#testing)
- [Branching Model and Release](#branching-model-and-release)
- [PR Targeting](#pr-targeting)
- [Development Procedure](#development-procedure)
- [Pull Merge Procedure](#pull-merge-procedure)
- [Release Procedure](#release-procedure)
- [Point Release Procedure](#point-release-procedure)
- [Code Owner Membership](#code-owner-membership)
Thank you for considering making contributions to Cosmos-SDK and related
repositories!
@ -80,12 +80,12 @@ All PRs require two Reviews before merge (except docs changes, or variable name-
- `LGTM` without an explicit approval means that the changes look good, but you haven't pulled down the code, run tests locally and thoroughly reviewed it.
- `Approval` through the GH UI means that you understand the code, documentation/spec is updated in the right places, you have pulled down and tested the code locally. In addition:
- You must also think through anything which ought to be included but is not
- You must think through whether any added code could be partially combined (DRYed) with existing code
- You must think through any potential security issues or incentive-compatibility flaws introduced by the changes
- Naming must be consistent with conventions and the rest of the codebase
- Code must live in a reasonable location, considering dependency structures (e.g. not importing testing modules in production code, or including example code modules in production code).
- if you approve of the PR, you are responsible for fixing any of the issues mentioned here and more
- If you sat down with the PR submitter and did a pairing review please note that in the `Approval`, or your PR comments.
- If you are only making "surface level" reviews, submit any notes as `Comments` without adding a review.
@ -229,10 +229,10 @@ should be targeted against the release candidate branch.
- Create the release candidate branch `rc/v*` (going forward known as **RC**)
and ensure it's protected against pushing from anyone except the release
manager/coordinator
- **no PRs targeting this branch should be merged unless exceptional circumstances arise**
- On the `RC` branch, prepare a new version section in the `CHANGELOG.md`
- All links must be link-ified: `$ python ./scripts/linkify_changelog.py CHANGELOG.md`
- Copy the entries into a `RELEASE_CHANGELOG.md`; this is needed so the bot knows which entries to add to the release page on GitHub.
- Kick off a large round of simulation testing (e.g. 400 seeds for 2k blocks)
- If errors are found during the simulation testing, commit the fixes to `master`
and create a new `RC` branch (making sure to increment the `rcN`)
@ -314,22 +314,21 @@ have had acted maliciously or grossly negligent, code-owner privileges may be
stripped with no prior warning or consent from the member in question.
Other potential removal criteria:
* Missing 3 scheduled meetings results in ICF evaluating whether the member should be
removed / replaced
* Violation of Code of Conduct
Earning this privilege should be considered to be no small feat and is by no
means guaranteed by any quantifiable metric. It is a symbol of great trust from
the community of this project.
## Concept & Release Approval Process
The process for how Cosmos SDK maintainers take features and ADRs from concept to release
is broken up into three distinct stages: **Strategy Discovery**, **Concept Approval**, and
**Implementation & Release Approval**
### Strategy Discovery
* Develop long term priorities, strategy and roadmap for the SDK
@ -356,6 +355,7 @@ the current state of its discussion.
If an ADR is taking longer than 4 weeks to reach a final conclusion, the **Concept Approval Committee**
should convene to rectify the situation by either:
- unanimously setting a new time bound period for this ADR
- making changes to the Concept Approval Process (as outlined here)
- making changes to the members of the Concept Approval Committee
@ -378,8 +378,8 @@ Members must:
* Be active contributors to the SDK, and furthermore should be continuously making substantial contributions
to the project's codebase, review process, documentation and ADRs
* Have stake in the Cosmos SDK project, represented by:
* Being a client / user of the Cosmos SDK
* "[giving back](https://www.debian.org/social_contract)" to the software
* Delegate representation in case of vacation or absence
Code owners need to maintain participation in the process, ideally as members of **Concept Approval Committee**


@ -327,11 +327,18 @@ benchmark:
### Linting ###
###############################################################################
containerMarkdownLintImage=tmknom/markdownlint
containerMarkdownLint=cosmos-sdk-markdownlint
containerMarkdownLintFix=cosmos-sdk-markdownlint-fix
lint:
golangci-lint run --out-format=tab
@if docker ps -a --format '{{.Names}}' | grep -Eq "^${containerMarkdownLint}$$"; then docker start -a $(containerMarkdownLint); else docker run --name $(containerMarkdownLint) -i -v "$(CURDIR):/work" $(containerMarkdownLintImage); fi
lint-fix:
golangci-lint run --fix --out-format=tab --issues-exit-code=0
@if docker ps -a --format '{{.Names}}' | grep -Eq "^${containerMarkdownLintFix}$$"; then docker start -a $(containerMarkdownLintFix); else docker run --name $(containerMarkdownLintFix) -i -v "$(CURDIR):/work" $(containerMarkdownLintImage) . --fix; fi
.PHONY: lint lint-fix
format:


@ -40,9 +40,6 @@ parent:
<img alt="Lint Status" src="https://github.com/cosmos/cosmos-sdk/workflows/Lint/badge.svg" />
</div>
The Cosmos-SDK is a framework for building blockchain applications in Golang.
It is being used to build [`Gaia`](https://github.com/cosmos/gaia), the first implementation of the Cosmos Hub.
@ -65,7 +62,7 @@ The Cosmos Hub application, `gaia`, has moved to its [own repository](https://gi
## Interblockchain Communication (IBC)
The IBC module for the SDK has moved to its [own repository](https://github.com/cosmos/ibc-go). Go there to build and integrate with the IBC module.
## Starport


@ -51,7 +51,7 @@ the code.
- HD key derivation, local and Ledger, and all key-management functionality
- Side-channel attack vectors with our implementations
- e.g. key exfiltration based on time or memory-access patterns when decrypting privkey
## Disclosure Process
@ -73,6 +73,7 @@ This process can take some time. Every effort is made to handle the bug in as ti
### Disclosure Communications
Communications to partners usually include the following details:
1. Affected version or versions
1. New release version
1. Impact on user funds
@ -81,13 +82,14 @@ Communications to partners usually include the following details:
1. Potential required actions if an adverse condition arises during the security release process
An example notice looks like:
```
Dear Cosmos SDK partners,
A critical security vulnerability has been identified in Cosmos SDK vX.X.X.
User funds are NOT at risk; however, the vulnerability can result in a chain halt.
This notice is to inform you that on [[**March 1 at 1pm EST/6pm UTC**]], we will be releasing Cosmos SDK vX.X.Y to fix the security issue.
We ask all validators to upgrade their nodes ASAP.
If the chain halts, validators with sufficient voting power must upgrade and come online for the chain to resume.


@ -52,13 +52,13 @@ To smoothen the update to the latest stable release, the SDK includes a set of C
### What qualifies as a Stable Release Update (SRU)
* **High-impact bugs**
* Bugs that may directly cause a security vulnerability.
* *Severe regressions* from a Cosmos-SDK's previous release. This includes all sorts of issues
that may render the core packages or the `x/` modules unusable.
* Bugs that may cause **loss of user's data**.
* Other safe cases:
* Bugs which don't fit in the aforementioned categories for which an obvious safe patch is known.
* Relatively small yet strictly non-breaking changes that introduce forward-compatible client
features to smooth the migration to successive releases.
### What does not qualify as SRU
@ -71,17 +71,17 @@ To smoothen the update to the latest stable release, the SDK includes a set of C
Pull requests that fix bugs that fall in the following categories do not require a **Stable Release Exception** to be granted to be included in a stable point-release:
* **Severe regressions**.
* Bugs that may cause **client applications** to be **largely unusable**.
* Bugs that may cause **state corruption or data loss**.
* Bugs that may directly or indirectly cause a **security vulnerability**.
## What pull requests will NOT be automatically included in stable point-releases
As a rule of thumb, the following changes will **NOT** be automatically accepted into stable point-releases:
* **State machine changes**.
* **Client application's code-breaking changes**, i.e. changes that prevent client applications from *building without modifications* to the client application's source code.
In some circumstances, PRs that don't meet the aforementioned criteria might be raised and asked to be granted a *Stable Release Exception*.
@ -89,9 +89,11 @@ As rule of thumb, the following changes will **NOT** be automatically accepted i
1. Check that the bug is either fixed or not reproducible in `master`. It is, in general, not appropriate to release bug fixes for stable releases without first testing them in `master`. Please apply the label [0.42 «Stargate»](https://github.com/cosmos/cosmos-sdk/labels/0.42%20LTS%20%28Stargate%29) to the issue.
2. Add a comment to the issue and ensure it contains the following information (see the bug template below):
* **[Impact]** An explanation of the bug on users and justification for backporting the fix to the stable release.
* A **[Test Case]** section containing detailed instructions on how to reproduce the bug.
* A **[Regression Potential]** section with a clear assessment on how regressions are most likely to manifest as a result of the pull request that aims to fix the bug in the target stable release.
3. **Stable Release Managers** will review and discuss the PR. Once *consensus* surrounding the rationale has been reached and the technical review has successfully concluded, the pull request will be merged in the respective point-release target branch (e.g. `release/v0.42.x`) and the PR included in the point-release's respective milestone (e.g. `0.42.5`).
### Stable Release Exception - Bug template
@ -119,9 +121,10 @@ according to the [stable release policy](#stable-release-policy) and [release pr
Decisions are made by consensus.
Their responsibilities include:
* Driving the Stable Release Exception process.
* Approving/rejecting proposed changes to a stable release series.
* Executing the release process of stable point-releases in compliance with the [Point Release Procedure](CONTRIBUTING.md).
The Stable Release Managers are appointed by the Interchain Foundation. The current Stable Release Managers are:


@ -3,7 +3,7 @@
Installation:
```
git config core.hooksPath contrib/githooks
```
## pre-commit
@ -14,8 +14,8 @@ that all the aforementioned commands are installed and available
in the user's search `$PATH` environment variable:
```
go get golang.org/x/tools/cmd/goimports
go get github.com/golangci/misspell/cmd/misspell@master
```
It also runs `go mod tidy` and `golangci-lint` if available.


@ -5,6 +5,7 @@ This directory contains the files required to run the rosetta CI. It builds `sim
## docker-compose.yaml
Builds:
- cosmos-sdk simapp node, with prefixed data directory, keys etc. This is required to test historical balances.
- faucet is required so we can test the construction API; it was not possible to provide a deterministic address to request funds for
- rosetta is the rosetta node used by rosetta-cli to interact with the cosmos-sdk app


@ -28,7 +28,7 @@ if there was an error.
## Data Folder Layout
`$DAEMON_HOME/cosmovisor` is expected to belong completely to `cosmovisor` and
subprocesses that are controlled by it. The folder content is organised as follows:
```
@ -66,6 +66,7 @@ directory layout:
## Usage
The system administrator is responsible for:
* installing the `cosmovisor` binary and configuring the host's init system (e.g. `systemd`, `launchd`, etc.) along with the environment variables appropriately;
* installing the `genesis` folder manually;
* installing the `upgrades/<name>` folders manually.
@ -95,6 +96,7 @@ valid format to specify a download in such a message:
1. Store an os/architecture -> binary URI map in the upgrade plan info field
as JSON under the `"binaries"` key, eg:
```json
{
"binaries": {
@ -102,12 +104,13 @@ as JSON under the `"binaries"` key, eg:
}
}
```
2. Store a link to a file that contains all information in the above format (eg. if you want
to specify lots of binaries, changelog info, etc without filling up the blockchain).
e.g. `https://example.com/testnet-1001-info.json?checksum=sha256:deaaa99fda9407c4dbe1d04bd49bab0cc3c1dd76fa392cd55a9425be074af01e`
The file at that link will be retrieved by [go-getter](https://github.com/hashicorp/go-getter)
and the `"binaries"` field will be parsed as above.
If there is no local binary, `DAEMON_ALLOW_DOWNLOAD_BINARIES=on`, and we can access a canonical url for the new binary,
@ -120,7 +123,7 @@ or hijacks the DNS. go-getter will always ensure the downloaded file matches the
is provided. go-getter will also handle unpacking archives into directories (so these download links should be
a zip of all data in the `bin` directory).
To properly create a checksum on linux, you can use the `sha256sum` utility. e.g.
`sha256sum ./testdata/repo/zip_directory/autod.zip`
which should return `29139e1381b8177aec909fab9a75d11381cab5adf7d3af0c05ff1c9c117743a7`.
You can also use `sha512sum` if you like longer hashes, or `md5sum` if you like to use broken hashes.
@ -174,13 +177,13 @@ Submit a software upgrade proposal:
```
./build/simd tx gov submit-proposal software-upgrade test1 --title "upgrade-demo" --description "upgrade" --from validator --upgrade-height 100 --deposit 10000000stake --chain-id test --keyring-backend test -y
```
Query the proposal to ensure it was correctly broadcast and added to a block:
```
./build/simd query gov proposal 1
```
Submit a `Yes` vote for the upgrade proposal:
```


@ -8,6 +8,7 @@ Optimized C library for EC operations on curve secp256k1.
This library is a work in progress and is being used to research best practices. Use at your own risk.
Features:
* secp256k1 ECDSA signing/verification and key generation.
* Adding/multiplying private/public keys.
* Serialization/parsing of private keys, public keys, signatures.
@ -19,43 +20,43 @@ Implementation details
----------------------
* General
* No runtime heap allocation.
* Extensive testing infrastructure.
* Structured to facilitate review and analysis.
* Intended to be portable to any system with a C89 compiler and uint64_t support.
* Expose only higher level interfaces to minimize the API surface and improve application security. ("Be difficult to use insecurely.")
* Field operations
* Optimized implementation of arithmetic modulo the curve's field size (2^256 - 0x1000003D1).
* Using 5 52-bit limbs (including hand-optimized assembly for x86_64, by Diederik Huys).
* Using 10 26-bit limbs.
* Field inverses and square roots using a sliding window over blocks of 1s (by Peter Dettman).
* Scalar operations
* Optimized implementation without data-dependent branches of arithmetic modulo the curve's order.
* Using 4 64-bit limbs (relying on __int128 support in the compiler).
* Using 8 32-bit limbs.
* Group operations
* Point addition formula specifically simplified for the curve equation (y^2 = x^3 + 7).
* Use addition between points in Jacobian and affine coordinates where possible.
* Use a unified addition/doubling formula where necessary to avoid data-dependent branches.
* Point/x comparison without a field inversion by comparison in the Jacobian coordinate space.
* Point multiplication for verification (a*P + b*G).
* Use wNAF notation for point multiplicands.
* Use a much larger window for multiples of G, using precomputed multiples.
* Use Shamir's trick to do the multiplication with the public key and the generator simultaneously.
* Optionally (off by default) use secp256k1's efficiently-computable endomorphism to split the P multiplicand into 2 half-sized ones.
* Point multiplication for signing
* Use a precomputed table of multiples of powers of 16 multiplied with the generator, so general multiplication becomes a series of additions.
* Access the table with branch-free conditional moves so memory access is uniform.
* No data-dependent branches
* The precomputed tables add and eventually subtract points for which no known scalar (private key) is known, preventing even an attacker with control over the private key used to control the data internally.
Build steps
-----------
libsecp256k1 is built using autotools:
./autogen.sh
./configure
make
./tests
sudo make install # optional


@ -108,14 +108,17 @@ much as possible with its [counterpart in the Tendermint Core repo](https://gith
### Update and Build the RPC docs
1. Execute the following command at the root directory to install the swagger-ui generate tool.
```bash
make tools
```
2. Edit API docs
1. Edit the API docs directly: `client/lcd/swagger-ui/swagger.yaml`.
2. Edit API docs within the [Swagger Editor](https://editor.swagger.io/). Please refer to this [document](https://swagger.io/docs/specification/2-0/basic-structure/) for the correct structure in `.yaml`.
3. Download `swagger.yaml` and replace the old `swagger.yaml` under the folder `client/lcd/swagger-ui`.
4. Compile gaiacli
```bash
make install
```


@ -6,7 +6,6 @@
4. Add an entry to a list in the [README](./README.md) file.
5. Create a Pull Request to propose a new ADR.
## ADR life cycle
ADR creation is an **iterative** process. Instead of trying to solve all decisions in a single ADR pull request, we MUST first understand the problem and collect feedback through a GitHub Issue.
@ -23,7 +22,6 @@ ADR creation is an **iterative** process. Instead of trying to solve all decisio
6. Merged ADRs SHOULD NOT be pruned.
### ADR status
Status has two components:
@ -44,7 +42,6 @@ DRAFT -> PROPOSED -> LAST CALL yyyy-mm-dd -> ACCEPTED | REJECTED -> SUPERSEEDED
ABANDONED
```
+ `DRAFT`: [optional] an ADR which is a work in progress, not yet ready for general review. This is to present early work and get early feedback in a Draft Pull Request form.
+ `PROPOSED`: an ADR covering a full solution architecture and still in review - project stakeholders haven't reached an agreement yet.
+ `LAST CALL <date for the last call>`: [optional] a clear notice that we are close to accepting updates. Changing a status to `LAST CALL` means that social consensus (of Cosmos SDK maintainers) has been reached and we still want to give the community time to react or analyze.
@ -53,7 +50,6 @@ DRAFT -> PROPOSED -> LAST CALL yyyy-mm-dd -> ACCEPTED | REJECTED -> SUPERSEEDED
+ `SUPERSEEDED by ADR-xxx`: ADR which has been superseded by a new ADR.
+ `ABANDONED`: the ADR is no longer pursued by the original authors.
## Language used in ADR
+ The context/background should be written in the present tense.


@ -32,7 +32,6 @@ it stands today.
If recorded decisions turned out to be lacking, convene a discussion, record the new decisions here, and then modify the code to match.
## Creating new ADR
Read about the [PROCESS](./PROCESS.md).


@ -33,7 +33,7 @@ past transactions and assigned to particular modes), and keep them in a memory-o
chain is running.
The `CapabilityKeeper` will include a persistent `KVStore`, a `MemoryStore`, and an in-memory map.
The persistent `KVStore` tracks which capability is owned by which modules.
The `MemoryStore` stores a forward mapping that maps from (module name, capability) tuples to capability names and
a reverse mapping that maps from (module name, capability name) to the capability index.
Since we cannot marshal the capability into a `KVStore` and unmarshal without changing the memory location of the capability,


@ -7,12 +7,10 @@
- 2020-01-14: Updates from review feedback
- 2020-01-30: Updates from implementation
### Glossary
* denom / denomination key -- unique token identifier.
## Context
With permissionless IBC, anyone will be able to send arbitrary denominations to any other account. Currently, all non-zero balances are stored along with the account in an `sdk.Coins` struct, which creates a potential denial-of-service concern, as too many denominations will become expensive to load & store each time the account is modified. See issues [5467](https://github.com/cosmos/cosmos-sdk/issues/5467) and [4982](https://github.com/cosmos/cosmos-sdk/issues/4982) for additional context.


@ -9,23 +9,20 @@
## Context
Currently, an SDK application's CLI directory stores key material and metadata in a plain text database in the user's home directory. Key material is encrypted by a passphrase, protected by the bcrypt hashing algorithm. Metadata (e.g. addresses, public keys, key storage details) is available in plain text.
This is not desirable for a number of reasons. Perhaps the biggest reason is insufficient security protection of key material and metadata. Leaking the plain text allows an attacker to surveil what keys a given computer controls via a number of techniques, like compromised dependencies without any privileged execution. This could be followed by a more targeted attack on a particular user/computer.
All modern desktop operating systems (Ubuntu, Debian, macOS, Windows) provide a built-in secret store that is designed to allow applications to store information that is isolated from all other applications and requires passphrase entry to access the data.
We are seeking a solution that provides a common abstraction layer over the many different backends and a reasonable fallback for minimal platforms that don't provide a native secret store.
## Decision
We recommend replacing the current Keybase backend based on LevelDB with [Keyring](https://github.com/99designs/keyring) by 99designs. This application is designed to provide a common abstraction and uniform interface between many secret stores and is used by 99designs' AWS Vault application.
This appears to fulfill the requirement of protecting both key material and metadata from rogue software on a user's machine.
## Status
Accepted
@ -55,4 +52,3 @@ Running tests locally on a Mac require numerous repetitive password entries.
- #5097 Add keys migrate command [__MERGED__]
- #5180 Drop on-disk keybase in favor of keyring [_PENDING_REVIEW_]
- cosmos/gaia#164 Drop on-disk keybase in favor of keyring (gaia's changes) [_PENDING_REVIEW_]


@ -11,7 +11,7 @@ creation of a decentralized Computer Emergency Response Team (dCERT), whose
members would be elected by a governing community and would fulfill the role of
coordinating the community under emergency situations. This thinking
can be further abstracted into the conception of "blockchain specialization
groups".
groups".
The creation of these groups is the beginning of specialization capabilities
within a wider blockchain community which could be used to enable a certain
@ -19,30 +19,29 @@ level of delegated responsibilities. Examples of specialization which could be
beneficial to a blockchain community include: code auditing, emergency response,
code development etc. This type of community organization paves the way for
individual stakeholders to delegate votes by issue type, if in the future
governance proposals include a field for issue type.
## Decision
A specialization group can be broadly broken down into the following functions
(herein containing examples):
- Membership Admittance
- Membership Acceptance
- Membership Revocation
    - (probably) Without Penalty
        - member steps down (self-Revocation)
        - replaced by new member from governance
    - (probably) With Penalty
        - due to breach of soft-agreement (determined through governance)
        - due to breach of hard-agreement (determined by code)
- Execution of Duties
    - Special transactions which only execute for members of a specialization
      group (for example, dCERT members voting to turn off transaction routes in
      an emergency scenario)
- Compensation
    - Group compensation (further distribution decided by the specialization group)
    - Individual compensation for all constituents of a group from the
      greater community
Membership admittance to a specialization group could take place over a wide
@ -56,31 +55,31 @@ some of these possiblities in a common interface dubbed the `Electionator`. For
its initial implementation as a part of this ADR we recommend that the general
election abstraction (`Electionator`) is provided as well as a basic
implementation of that abstraction which allows for a continuous election of
members of a specialization group.
``` golang
// The Electionator abstraction covers the concept space for
// a wide variety of election kinds.
type Electionator interface {

    // is the election object accepting votes.
    Active() bool

    // functionality to execute for when a vote is cast in this election, here
    // the vote field is anticipated to be marshalled into a vote type used
    // by an election.
    //
    // NOTE There are no explicit ids here. Just votes which pertain specifically
    // to one electionator. Anyone can create and send a vote to the electionator item
    // which will presumably attempt to marshal those bytes into a particular struct
    // and apply the vote information in some arbitrary way. There can be multiple
    // Electionators within the Cosmos-Hub for multiple specialization groups, votes
    // would need to be routed to the Electionator upstream of here.
    Vote(addr sdk.AccAddress, vote []byte)

    // here lies all functionality to authenticate and execute changes for
    // when a member accepts being elected
    AcceptElection(sdk.AccAddress)

    // Register a revoker object
    RegisterRevoker(Revoker)
@ -89,25 +88,25 @@ type Electionator interface {
    SealRevokers()

    // register hooks to call when election actions occur
    RegisterHooks(ElectionatorHooks)

    // query for the current winner(s) of this election based on arbitrary
    // election ruleset
    QueryElected() []sdk.AccAddress

    // query metadata for an address in the election, this
    // could include for example position that an address
    // is being elected for within a group
    //
    // this metadata may be directly related to
    // voting information and/or privileges enabled
    // to members within a group.
    QueryMetadata(sdk.AccAddress) []byte
}

// ElectionatorHooks, once registered with an Electionator,
// trigger execution of relevant interface functions when
// Electionator events occur.
type ElectionatorHooks interface {
    AfterVoteCast(addr sdk.AccAddress, vote []byte)
    AfterMemberAccepted(addr sdk.AccAddress)
@ -117,30 +116,30 @@ type ElectionatorHooks interface {
// Revoker defines the function required for a membership revocation rule-set
// used by a specialization group. This could be used to create self revoking,
// and evidence based revoking, etc. Revoker types may be created and
// reused for different election types.
//
// When revoking the "cause" bytes may be arbitrarily marshalled into evidence,
// memos, etc.
type Revoker interface {
    RevokeName() string // identifier for this revoker type
    RevokeMember(addr sdk.AccAddress, cause []byte) error
}
```
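To make the `QueryElected` semantics concrete, the following is a toy, self-contained tally sketch (illustrative only, not the SDK implementation) in which the addresses holding the most preferred-representation win the available seats:
``` golang
package main

import (
	"fmt"
	"sort"
)

// queryElected is an illustrative tally: the addresses with the most
// preferred-representation win the seats; ties are broken
// lexicographically here purely for determinism.
func queryElected(votes map[string]int, seats int) []string {
	type entry struct {
		addr  string
		power int
	}
	tally := make([]entry, 0, len(votes))
	for addr, power := range votes {
		tally = append(tally, entry{addr, power})
	}
	sort.Slice(tally, func(i, j int) bool {
		if tally[i].power != tally[j].power {
			return tally[i].power > tally[j].power
		}
		return tally[i].addr < tally[j].addr
	})
	if seats > len(tally) {
		seats = len(tally)
	}
	elected := make([]string, 0, seats)
	for _, e := range tally[:seats] {
		elected = append(elected, e.addr)
	}
	return elected
}

func main() {
	votes := map[string]int{"john": 50, "sally": 25, "carol": 25}
	fmt.Println(queryElected(votes, 2)) // [john carol]
}
```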
A certain level of commonality likely exists between the existing code within
`x/governance` and required functionality of elections. This common
functionality should be abstracted during implementation. Similarly, for each
vote implementation, client CLI/REST functionality should be abstracted
to be reused for multiple elections.
The specialization group abstraction firstly extends the `Electionator`
but also further defines traits of the group.
``` golang
type SpecializationGroup interface {
    Electionator
    GetName() string
    GetDescription() string

    // general soft contract the group is expected
    // to fulfill with the greater community
@ -151,7 +150,7 @@ type SpecializationGroup interface {
    // logic to be executed at endblock, this may for instance
    // include payment of a stipend to the group members
    // for participation in the security group.
    EndBlocker(ctx sdk.Context)
}
```
@ -164,16 +163,15 @@ type SpecializationGroup interface {
### Positive
- increases specialization capabilities of a blockchain
- improve abstractions in `x/gov/` such that they can be used with specialization groups
### Negative
- could be used to increase centralization within a community
### Neutral
## References
- [dCERT ADR](./adr-008-dCERT-group.md)


@ -15,13 +15,13 @@ bug-hunters, and developers. During a time of crisis, the dCERT group would
aggregate and relay input from a variety of stakeholders to the developers who
are actively devising a patch to the software, this way sensitive information
does not need to be publicly disclosed while some input from the community can
still be gained.
Additionally, a special privilege is proposed for the dCERT group: the capacity
to "circuit-break" (aka. temporarily disable) a particular message path. Note
that this privilege should be enabled/disabled globally with a governance
parameter such that this privilege could start disabled and later be enabled
through a parameter change proposal, once a dCERT group has been established.
In the future it is foreseeable that the community may wish to expand the roles
of dCERT with further responsibilities such as the capacity to "pre-approve" a
@ -32,52 +32,55 @@ vulnerability being patched on the live network.
## Decision
The dCERT group is proposed to include an implementation of a `SpecializationGroup`
as defined in [ADR 007](./adr-007-specialization-groups.md). This will include the
implementation of:
- continuous voting
- slashing due to breach of soft contract
- revoking a member due to breach of soft contract
- emergency disband of the entire dCERT group (ex. for colluding maliciously)
- compensation stipend from the community pool or other means decided by
governance
This system necessitates the following new parameters:
- per-block stipend allowance per dCERT member
- maximum number of dCERT members
- required staked slashable tokens for each dCERT member
- quorum for suspending a particular member
- proposal wager for disbanding the dCERT group
- stabilization period for dCERT member transition
- circuit break dCERT privileges enabled
These parameters are expected to be implemented through the param keeper such
that governance may change them at any given point.
### Continuous Voting Electionator
An `Electionator` object is to be implemented as continuous voting and with the
following specifications:
- All delegation addresses may submit votes at any point which updates their
preferred representation on the dCERT group.
- Preferred representation may be arbitrarily split between addresses (ex. 50%
to John, 25% to Sally, 25% to Carol)
- In order for a new member to be added to the dCERT group they must
send a transaction accepting their admission at which point the validity of
their admission is to be confirmed.
- A sequence number is assigned when a member is added to the dCERT group.
If a member leaves the dCERT group and then enters back, a new sequence number
is assigned.
- Addresses which control the greatest amount of preferred-representation are
eligible to join the dCERT group (up to the _maximum number of dCERT members_).
If the dCERT group is already full and a new member is admitted, the existing
dCERT member with the lowest amount of votes is kicked from the dCERT group.
- In the split situation where the dCERT group is full but a vying candidate
has the same amount of votes as an existing dCERT member, the existing
member should maintain its position.
- In the split situation where somebody must be kicked out but the two
addresses with the smallest number of votes have the same number of votes,
the address with the smallest sequence number maintains its position.
- A stabilization period can be optionally included to reduce the
"flip-flopping" of the dCERT membership tail members. If a stabilization
period is provided which is greater than 0, when members are kicked due to
insufficient support, a queue entry is created which documents which member is
@ -106,7 +109,7 @@ Membership suspension by the dCERT group takes place through a voting procedure
by the dCERT group members. After this suspension has taken place, a governance
proposal to slash the dCERT member must be submitted; if the proposal is not
approved by the time the rescinding member has completed unbonding their
tokens, then the tokens are no longer staked and unable to be slashed.
Additionally in the case of an emergency situation of a colluding and malicious
dCERT group, the community needs the capability to disband the entire dCERT
@ -119,24 +122,25 @@ wager should be required is because as soon as the proposal is made, the
capability of the dCERT group to halt message routes is put on temporarily
suspended, meaning that a malicious actor who created such a proposal could
then potentially exploit a bug during this period of time, with no dCERT group
capable of shutting down the exploitable message routes.
### dCERT membership transactions
Active dCERT members
- change of the description of the dCERT group
- circuit break a message route
- vote to suspend a dCERT member.
Here circuit-breaking refers to the capability to disable a group of messages,
This could for instance mean: "disable all staking-delegation messages", or
"disable all distribution messages". This could be accomplished by verifying
that the message route has not been "circuit-broken" at CheckTx time (in
`baseapp/baseapp.go`).
"unbreaking" a circuit is anticipated only to occur during a hard fork upgrade
meaning that no capability to unbreak a message route on a live chain is
required.
Note also, that if there was a problem with governance voting (for instance a
capability to vote many times) then governance would be broken and should be
@ -153,15 +157,15 @@ they should all be severely slashed.
### Positive
- Potential to reduce the number of parties to coordinate with during an emergency
- Reduction in possibility of disclosing sensitive information to malicious parties
### Negative
- Centralization risks
### Neutral
## References
[Specialization Groups ADR](./adr-007-specialization-groups.md)


@ -17,10 +17,12 @@ For example, let's say a user wants to implement some custom signature verificat
One approach is to use the [ModuleManager](https://godoc.org/github.com/cosmos/cosmos-sdk/types/module) and have each module implement its own antehandler if it requires custom antehandler logic. The ModuleManager can then be passed in an AnteHandler order in the same way it has an order for BeginBlockers and EndBlockers. The ModuleManager returns a single AnteHandler function that will take in a tx and run each module's `AnteHandle` in the specified order. The module manager's AnteHandler is set as the baseapp's AnteHandler.
Pros:
1. Simple to implement
2. Utilizes the existing ModuleManager architecture
Cons:
1. Improves granularity but still cannot get more granular than a per-module basis. e.g. If auth's `AnteHandle` function is in charge of validating memo and signatures, users cannot swap the signature-checking functionality while keeping the rest of auth's `AnteHandle` functionality.
2. Module AnteHandler are run one after the other. There is no way for one AnteHandler to wrap or "decorate" another.
@ -53,10 +55,12 @@ func (example Decorator) Deliver(ctx Context, store KVStore, tx Tx, next Deliver
```
Pros:
1. Weave Decorators can wrap over the next decorator/handler in the chain. The ability to both pre-process and post-process may be useful in certain settings.
2. Provides a nested modular structure that isn't possible in the solution above, while also allowing for a linear one-after-the-other structure like the solution above.
Cons:
1. It is hard to understand at first glance the state updates that would occur after a Decorator runs given the `ctx`, `store`, and `tx`. A Decorator can have an arbitrary number of nested Decorators being called within its function body, each possibly doing some pre- and post-processing before calling the next decorator on the chain. Thus to understand what a Decorator is doing, one must also understand what every other decorator further along the chain is also doing. This can get quite complicated to understand. A linear, one-after-the-other approach while less powerful, may be much easier to reason about.
### Chained Micro-Functions
@ -65,17 +69,17 @@ The benefit of Weave's approach is that the Decorators can be very concise, whic
Another approach is to split the AnteHandler functionality into tightly scoped "micro-functions", while preserving the one-after-the-other ordering that would come from the ModuleManager approach.
We can then have a way to chain these micro-functions so that they run one after the other. Modules may define multiple ante micro-functions and then also provide a default per-module AnteHandler that implements a default, suggested order for these micro-functions.
Users can order the AnteHandlers easily by simply using the ModuleManager. The ModuleManager will take in a list of AnteHandlers and return a single AnteHandler that runs each AnteHandler in the order of the list provided. If the user is comfortable with the default ordering of each module, this is as simple as providing a list with each module's antehandler (exactly the same as BeginBlocker and EndBlocker).
If, however, users wish to change the order or add, modify, or delete ante micro-functions in any way, they can always define their own ante micro-functions and add them explicitly to the list that gets passed into the module manager.
#### Default Workflow
This is an example of a user's AnteHandler if they choose not to make any custom micro-functions.
##### SDK code
```go
// Chains together a list of AnteHandler micro-functions that get run one after the other.
@ -137,7 +141,7 @@ func (mm ModuleManager) GetAnteHandler() AnteHandler {
}
```
##### User Code
```go
// Note: Since user is not making any custom modifications, we can just SetAnteHandlerOrder with the default AnteHandlers provided by each module in our preferred order
@ -166,11 +170,13 @@ moduleManager.SetAnteHandlerOrder([]AnteHandler(ValidateMemo, CustomSigVerify, D
```
Pros:
1. Allows for ante functionality to be as modular as possible.
2. For users that do not need custom ante-functionality, there is little difference between how antehandlers work and how BeginBlock and EndBlock work in ModuleManager.
3. Still easy to understand
Cons:
1. Cannot wrap antehandlers with decorators like you can with Weave.
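For intuition, a minimal sketch of such a chainer might look like the following (the `AnteHandler` signature and types here are simplified assumptions, not the SDK's actual definitions):
```go
package ante

// Simplified placeholder types for illustration only.
type Context struct{}
type Tx interface{}

// AnteHandler is the assumed micro-function signature.
type AnteHandler func(ctx Context, tx Tx) (Context, error)

// ChainAnteHandlers folds a list of ante micro-functions into a single
// AnteHandler that runs them one after the other, stopping at the first
// error; there is no wrapping/decorating as in the Weave approach.
func ChainAnteHandlers(handlers ...AnteHandler) AnteHandler {
	return func(ctx Context, tx Tx) (Context, error) {
		var err error
		for _, h := range handlers {
			ctx, err = h(ctx, tx)
			if err != nil {
				return ctx, err
			}
		}
		return ctx, nil
	}
}
```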
### Simple Decorators


@ -33,6 +33,7 @@ However in practice, we likely don't want a linear relation between amount of st
#### Parameterization
This requires parameterizing a logistic function. It is very well understood how to parameterize this. It has four parameters:
1) A minimum slashing factor
2) A maximum slashing factor
3) The inflection point of the S-curve (essentially where do you want to center the S)
@ -66,7 +67,6 @@ We then will iterate over all the SlashEvents in the queue, adding their `Valida
Once we have the `NewSlashPercent`, we then iterate over all the `SlashEvent`s in the queue once again, and if `NewSlashPercent > SlashedSoFar` for that SlashEvent, we call `staking.Slash(slashEvent.Address, slashEvent.Power, Math.Min(Math.Max(minSlashPercent, NewSlashPercent - SlashedSoFar), maxSlashPercent))` (we pass in the power of the validator before any slashes occurred, so that we slash the right amount of tokens). We then set the `SlashEvent.SlashedSoFar` amount to `NewSlashPercent`.
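For clarity, here is a minimal sketch of the clamped per-event slash described above (names and types are illustrative, not the SDK API):
```go
package slashing

import "math"

// additionalSlash returns the extra fraction to slash for one SlashEvent
// given the newly computed target and what has been slashed so far,
// clamped to [minSlashPercent, maxSlashPercent] as described above.
func additionalSlash(newSlashPercent, slashedSoFar, minSlashPercent, maxSlashPercent float64) float64 {
	if newSlashPercent <= slashedSoFar {
		return 0 // this event has already been slashed at least this much
	}
	delta := newSlashPercent - slashedSoFar
	return math.Min(math.Max(minSlashPercent, delta), maxSlashPercent)
}
```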
## Status
Proposed


@ -7,7 +7,7 @@
## Context
A validator consensus key rotation feature has been discussed and requested for a long time, for the sake of a safer validator key management policy (e.g. https://github.com/tendermint/tendermint/issues/1136). So, we suggest one of the simplest forms of validator consensus key rotation implementation mostly onto Cosmos-SDK.
We don't need to make any updates to consensus logic in Tendermint because Tendermint does not have any mapping information of consensus key and validator operator key, meaning that from Tendermint's point of view, a consensus key rotation of a validator is simply the replacement of one consensus key with another.
@ -23,7 +23,6 @@ Also, it should be noted that this ADR includes only the simplest form of consen
- start validating with new consensus key.
- validators using HSM and KMS should update the consensus key in HSM to use the new rotated key after the height `h` when `MsgRotateConsPubKey` committed to the blockchain.
### Considerations
- consensus key mapping information management strategy
@ -33,13 +32,13 @@ Also, it should be noted that this ADR includes only the simplest form of consen
- key rotation costs related to LCD and IBC
- LCD and IBC will have traffic/computation burden when there exists frequent power changes
- In current Tendermint design, consensus key rotations are seen as power changes from LCD or IBC perspective
- Therefore, to minimize unnecessary frequent key rotation behavior, we limited maximum number of rotation in recent unbonding period and also applied exponentially increasing rotation fee
- limits
- a validator cannot rotate its consensus key more than `MaxConsPubKeyRotations` times in any unbonding period, to prevent spam.
- parameters can be decided by governance and stored in genesis file.
- key rotation fee
- a validator should pay `KeyRotationFee` to rotate the consensus key which is calculated as below
- `KeyRotationFee` = (max(`VotingPowerPercentage` * 100, 1) * `InitialKeyRotationFee`) * 2^(number of rotations in `ConsPubKeyRotationHistory` in recent unbonding period); a worked sketch follows the parameter list below
- evidence module
- evidence module can search corresponding consensus key for any height from slashing keeper so that it can decide which consensus key is supposed to be used for given height.
- abci.ValidatorUpdate
@ -50,7 +49,6 @@ Also, it should be noted that this ADR includes only the simplest form of consen
- `MaxConsPubKeyRotations` : maximum number of rotations that can be executed by a validator in the recent unbonding period. The default value 10 is suggested (an 11th key rotation will be rejected)
- `InitialKeyRotationFee` : the initial key rotation fee when no key rotation has happened in the recent unbonding period. The default value 1 atom is suggested (a 1 atom fee for the first key rotation in the recent unbonding period)
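For illustration, a worked sketch of the fee formula above, assuming plain float arithmetic (the real implementation would use SDK coin/decimal types):
```go
package main

import (
	"fmt"
	"math"
)

// keyRotationFee sketches the formula:
// fee = max(VotingPowerPercentage * 100, 1) * InitialKeyRotationFee
//       * 2^(rotations in ConsPubKeyRotationHistory in the recent unbonding period)
func keyRotationFee(votingPowerPercentage, initialFee float64, rotations int) float64 {
	base := math.Max(votingPowerPercentage*100, 1) * initialFee
	return base * math.Pow(2, float64(rotations))
}

func main() {
	// e.g. a validator with 2% voting power rotating for the third time
	// in the unbonding period: 2 * 1 * 2^2 = 8 atoms
	fmt.Println(keyRotationFee(0.02, 1.0, 2))
}
```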
### Workflow
1. The validator generates a new consensus keypair.
@ -64,7 +62,7 @@ Also, it should be noted that this ADR includes only the simplest form of consen
```
3. `handleMsgRotateConsPubKey` gets `MsgRotateConsPubKey`, calls `RotateConsPubKey`, and emits an event
4. `RotateConsPubKey`
- checks if `NewPubKey` is not duplicated on `ValidatorsByConsAddr`
- checks that the validator does not exceed parameter `MaxConsPubKeyRotations` by iterating `ConsPubKeyRotationHistory`
- checks if the signing account has enough balance to pay `KeyRotationFee`
@ -83,7 +81,7 @@ Also, it should be noted that this ADR includes only the simplest form of consen
}
```
5. `ApplyAndReturnValidatorSetUpdates` checks if there is a `ConsPubKeyRotationHistory` with `ConsPubKeyRotationHistory.RotatedHeight == ctx.BlockHeight()` and, if so, generates two `ValidatorUpdate`s: one to remove the old validator and one to create the new validator
```go
abci.ValidatorUpdate{
@ -99,6 +97,7 @@ Also, it should be noted that this ADR includes only the simplest form of consen
6. in the `previousVotes` iteration logic of `AllocateTokens`, a `previousVote` using `OldConsPubKey` is matched against `ConsPubKeyRotationHistory`, and the validator is replaced for token allocation
7. Migrate `ValidatorSigningInfo` and `ValidatorMissedBlockBitArray` from `OldConsPubKey` to `NewConsPubKey`
- Note: all of the above features shall be implemented in the `staking` module.
## Status
@ -16,7 +16,7 @@ However, we would like to avoid the creation of an entire second voting process
Thus, we propose the following mechanism:
### Params
- The current gov param `VotingPeriod` is to be replaced by a `MinVotingPeriod` param. This is the default voting period that all governance proposal voting periods start with.
- There is a new gov param called `MaxVotingPeriodExtension`.
@ -182,6 +182,7 @@ In addition to serving as a whitelist, `InterfaceRegistry` can also serve
to communicate the list of concrete types that satisfy an interface to clients.
In .proto files:
* fields which accept interfaces should be annotated with `cosmos_proto.accepts_interface`
using the same fully-qualified name passed as `protoName` to `InterfaceRegistry.RegisterInterface`
* interface implementations should be annotated with `cosmos_proto.implements_interface`
@ -210,7 +211,7 @@ Note that `InterfaceRegistry` usage does not deviate from standard protobuf
usage of `Any`, it just introduces a security and introspection layer for
golang usage.
`InterfaceRegistry` will be a member of `ProtoCodec`
described above. In order for modules to register interface types, app modules
can optionally implement the following interface:
@ -52,7 +52,6 @@ high to justify its usage. However for queries this is not a concern, and
providing generic module-level queries that use `Any` does not preclude apps
from also providing app-level queries that return the app-level `oneof`s.
A hypothetical example for the `gov` module would look something like:
```proto
@ -126,7 +125,6 @@ The signature for this method matches the existing
`RegisterServer` method on the GRPC `Server` type where `handler` is the custom
query server implementation described above.
GRPC-like requests are routed by the service name (ex. `cosmos_sdk.x.bank.v1.Query`)
and method name (ex. `QueryBalance`) combined with `/`s to form a full
method name (ex. `/cosmos_sdk.x.bank.v1.Query/QueryBalance`). This gets translated
@ -140,7 +138,7 @@ there is a quite natural mapping of GRPC-like rpc methods to the existing
This basic specification allows us to reuse protocol buffer `service` definitions
for ABCI custom queries substantially reducing the need for manual decoding and
encoding in query methods.
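To make the routing convention above concrete, the full method name can be assembled mechanically (a sketch of the naming scheme only, not SDK code):

```go
// fullMethodName composes a GRPC-style route from a service and method name,
// e.g. ("cosmos_sdk.x.bank.v1.Query", "QueryBalance")
// yields "/cosmos_sdk.x.bank.v1.Query/QueryBalance"
func fullMethodName(service, method string) string {
	return "/" + service + "/" + method
}
```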
### GRPC Protocol Support
@ -178,7 +176,7 @@ service Query {
}
```
grpc-gateway will work directly against the GRPC proxy described above, which will
translate requests to ABCI queries under the hood. grpc-gateway can also
generate Swagger definitions automatically.
@ -211,7 +209,7 @@ we have tweaked the grpc codegen to use an interface rather than concrete type
for the generated client struct. This allows us to also reuse the GRPC infrastructure
for ABCI client queries.
`CLIContext` will receive a new method `QueryConn` that returns a `ClientConn`
that routes calls to ABCI queries
Clients (such as CLI methods) will then be able to call query methods like this:
@ -13,6 +13,7 @@ the need to rewrite whole BaseApp. Also there's one special case for `sdk.ErrorO
might be handled in a "standard" way (middleware) alongside the others.
We propose a middleware solution, which could help developers implement the following cases:
* add external logging (let's say sending reports to external services like [Sentry](https://sentry.io));
* call panic for specific error cases;
@ -56,7 +57,7 @@ An example:
func exampleErrHandler(recoveryObj interface{}) error {
err, ok := recoveryObj.(error)
if !ok { return nil }
if someSpecificError.Is(err) {
panic(customPanicMsg)
} else {
@ -87,13 +88,14 @@ func newRecoveryMiddleware(handler RecoveryHandler, next recoveryMiddleware) rec
```
The function receives a `recoveryObj` object and returns:
* (next `recoveryMiddleware`, `nil`) if object wasn't handled (not a target type) by `RecoveryHandler`;
* (`nil`, not nil `error`) if input object was handled and other middlewares in the chain should not be executed;
* (`nil`, `nil`) in case of invalid behavior. Panic recovery might not have been properly handled;
this can be avoided by always using a `default` as the rightmost middleware in the chain (it always returns an `error`);
`OutOfGas` middleware example:
```go
func newOutOfGasRecoveryMiddleware(gasWanted uint64, ctx sdk.Context, next recoveryMiddleware) recoveryMiddleware {
handler := func(recoveryObj interface{}) error {
@ -106,12 +108,13 @@ func newOutOfGasRecoveryMiddleware(gasWanted uint64, ctx sdk.Context, next recov
),
)
}
return newRecoveryMiddleware(handler, next)
}
```
`Default` middleware example:
```go
func newDefaultRecoveryMiddleware() recoveryMiddleware {
handler := func(recoveryObj interface{}) error {
@ -119,7 +122,7 @@ func newDefaultRecoveryMiddleware() recoveryMiddleware {
sdkerrors.ErrPanic, fmt.Sprintf("recovered: %v\nstack:\n%v", recoveryObj, string(debug.Stack())),
)
}
return newRecoveryMiddleware(handler, nil)
}
```
@ -57,6 +57,7 @@ third-party modules.
As a starting point, we should adopt all of the [DEFAULT](https://buf.build/docs/lint-checkers#default)
checkers in [Buf's](https://buf.build) including [`PACKAGE_DIRECTORY_MATCH`](https://buf.build/docs/lint-checkers#file_layout),
except:
* [PACKAGE_VERSION_SUFFIX](https://buf.build/docs/lint-checkers#package_version_suffix)
* [SERVICE_SUFFIX](https://buf.build/docs/lint-checkers#service_suffix)
@ -118,6 +119,7 @@ to prevent such breakage.
With that in mind, different stable versions (i.e. `v1` or `v2`) of a package should more or less be considered
different packages, and this should be a last-resort approach for upgrading protobuf schemas. Scenarios where creating
a `v2` may make sense are:
* we want to create a new module with similar functionality to an existing module and adding `v2` is the most natural
way to do this. In that case, there are really just two different, but similar modules with different APIs.
* we want to add a new revamped API for an existing module and it's just too cumbersome to add it to the existing package,
@ -127,11 +129,12 @@ so putting it in `v2` is cleaner for users. In this case, care should be made to
#### Guidelines on unstable (alpha and beta) package versions
The following guidelines are recommended for marking packages as alpha or beta:
* marking something as `alpha` or `beta` should be a last resort and just putting something in the
stable package (i.e. `v1` or `v2`) should be preferred
* a package *should* be marked as `alpha` *if and only if* there are active discussions to remove
or significantly alter the package in the near future
* a package *should* be marked as `beta` *if and only if* there is an active discussion to
significantly refactor/rework the functionality in the near future but not remove it
* modules *can and should* have types in both stable (i.e. `v1` or `v2`) and unstable (`alpha` or `beta`) packages.
@ -140,6 +143,7 @@ Whenever code is released into the wild, especially on a blockchain, there is a
cases, for instance with immutable smart contracts, a breaking change may be impossible to fix.
When marking something as `alpha` or `beta`, maintainers should ask the questions:
* what is the cost of asking others to change their code vs the benefit of us maintaining the optionality to change it?
* what is the plan for moving this to `v1` and how will that affect users?
@ -151,6 +155,7 @@ and so if they actually went and changed the package to `grpc.reflection.v1`, so
they probably don't want to do that... So now the `v1alpha` package is more or less the de-facto `v1`. Let's not do that.
The following are guidelines for working with non-stable packages:
* [Buf's recommended version suffix](https://buf.build/docs/lint-checkers#package_version_suffix)
(ex. `v1alpha1`) _should_ be used for non-stable packages
* non-stable packages should generally be excluded from breaking change detection
@ -38,7 +38,7 @@ step when sending and signing transactions.
### Decision
The following encoding scheme is to be used by other ADRs,
and in particular for `SignDoc` serialization.
## Specification
@ -71,7 +71,6 @@ malleability.
Among other sources of non-determinism, this ADR eliminates the possibility of
encoding malleability.
### Serialization rules
The serialization is based on the
@ -275,7 +274,6 @@ for all protobuf documents we need in the context of Cosmos SDK signing.
### Neutral
### Usage in SDK
For the reasons mentioned above ("Negative" section) we prefer to keep workarounds
@ -18,14 +18,12 @@ This ADR defines an address format for all addressable SDK accounts. That includ
Issue [\#3685](https://github.com/cosmos/cosmos-sdk/issues/3685) identified that public key
address spaces are currently overlapping. We confirmed that it significantly decreases the security of the Cosmos SDK.
### Problem
An attacker can control an input for an address generation function. This leads to a birthday attack, which significantly decreases the security space.
To overcome this, we need to separate the inputs for different kinds of account types:
a security break of one account type shouldn't impact the security of other account types.
### Initial proposals
One initial proposal was extending the address length and
@ -47,7 +45,6 @@ And explained how this approach should be sufficiently collision resistant:
This led to the first proposal (which we proved to be not good enough):
we concatenate a key type with a public key, hash it and take the first 20 bytes of that hash, summarized as `sha256(keyTypePrefix || keybytes)[:20]`.
### Review and Discussions
In [\#5694](https://github.com/cosmos/cosmos-sdk/issues/5694) we discussed various solutions.
@ -55,6 +52,7 @@ We agreed that 20 bytes it's not future proof, and extending the address length
This disqualifies the initial proposal.
In the issue we discussed various modifications:
+ Choice of the hash function.
+ Move the prefix out of the hash function: `keyTypePrefix + sha256(keybytes)[:20]` [post-hash-prefix-proposal].
+ Use double hashing: `sha256(keyTypePrefix + sha256(keybytes)[:20])`.
@ -65,13 +63,11 @@ In the issue we discussed various modifications:
+ Support currently used tools - we don't want to break an ecosystem, or add a long adaptation period. Ref: https://github.com/cosmos/cosmos-sdk/issues/8041
+ Try to keep the address length small - addresses are widely used in state, both as part of a key and object value.
### Scope
This ADR only defines a process for the generation of address bytes. For end-user interactions with addresses (through the API, or CLI, etc.), we still use bech32 to format these addresses as strings. This ADR doesn't change that.
Using Bech32 for string encoding gives us support for checksum error codes and handling of user typos.
## Decision
We define the following account types, for which we define the address function:
@ -81,7 +77,6 @@ We define the following account types, for which we define the address function:
3. composed accounts with a native address key (ie: bls, group module accounts)
4. module accounts: basically any accounts which cannot sign transactions and which are managed internally by modules
### Legacy Public Key Addresses Don't Change
Currently (Jan 2021), the only officially supported SDK user accounts are `secp256k1` basic accounts and legacy amino multisig.
@ -120,11 +115,12 @@ and it's more secure than [post-hash-prefix-proposal] (which uses the first 20 b
Moreover the cryptographer motivated the choice of adding `typ` in the hash to protect against a switch table attack.
We use the `address.Hash` function for generating addresses for all accounts represented by a single key (see the sketch after this list):
* simple public keys: `address.Hash(keyType, pubkey)`
+ aggregated keys (eg: BLS): `address.Hash(keyType, aggregatedPubKey)`
+ modules: `address.Hash("module", moduleName)`
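A minimal sketch of such a single-key address function, assuming the `hash(hash(typ) + key)` construction discussed in the consulting session below (the real SDK implementation may differ in details):

```go
import "crypto/sha256"

// hashAddress sketches address.Hash: sha256(sha256(typ) || key),
// yielding a 32-byte address
func hashAddress(typ string, key []byte) []byte {
	prefix := sha256.Sum256([]byte(typ))
	sum := sha256.Sum256(append(prefix[:], key...))
	return sum[:]
}
```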
### Composed Addresses
For simple composed accounts (like new naive multisig), we generalize the `address.Hash`. The address is constructed by recursively creating addresses for the sub accounts, sorting the addresses and composing them into a single address. It ensures that the ordering of keys doesn't impact the resulting address.
@ -199,11 +195,13 @@ func Module(moduleName string, key []byte) []byte{
```
**Example** A lending BTC pool address would be:
```
btcPool := address.Module("lending", btc.Address())
```
If we want to create an address for a module account depending on more than one key, we can concatenate them:
```
btcAtomAMM := address.Module("amm", btc.Address() + atom.Address())
```
@ -221,11 +219,11 @@ func Derive(address []byte, derivationKey []byte) []byte {
Note: `Module` is a special case of the more general _derived_ address, where we set the `"module"` string for the _from address_.
**Example** For a cosmwasm smart-contract address we could use the following construction:
```
smartContractAddr := Derive(Module("cosmwasm", smartContractsNamespace), smartContractKey)
```
### Schema Types
The `typ` parameter used in the `Hash` function SHOULD be unique for each account type.
@ -246,8 +244,6 @@ These names are derived directly from .proto files in a standardized way and use
in other places such as the type URL in `Any`s. We can easily obtain the name using
`proto.MessageName(msg)`.
## Consequences
### Backwards Compatibility
@ -272,19 +268,18 @@ This ADR is compatible with what was committed and directly supported in the SDK
- protobuf message names are used as key type prefixes
## Further Discussions
Some accounts can have a fixed name or may be constructed in other way (eg: modules). We were discussing an idea of an account with a predefined name (eg: `me.regen`), which could be used by institutions.
Without going into details, these kinds of addresses are compatible with the hash based addresses described here as long as they don't have the same length.
More specifically, any special account address must not have a length equal to 20 or 32 bytes.
## Appendix: Consulting session
End of Dec 2020 we had a session with [Alan Szepieniec](https://scholar.google.be/citations?user=4LyZn8oAAAAJ&hl=en) to consult the approach presented above.
Alan's general observations:
+ we don't need 2-preimage resistance
+ we need a 32-byte address space for collision resistance
+ when an attacker can control an input for an object with an address, then we have a problem with a birthday attack
@ -292,11 +287,12 @@ Alan general observations:
+ sha2 mining can be used to break an address pre-image
Hashing algorithm
+ any attack breaking blake3 will break blake2
+ Alan is pretty confident about the current security analysis of the blake hash algorithm. It was a finalist, and the author is well known in security analysis.
Algorithm:
+ Alan recommends hashing the prefix: `address(pub_key) = hash(hash(key_type) + pub_key)[:32]`, main benefits:
+ we are free to use arbitrarily long prefix names
+ we still don't risk collisions
@ -305,24 +301,29 @@ Algorithm:
+ Aaron asked about post-hash prefixes (`address(pub_key) = key_type + hash(pub_key)`) and the differences. Alan noted that this approach has a longer address space and is stronger.
Algorithm for complex / composed keys:
+ merging tree-like addresses with the same algorithm is fine
Module addresses: should module addresses have a different size to differentiate them?
+ we will need to set a pre-image prefix for module addresses to keep them in the 32-byte space: `hash(hash('module') + module_key)`
+ Aaron observation: we already need to deal with variable length (to not break secp256k1 keys).
Discussion about arithmetic hash functions for ZKP
+ Poseidon / Rescue
+ Problem: much bigger risk, because we don't know many techniques for, nor the history of, crypto-analysis of arithmetic constructions. It's still new ground and an area of active research.
Post quantum signature size
+ Alan's suggestion: Falcon: the speed / size ratio is very good.
+ Aaron - should we think about it?
Alan: based on early extrapolation, this thing will be able to break EC cryptography in 2050. But that's a lot of uncertainty. But there is magic happening with recursion / linking / simulation, and that can speed up the progress.
Other ideas
+ Let's say we use the same key and two different address algorithms for 2 different use cases. Is it still safe to use? Alan: if we want to hide the public key (which is not our use case), then it's less secure, but there are fixes.
### References
+ [Notes](https://hackmd.io/_NGWI4xZSbKzj1BkCqyZMw)
@ -115,6 +115,7 @@ message MsgGrantAllowance {
```
In order to use allowances in transactions, we add a new field `granter` to the transaction `Fee` type:
```proto
package cosmos.tx.v1beta1;
@ -19,6 +19,7 @@ on behalf of that account to other accounts.
## Context
The concrete use cases which motivated this module include:
- the desire to delegate the ability to vote on proposals to other accounts besides the account which one has
delegated stake
- "sub-keys" functionality, as originally proposed in [\#4480](https://github.com/cosmos/cosmos-sdk/issues/4480) which
@ -182,6 +182,7 @@ This also allows us to change how we perform functional tests. Instead of mockin
Finally, closing a module to the client API opens desirable OCAP patterns discussed in ADR-033. Since the server implementation and interface are hidden, nobody can hold "keepers"/servers and everyone will be forced to rely on the client interface, which will drive developers toward correct encapsulation and software engineering patterns.
### Pros
- communicates return type clearly
- manual handler registration and return type marshaling is no longer needed, just implement the interface and register it
- communication interface is automatically generated, the developer can now focus only on the state transition methods - this would improve the UX of [\#7093](https://github.com/cosmos/cosmos-sdk/issues/7093) approach (1) if we chose to adopt that
@ -189,8 +190,8 @@ Finally, closing a module to client API opens desirable OCAP patterns discussed
- dramatically reduces and simplifies the code
### Cons
- using `service` definitions outside the context of gRPC could be confusing (but doesn't violate the proto3 spec)
## References
@ -8,7 +8,7 @@
- Anil Kumar (@anilcse)
- Jack Zampolin (@jackzampolin)
- Adam Bozanich (@boz)
## Status
@ -16,13 +16,13 @@ Proposed
## Abstract
Currently in the SDK, events are defined in the handlers for each message as well as in `BeginBlock` and `EndBlock`. Each module doesn't have types defined for each event; they are implemented as `map[string]string`. Above all else, this makes these events difficult to consume as it requires a great deal of raw string matching and parsing. This proposal focuses on updating the events to use **typed events** defined in each module such that emitting and subscribing to events will be much easier. This workflow comes from the experience of the Akash Network team.
## Context
Currently in the SDK, events are defined in the handlers for each message, meaning each module doesn't have a canonical set of types for each event. Above all else, this makes these events difficult to consume as it requires a great deal of raw string matching and parsing. This proposal focuses on updating the events to use **typed events** defined in each module such that emitting and subscribing to events will be much easier. This workflow comes from the experience of the Akash Network team.
[Our platform](http://github.com/ovrclk/akash) requires a number of programmatic on-chain interactions both on the provider (datacenter - to bid on new orders and listen for leases created) and user (application developer - to send the app manifest to the provider) side. In addition, the Akash team is now maintaining the IBC [`relayer`](https://github.com/ovrclk/relayer), another very event-driven process. In working on these core pieces of infrastructure, and integrating lessons learned from Kubernetes development, our team has developed a standard method for defining and consuming typed events in SDK modules. We have found that it is extremely useful in building this type of event-driven application.
As the SDK gets used more extensively for apps like `peggy`, other peg zones, IBC, DeFi, etc... there will be an exploding demand for event driven applications to support new features desired by users. We propose upstreaming our findings into the SDK to enable all SDK applications to quickly and easily build event driven apps to aid their core application. Wallets, exchanges, explorers, and defi protocols all stand to benefit from this work.
@ -39,7 +39,7 @@ __Step-1__: Implement additional functionality in the `types` package: `EmitTyp
```go
// types/events.go
// EmitTypedEvent takes a typed event and emits it, converting it into an sdk.Event
func (em *EventManager) EmitTypedEvent(event proto.Message) error {
evtType := proto.MessageName(event)
evtJSON, err := codec.ProtoMarshalJSON(event)
@ -82,7 +82,7 @@ func ParseTypedEvent(event abci.Event) (proto.Message, error) {
} else {
value = reflect.Zero(concreteGoType)
}
protoMsg, ok := value.Interface().(proto.Message)
if !ok {
return nil, fmt.Errorf("%q does not implement proto.Message", event.Type)
@ -109,7 +109,7 @@ func ParseTypedEvent(event abci.Event) (proto.Message, error) {
Here, `EmitTypedEvent` is a method on `EventManager` which takes a typed event as input and applies JSON serialization to it. Then it maps the JSON key/value pairs to `event.Attributes` and emits it in the form of an `sdk.Event`. `Event.Type` will be the type URL of the proto message.
When we subscribe to emitted events on the tendermint websocket, they are emitted in the form of an `abci.Event`. `ParseTypedEvent` parses the event back to its original proto message.
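As a usage sketch, a module handler could then emit such an event directly (the `EventSubmitProposal` type and the handler signature here are hypothetical, not existing SDK types):

```go
// hypothetical handler fragment emitting a module-defined typed event
func handleMsgSubmitProposal(ctx sdk.Context, msg MsgSubmitProposal) error {
	// ... state transition logic ...
	return ctx.EventManager().EmitTypedEvent(&EventSubmitProposal{
		ProposalId: msg.ProposalId,
	})
}
```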
__Step-2__: Add proto definitions for typed events for msgs in each module:
@ -165,11 +165,10 @@ Please see the below code sample for more detail on this flow looks for clients.
### Negative
## Detailed code example of publishing events
This ADR also proposes adding affordances to emit and consume these events. This way developers will only need to write
`EventHandler`s which define the actions they desire to take.
```go
// EventEmitter is a type that describes event emitter functions
@ -193,7 +192,7 @@ func main() {
}
// SubmitProposalEventHandler is an example of an event handler that prints proposal details
// when any EventSubmitProposal is emitted.
func SubmitProposalEventHandler(ev proto.Message) (err error) {
switch event := ev.(type) {
// Handle governance proposal events creation events
@ -206,9 +205,9 @@ func SubmitProposalEventHandler(ev proto.Message) (err error) {
}
}
// TxEmitter is an example of an event emitter that emits just transaction events. This can and
// should be implemented somewhere in the SDK. The SDK can include an EventEmitters for tm.event='Tx'
// and/or tm.event='NewBlock' (the new block events may contain typed events)
func TxEmitter(ctx context.Context, cliCtx client.Context, ehs ...EventHandler) (err error) {
// Instantiate and start tendermint RPC client
client, err := cliCtx.GetNode()
@ -290,7 +289,7 @@ func PublishChainTxEvents(ctx context.Context, client tmclient.EventsClient, bus
if !evt.Result.IsOK() {
continue
}
// range over events, parse them using the basic manager and
// send them to the pubsub bus
for _, abciEv := range events {
typedEvent, err := sdk.ParseTypedEvent(abciEv)
@ -315,5 +314,6 @@ func PublishChainTxEvents(ctx context.Context, client tmclient.EventsClient, bus
```
## References
- [Publish Custom Events via a bus](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/events/publish.go#L19-L58)
- [Consuming the events in `Client`](https://github.com/ovrclk/deploy/blob/bf6c633ab6c68f3026df59efd9982d6ca1bf0561/cmd/event-handlers.go#L57)
@ -13,6 +13,7 @@ Proposed
This ADR introduces a system for permissioned inter-module communication leveraging the protobuf `Query` and `Msg`
service definitions defined in [ADR 021](./adr-021-protobuf-query-encoding.md) and
[ADR 031](./adr-031-msg-service.md) which provides:
- stable protobuf based module interfaces to potentially later replace the keeper paradigm
- stronger inter-module object capabilities (OCAPs) guarantees
- module accounts and sub-account authorization
@ -24,6 +25,7 @@ In the current Cosmos SDK documentation on the [Object-Capability Model](../core
> We assume that a thriving ecosystem of Cosmos-SDK modules that are easy to compose into a blockchain application will contain faulty or malicious modules.
There is currently not a thriving ecosystem of Cosmos SDK modules. We hypothesize that this is in part due to:
1. lack of a stable v1.0 Cosmos SDK to build modules off of. Module interfaces are changing, sometimes dramatically, from
point release to point release, often for good reasons, but this does not create a stable foundation to build on.
2. lack of a properly implemented object capability or even object-oriented encapsulation system which makes refactors
@ -84,6 +86,7 @@ this ADR does not necessitate the creation of new protobuf definitions or servic
based service interfaces already used by clients for inter-module communication.
Using this `QueryClient`/`MsgClient` approach has the following key benefits over exposing keepers to external modules:
1. Protobuf types are checked for breaking changes using [buf](https://buf.build/docs/breaking-overview) and because of
the way protobuf is designed this will give us strong backwards compatibility guarantees while allowing for forward
evolution.
@ -95,6 +98,7 @@ enabling atomicy of operations ([currently a problem](https://github.com/cosmos/
transaction
This mechanism has the added benefits of:
- reducing boilerplate through code generation, and
- allowing for modules in other languages either via a VM like CosmWasm or sub-processes using gRPC
@ -116,6 +120,7 @@ For example, module `A` could use its `A.ModuleKey` to create `MsgSend` object f
will assure that the `from` account (`A.ModuleKey` in this case) is the signer.
Here's an example of a hypothetical module `foo` interacting with `x/bank`:
```go
package foo
@ -247,7 +252,6 @@ In [ADR 031](./adr-031-msg-service.md), the `AppModule.RegisterService(Configura
inter-module communication, we extend the `Configurator` interface to pass in the `ModuleKey` and to allow modules to
specify their dependencies on other modules using `RequireServer()`:
```go
type Configurator interface {
MsgServer() grpc.Server
@ -334,9 +338,10 @@ other modules. This will be addressed in separate ADRs or updates to this ADR.
### Future Work
Other future improvements may include:
* custom code generation that:
* simplifies interfaces (ex. generates code with `sdk.Context` instead of `context.Context`)
* optimizes inter-module calls - for instance caching resolved methods after first invocation
* combining `StoreKey`s and `ModuleKey`s into a single interface so that modules have a single OCAPs handle
* code generation which makes inter-module communication more performant
* decoupling `ModuleKey` creation from `AppModuleBasic.Name()` so that app's can override root module account names
@ -353,8 +358,8 @@ The advantages of the approach described in this ADR are mostly around how it in
specifically:
* protobuf so that:
* code generation of interfaces can be leveraged for a better dev UX
* module interfaces are versioned and checked for breakage using [buf](https://docs.buf.build/breaking-overview)
* sub-module accounts as per ADR 028
* the general `Msg` passing paradigm and the way signers are specified by `GetSigners`

View File

@ -16,7 +16,7 @@ Account rekeying is a process hat allows an account to replace its authenticatio
Currently, in the Cosmos SDK, the address of an auth `BaseAccount` is based on the hash of the public key. Once an account is created, the public key for the account is set in stone and cannot be changed. This can be a problem for users, as key rotation is a useful security practice but is not currently possible. Furthermore, as multisigs are a type of pubkey, once a multisig for an account is set, it cannot be updated. This is problematic, as multisigs are often used by organizations or companies, who may need to change their set of multisig signers for internal reasons.
Transferring all the assets of an account to a new account with the updated pubkey is not sufficient, because some "engagements" of an account are not easily transferable. For example, in staking, to transfer bonded Atoms, an account would have to unbond all delegations and wait the three-week unbonding period. Even more significantly, for validator operators, ownership over a validator is not transferable at all, meaning that the operator key for a validator can never be updated, leading to poor operational security for validators.
## Decision
@ -43,19 +43,15 @@ The MsgChangePubKey transaction needs to be signed by the existing pubkey in sta
Once approved, the handler for this message type, which takes in the AccountKeeper, will update the in-state pubkey for the account, replacing it with the pubkey from the Msg.
An account that has had its pubkey changed cannot be automatically pruned from state. This is because if pruned, the original pubkey of the account would be needed to recreate the same address, but the owner of the address may not have the original pubkey anymore. Currently, we do not automatically prune any accounts anyways, but we would like to keep this option open down the road (this is the purpose of account numbers). To resolve this, we charge an additional gas fee for this operation to compensate for this externality (this bound gas amount is configured as parameter `PubKeyChangeCost`). The bonus gas is charged inside the handler, using the `ConsumeGas` function. Furthermore, in the future, we can allow accounts that have rekeyed to manually prune themselves using a new Msg type such as `MsgDeleteAccount`. Manually pruning accounts can give a gas refund as an incentive for performing the action.
```go
amount := ak.GetParams(ctx).PubKeyChangeCost
ctx.GasMeter().ConsumeGas(amount, "pubkey change fee")
```
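A minimal sketch of the full handler described above (the message fields and keeper methods are assumptions based on this ADR, not a final API):

```go
// hypothetical handler: charge the rekeying fee, then swap the stored pubkey
func handleMsgChangePubKey(ctx sdk.Context, ak AccountKeeper, msg MsgChangePubKey) error {
	acc := ak.GetAccount(ctx, msg.Address)
	// charge the additional gas that compensates for non-prunability
	amount := ak.GetParams(ctx).PubKeyChangeCost
	ctx.GasMeter().ConsumeGas(amount, "pubkey change fee")
	// replace the in-state pubkey with the one supplied in the message
	if err := acc.SetPubKey(msg.PubKey); err != nil {
		return err
	}
	ak.SetAccount(ctx, acc)
	return nil
}
```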
Every time a key for an address is changed, we will store a log of this change in the state of the chain, thus creating a stack of all previous keys for an address and the time intervals for which they were active. This allows dapps and clients to easily query past keys for an account, which may be useful for features such as verifying timestamped off-chain signed messages.
## Consequences
### Positive
@ -70,7 +66,6 @@ Breaks the current assumed relationship between address and pubkeys as H(pubkey)
* This makes wallets that support this feature more complicated. For example, if an address on chain was updated, the corresponding key in the CLI wallet also needs to be updated.
* Cannot automatically prune accounts with 0 balance that have had their pubkey changed.
### Neutral
* While the purpose of this is intended to allow the owner of an account to update to a new pubkey they own, this could technically also be used to transfer ownership of an account to a new owner. For example, this could be used to sell a staked position without unbonding, or an account that has vesting tokens. However, the friction of this is very high as this would essentially have to be done as a very specific OTC trade. Furthermore, additional constraints could be added to prevent accounts with vesting tokens from using this feature.
@ -7,33 +7,33 @@
- Alessio Treglia (@alessio)
- Frojdy Dymylja (@fdymylja)
## Changelog
- 2021-05-12: the external library [cosmos-rosetta-gateway](https://github.com/tendermint/cosmos-rosetta-gateway) has been moved within the SDK.
## Context
[Rosetta API](https://www.rosetta-api.org/) is an open-source specification and set of tools developed by Coinbase to
standardise blockchain interactions.
Through the use of a standard API for integrating blockchain applications it will
* Be easier for a user to interact with a given blockchain
* Allow exchanges to integrate new blockchains quickly and easily
* Enable application developers to build cross-blockchain applications such as block explorers, wallets and dApps at
considerably lower cost and effort.
## Decision
It is clear that adding Rosetta API support to the Cosmos SDK will bring value to all the developers and
Cosmos SDK based chains in the ecosystem. How it is implemented is key.
The driving principles of the proposed design are:
1. **Extensibility:** it must be as riskless and painless as possible for application developers to set up network
configurations to expose Rosetta API-compliant services.
2. **Long term support:** This proposal aims to provide support for all the supported Cosmos SDK release series.
3. **Cost-efficiency:** Backporting changes to Rosetta API specifications from `master` to the various stable
branches of Cosmos SDK is a cost that needs to be reduced.
We will deliver on these principles by doing the following:
@ -44,12 +44,11 @@ We will achieve these delivering on these principles by the following:
b. The `Server` functionality as this is independent of the Cosmos SDK version.
c. The `Online/OfflineNetwork`, which is not exported, implements the rosetta API using the `Client` interface to query the node, build txs and so on.
d. The `errors` package to extend rosetta errors.
2. Due to differences between the Cosmos release series, each series will have its own specific implementation of the `Client` interface.
3. There will be two options for starting an API service in applications:
a. API shares the application process
b. API-specific process.
## Architecture
### The External Repo
@ -65,6 +64,7 @@ The constructor follows:
`func NewServer(settings Settings) (Server, error)`
`Settings`, which are used to construct a new server, are the following:
```go
// Settings define the rosetta server settings
type Settings struct {
@ -88,7 +88,6 @@ type Settings struct {
Package types uses a mixture of rosetta types and custom-defined type wrappers that the client must parse and return while executing operations.
##### Interfaces
Every SDK version uses a different format to connect (RPC, gRPC, etc.), query, and build transactions; we have abstracted this into the `Client` interface.
@ -180,6 +179,7 @@ type Msg interface {
```
Hence developers who want to extend the rosetta set of supported operations just need to extend their module's sdk.Msgs with the `ToOperations` and `FromOperations` methods.
### 3. API service invocation
As stated at the start, application developers will have two methods for invocation of the Rosetta API service:
@ -191,12 +191,10 @@ As stated at the start, application developers will have two methods for invocat
The Rosetta API service could run within the same execution process as the application. This would be enabled via app.toml settings, and if gRPC is not enabled, the rosetta instance would be spun up in offline mode (tx building capabilities only).
#### Separate API service
Client application developers can write a new command to launch a Rosetta API server as a separate process too, using the rosetta command contained in the `/server/rosetta` package. Construction of the command depends on the Cosmos SDK version. Examples can be found inside `simd` for stargate, and `contrib/rosetta/simapp` for other release series.
## Status
Proposed
@ -5,6 +5,7 @@
- 28/10/2020 - Initial draft
## Authors
- Antoine Herzog (@antoineherzog)
- Zaki Manian (@zmanian)
- Aleksandr Bezobchuk (alexanderbez) [1]
@ -18,7 +19,8 @@ Draft
Currently, in the SDK, there is no convention to sign arbitrary messages like on Ethereum. We propose with this specification, for the Cosmos SDK ecosystem, a way to sign and validate off-chain arbitrary messages.
This specification serves the purpose of covering every use case; this means that Cosmos SDK application developers decide how to serialize and represent `Data` to users.
## Context
Having the ability to sign messages off-chain has proven to be a fundamental aspect of nearly any blockchain. The notion of signing messages off-chain has many added benefits, such as saving on computational costs and reducing transaction throughput and overhead. Within the context of the Cosmos, some of the major applications of signing such data include, but are not limited to, providing a cryptographically secure and verifiable means of proving validator identity and possibly associating it with some other framework or organization. In addition, there is the ability to sign Cosmos messages with a Ledger or similar HSM device.
@ -36,7 +38,7 @@ Cosmos SDK 0.40 also introduces a concept of “auth_info” this can specify SI
A spec should include an `auth_info` that supports SIGN_MODE_DIRECT and SIGN_MODE_LEGACY_AMINO.
We create the `offchain` proto definitions and extend the auth module with an `offchain` package to offer functionality to verify and sign off-chain messages.
An offchain transaction follows these rules:
- the memo must be empty
@ -51,10 +53,10 @@ The first message added to the `offchain` package is `MsgSignData`.
`MsgSignData` allows developers to sign arbitrary bytes valid off-chain only. Here `Signer` is the account address of the signer and `Data` is arbitrary bytes which can represent `text`, `files`, or `object`s. It is the application developers' decision how `Data` should be serialized and deserialized, and what object it can represent in their context.
It is the application developers' decision how `Data` should be treated; by treated we mean the serialization and deserialization process and the object `Data` should represent.
Proto definition:
```proto
// MsgSignData defines an arbitrary, general-purpose, off-chain message
message MsgSignData {
@ -64,7 +66,9 @@ message MsgSignData {
bytes Data = 2 [(gogoproto.jsontag) = "data"];
}
```
Signed MsgSignData json example:
```json
{
"type": "cosmos-sdk/StdTx",
@ -98,7 +102,7 @@ Signed MsgSignData json example:
## Consequences
There is a specification on how messages that are not meant to be broadcast to a live chain should be formed.
### Backwards Compatibility
@ -125,4 +129,4 @@ Backwards compatibility is maintained as this is a new message spec definition.
1. https://github.com/cosmos/ics/pull/33
2. https://github.com/cosmos/cosmos-sdk/pull/7727#discussion_r515668204
3. https://github.com/cosmos/cosmos-sdk/pull/7727#issuecomment-722478477
4. https://github.com/cosmos/cosmos-sdk/pull/7727#issuecomment-721062923
@ -36,6 +36,7 @@ type Vote struct {
```
And for backwards compatibility, we introduce `MsgVoteWeighted` while keeping `MsgVote`.
```
type MsgVote struct {
ProposalID int64
@ -51,6 +52,7 @@ type MsgVoteWeighted struct {
```
The `ValidateBasic` of a `MsgVoteWeighted` struct would require that (see the sketch after this list)
1. The sum of all the Rates is equal to 1.0
2. No Option is repeated
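A sketch of those two checks (assuming `Options` is a slice of option/rate pairs with `Rate` an `sdk.Dec`; all names are illustrative, not the final implementation):

```go
// hypothetical ValidateBasic implementing the two rules above
func (msg MsgVoteWeighted) ValidateBasic() error {
	totalRate := sdk.ZeroDec()
	seen := make(map[VoteOption]bool)
	for _, opt := range msg.Options {
		if seen[opt.Option] {
			return fmt.Errorf("duplicated vote option %v", opt.Option)
		}
		seen[opt.Option] = true
		totalRate = totalRate.Add(opt.Rate)
	}
	if !totalRate.Equal(sdk.OneDec()) {
		return fmt.Errorf("total rate %s is not equal to 1.0", totalRate)
	}
	return nil
}
```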
@ -69,16 +71,18 @@ tally() {
```
The CLI command for creating a multi-option vote would be as such:
```sh
simd tx gov vote 1 "yes=0.6,no=0.3,abstain=0.05,no_with_veto=0.05" --from mykey
```
To create a single-option vote a user can do either
```
simd tx gov vote 1 "yes=1" --from mykey
```
or
```sh
simd tx gov vote 1 yes --from mykey
@ -86,19 +90,22 @@ simd tx gov vote 1 yes --from mykey
to maintain backwards compatibility.
## Consequences
### Backwards Compatibility
- Previous VoteMsg types will remain the same and so clients will not have to update their procedure unless they want to support the WeightedVoteMsg feature.
- When querying a Vote struct from state, its structure will be different, and so clients wanting to display all voters and their respective votes will have to handle the new format and the fact that a single voter can have split votes.
- The result of querying the tally function should have the same API for clients.
### Positive
- Can make the voting process more accurate for addresses representing multiple stakeholders, often some of the largest addresses.
### Negative
- Is more complex than simple voting, and so may be harder to explain to users. However, this is mostly mitigated because the feature is opt-in.
### Neutral
- Relatively minor change to governance tally function.
@ -229,6 +229,7 @@ This service uses the same `StoreKVPairWriteListener` for every KVStore, writing
out to the same files, relying on the `StoreKey` field in the `StoreKVPair` protobuf message to later distinguish the source for each pair.
The file naming schema is as such:
* After every `BeginBlock` request a new file is created with the name `block-{N}-begin`, where N is the block number. All
subsequent state changes are written out to this file until the first `DeliverTx` request is received. At the head of these files,
the length-prefixed protobuf encoded `BeginBlock` request is written, and the response is written at the tail.
@ -398,38 +399,37 @@ func (app *BaseApp) RegisterHooks(s StreamingService) {
We will also modify the `BeginBlock`, `EndBlock`, and `DeliverTx` methods to pass ABCI requests and responses to any streaming service hooks registered
with the `BaseApp`.
```go
func (app *BaseApp) BeginBlock(req abci.RequestBeginBlock) (res abci.ResponseBeginBlock) {
...
// Call the streaming service hooks with the BeginBlock messages
for _, hook := range app.hooks {
hook.ListenBeginBlock(app.deliverState.ctx, req, res)
}
return res
}
```
```go
func (app *BaseApp) EndBlock(req abci.RequestEndBlock) (res abci.ResponseEndBlock) {
...
// Call the streaming service hooks with the EndBlock messages
for _, hook := range app.hooks {
hook.ListenEndBlock(app.deliverState.ctx, req, res)
}
return res
}
```
```go
func (app *BaseApp) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx {
...
gInfo, result, err := app.runTx(runTxModeDeliver, req.Tx)
@ -450,12 +450,12 @@ func (app *BaseApp) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx
Data: result.Data,
Events: sdk.MarkEventsToIndex(result.Events, app.indexEvents),
}
// Call the streaming service hooks with the DeliverTx messages
for _, hook := range app.hooks {
hook.ListenDeliverTx(app.deliverState.ctx, req, res)
}
return res
}
```
@ -544,7 +544,6 @@ func FileStreamingConstructor(opts servertypes.AppOptions, keys []sdk.StoreKey)
As a demonstration, we will implement the state watching features as part of SimApp.
For example, the below is a very rudimentary integration of the state listening features into the SimApp `AppCreator` function:
```go
func NewSimApp(
logger log.Logger, db dbm.DB, traceStore io.Writer, loadLatest bool, skipUpgradeHeights map[int64]bool,
@ -560,7 +559,7 @@ func NewSimApp(
govtypes.StoreKey, paramstypes.StoreKey, ibchost.StoreKey, upgradetypes.StoreKey,
evidencetypes.StoreKey, ibctransfertypes.StoreKey, capabilitytypes.StoreKey,
)
// configure state listening capabilities using AppOptions
listeners := cast.ToStringSlice(appOpts.Get("store.streamers"))
for _, listenerName := range listeners {
@ -590,7 +589,7 @@ func NewSimApp(
// kick off the background streaming service loop
streamingService.Stream(wg, quitChan) // maybe this should be done from inside BaseApp instead?
}
...
return app
@ -21,7 +21,7 @@ This ADR updates the proof of stake module to buffer the staking weight updates
The current proof of stake module takes the design decision to apply staking weight changes to the consensus engine immediately. This means that delegations and unbonds get applied immediately to the validator set. This decision was made primarily because it was the simplest to implement, and because we believed at the time that this would lead to better UX for clients.
An alternative design choice is to allow buffering staking updates (delegations, unbonds, validators joining) for a number of blocks. This 'epoch'd proof of stake consensus provides the guarantee that the consensus weights for validators will not change mid-epoch, except in the event of a slash condition.
Additionally, the UX hurdle may not be as significant as was previously thought. This is because it is possible to provide users immediate acknowledgement that their bond was recorded and will be executed.
@ -31,7 +31,7 @@ Furthermore, it has become clearer over time that immediate execution of staking
* Light client efficiency. This would lessen the overhead for IBC when there is high churn in the validator set. In the Tendermint light client bisection algorithm, the number of headers you need to verify is related to bounding the difference in validator sets between a trusted header and the latest header. If the difference is too great, you verify more headers in between the two. By limiting the frequency of validator set changes, we can reduce the worst-case size of IBC light client proofs, which occurs when a validator set has high churn.
* Fairness of deterministic leader election. Currently we have no way of reasoning about the fairness of deterministic leader election in the presence of staking changes without epochs (tendermint/spec#217). Breaking fairness of leader election is profitable for validators, as they earn additional rewards from being the proposer. Adding epochs at least makes it easier for our deterministic leader election to match something we can prove secure. (Albeit, we still haven't proven whether our current algorithm is fair with > 2 validators in the presence of stake changes.)
* Staking derivative design. Currently, reward distribution is done lazily using the F1 fee distribution. While saving computational complexity, lazy accounting requires a more stateful staking implementation. Right now, each delegation entry has to track the time of last withdrawal. Handling this can be a challenge for some staking derivatives designs that seek to provide fungibility for all tokens staked to a single validator. Force-withdrawing rewards to users can help solve this, however it is infeasible to force-withdraw rewards to users on a per block basis. With epochs, a chain could more easily alter the design to have rewards be forcefully withdrawn (iterating over delegator accounts only once per-epoch), and can thus remove delegation timing from state. This may be useful for certain staking derivative designs.
@ -58,7 +58,6 @@ For threshold based cryptography in particular, we need a pipeline for epoch cha
This can be handled by making a parameter for the epoch pipeline length. This parameter should not be alterable except during hard forks, to mitigate implementation complexity of switching the pipeline length.
With pipeline length 1, if I redelegate during epoch N, then my redelegation is applied prior to the beginning of epoch N+1.
With pipeline length 2, if I redelegate during epoch N, then my redelegation is applied prior to the beginning of epoch N+2.
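In other words, the activation epoch is simply the submission epoch plus the pipeline length (a trivial sketch; the function name is illustrative):

```go
// activationEpoch: a staking change submitted during epoch n takes effect
// at the start of epoch n+p, for pipeline length p
func activationEpoch(n, p uint64) uint64 {
	return n + p
}
```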
@ -74,11 +73,12 @@ Until an ABCI mechanism for variable block times is introduced, it is ill-advise
## Decision
__Step-1__: Implement buffering of all staking and slashing messages.
First we create a pool, called the `EpochDelegationPool`, for storing tokens that are being bonded but should be applied at the epoch boundary. Then, we have two separate queues, one for staking, one for slashing. We describe what happens on each message being delivered below:
### Staking messages
- **MsgCreateValidator**: Move the user's self-bond to the `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the self-bond, taking the funds from the `EpochDelegationPool`. If epoch execution fails, return the funds from the `EpochDelegationPool` to the user's account.
- **MsgEditValidator**: Validate the message and, if valid, queue the message for execution at the end of the epoch.
- **MsgDelegate**: Move the user's funds to the `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the delegation, taking the funds from the `EpochDelegationPool`. If epoch execution fails, return the funds from the `EpochDelegationPool` to the user's account.
@ -86,18 +86,19 @@ First we create a pool for storing tokens that are being bonded, but should be a
- **MsgUndelegate**: Validate message and if valid queue the message for execution at the end of the Epoch.
### Slashing messages
- **MsgUnjail**: Validate message and if valid queue the message for execution at the end of the Epoch.
- **Slash Event**: Whenever a slash event is created, it gets queued in the slashing module to apply at the end of the epoch. The queues should be set up such that this slash applies immediately.
### Evidence Messages
- **MsgSubmitEvidence**: This gets executed immediately, and the validator gets jailed immediately. However in slashing, the actual slash event gets queued.
Then we add methods to the end blockers, to ensure that at the epoch boundary the queues are cleared and delegation updates are applied.
__Step-2__: Implement querying of queued staking txs.
When querying the staking activity of a given address, the status should return not only the amount of tokens staked, but also whether there are any queued stake events for that address. This will require more work to be done in the querying logic, to trace the queued upcoming staking events.
As an initial implementation, this can be implemented as a linear search over all queued staking events. However, chains that need long epochs should eventually build additional support so that nodes which support querying can produce results in constant time. (This is doable by maintaining an auxiliary hashmap indexing upcoming staking events by address.)
@ -8,12 +8,10 @@
DRAFT Not Implemented
## Abstract
Sparse Merkle Tree ([SMT](https://osf.io/8mcnh/)) is a version of a Merkle Tree with various storage and performance optimizations. This ADR defines a separation of state commitments from data storage and the SDK transition from IAVL to SMT.
## Context
Currently, Cosmos SDK uses IAVL for both state [commitments](https://cryptography.fandom.com/wiki/Commitment_scheme) and data storage.
@ -30,7 +28,6 @@ In the current design, IAVL is used for both data storage and as a Merkle Tree f
Moreover, the IAVL project lacks support and a maintainer, and we already see better, well-established alternatives. Instead of optimizing the IAVL, we are looking into other solutions for both storage and state commitments.
## Decision
We propose to separate the concerns of state commitment (**SC**), needed for consensus, and state storage (**SS**), needed for the state machine. Finally, we replace IAVL with [LazyLedger's SMT](https://github.com/lazyledger/smt). LazyLedger SMT is based on the Diem design (called jellyfish) [*] - it uses a compute-optimised SMT by replacing subtrees that consist only of default values with a single node (the same approach is used by Ethereum2) and implements compact proofs.
@ -39,7 +36,6 @@ The storage model presented here doesn't deal with data structure nor serializat
### Decouple state commitment from storage
Separation of storage and commitment (by the SMT) will allow the optimization of different components according to their usage and access patterns.
`SC` (SMT) is used to commit to data and compute Merkle proofs. `SS` is used to directly access data. To avoid collisions, both `SS` and `SC` will use a separate storage namespace (they could use the same database underneath). `SS` will store each `(key, value)` pair directly (map key -> value).
@ -47,16 +43,17 @@ Separation of storage and commitment (by the SMT) will allow the optimization of
SMT is a merkle tree structure: we don't store keys directly. For every `(key, value)` pair, `hash(key)` is stored in a path (we hash the key to evenly distribute keys in the tree) and `hash(key, value)` in a leaf. Since we don't know the structure of a value (in particular whether it contains the key), we hash both the key and the value in the `SC` leaf.
For data access we propose 2 additional KV buckets (namespaces for the key-value pairs, sometimes called [column family](https://github.com/facebook/rocksdb/wiki/Terminology)):
1. B1: `key → value`: the principal object storage, used by a state machine, behind the SDK `KVStore` interface: provides direct access by key and allows prefix iteration (KV DB backend must support it).
2. B2: `hash(key, value) → key`: a reverse index to get a key from an SMT path. Recall that SMT will store `(k, v)` as `(hash(k), hash(key, value))`. So, we can get an object value by composing `SMT_path → B2 → B1`.
3. We could use more buckets to optimize app usage if needed.
Above, we propose to use a KV DB. However, for the state machine, we could use an RDBMS, which we discuss below.
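To illustrate the lookup composition `SMT_path → B2 → B1` described above, here is a minimal Go sketch (`KVStore` stands for any key-value bucket interface and is not a concrete SDK API):

```go
// Sketch only: resolve an object from its SMT leaf digest.
func resolve(b1, b2 KVStore, leafDigest []byte) ([]byte, error) {
	key, err := b2.Get(leafDigest) // B2: hash(key, value) -> key
	if err != nil {
		return nil, err
	}
	return b1.Get(key) // B1: key -> value
}
```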
### Requirements
State Storage requirements:
+ range queries
+ quick (key, value) access
+ creating a snapshot
@ -64,16 +61,15 @@ State Storage requirements:
+ pruning (garbage collection)
State Commitment requirements:
+ fast updates
+ tree path should be short
+ pruning (garbage collection)
### LazyLedger SMT for State Commitment
A Sparse Merkle tree is based on the idea of a complete Merkle tree of an intractable size. The assumption here is that as the size of the tree is intractable, there would only be a few leaf nodes with valid data blocks relative to the tree size, rendering a sparse tree.
### Snapshots for storage sync and state versioning
Below, with simple _snapshot_ we refer to a database snapshot mechanism, not to an _ABCI snapshot sync_. The latter will be referred to as _snapshot sync_ (which will directly use the DB snapshot as described below).
@ -93,7 +89,6 @@ Pruning old snapshots is effectively done by a database. Whenever we update a re
To manage the active snapshots we will either use a DB _max number of snapshots_ option (if available), or will remove snapshots in the `EndBlocker`. The latter option can be done efficiently by identifying snapshots with block height.
#### Accessing old state versions
One of the functional requirements is to access old state. This is done through `abci.Query` structure. The version is specified by a block height (so we query for an object by a key `K` at block height `H`). The number of old versions supported for `abci.Query` is configurable. Accessing an old state is done by using available snapshots.
@ -103,25 +98,20 @@ Moreover, SDK could provide a way to directly access the state. However, a state
We positively [validated](https://github.com/cosmos/cosmos-sdk/discussions/8297) a versioning and snapshot mechanism for querying old state with regard to the database we evaluated.
### State Proofs
For any object stored in State Storage (SS), we have a corresponding object in `SC`. A proof for an object `V` identified by a key `K` is a branch of `SC`, where the path corresponds to the key `hash(K)`, and the leaf is `hash(K, V)`.
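A minimal illustration of how the path and leaf are derived for an object (a sketch assuming `crypto/sha256` is imported; the ADR does not fix a hash function here):

```go
// Sketch only: compute the SMT path and leaf for an object V stored under key K.
func pathAndLeaf(key, value []byte) (path, leaf []byte) {
	p := sha256.Sum256(key) // hash(K): the object's position in the tree
	h := sha256.New()
	h.Write(key) // hash(K, V): the leaf content the proof commits to
	h.Write(value)
	return p[:], h.Sum(nil)
}
```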
### Rollbacks
We need to be able to process transactions and roll back state updates if a transaction fails. This can be done in the following way: during transaction processing, we keep all state change requests (writes) in a `CacheWrapper` abstraction (as it's done today). Once we finish the block processing, in the `EndBlocker`, we commit a root store - at that time, all changes are written to the SMT and to the `SS` and a snapshot is created.
### Committing to an object without saving it
We identified use-cases where modules will need to save an object commitment without storing the object itself. Sometimes clients are receiving complex objects, and they have no way to prove the correctness of that object without knowing the storage layout. For those use cases, it would be easier to commit to the object without storing it directly.
## Consequences
### Backwards Compatibility
This ADR doesn't introduce any SDK level API changes.
@ -143,14 +133,12 @@ We change the storage layout of the state machine, a storage hard fork and netwo
+ Deprecating IAVL, which is one of the core proposals of the Cosmos Whitepaper.
## Alternative designs
Most of the alternative designs were evaluated in [state commitments and storage report](https://paper.dropbox.com/published/State-commitments-and-storage-review--BDvA1MLwRtOx55KRihJ5xxLbBw-KeEB7eOd11pNrZvVtqUgL3h).
Ethereum research published [Verkle Trie](https://notes.ethereum.org/_N1mutVERDKtqGIEYc-Flw#fnref1) - an idea of combining polynomial commitments with a merkle tree in order to reduce the tree height. This concept has very good potential, but we think it's too early to implement it. The current, SMT-based design could easily be updated to a Verkle Trie once the research community implements all the necessary libraries. The main advantage of the design described in this ADR is the separation of state commitments from the data storage and designing a more powerful interface.
## Further Discussions
### Evaluated KV Databases
@ -165,7 +153,6 @@ Use of RDBMS instead of simple KV store for state. Use of RDBMS will require an
We discussed a use case where modules can use a support database that is not automatically committed. The module will be responsible for having a sound storage model and can optionally use the feature discussed in the _Committing to an object without saving it_ section.
## References
+ [IAVL What's Next?](https://github.com/cosmos/cosmos-sdk/issues/7100)
@ -84,6 +84,7 @@ We introduce a new prefix store in `x/upgrade`'s store. This store will track ea
```
0x2 | {bytes(module_name)} => BigEndian(module_consensus_version)
```
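As a sketch, writing one entry with this layout could look as follows (assuming `encoding/binary` is imported; the helper name is hypothetical and `store` is the `x/upgrade` prefix store):

```go
// Sketch only: persist one module's consensus version under the 0x2 prefix.
func setModuleVersion(store sdk.KVStore, moduleName string, version uint64) {
	key := append([]byte{0x2}, []byte(moduleName)...)
	value := make([]byte, 8)
	binary.BigEndian.PutUint64(value, version)
	store.Set(key, value)
}
```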
The initial state of the store is set from `app.go`'s `InitChainer` method.
The UpgradeHandler signature needs to be updated to take a `VersionMap`, as well as return an upgraded `VersionMap` and an error:
@ -118,7 +119,7 @@ Once all the migration handlers are registered inside the configurator (which ha
- Get the old ConsensusVersion of the module from its `VersionMap` argument (let's call it `M`).
- Fetch the new ConsensusVersion of the module from the `ConsensusVersion()` method on `AppModule` (call it `N`).
- If `N>M`, run all registered migrations for the module sequentially `M -> M+1 -> M+2...` until `N`.
- There is a special case where there is no ConsensusVersion for the module, as this means that the module has been newly added during the upgrade. In this case, no migration function is run, and the module's current ConsensusVersion is saved to `x/upgrade`'s store.
If a required migration is missing (e.g. if it has not been registered in the `Configurator`), then the `RunMigrations` function will error.
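The per-module logic described above amounts to the following loop (a sketch assuming `fmt` is imported and a `migrations` map from module and version to handler; the real logic lives in the `Configurator`'s `RunMigrations`):

```go
// Sketch only: run the registered migrations of one module from version m
// (the stored ConsensusVersion) up to n (the module's current one).
func runModuleMigrations(ctx sdk.Context, module string, m, n uint64,
	migrations map[string]map[uint64]func(sdk.Context) error) error {
	for v := m; v < n; v++ {
		handler, ok := migrations[module][v]
		if !ok {
			return fmt.Errorf("no migration registered for module %s from version %d", module, v)
		}
		if err := handler(ctx); err != nil {
			return err
		}
	}
	return nil
}
```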
@ -15,6 +15,7 @@ This ADR defines the `x/group` module which allows the creation and management o
## Context
The legacy amino multi-signature mechanism of the Cosmos SDK has certain limitations:
- Key rotation is not possible, although this can be solved with [account rekeying](adr-034-account-rekeying.md).
- Thresholds can't be changed.
- UX is cumbersome for non-technical users ([#5661](https://github.com/cosmos/cosmos-sdk/issues/5661)).
@ -47,7 +48,7 @@ message GroupInfo {
// admin is the account address of the group's admin.
string admin = 2;
// metadata is any arbitrary metadata attached to the group.
bytes metadata = 3;
@ -78,10 +79,10 @@ message Member {
// address is the member's account address.
string address = 1;
// weight is the member's voting weight that should be greater than 0.
string weight = 2;
// metadata is any arbitrary metadata attached to the member.
bytes metadata = 3;
}
@ -106,13 +107,13 @@ message GroupAccountInfo {
// address is the group account address.
string address = 1;
// group_id is the ID of the Group the GroupAccount belongs to.
uint64 group_id = 2;
// admin is the account address of the group admin.
string admin = 3;
// metadata is any arbitrary metadata of this group account.
bytes metadata = 4;
@ -132,13 +133,13 @@ For instance, a group admin could be another group account which could "elects"
### Decision Policy
A decision policy is the mechanism by which members of a group can vote on
proposals.
All decision policies should have a minimum and maximum voting window.
The minimum voting window is the minimum duration that must pass in order
for a proposal to potentially pass, and it may be set to 0. The maximum voting
window is the maximum time that a proposal may be voted on and executed if
it reached enough support before it is closed.
Both of these values must be less than a chain-wide max voting window parameter.
@ -171,7 +172,7 @@ message ThresholdDecisionPolicy {
// threshold is the minimum weighted sum of support votes for a proposal to succeed.
string threshold = 1;
// voting_period is the duration from submission of a proposal to the end of voting period
// Within this period, votes and exec messages can be submitted.
google.protobuf.Duration voting_period = 2 [(gogoproto.nullable) = false];
@ -185,6 +186,7 @@ A proposal consists of a set of `sdk.Msg`s that will be executed if the proposal
passes, as well as any metadata associated with the proposal. These `sdk.Msg`s get validated as part of the `Msg/CreateProposal` request validation. They should also have their signer set as the group account.
Internally, a proposal also tracks:
- its current `Status`: submitted, closed or aborted
- its `Result`: unfinalized, accepted or rejected
- its `VoteState` in the form of a `Tally`, which is calculated on new votes and when executing the proposal.
@ -196,13 +198,13 @@ message Tally {
// yes_count is the weighted sum of yes votes.
string yes_count = 1;
// no_count is the weighted sum of no votes.
string no_count = 2;
// abstain_count is the weighted sum of abstainers.
string abstain_count = 3;
// veto_count is the weighted sum of vetoes.
string veto_count = 4;
}
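For intuition, a threshold policy could evaluate such a tally roughly as follows (a Go sketch with plain integers and a `time` import assumed; the actual module uses decimal strings and tracks richer state):

```go
// Sketch only: a proposal under ThresholdDecisionPolicy is accepted once the
// weighted sum of yes votes reaches the threshold within the voting period.
func accepted(yesCount, threshold uint64, submitted, now time.Time, votingPeriod time.Duration) bool {
	withinWindow := !now.After(submitted.Add(votingPeriod))
	return withinWindow && yesCount >= threshold
}
```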
@ -265,13 +267,13 @@ Inter-module communication introduced by [ADR-033](adr-033-protobuf-inter-module
- Convergence of `x/group` and `x/gov` as both support proposals and voting: https://github.com/cosmos/cosmos-sdk/discussions/9066
- `x/group` possible future improvements:
    - Execute proposals on submission (https://github.com/regen-network/regen-ledger/issues/288)
    - Withdraw a proposal (https://github.com/regen-network/cosmos-modules/issues/41)
    - Make `Tally` more flexible and support non-binary choices
## References
- Initial specification:
    - https://gist.github.com/aaronc/b60628017352df5983791cad30babe56#group-module
    - [#5236](https://github.com/cosmos/cosmos-sdk/pull/5236)
- Proposal to add `x/group` into the SDK: [#7633](https://github.com/cosmos/cosmos-sdk/issues/7633)
@ -11,35 +11,29 @@
> Please have a look at the [PROCESS](./PROCESS.md#adr-status) page.
> Use DRAFT if the ADR is in a draft stage (draft PR) or PROPOSED if it's in review.
## Abstract
> "If you can't explain it simply, you don't understand it well enough." Provide a simplified and layman-accessible explanation of the ADR.
> A short (~200 word) description of the issue being addressed.
## Context
> This section describes the forces at play, including technological, political, social, and project local. These forces are probably in tension, and should be called out as such. The language in this section is value-neutral. It is simply describing facts. It should clearly explain the problem and motivation that the proposal aims to resolve.
> {context body}
## Decision
> This section describes our response to these forces. It is stated in full sentences, with active voice. "We will ..."
> {decision body}
## Consequences
> This section describes the resulting context, after applying the decision. All consequences should be listed here, not just the "positive" ones. A particular decision may have positive, negative, and neutral consequences, but all of them affect the team and project in the future.
### Backwards Compatibility
> All ADRs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The ADR must explain how the author proposes to deal with these incompatibilities. ADR submissions without a sufficient backwards compatibility treatise may be rejected outright.
### Positive
{positive consequences}
@ -52,18 +46,15 @@
{neutral consequences}
## Further Discussions
While an ADR is in the DRAFT or PROPOSED stage, this section should contain a summary of issues to be solved in future iterations (usually referencing comments from a pull-request discussion).
Later, this section can optionally list ideas or improvements the author or reviewers found during the analysis of this ADR.
## Test Cases [optional]
Test cases for an implementation are mandatory for ADRs that affect consensus changes. Other ADRs can choose to include links to test cases if applicable.
## References
- {reference link}
@ -58,7 +58,6 @@ For HD key derivation the Cosmos SDK uses a standard called [BIP32](https://gith
In the Cosmos SDK, keys are stored and managed by using an object called a [`Keyring`](#keyring).
## Keys, accounts, addresses, and signatures
The principal way of authenticating a user is through [digital signatures](https://en.wikipedia.org/wiki/Digital_signature). Users sign transactions using their own private key. Signature verification is done with the associated public key. For on-chain signature verification purposes, we store the public key in an `Account` object (alongside other data required for proper transaction validation).
@ -71,7 +70,6 @@ The Cosmos SDK supports the following digital key schemes for creating digital s
- `secp256r1`, as implemented in the [SDK's `crypto/keys/secp256r1` package](https://github.com/cosmos/cosmos-sdk/blob/master/crypto/keys/secp256r1/pubkey.go),
- `tm-ed25519`, as implemented in the [SDK `crypto/keys/ed25519` package](https://github.com/cosmos/cosmos-sdk/blob/v0.42.1/crypto/keys/ed25519/ed25519.go). This scheme is supported only for consensus validation.
| | Address length in bytes | Public key length in bytes | Used for transaction authentication | Used for consensus (tendermint) |
| --- | --- | --- | --- | --- |
@ -79,8 +77,6 @@ The Cosmos SDK supports the following digital key schemes for creating digital s
| `secp256r1` | 32 | 33 | yes | no |
| `tm-ed25519` | -- not used -- | 32 | no | yes |
## Addresses
`Addresses` and `PubKey`s are both public information that identifies actors in the application. `Account` is used to store authentication information. The basic account implementation is provided by a `BaseAccount` object.
@ -108,14 +104,12 @@ For user interaction, addresses are formatted using [Bech32](https://en.bitcoin.
+++ https://github.com/cosmos/cosmos-sdk/blob/v0.42.1/types/address.go#L230-L244
| | Address Bech32 Prefix |
| ------------------ | --------------------- |
| Accounts | cosmos |
| Validator Operator | cosmosvaloper |
| Consensus Nodes | cosmosvalcons |
### Public Keys
Public keys in Cosmos SDK are defined by `cryptotypes.PubKey` interface. Since public keys are saved in a store, `cryptotypes.PubKey` extends the `proto.Message` interface:
@ -123,6 +117,7 @@ Public keys in Cosmos SDK are defined by `cryptotypes.PubKey` interface. Since p
+++ https://github.com/cosmos/cosmos-sdk/blob/v0.42.1/crypto/types/types.go#L8-L17
A compressed format is used for `secp256k1` and `secp256r1` serialization.
- The first byte is a `0x02` byte if the `y`-coordinate is the lexicographically largest of the two associated with the `x`-coordinate.
- Otherwise the first byte is a `0x03`.
@ -133,7 +128,6 @@ For user interactions, `PubKey` is formatted using Protobufs JSON ([ProtoMarshal
+++ https://github.com/cosmos/cosmos-sdk/blob/7568b66/crypto/keyring/output.go#L23-L39
## Keyring
A `Keyring` is an object that stores and manages accounts. In the Cosmos SDK, a `Keyring` implementation follows the `Keyring` interface:
@ -154,7 +148,6 @@ A few notes on the `Keyring` methods:
- `ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error)` exports a private key in ASCII-armored encrypted format using the given passphrase. You can then either import the private key again into the keyring using the `ImportPrivKey(uid, armor, passphrase string)` function or decrypt it into a raw private key using the `UnarmorDecryptPrivKey(armorStr string, passphrase string)` function.
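Using the signatures quoted above, an export/import round trip might look like the following sketch (`kr` is any `Keyring` implementation; error handling abbreviated):

```go
// Sketch only: export the key "alice" in ASCII-armored encrypted form,
// then re-import it under a new uid with the same passphrase.
armor, err := kr.ExportPrivKeyArmor("alice", "passphrase")
if err != nil {
	return err
}
if err := kr.ImportPrivKey("alice-restored", armor, "passphrase"); err != nil {
	return err
}
```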
## Next {hide}
Learn about [gas and fees](./gas-fees.md) {hide}
@ -71,9 +71,9 @@ Here are the main actions performed by this function:
- With the module manager, register the [application's modules' invariants](../building-modules/invariants.md). Invariants are variables (e.g. total supply of a token) that are evaluated at the end of each block. The process of checking invariants is done via a special module called the [`InvariantsRegistry`](../building-modules/invariants.md#invariant-registry). The value of the invariant should be equal to a predicted value defined in the module. Should the value be different than the predicted one, special logic defined in the invariant registry will be triggered (usually the chain is halted). This is useful to make sure no critical bug goes unnoticed and produces long-lasting effects that would be hard to fix.
- With the module manager, set the order of execution between the `InitGenesis`, `BeginBlocker` and `EndBlocker` functions of each of the [application's modules](#application-module-interface). Note that not all modules implement these functions.
- Set the remainder of the application's parameters:
    - [`InitChainer`](#initchainer): used to initialize the application when it is first started.
    - [`BeginBlocker`, `EndBlocker`](#beginblocker-and-endblocker): called at the beginning and at the end of every block.
    - [`anteHandler`](../core/baseapp.md#antehandler): used to handle fees and signature verification.
- Mount the stores.
- Return the application.
@ -114,8 +114,8 @@ The `EncodingConfig` structure is the last important part of the `app.go` file.
Here are descriptions of what each of the four fields means:
- `InterfaceRegistry`: The `InterfaceRegistry` is used by the Protobuf codec to handle interfaces that are encoded and decoded (we also say "unpacked") using [`google.protobuf.Any`](https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/any.proto). `Any` could be thought of as a struct that contains a `type_url` (name of a concrete type implementing the interface) and a `value` (its encoded bytes). `InterfaceRegistry` provides a mechanism for registering interfaces and implementations that can be safely unpacked from `Any`. Each of the application's modules implements the `RegisterInterfaces` method that can be used to register the module's own interfaces and implementations.
    - You can read more about Any in [ADR-19](../architecture/adr-019-protobuf-state-encoding.md#usage-of-any-to-encode-interfaces).
    - To go more into details, the SDK uses an implementation of the Protobuf specification called [`gogoprotobuf`](https://github.com/gogo/protobuf). By default, the [gogo protobuf implementation of `Any`](https://godoc.org/github.com/gogo/protobuf/types) uses [global type registration](https://github.com/gogo/protobuf/blob/master/proto/properties.go#L540) to decode values packed in `Any` into concrete Go types. This introduces a vulnerability where any malicious module in the dependency tree could register a type with the global protobuf registry and cause it to be loaded and unmarshaled by a transaction that referenced it in the `type_url` field. For more information, please refer to [ADR-019](../architecture/adr-019-protobuf-state-encoding.md).
- `Marshaler`: the default codec used throughout the SDK. It is composed of a `BinaryCodec` used to encode and decode state, and a `JSONCodec` used to output data to the users (for example in the [CLI](#cli)). By default, the SDK uses Protobuf as `Marshaler`.
- `TxConfig`: `TxConfig` defines an interface a client can utilize to generate an application-defined concrete transaction type. Currently, the SDK handles two transaction types: `SIGN_MODE_DIRECT` (which uses Protobuf binary as over-the-wire encoding) and `SIGN_MODE_LEGACY_AMINO_JSON` (which depends on Amino). Read more about transactions [here](../core/transactions.md).
- `Amino`: Some legacy parts of the SDK still use Amino for backwards-compatibility. Each module exposes a `RegisterLegacyAmino` method to register the module's specific types within Amino. This `Amino` codec should not be used by app developers anymore, and will be removed in future releases.
@ -255,7 +255,7 @@ This section is optional, as developers are free to choose their dependency mana
+++ https://github.com/cosmos/sdk-tutorials/blob/c6754a1e313eb1ed973c5c91dcc606f2fd288811/go.mod#L1-L18
For building the application, a [Makefile](https://en.wikipedia.org/wiki/Makefile) is generally used. The Makefile primarily ensures that the `go.mod` is run before building the two entrypoints to the application, [`appd`](#node-client) and [`appd`](#application-interface). See an example of a Makefile from the [nameservice tutorial](https://tutorials.cosmos.network/nameservice/tutorial/00-intro.html)
+++ https://github.com/cosmos/sdk-tutorials/blob/86a27321cf89cc637581762e953d0c07f8c78ece/nameservice/Makefile
@ -12,19 +12,19 @@ order: 6
## BeginBlocker and EndBlocker
`BeginBlocker` and `EndBlocker` are a way for module developers to add automatic execution of logic to their module. This is a powerful tool that should be used carefully, as complex automatic functions can slow down or even halt the chain.
When needed, `BeginBlocker` and `EndBlocker` are implemented as part of the [`AppModule` interface](./module-manager.md#appmodule). The `BeginBlock` and `EndBlock` methods of the interface implemented in `module.go` generally defer to `BeginBlocker` and `EndBlocker` methods respectively, which are usually implemented in `abci.go`.
The actual implementations of `BeginBlocker` and `EndBlocker` in `abci.go` are very similar to that of a [`Msg` service](./msg-services.md):
- They generally use the [`keeper`](./keeper.md) and [`ctx`](../core/context.md) to retrieve information about the latest state.
- If needed, they use the `keeper` and `ctx` to trigger state-transitions.
- If needed, they can emit [`events`](../core/events.md) via the `ctx`'s `EventManager`.
A specificity of the `EndBlocker` is that it can return validator updates to the underlying consensus engine in the form of an [`[]abci.ValidatorUpdates`](https://tendermint.com/docs/app-dev/abci-spec.html#validatorupdate). This is the preferred way to implement custom validator changes.
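As a sketch, a module `EndBlocker` in `abci.go` returning such updates might look like this (`BlockValidatorUpdates` is patterned on the `staking` keeper method of that name; most modules simply return nothing):

```go
// Sketch only: apply end-of-block state transitions and report validator
// set changes to the underlying consensus engine.
func EndBlocker(ctx sdk.Context, k keeper.Keeper) []abci.ValidatorUpdate {
	return k.BlockValidatorUpdates(ctx)
}
```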
It is possible for developers to define the order of execution between the `BeginBlocker`/`EndBlocker` functions of each of their application's modules via the module's manager `SetOrderBeginBlocker`/`SetOrderEndBlocker` methods. For more on the module manager, click [here](./module-manager.md#manager).
See an example implementation of `BeginBlocker` from the `distr` module:
@ -11,15 +11,15 @@ Modules generally handle a subset of the state and, as such, they need to define
- [Module Manager](./module-manager.md) {prereq}
- [Keepers](./keeper.md) {prereq}
## Type Definition
The subset of the genesis state defined by a given module is generally defined in a `genesis.proto` file ([more info](../core/encoding.md#gogoproto) on how to define protobuf messages). The struct defining the module's subset of the genesis state is usually called `GenesisState` and contains all the module-related values that need to be initialized during the genesis process.
See an example of `GenesisState` protobuf message definition from the `auth` module:
+++ https://github.com/cosmos/cosmos-sdk/blob/a9547b54ffac9729fe1393651126ddfc0d236cff/proto/cosmos/auth/v1beta1/genesis.proto
Next we present the main genesis-related methods that need to be implemented by module developers in order for their module to be used in Cosmos SDK applications.
### `DefaultGenesis`
@ -39,7 +39,7 @@ Other than the methods related directly to `GenesisState`, module developers are
### `InitGenesis`
The `InitGenesis` method is executed during [`InitChain`](../core/baseapp.md#initchain) when the application is first started. Given a `GenesisState`, it initializes the subset of the state managed by the module by using the module's [`keeper`](./keeper.md) setter function on each parameter within the `GenesisState`.
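A sketch of the pattern (the `GenesisState` fields and keeper setters are illustrative, not a specific module's API):

```go
// Sketch only: seed the module's store from its GenesisState.
func InitGenesis(ctx sdk.Context, k keeper.Keeper, data types.GenesisState) {
	k.SetParams(ctx, data.Params)
	for _, acc := range data.Accounts {
		k.SetAccount(ctx, acc)
	}
}
```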
The [module manager](./module-manager.md#manager) of the application is responsible for calling the `InitGenesis` method of each of the application's modules in order. This order is set by the application developer via the manager's `SetOrderGenesisMethod`, which is called in the [application's constructor function](../basics/app-anatomy.md#constructor-function).
@ -49,7 +49,7 @@ See an example of `InitGenesis` from the `auth` module:
### `ExportGenesis`
The `ExportGenesis` method is executed whenever an export of the state is made. It takes the latest known version of the subset of the state managed by the module and creates a new `GenesisState` out of it. This is mainly used when the chain needs to be upgraded via a hard fork.
See an example of `ExportGenesis` from the `auth` module.
@ -57,4 +57,4 @@ See an example of `ExportGenesis` from the `auth` module.
## Next {hide}
Learn about [modules interfaces](module-interfaces.md) {hide}
@ -16,7 +16,7 @@ An `Invariant` is a function that checks for a particular invariant within a mod
+++ https://github.com/cosmos/cosmos-sdk/blob/7d7821b9af132b0f6131640195326aa02b6751db/types/invariant.go#L9
The `string` return value is the invariant message, which can be used when printing logs, and the `bool` return value is the actual result of the invariant check.
In practice, each module implements `Invariant`s in a `./keeper/invariants.go` file within the module's folder. The standard is to implement one `Invariant` function per logical grouping of invariants with the following model:
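A sketch of that conventional model (patterned on the `staking` module; the actual state check below is a placeholder):

```go
// Sketch only: a single Invariant reporting whether a module property holds.
func ExampleInvariant(k Keeper) sdk.Invariant {
	return func(ctx sdk.Context) (string, bool) {
		broken := false // placeholder: inspect module state via k here
		msg := sdk.FormatInvariant(types.ModuleName, "example",
			"description of the violated property")
		return msg, broken
	}
}
```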
@ -51,7 +51,6 @@ func AllInvariants(k Keeper) sdk.Invariant {
Finally, module developers need to implement the `RegisterInvariants` method as part of the [`AppModule` interface](./module-manager.md#appmodule). Indeed, the `RegisterInvariants` method of the module, implemented in the `module/module.go` file, typically only defers the call to a `RegisterInvariants` method implemented in the `keeper/invariants.go` file. The `RegisterInvariants` method registers a route for each `Invariant` function in the [`InvariantRegistry`](#invariant-registry):
```go
// RegisterInvariants registers all staking invariants
func RegisterInvariants(ir sdk.InvariantRegistry, k Keeper) {
@ -62,13 +61,13 @@ func RegisterInvariants(ir sdk.InvariantRegistry, k Keeper) {
}
```
For more, see an example of [`Invariant`s implementation from the `staking` module](https://github.com/cosmos/cosmos-sdk/blob/7d7821b9af132b0f6131640195326aa02b6751db/x/staking/keeper/invariants.go).
## Invariant Registry
The `InvariantRegistry` is a registry where the `Invariant`s of all the modules of an application are registered. There is only one `InvariantRegistry` per **application**, meaning module developers need not implement their own `InvariantRegistry` when building a module. **All module developers need to do is register their modules' invariants in the `InvariantRegistry`, as explained in the section above**. The rest of this section gives more information on the `InvariantRegistry` itself, and does not contain anything directly relevant to module developers.
At its core, the `InvariantRegistry` is defined in the SDK as an interface:
+++ https://github.com/cosmos/cosmos-sdk/blob/7d7821b9af132b0f6131640195326aa02b6751db/types/invariant.go#L14-L17
@ -12,13 +12,13 @@ order: 7
## Motivation
The Cosmos SDK is a framework that makes it easy for developers to build complex decentralised applications from scratch, mainly by composing modules together. As the ecosystem of open source modules for the Cosmos SDK expands, it will become increasingly likely that some of these modules contain vulnerabilities, as a result of the negligence or malice of their developer.
The Cosmos SDK adopts an [object-capabilities-based approach](../core/ocap.md) to help developers better protect their application from unwanted inter-module interactions, and `keeper`s are at the core of this approach. A `keeper` can be thought of quite literally as the gatekeeper of a module's store(s). Each store (typically an [`IAVL` Store](../core/store.md#iavl-store)) defined within a module comes with a `storeKey`, which grants unlimited access to it. The module's `keeper` holds this `storeKey` (which should otherwise remain unexposed), and defines [methods](#implementing-methods) for reading and writing to the store(s).
The core idea behind the object-capabilities approach is to only reveal what is necessary to get the work done. In practice, this means that instead of handling permissions of modules through access-control lists, module `keeper`s are passed a reference to the specific instance of the other modules' `keeper`s that they need to access (this is done in the [application's constructor function](../basics/app-anatomy.md#constructor-function)). As a consequence, a module can only interact with the subset of state defined in another module via the methods exposed by the instance of the other module's `keeper`. This is a great way for developers to control the interactions that their own module can have with modules developed by external developers.
## Type Definition
`keeper`s are generally implemented in a `/keeper/keeper.go` file located in the module's folder. By convention, the type `keeper` of a module is simply named `Keeper` and usually adheres to the following structure:
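Here is a minimal sketch of that conventional structure (field names and the expected keeper are illustrative):

```go
// Sketch only: the gatekeeper of the module's store(s).
type Keeper struct {
	storeKey   sdk.StoreKey      // unexposed key to the module's store(s)
	cdc        codec.BinaryCodec // codec for (un)marshalling stored values
	bankKeeper types.BankKeeper  // expected keeper, declared as an interface
}
```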
@ -38,17 +38,17 @@ For example, here is the type definition of the `keeper` from the `staking` modu
Let us go through the different parameters:
- An expected `keeper` is a `keeper` external to a module that is required by the internal `keeper` of said module. External `keeper`s are listed in the internal `keeper`'s type definition as interfaces. These interfaces are themselves defined in an `expected_keepers.go` file in the root of the module's folder. In this context, interfaces are used to reduce the number of dependencies, as well as to facilitate the maintenance of the module itself.
- `storeKey`s grant access to the store(s) of the [multistore](../core/store.md) managed by the module. They should always remain unexposed to external modules.
- `cdc` is the [codec](../core/encoding.md) used to marshal and unmarshal structs to/from `[]byte`. The `cdc` can be any of `codec.BinaryCodec`, `codec.JSONCodec` or `codec.Codec` based on your requirements. It can be either a proto or amino codec as long as they implement these interfaces.
Of course, it is possible to define different types of internal `keeper`s for the same module (e.g. a read-only `keeper`). Each type of `keeper` comes with its own constructor function, which is called from the [application's constructor function](../basics/app-anatomy.md). This is where `keeper`s are instantiated, and where developers make sure to pass correct instances of modules' `keeper`s to other modules that require them.
## Implementing Methods
`Keeper`s primarily expose getter and setter methods for the store(s) managed by their module. These methods should remain as simple as possible and strictly be limited to getting or setting the requested value, as validity checks should have already been performed via the `ValidateBasic()` method of the [`message`](./messages-and-queries.md#messages) and the [`Msg` server](./msg-services.md) when `keeper`s' methods are called.
Typically, a *getter* method will have the following signature
```go
func (k Keeper) Get(ctx sdk.Context, key string) returnType
@ -57,20 +57,20 @@ func (k Keeper) Get(ctx sdk.Context, key string) returnType
and the method will go through the following steps:
1. Retrieve the appropriate store from the `ctx` using the `storeKey`. This is done through the `KVStore(storeKey sdk.StoreKey)` method of the `ctx`. Then it's preferred to use the `prefix.Store` to access only the desired limited subset of the store for convenience and safety.
2. If it exists, get the `[]byte` value stored at location `[]byte(key)` using the `Get(key []byte)` method of the store.
3. Unmarshal the retrieved value from `[]byte` to `returnType` using the codec `cdc`. Return the value.
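Putting the steps together, a getter could look like this (a sketch; `types.Value`, `types.KeyPrefix` and the `MustUnmarshal` codec method are illustrative assumptions):

```go
// Sketch only: read and decode a value from the module's prefixed store.
func (k Keeper) Get(ctx sdk.Context, key string) (types.Value, bool) {
	store := prefix.NewStore(ctx.KVStore(k.storeKey), types.KeyPrefix)
	bz := store.Get([]byte(key))
	if bz == nil {
		return types.Value{}, false
	}
	var value types.Value
	k.cdc.MustUnmarshal(bz, &value)
	return value, true
}
```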
Similarly, a *setter* method will have the following signature
```go
func (k Keeper) Set(ctx sdk.Context, key string, value valueType)
```
and the method will go through the following steps:
1. Retrieve the appropriate store from the `ctx` using the `storeKey`. This is done through the `KVStore(storeKey sdk.StoreKey)` method of the `ctx`. It's preferred to use the `prefix.Store` to access only the desired limited subset of the store for convenience and safety.
2. Marshal `value` to `[]byte` using the codec `cdc`.
3. Set the encoded value in the store at location `key` using the `Set(key []byte, value []byte)` method of the store.
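And the corresponding setter (same illustrative assumptions as the getter sketch above):

```go
// Sketch only: encode and write a value into the module's prefixed store.
func (k Keeper) Set(ctx sdk.Context, key string, value types.Value) {
	store := prefix.NewStore(ctx.KVStore(k.storeKey), types.KeyPrefix)
	bz := k.cdc.MustMarshal(&value)
	store.Set([]byte(key), bz)
}
```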
For more, see an example of `keeper`'s [methods implementation from the `staking` module](https://github.com/cosmos/cosmos-sdk/blob/3bafd8255a502e5a9cee07391cf8261538245dfd/x/staking/keeper/keeper.go).
@ -25,14 +25,15 @@ See an example of a `Msg` service definition from `x/bank` module:
+++ https://github.com/cosmos/cosmos-sdk/blob/v0.40.0-rc1/proto/cosmos/bank/v1beta1/tx.proto#L10-L17
Each `Msg` service method must have exactly one argument, which must implement the `sdk.Msg` interface, and a Protobuf response. The naming convention is to call the RPC argument `Msg<service-rpc-name>` and the RPC response `Msg<service-rpc-name>Response`. For example:
```
rpc Send(MsgSend) returns (MsgSendResponse);
```
The `sdk.Msg` interface is a simplified version of the Amino `LegacyMsg` interface described [below](#legacy-amino-msgs), with only the `ValidateBasic()` and `GetSigners()` methods. For backwards compatibility with [Amino `LegacyMsg`s](#legacy-amino-msgs), existing `LegacyMsg` types should be used as the request parameter for `service` RPC definitions. Newer `sdk.Msg` types, which only support `service` definitions, should use the canonical `Msg...` name.
Cosmos SDK uses Protobuf definitions to generate client and server code:
* `MsgServer` interface defines the server API for the `Msg` service and its implementation is described as part of the [`Msg` services](./msg-services.md) documentation.
* Structures are generated for all RPC request and response types.
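For instance, a sketch of implementing the generated `MsgServer` for the `Send` RPC above (the real `x/bank` implementation differs; `transfer` is a placeholder keeper call):

```go
// Sketch only: implement the generated MsgServer interface.
type msgServer struct{ Keeper }

func (s msgServer) Send(goCtx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) {
	ctx := sdk.UnwrapSDKContext(goCtx)
	if err := s.transfer(ctx, msg.FromAddress, msg.ToAddress, msg.Amount); err != nil {
		return nil, err
	}
	return &types.MsgSendResponse{}, nil
}
```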
@ -17,7 +17,7 @@ One of the main interfaces for an application is the [command-line interface](..
### Transaction Commands
In order to create messages that trigger state changes, end-users must create [transactions](../core/transactions.md) that wrap and deliver the messages. A transaction command creates a transaction that includes one or more messages.
Transaction commands typically have their own `tx.go` file that lives within the module's `./client/cli` folder. The commands are specified in getter functions and the name of the function should include the name of the command.
Here is an example from the `x/bank` module:
@ -29,14 +29,14 @@ In the example, `NewSendTxCmd()` creates and returns the transaction command for
In general, the getter function does the following:
- **Constructs the command:** Read the [Cobra Documentation](https://godoc.org/github.com/spf13/cobra) for more detailed information on how to create commands.
- **Use:** Specifies the format of the user input required to invoke the command. In the example above, `send` is the name of the transaction command and `[from_key_or_address]`, `[to_address]`, and `[amount]` are the arguments.
- **Args:** The number of arguments the user provides. In this case, there are exactly three: `[from_key_or_address]`, `[to_address]`, and `[amount]`.
- **Short and Long:** Descriptions for the command. A `Short` description is expected. A `Long` description can be used to provide additional information that is displayed when a user adds the `--help` flag.
- **RunE:** Defines a function that can return an error. This is the function that is called when the command is executed. This function encapsulates all of the logic to create a new transaction.
    - The function typically starts by getting the `clientCtx`, which can be done with `client.GetClientTxContext(cmd)`. The `clientCtx` contains information relevant to transaction handling, including information about the user. In this example, the `clientCtx` is used to retrieve the address of the sender by calling `clientCtx.GetFromAddress()`.
    - If applicable, the command's arguments are parsed. In this example, the arguments `[to_address]` and `[amount]` are both parsed.
    - A [message](./messages-and-queries.md) is created using the parsed arguments and information from the `clientCtx`. The constructor function of the message type is called directly. In this case, `types.NewMsgSend(fromAddr, toAddr, amount)`. It's good practice to call `msg.ValidateBasic()` after creating the message, which runs a sanity check on the provided arguments.
    - Depending on what the user wants, the transaction is either generated offline or signed and broadcast to the preconfigured node using `tx.GenerateOrBroadcastTxCLI(clientCtx, flags, msg)`.
- **Adds transaction flags:** All transaction commands must add a set of transaction [flags](#flags). The transaction flags are used to collect additional information from the user (e.g. the amount of fees the user is willing to pay). The transaction flags are added to the constructed command using `AddTxFlagsToCmd(cmd)`.
- **Returns the command:** Finally, the transaction command is returned.
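A compressed sketch of such a getter function, patterned on `NewSendTxCmd` (the `types.NewMsgSend` constructor is the one named above; everything else uses standard SDK client helpers):

```go
// Sketch only: build the `send` transaction command.
func NewSendTxCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "send [from_key_or_address] [to_address] [amount]",
		Short: "Send funds from one account to another",
		Args:  cobra.ExactArgs(3),
		RunE: func(cmd *cobra.Command, args []string) error {
			clientCtx, err := client.GetClientTxContext(cmd)
			if err != nil {
				return err
			}
			toAddr, err := sdk.AccAddressFromBech32(args[1])
			if err != nil {
				return err
			}
			amount, err := sdk.ParseCoinsNormalized(args[2])
			if err != nil {
				return err
			}
			msg := types.NewMsgSend(clientCtx.GetFromAddress(), toAddr, amount)
			if err := msg.ValidateBasic(); err != nil {
				return err
			}
			return tx.GenerateOrBroadcastTxCLI(clientCtx, cmd.Flags(), msg)
		},
	}
	flags.AddTxFlagsToCmd(cmd)
	return cmd
}
```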
@ -59,14 +59,14 @@ In the example, `GetAccountCmd()` creates and returns a query command that retur
In general, the getter function does the following:
- **Constructs the command:** Read the [Cobra Documentation](https://godoc.org/github.com/spf13/cobra) for more detailed information on how to create commands.
- **Use:** Specifies the format of the user input required to invoke the command. In the example above, `account` is the name of the query command and `[address]` is the argument.
- **Args:** The number of arguments the user provides. In this case, there is exactly one: `[address]`.
- **Short and Long:** Descriptions for the command. A `Short` description is expected. A `Long` description can be used to provide additional information that is displayed when a user adds the `--help` flag.
- **RunE:** Defines a function that can return an error. This is the function that is called when the command is executed. This function encapsulates all of the logic to create a new query.
    - The function typically starts by getting the `clientCtx`, which can be done with `client.GetClientQueryContext(cmd)`. The `clientCtx` contains information relevant to query handling.
    - If applicable, the command's arguments are parsed. In this example, the argument `[address]` is parsed.
    - A new `queryClient` is initialized using `NewQueryClient(clientCtx)`. The `queryClient` is then used to call the appropriate [query](./messages-and-queries.md#grpc-queries).
    - The `clientCtx.PrintProto` method is used to format the `proto.Message` object so that the results can be printed back to the user.
- **Adds query flags:** All query commands must add a set of query [flags](#flags). The query flags are added to the constructed command using `AddQueryFlagsToCmd(cmd)`.
- **Returns the command:** Finally, the query command is returned.
@ -39,7 +39,7 @@ Let us go through the methods:
- `DefaultGenesis(codec.JSONCodec)`: Returns a default [`GenesisState`](./genesis.md#genesisstate) for the module, marshalled to `json.RawMessage`. The default `GenesisState` needs to be defined by the module developer and is primarily used for testing.
- `ValidateGenesis(codec.JSONCodec, client.TxEncodingConfig, json.RawMessage)`: Used to validate the `GenesisState` defined by a module, given in its `json.RawMessage` form. It will usually unmarshal the JSON before running a custom [`ValidateGenesis`](./genesis.md#validategenesis) function defined by the module developer.
- `RegisterRESTRoutes(client.Context, *mux.Router)`: Registers the REST routes for the module. These routes will be used to map REST requests to the module in order to process them. See [../interfaces/rest.md] for more.
- `RegisterGRPCGatewayRoutes(client.Context, *runtime.ServeMux)`: Registers gRPC routes for the module.
- `GetTxCmd()`: Returns the root [`Tx` command](./module-interfaces.md#tx) for the module. The subcommands of this root command are used by end-users to generate new transactions containing [`message`s](./messages-and-queries.md#messages) defined in the module.
- `GetQueryCmd()`: Returns the root [`query` command](./module-interfaces.md#query) for the module. The subcommands of this root command are used by end-users to generate new queries to the subset of the state defined by the module.
@ -76,7 +76,6 @@ Let us go through the methods of `AppModule`:
- `BeginBlock(sdk.Context, abci.RequestBeginBlock)`: This method gives module developers the option to implement logic that is automatically triggered at the beginning of each block. Implement empty if no logic needs to be triggered at the beginning of each block for this module.
- `EndBlock(sdk.Context, abci.RequestEndBlock)`: This method gives module developers the option to implement logic that is automatically triggered at the end of each block. This is also where the module can inform the underlying consensus engine of validator set changes (e.g. the `staking` module). Implement empty if no logic needs to be triggered at the end of each block for this module.
### Implementing the Application Module Interfaces
Typically, the various application module interfaces are implemented in a file called `module.go`, located in the module's folder (e.g. `./x/module/module.go`).
@ -135,7 +134,7 @@ The module manager is used throughout the application whenever an action on a co
- `SetOrderBeginBlockers(moduleNames ...string)`: Sets the order in which the `BeginBlock()` function of each module will be called at the beginning of each block (a sketch of applying these orderings follows this list). This function is generally called from the application's main [constructor function](../basics/app-anatomy.md#constructor-function).
- `SetOrderEndBlockers(moduleNames ...string)`: Sets the order in which the `EndBlock()` function of each module will be called at the end of each block. This function is generally called from the application's main [constructor function](../basics/app-anatomy.md#constructor-function).
- `RegisterInvariants(ir sdk.InvariantRegistry)`: Registers the [invariants](./invariants.md) of each module.
- `RegisterRoutes(router sdk.Router, queryRouter sdk.QueryRouter, legacyQuerierCdc *codec.LegacyAmino)`: Registers legacy [`Msg`](./messages-and-queries.md#messages) and [`querier`](./query-services.md#legacy-queriers) routes.
- `RegisterServices(cfg Configurator)`: Registers all module services.
- `InitGenesis(ctx sdk.Context, cdc codec.JSONCodec, genesisData map[string]json.RawMessage)`: Calls the [`InitGenesis`](./genesis.md#initgenesis) function of each module when the application is first started, in the order defined in `OrderInitGenesis`. Returns an `abci.ResponseInitChain` to the underlying consensus engine, which can contain validator updates.
- `ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec)`: Calls the [`ExportGenesis`](./genesis.md#exportgenesis) function of each module, in the order defined in `OrderExportGenesis`. The export constructs a genesis file from a previously existing state, and is mainly used when a hard-fork upgrade of the chain is required.
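Under these conventions, a hedged sketch of applying the orderings above; the module names are illustrative strings, whereas real applications use each module's `ModuleName` constant:

```go
import "github.com/cosmos/cosmos-sdk/types/module"

// setOrders applies deterministic orderings to a module manager; it would
// be called from the application's constructor function.
func setOrders(mm *module.Manager) {
	mm.SetOrderBeginBlockers("upgrade", "mint", "slashing", "staking")
	mm.SetOrderEndBlockers("crisis", "gov", "staking")
	mm.SetOrderInitGenesis("auth", "bank", "staking", "genutil")
}
```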


@ -1,6 +1,7 @@
<!--
order: 4
-->
# `Msg` Services
A Protobuf `Msg` service processes [messages](./messages-and-queries.md#messages). Protobuf `Msg` services are specific to the module in which they are defined, and only process messages defined within the said module. They are called from `BaseApp` during [`DeliverTx`](../core/baseapp.md#delivertx). {synopsis}
@ -92,7 +93,6 @@ Then, a simple switch calls the appropriate `handler` based on the `LegacyMsg` t
In this regard, `handler` functions need to be implemented for each module `LegacyMsg`. This will also involve manually registering handlers for the `LegacyMsg` types.
`handler` functions should return a `*Result` and an `error`, as sketched below.
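A minimal sketch of such a legacy handler, assuming the module defines a `Keeper`, a `MsgFoo` type implementing `sdk.Msg`, and a `handleMsgFoo` function:

```go
import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
)

// NewHandler dispatches each LegacyMsg to its handler function; Keeper,
// MsgFoo, and handleMsgFoo are assumed to be defined by the module.
func NewHandler(k Keeper) sdk.Handler {
	return func(ctx sdk.Context, msg sdk.Msg) (*sdk.Result, error) {
		// Give each message a fresh event manager.
		ctx = ctx.WithEventManager(sdk.NewEventManager())

		// A simple switch selects the handler by LegacyMsg type.
		switch msg := msg.(type) {
		case *MsgFoo:
			return handleMsgFoo(ctx, k, msg)
		default:
			return nil, sdkerrors.Wrapf(sdkerrors.ErrUnknownRequest, "unrecognized message type: %T", msg)
		}
	}
}
```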
## Telemetry
New [telemetry metrics](../core/telemetry.md) can be created from `msgServer` methods when handling messages.
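For example, a `msgServer` method might bump a counter while handling a message; the `msgServer` type, `types` package, and metric keys below are illustrative assumptions:

```go
import (
	"context"

	"github.com/cosmos/cosmos-sdk/telemetry"
)

// Send records a metric for every handled MsgSend.
func (k msgServer) Send(goCtx context.Context, msg *types.MsgSend) (*types.MsgSendResponse, error) {
	// Count every handled MsgSend under the keys ("mymodule", "send").
	defer telemetry.IncrCounter(1, "mymodule", "send")

	// ... state-transition logic elided ...

	return &types.MsgSendResponse{}, nil
}
```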


@ -10,11 +10,11 @@ This document details how to define each module's simulation functions to be
integrated with the application `SimulationManager`.
* [Simulation package](#simulation-package)
    * [Store decoders](#store-decoders)
    * [Randomized genesis](#randomized-genesis)
    * [Randomized parameters](#randomized-parameters)
    * [Random weighted operations](#random-weighted-operations)
    * [Random proposal contents](#random-proposal-contents)
* [Registering the module simulation functions](#registering-simulation-functions)
* [App simulator manager](#app-simulator-manager)
* [Simulation tests](#simulation-tests)


@ -83,16 +83,16 @@ x/{module_name}
- `simulation/`: The module's [simulation](./simulator.html) package defines functions used by the blockchain simulator application (`simapp`).
- `spec/`: The module's specification documents outlining important concepts, state storage structure, and message and event type definitions.
- The root directory includes type definitions for messages, events, and genesis state, including the type definitions generated by Protocol Buffers.
- `abci.go`: The module's `BeginBlocker` and `EndBlocker` implementations (this file is only required if `BeginBlocker` and/or `EndBlocker` need to be defined).
- `codec.go`: The module's registry methods for interface types.
- `errors.go`: The module's sentinel errors.
- `events.go`: The module's event types and constructors.
- `expected_keepers.go`: The module's [expected keeper](./keeper.html#type-definition) interfaces.
- `genesis.go`: The module's genesis state methods and helper functions.
- `keys.go`: The module's store keys and associated helper functions.
- `msgs.go`: The module's message type definitions and associated methods.
- `params.go`: The module's parameter type definitions and associated methods.
- `*.pb.go`: The module's type definitions generated by Protocol Buffers (as defined in the respective `*.proto` files above).
## Next {hide}


@ -12,19 +12,19 @@ In-place store migrations allow your modules to upgrade to new versions that inc
## Consensus Version
Successful upgrades of existing modules require your `AppModule` to implement the function `ConsensusVersion() uint64`.
- The versions must be hard-coded by the module developer.
- The initial version **must** be set to 1.
Consensus versions serve as state-breaking versions of app modules and are incremented when the module is upgraded.
## Registering Migrations
To register the functionality that takes place during a module upgrade, you must register which migrations you want to take place.
Migration registration takes place in the `Configurator` using the `RegisterMigration` method. The `AppModule` reference to the configurator is in the `RegisterServices` method.
You can register one or more migrations. If you register more than one migration script, list the migrations in increasing order and ensure there are enough migrations that lead to the desired consensus version. For example, to migrate to version 3 of a module, register separate migrations for version 1 and version 2 as shown in the following example:
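The following is a hedged sketch of registering two consecutive migrations, assuming hypothetical `v2` and `v3` migration packages and a module `types` package:

```golang
import (
	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/types/module"
)

// RegisterServices registers two migrations so the chain can reach
// consensus version 3 of this module.
func (am AppModule) RegisterServices(cfg module.Configurator) {
	// Migrate from version 1 to 2.
	if err := cfg.RegisterMigration(types.ModuleName, 1, func(ctx sdk.Context) error {
		return v2.MigrateStore(ctx, am.keeper) // hypothetical migration script
	}); err != nil {
		panic(err)
	}

	// Migrate from version 2 to 3.
	if err := cfg.RegisterMigration(types.ModuleName, 2, func(ctx sdk.Context) error {
		return v3.MigrateStore(ctx, am.keeper) // hypothetical migration script
	}); err != nil {
		panic(err)
	}
}
```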


@ -10,7 +10,6 @@ parent:
This documentation is not complete and it's outdated. Please use the English version.
:::
## Getting Started
- **[SDK Intro](./intro/README.md)**: A high-level overview of the Cosmos SDK.


@ -5,7 +5,6 @@
This documentation is not complete and it's outdated. Please use the English version.
:::
This directory contains an introduction to the basic concepts of the Cosmos SDK.
1. [Anatomy of an SDK Application](./app-anatomy.md)


@ -67,9 +67,9 @@ Blockchain Node | | Consensus | |
- Use the module manager to set the order of execution between the `InitGenesis`, `BeginBlocker`, and `EndBlocker` functions of each of the application's modules. Note that not all modules implement these functions.
- Modules implement these functions.
- Set the remaining application parameters:
    - `InitChainer`: used to initialize the application when it is first started.
    - `BeginBlocker`, `EndBlocker`: called at the beginning and at the end of every block.
    - `anteHandler`: used to handle fees and signature verification.
- Mount the stores.
- Return the application instance.
@ -179,7 +179,7 @@ AppModule exposes a set of useful methods on the module that help to
A `keeper` type definition usually includes:
- The `key`s to the module's store(s) in the multistore.
- References to **other modules' `keepers`**. These are only needed if the `keeper` needs to access other modules' stores (to read from or write to them).
- A reference to the application's `codec`. The `keeper` needs it to marshal structs before storing them, or to unmarshal them when retrieving them, because stores only accept `[]bytes` as values.
Along with the type definition, one important component of the `keeper.go` file is the `Keeper` constructor, `NewKeeper`. This function instantiates a new `keeper` of the type defined above, taking a `codec`, store `keys`, and possibly references to other modules' `keeper`s as parameters. The `NewKeeper` function is called from the application's constructor. The rest of the file defines the `keeper`'s methods, which are primarily getters and setters.
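A hedged sketch of such a type definition and constructor; the `BankKeeper` expected-keeper interface is illustrative:

```go
import (
	"github.com/cosmos/cosmos-sdk/codec"
	sdk "github.com/cosmos/cosmos-sdk/types"
)

// BankKeeper is an illustrative expected-keeper interface from another module.
type BankKeeper interface {
	GetBalance(ctx sdk.Context, addr sdk.AccAddress, denom string) sdk.Coin
}

// Keeper sketches a typical keeper type definition.
type Keeper struct {
	storeKey   sdk.StoreKey      // key to the module's store in the multistore
	cdc        codec.BinaryCodec // codec used to (un)marshal values, since stores only accept []byte
	bankKeeper BankKeeper        // reference to another module's keeper
}

// NewKeeper is called from the application's constructor function.
func NewKeeper(cdc codec.BinaryCodec, key sdk.StoreKey, bk BankKeeper) Keeper {
	return Keeper{storeKey: key, cdc: cdc, bankKeeper: bk}
}
```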


@ -111,10 +111,13 @@ Flags are added to commands directly (generally in the [module's CLI file](../bu
## Environment variables
Each flag is bound to its respective named environment variable. The name of the environment variable consists of two parts: the upper-case `basename` followed by the flag name, with `-` substituted by `_`. For example, the flag `--home` for an application with basename `GAIA` is bound to `GAIA_HOME`. This reduces the number of flags typed for routine operations. For example, instead of:
```sh
gaia --home=./ --node=<node address> --chain-id="testchain-1" --keyring-backend=test tx ... --from=<key name>
```
this is more convenient:
```sh
# define env variables in .env, .envrc etc
GAIA_HOME=<path to home>
```


@ -56,10 +56,10 @@ explicitly pass a context `ctx` as the first argument of a process.
## Store branching
The `Context` contains a `MultiStore`, which allows for branching and caching functionality using `CacheMultiStore`
(queries in `CacheMultiStore` are cached to avoid future round trips).
Each `KVStore` is branched in a safe and isolated ephemeral storage. Processes are free to write changes to
the `CacheMultiStore`. If a state-transition sequence is performed without issue, the store branch can
be committed to the underlying store at the end of the sequence, or disregarded if something
goes wrong. The pattern of usage for a Context is as follows:
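A hedged sketch of that pattern, using `CacheContext` to branch the store and a `writeCache` closure to commit:

```go
import sdk "github.com/cosmos/cosmos-sdk/types"

// withBranchedStore runs fn against a branched context and only flushes
// its writes to the underlying store on success.
func withBranchedStore(ctx sdk.Context, fn func(sdk.Context) error) error {
	// Branch the multistore: writes go to an ephemeral cache.
	cacheCtx, writeCache := ctx.CacheContext()

	if err := fn(cacheCtx); err != nil {
		// Something went wrong: drop the branch and all of its writes.
		return err
	}

	// The sequence succeeded: commit the cached writes.
	writeCache()
	return nil
}
```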


@ -86,7 +86,7 @@ Another important use of Protobuf is the encoding and decoding of
[transactions](./transactions.md). Transactions are defined by the application or
the SDK but are then passed to the underlying consensus engine to be relayed to
other peers. Since the underlying consensus engine is agnostic to the application,
the consensus engine accepts only transactions in the form of raw bytes (a sketch of both directions follows the list below).
- The `TxEncoder` object performs the encoding.
- The `TxDecoder` object performs the decoding.
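A hedged sketch of both directions, assuming a populated `client.Context`:

```go
import (
	"github.com/cosmos/cosmos-sdk/client"
	sdk "github.com/cosmos/cosmos-sdk/types"
)

// roundTripTx encodes a transaction into the raw bytes handed to the
// consensus engine, then decodes them back.
func roundTripTx(clientCtx client.Context, tx sdk.Tx) (sdk.Tx, error) {
	// TxEncoder: sdk.Tx -> raw bytes.
	txBytes, err := clientCtx.TxConfig.TxEncoder()(tx)
	if err != nil {
		return nil, err
	}

	// TxDecoder: raw bytes -> sdk.Tx.
	return clientCtx.TxConfig.TxDecoder()(txBytes)
}
```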


@ -92,12 +92,12 @@ Independently from the Cosmos SDK, Tendermint also exposes a RPC server. This RP
Some Tendermint RPC endpoints are directly related to the Cosmos SDK:
- `/abci_query`: this endpoint will query the application for state. As the `path` parameter, you can send the following strings:
    - any Protobuf fully-qualified service method, such as `/cosmos.bank.v1beta1.QueryAllBalances`. The `data` field should then include the method's request parameter(s) encoded as bytes using Protobuf.
    - `/app/simulate`: this will simulate a transaction, and return some information such as gas used.
    - `/app/version`: this will return the application's version.
    - `/store/{path}`: this will query the store directly.
    - `/p2p/filter/addr/{port}`: this will return a filtered list of the node's P2P peers by address port.
    - `/p2p/filter/id/{id}`: this will return a filtered list of the node's P2P peers by ID.
- `/broadcast_tx_{sync,async,commit}`: these 3 endpoints will broadcast a transaction to other peers. CLI, gRPC and REST expose [a way to broadcast transactions](./transactions.md#broadcasting-the-transaction), but they all use these 3 Tendermint RPCs under the hood. A sketch of calling `/abci_query` over HTTP follows this list.
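A hedged sketch of calling `/abci_query` over plain HTTP against a local node; the `localhost:26657` address is an assumption:

```go
import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// queryAppVersion hits a node's Tendermint RPC /abci_query endpoint with
// the "/app/version" path.
func queryAppVersion() (string, error) {
	// Tendermint expects string parameters to be quoted.
	q := url.Values{"path": {`"/app/version"`}}
	resp, err := http.Get(fmt.Sprintf("http://localhost:26657/abci_query?%s", q.Encode()))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	return string(body), err
}
```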
## Comparison Table


@ -31,9 +31,9 @@ foundation of an object capability system.
> These structural properties stem from the two rules governing
> access to existing objects:
>
> 1. An object A can send a message to B only if object A holds a
> reference to B.
> 2. An object A can obtain a reference to C only
> if object A receives a message containing a reference to C. As a
> consequence of these two rules, an object can obtain a reference
> to another object only through a preexisting chain of references.


@ -10,20 +10,18 @@ Read and understand all of the in-place store migration documentation before you
Upgrade your app modules smoothly with custom in-place migration logic. {synopsis}
The Cosmos SDK uses two methods to perform upgrades.
- Exporting the entire application state to a JSON file using the `export` CLI command, making changes, and then starting a new binary with the changed JSON file as the genesis file. See the [Chain Upgrade Guide](../migrations/chain-upgrade-guide-040.md#upgrade-procedure).
- Version v0.43 and later can perform upgrades in place to significantly decrease the upgrade time for chains with a larger state. Use the [Migration Upgrade Guide](../building-modules/upgrade.md) guide to set up your application modules to take advantage of in-place upgrades.
This document provides steps to use the In-Place Store Migrations upgrade method.
## Tracking Module Versions
Each module gets assigned a consensus version by the module developer. The consensus version serves as the breaking change version of the module. The SDK keeps track of all module consensus versions in the x/upgrade `VersionMap` store. During an upgrade, the difference between the old `VersionMap` stored in state and the new `VersionMap` is calculated by the Cosmos SDK. For each identified difference, the module-specific migrations are run and the respective consensus version of each upgraded module is incremented.
## Genesis State
When starting a new chain, the consensus version of each module must be saved to state during the application's genesis. To save the consensus version, add the following line to the `InitChainer` method in `app.go`:
@ -36,13 +34,15 @@ func (app *MyApp) InitChainer(ctx sdk.Context, req abci.RequestInitChain) abci.R
}
```
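A hedged, simapp-style sketch of such an `InitChainer`; `MyApp`, `app.mm`, `app.appCodec`, and `app.UpgradeKeeper` are assumptions:

```go
import (
	"encoding/json"

	abci "github.com/tendermint/tendermint/abci/types"
	tmjson "github.com/tendermint/tendermint/libs/json"

	sdk "github.com/cosmos/cosmos-sdk/types"
)

// InitChainer saves each module's consensus version to x/upgrade's state
// at genesis, before running every module's InitGenesis.
func (app *MyApp) InitChainer(ctx sdk.Context, req abci.RequestInitChain) abci.ResponseInitChain {
	var genesisState map[string]json.RawMessage
	if err := tmjson.Unmarshal(req.AppStateBytes, &genesisState); err != nil {
		panic(err)
	}

	// Persist the module version map so the SDK can later detect
	// modules with newer versions.
	app.UpgradeKeeper.SetModuleVersionMap(ctx, app.mm.GetVersionMap())

	return app.mm.InitGenesis(ctx, app.appCodec, genesisState)
}
```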
This information is used by the Cosmos SDK to detect when modules with newer versions are introduced to the app.
### Consensus Version
The consensus version is defined on each app module by the module developer and serves as the breaking change version of the module. The consensus version informs the SDK on which modules need to be upgraded. For example, if the bank module was version 2 and an upgrade introduces bank module 3, the SDK upgrades the bank module and runs the "version 2 to 3" migration script.
### Version Map
The version map is a mapping of module names to consensus versions. The map is persisted to x/upgrade's state for use during in-place migrations. When migrations finish, the updated version map is persisted to state.
## Upgrade Handlers
@ -89,7 +89,7 @@ The Cosmos SDK offers modules that the application developer can import in their
You can write your own `InitGenesis` function for an imported module. To do this, manually trigger your custom genesis function in the upgrade handler.
::: warning
You MUST manually set the consensus version in the version map passed to the `UpgradeHandler` function. Without this, the SDK will run the Module's existing `InitGenesis` code even if you triggered your custom function in the `UpgradeHandler`.
:::
```go
@ -98,8 +98,8 @@ import foo "github.com/my/module/foo"
app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgradetypes.Plan, vm module.VersionMap) (module.VersionMap, error) {
// Register the consensus version in the version map
// to prevent the SDK from triggering the default
// InitGenesis function.
vm["foo"] = foo.AppModule{}.ConsensusVersion()
// Run custom InitGenesis for foo
@ -126,8 +126,8 @@ app.UpgradeKeeper.SetUpgradeHandler("my-plan", func(ctx sdk.Context, plan upgrad
## Syncing a Full Node to an Upgraded Blockchain
You can sync a full node to an existing blockchain which has been upgraded using Cosmovisor.
In order to successfully sync, you must start with the initial binary that the blockchain started with at genesis. Cosmovisor will handle downloading and switching to the binaries associated with each sequential upgrade.
To learn more about Cosmovisor, see the [Cosmovisor Quick Start](../run-node/cosmovisor.md).


@ -84,7 +84,7 @@ OnChanOpenTry(
return err
}
}
// ... do custom initialization logic
// Use above arguments to determine if we want to abort handshake
@ -342,10 +342,10 @@ acknowledgement. An example of this technique is in the `ibc-transfer` module's
### Acknowledgements
Modules may commit an acknowledgement upon receiving and processing a packet in the case of synchronous packet processing.
In the case where a packet is processed at some later point after the packet has been received (asynchronous execution), the acknowledgement
will be written once the packet has been processed by the application which may be well after the packet receipt.
NOTE: Most blockchain modules will want to use the synchronous execution model in which the module processes and writes the acknowledgement
for a packet as soon as it has been received from the IBC module.
This acknowledgement can then be relayed back to the original sender chain, which can take action
@ -408,7 +408,7 @@ OnAcknowledgementPacket(
#### Timeout Packets
If the timeout for a packet is reached before the packet is successfully received or the
counterparty channel end is closed before the packet is successfully received, then the receiving
chain can no longer process it. Thus, the sending chain must process the timeout using
`OnTimeoutPacket` to handle this situation. Again the IBC module will verify that the timeout is


@ -1,6 +1,6 @@
<!-- order: 1 -->
# IBC Overview
Learn what IBC is, its components, and use cases. {synopsis}


@ -4,7 +4,7 @@ order: 5
# Governance Proposals
In uncommon situations, a highly valued client may become frozen due to uncontrollable
circumstances. A highly valued client might have hundreds of channels being actively used.
Some of those channels might have a significant amount of locked tokens used for ICS 20.
@ -12,26 +12,26 @@ If the one third of the validator set of the chain the client represents decides
they can sign off on two valid but conflicting headers each signed by the other one third
of the honest validator set. The light client can now be updated with two valid, but conflicting
headers at the same height. The light client cannot know which header is trustworthy and therefore
evidence of such misbehaviour is likely to be submitted resulting in a frozen light client.
Frozen light clients cannot be updated under any circumstance except via a governance proposal.
Since a quorum of validators can sign arbitrary state roots which may not be valid executions
of the state machine, a governance proposal has been added to ease the complexity of unfreezing
or updating clients which have become "stuck". Without this mechanism, validator sets would need
to construct a state root to unfreeze the client. Unfreezing clients re-enables all of the channels
built upon that client. This may result in recovery of otherwise lost funds.
Tendermint light clients may become expired if the trusting period has passed since their
last update. This may occur if relayers stop submitting headers to update the clients.
An unplanned upgrade by the counterparty chain may also result in expired clients. If the counterparty
chain undergoes an unplanned upgrade, there may be no commitment to that upgrade signed by the validator
set before the chain-id changes. In this situation, the validator set of the last valid update for the
light client is never expected to produce another valid header since the chain-id has changed, which will
ultimately lead the on-chain light client to become expired.
In the case that a highly valued light client is frozen, expired, or rendered non-updateable, a
governance proposal may be submitted to update this client, known as the subject client. The
proposal includes the client identifier for the subject, the client identifier for a substitute
client, and an initial height to reference the substitute client from. Light client implementations
may implement custom updating logic, but in most cases, the subject will be updated with information
@ -39,4 +39,4 @@ from the substitute client, if the proposal passes. The substitute client is use
while the subject is on trial. It is best practice to create a substitute client *after* the subject
has become frozen to avoid the substitute from also becoming frozen. An active substitute client
allows headers to be submitted during the voting period to prevent accidental expiry once the proposal
passes.


@ -17,20 +17,20 @@ Any message that uses IBC will emit events for the corresponding TAO logic execu
the [IBC events spec](https://github.com/cosmos/ibc-go/blob/main/modules/core/spec/06_events.md).
In the SDK, it can be assumed that for every message there is an event emitted with the type `message`,
attribute key `action`, and an attribute value representing the type of message sent
(`channel_open_init` would be the attribute value for `MsgChannelOpenInit`). If a relayer queries
for transaction events, it can split message events using this event Type/Attribute Key pair.
The Event Type `message` with the Attribute Key `module` may be emitted multiple times for a single
message due to application callbacks. It can be assumed that any TAO logic executed will result in
a module event emission with the attribute value `ibc_<submodulename>` (02-client emits `ibc_client`).
### Subscribing with Tendermint
Calling the Tendermint RPC method `Subscribe` via [Tendermint's Websocket](https://docs.tendermint.com/master/rpc/) will return events using
Tendermint's internal representation of them. Instead of receiving back a list of events as they
were emitted, Tendermint will return the type `map[string][]string` which maps a string in the
form `<event_type>.<attribute_key>` to `attribute_value`. This causes extraction of the event
ordering to be non-trivial, but still possible.
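For example, a small sketch of this extraction; the attribute values are illustrative:

```go
package main

import "fmt"

func main() {
	// Tendermint's map[string][]string event form.
	events := map[string][]string{
		"message.action":            {"channel_open_init"},
		"channel_open_init.port_id": {"transfer"},
	}

	// `message.action` yields one value per message in the transaction.
	fmt.Println(len(events["message.action"]))
}
```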
A relayer should use the `message.action` key to extract the number of messages in the transaction


@ -6,7 +6,7 @@ parent:
### Upgrading IBC Chains Overview
This directory contains information on how to upgrade an IBC chain without breaking counterparty clients and connections.
IBC-connected chains must be able to upgrade without breaking connections to other chains. Otherwise there would be a massive disincentive towards upgrading and disrupting high-value IBC connections, thus preventing chains in the IBC ecosystem from evolving and improving. Many chain upgrades may be irrelevant to IBC, however some upgrades could potentially break counterparty clients if not handled correctly. Thus, any IBC chain that wishes to perform an IBC-client-breaking upgrade must perform an IBC upgrade in order to allow counterparty clients to securely upgrade to the new light client.


@ -13,4 +13,4 @@ This folder contains introduction material on the Cosmos SDK.
3. [Architecture of an SDK Application](./sdk-app-architecture.md)
4. [Cosmos SDK Design Overview](./sdk-design.md)
After reading the introduction material, head over to the [basics](../basics/README.md) to learn more.


@ -6,7 +6,7 @@ order: 1
## What is the SDK?
The [Cosmos-SDK](https://github.com/cosmos/cosmos-sdk) is an open-source framework for building multi-asset public Proof-of-Stake (PoS) <df value="blockchain">blockchains</df>, like the Cosmos Hub, as well as permissioned Proof-Of-Authority (PoA) blockchains. Blockchains built with the Cosmos SDK are generally referred to as **application-specific blockchains**.
The goal of the Cosmos SDK is to allow developers to easily create custom blockchains from scratch that can natively interoperate with other blockchains. We envision the SDK as the npm-like framework to build secure blockchain applications on top of [Tendermint](https://github.com/tendermint/tendermint). SDK-based blockchains are built out of composable [modules](../building-modules/intro.md), most of which are open source and readily available for any developers to use. Anyone can create a module for the Cosmos-SDK, and integrating already-built modules is as simple as importing them into your blockchain application. What's more, the Cosmos SDK is a capabilities-based system, which allows developers to better reason about the security of interactions between modules. For a deeper look at capabilities, jump to [this section](../core/ocap.md).
@ -14,7 +14,7 @@ The goal of the Cosmos SDK is to allow developers to easily create custom blockc
One development paradigm in the blockchain world today is that of virtual-machine blockchains like Ethereum, where development generally revolves around building decentralised applications on top of an existing blockchain as a set of smart contracts. While smart contracts can be very good for some use cases like single-use applications (e.g. ICOs), they often fall short for building complex decentralised platforms. More generally, smart contracts can be limiting in terms of flexibility, sovereignty and performance.
Application-specific blockchains offer a radically different development paradigm than virtual-machine blockchains. An application-specific blockchain is a blockchain customized to operate a single application: developers have all the freedom to make the design decisions required for the application to run optimally. They can also provide better sovereignty, security and performance.
Learn more about [application-specific blockchains](./why-app-specific.md).
@ -23,9 +23,9 @@ Learn more about [application-specific blockchains](./why-app-specific.md).
The Cosmos SDK is the most advanced framework for building custom application-specific blockchains today. Here are a few reasons why you might want to consider building your decentralised application with the Cosmos SDK:
- The default consensus engine available within the SDK is [Tendermint Core](https://github.com/tendermint/tendermint). Tendermint is the most (and only) mature BFT consensus engine in existence. It is widely used across the industry and is considered the gold standard consensus engine for building Proof-of-Stake systems.
- The SDK is open source and designed to make it easy to build blockchains out of composable [modules](../../x/). As the ecosystem of open source SDK modules grows, it will become increasingly easier to build complex decentralised platforms with it.
- The SDK is inspired by capabilities-based security, and informed by years of wrestling with blockchain state-machines. This makes the Cosmos SDK a very secure environment to build blockchains.
- Most importantly, the Cosmos SDK has already been used to build many application-specific blockchains that are already in production. Among others, we can cite [Cosmos Hub](https://hub.cosmos.network), [IRIS Hub](https://irisnet.org), [Binance Chain](https://docs.binance.org/), [Terra](https://terra.money/) or [Kava](https://www.kava.io/). [Many more](https://cosmos.network/ecosystem) are building on the Cosmos SDK.
## Getting started with the Cosmos SDK


@ -38,7 +38,6 @@ The Cosmos SDK gives developers maximum flexibility to define the state of their
Thanks to the Cosmos SDK, developers just have to define the state machine, and [*Tendermint*](https://tendermint.com/docs/introduction/what-is-tendermint.html) will handle replication over the network for them.
```
^ +-------------------------------+ ^
| | | | Built with Cosmos SDK
@ -55,7 +54,6 @@ Blockchain node | | Consensus | |
v +-------------------------------+ v
```
[Tendermint](https://docs.tendermint.com/v0.34/introduction/what-is-tendermint.html) is an application-agnostic engine that is responsible for handling the *networking* and *consensus* layers of a blockchain. In practice, this means that Tendermint is responsible for propagating and ordering transaction bytes. Tendermint Core relies on an eponymous Byzantine-Fault-Tolerant (BFT) algorithm to reach consensus on the order of transactions.
The Tendermint [consensus algorithm](https://docs.tendermint.com/v0.34/introduction/what-is-tendermint.html#consensus-overview) works with a set of special nodes called *Validators*. Validators are responsible for adding blocks of transactions to the blockchain. At any given block, there is a validator set V. A validator in V is chosen by the algorithm to be the proposer of the next block. This block is considered valid if more than two thirds of V signed a *[prevote](https://docs.tendermint.com/v0.34/spec/consensus/consensus.html#prevote-step-height-h-round-r)* and a *[precommit](https://docs.tendermint.com/v0.34/spec/consensus/consensus.html#precommit-step-height-h-round-r)* on it, and if all the transactions that it contains are valid. The validator set can be changed by rules written in the state-machine.
@ -88,13 +86,12 @@ Here are the most important messages of the ABCI:
- `CheckTx`: When a transaction is received by Tendermint Core, it is passed to the application to check if a few basic requirements are met. `CheckTx` is used to protect the mempool of full-nodes against spam transactions. A special handler called the [`AnteHandler`](../basics/gas-fees.md#antehandler) is used to execute a series of validation steps such as checking for sufficient fees and validating the signatures. If the checks are valid, the transaction is added to the [mempool](https://docs.tendermint.com/v0.34/tendermint-core/mempool.html#mempool) and relayed to peer nodes. Note that transactions are not processed (i.e. no modification of the state occurs) with `CheckTx` since they have not been included in a block yet.
- `DeliverTx`: When a [valid block](https://docs.tendermint.com/v0.34/spec/blockchain/blockchain.html#validation) is received by Tendermint Core, each transaction in the block is passed to the application via `DeliverTx` in order to be processed. It is during this stage that the state transitions occur. The `AnteHandler` executes again along with the actual [`Msg` service](../building-modules/msg-services.md) RPC for each message in the transaction.
- `BeginBlock`/`EndBlock`: These messages are executed at the beginning and the end of each block, whether the block contains transactions or not. It is useful to trigger automatic execution of logic. Proceed with caution though, as computationally expensive loops could slow down your blockchain, or even freeze it if the loop is infinite.
Find a more detailed view of the ABCI methods from the [Tendermint docs](https://docs.tendermint.com/v0.34/spec/abci/abci.html#overview).
Any application built on Tendermint needs to implement the ABCI interface in order to communicate with the underlying local Tendermint engine. Fortunately, you do not have to implement the ABCI interface. The Cosmos SDK provides a boilerplate implementation of it in the form of [baseapp](./sdk-design.md#baseapp).
## Next {hide}
Read about the [high-level design principles of the SDK](./sdk-design.md) {hide}


@ -80,7 +80,7 @@ Here is a simplified view of how a transaction is processed by the application o
v
```
Each module can be seen as a little state-machine. Developers need to define the subset of the state handled by the module, as well as custom message types that modify the state (*Note:* `messages` are extracted from `transactions` by `baseapp`). In general, each module declares its own `KVStore` in the `multistore` to persist the subset of the state it defines. Most developers will need to access other 3rd party modules when building their own modules. Given that the Cosmos-SDK is an open framework, some of the modules may be malicious, which means there is a need for security principles to reason about inter-module interactions. These principles are based on [object-capabilities](../core/ocap.md). In practice, this means that instead of having each module keep an access control list for other modules, each module implements special objects called `keepers` that can be passed to other modules to grant a pre-defined set of capabilities.
SDK modules are defined in the `x/` folder of the SDK. Some core modules include:


@ -2,13 +2,13 @@
order: 2
-->
# Application-Specific Blockchains
This document explains what application-specific blockchains are, and why developers would want to build one as opposed to writing Smart Contracts. {synopsis}
## What are application-specific blockchains?
Application-specific blockchains are blockchains customized to operate a single application. Instead of building a decentralised application on top of an underlying blockchain like Ethereum, developers build their own blockchain from the ground up. This means building a full-node client, a light-client, and all the necessary interfaces (CLI, REST, ...) to interact with the nodes.
```
^ +-------------------------------+ ^
@ -28,13 +28,13 @@ Blockchain node | | Consensus | |
## What are the shortcomings of Smart Contracts?
Virtual-machine blockchains like Ethereum addressed the demand for more programmability back in 2014. At the time, the options available for building decentralised applications were quite limited. Most developers would build on top of the complex and limited Bitcoin scripting language, or fork the Bitcoin codebase which was hard to work with and customize.
Virtual-machine blockchains came in with a new value proposition. Their state-machine incorporates a virtual-machine that is able to interpret turing-complete programs called Smart Contracts. These Smart Contracts are very good for use cases like one-time events (e.g. ICOs), but they can fall short for building complex decentralised platforms. Here is why:
- Smart Contracts are generally developed with specific programming languages that can be interpreted by the underlying virtual-machine. These programming languages are often immature and inherently limited by the constraints of the virtual-machine itself. For example, the Ethereum Virtual Machine does not allow developers to implement automatic execution of code. Developers are also limited to the account-based system of the EVM, and they can only choose from a limited set of functions for their cryptographic operations. These are examples, but they hint at the lack of **flexibility** that a smart contract environment often entails.
- Smart Contracts are all run by the same virtual machine. This means that they compete for resources, which can severely restrain **performance**. And even if the state-machine were to be split in multiple subsets (e.g. via sharding), Smart Contracts would still need to be interpreted by a virtual machine, which would limit performance compared to a native application implemented at state-machine level (our benchmarks show an improvement on the order of x10 in performance when the virtual-machine is removed).
- Another issue with the fact that Smart Contracts share the same underlying environment is the resulting limitation in **sovereignty**. A decentralised application is an ecosystem that involves multiple players. If the application is built on a general-purpose virtual-machine blockchain, stakeholders have very limited sovereignty over their application, and are ultimately superseded by the governance of the underlying blockchain. If there is a bug in the application, very little can be done about it.
Application-Specific Blockchains are designed to address these shortcomings.
@ -48,33 +48,33 @@ Application-specific blockchains give maximum flexibility to developers:
- Developers can choose among multiple frameworks to build their state-machine. The most widely used today is the Cosmos SDK, but others exist (e.g. [Lotion](https://github.com/nomic-io/lotion), [Weave](https://github.com/iov-one/weave), ...). The choice will most of the time be done based on the programming language they want to use (Cosmos SDK and Weave are in Golang, Lotion is in Javascript, ...).
- The ABCI also allows developers to swap the consensus engine of their application-specific blockchain. Today, only Tendermint is production-ready, but in the future other consensus engines are expected to emerge.
- Even when they settle for a framework and consensus engine, developers still have the freedom to tweak them if they don't perfectly match their requirements in their pristine forms.
- Developers are free to explore the full spectrum of tradeoffs (e.g. number of validators vs transaction throughput, safety vs availability in asynchrony, ...) and design choices (DB or IAVL tree for storage, UTXO or account model, ...).
- Developers can implement automatic execution of code. In the Cosmos SDK, logic can be automatically triggered at the beginning and the end of each block. They are also free to choose the cryptographic library used in their application, as opposed to being constrained by what is made available by the underlying environment in the case of virtual-machine blockchains.
The list above contains a few examples that show how much flexibility application-specific blockchains give to developers. The goal of Cosmos and the Cosmos SDK is to make developer tooling as generic and composable as possible, so that each part of the stack can be forked, tweaked and improved without losing compatibility. As the community grows, more alternatives for each of the core building blocks will emerge, giving more options to developers.
### Performance
Decentralised applications built with Smart Contracts are inherently capped in performance by the underlying environment. For a decentralised application to optimise performance, it needs to be built as an application-specific blockchain. Next are some of the benefits an application-specific blockchain brings in terms of performance:
- Developers of application-specific blockchains can choose to operate with a novel consensus engine such as Tendermint BFT. Compared to Proof-of-Work (used by most virtual-machine blockchains today), it offers significant gains in throughput.
- An application-specific blockchain only operates a single application, so that the application does not compete with others for computation and storage. This is the opposite of most non-sharded virtual-machine blockchains today, where smart contracts all compete for computation and storage.
- Even if a virtual-machine blockchain offered application-based sharding coupled with an efficient consensus algorithm, performance would still be limited by the virtual-machine itself. The real throughput bottleneck is the state-machine, and requiring transactions to be interpreted by a virtual-machine significantly increases the computational complexity of processing them.
### Security
Security is hard to quantify, and greatly varies from platform to platform. That said, here are some important benefits an application-specific blockchain can bring in terms of security:
- Developers can choose proven programming languages like Golang when building their application-specific blockchains, as opposed to smart contract programming languages that are often more immature.
- Developers are not constrained by the cryptographic functions made available by the underlying virtual-machines. They can use their own custom cryptography, and rely on well-audited crypto libraries.
- Developers do not have to worry about potential bugs or exploitable mechanisms in the underlying virtual-machine, making it easier to reason about the security of the application.
### Sovereignty
One of the major benefits of application-specific blockchains is sovereignty. A decentralised application is an ecosystem that involves many actors: users, developers, third-party services, and more. When developers build on a virtual-machine blockchain where many decentralised applications coexist, the community of the application is different than the community of the underlying blockchain, and the latter supersedes the former in the governance process. If there is a bug or if a new feature is needed, stakeholders of the application have very little leeway to upgrade the code. If the community of the underlying blockchain refuses to act, nothing can happen.
The fundamental issue here is that the governance of the application and the governance of the network are not aligned. This issue is solved by application-specific blockchains. Because application-specific blockchains specialize to operate a single application, stakeholders of the application have full control over the entire chain. This ensures that the community will not be stuck if a bug is discovered, and that it has the entire freedom to choose how it is going to evolve.
## Next {hide}


@ -1,3 +1,3 @@
# CLI
> TODO: Rewrite this section to explain how CLI works for a generic SDK app.


@ -4,7 +4,7 @@
## Introduction
Light clients allow clients such as mobile phones to receive proofs of the blockchain state from full nodes. Because light clients can verify the proofs they receive on their own, they do not need to trust the full node and can detect when a full node relays false information.
Light clients can provide the same security as a full node while consuming far fewer resources in terms of bandwidth, computing power, and storage. They can also offer modular functionality according to the user's configuration. These capabilities allow developers to build secure, efficient, and usable mobile applications, websites, and more, without running a full blockchain node.


@ -2,7 +2,6 @@
To run the REST server, the following parameter values must be defined:
| Parameter | Type | Default | Required/Optional | Description |
| ----------- | --------- | ----------------------- | -------- | ---------------------------------------------------- |
| chain-id | string | null | Required | The chain-id of the chain to connect to |


@ -54,7 +54,6 @@ type KeyExistsProof struct {
The data format of an existence proof is listed above. Existence proofs are generated and verified as follows:
![Exist Proof](./pics/existProof.png)
Proof generation procedure:
@ -123,13 +122,12 @@ type KeyAbsentProof struct {
* If only the right node exists, verify its existence proof to confirm that it is the leftmost node
* If only the left node exists, verify its existence proof to confirm that it is the rightmost node
* If both the left and right nodes exist, verify that the two nodes are adjacent
### Verifying Substore Proofs and the AppHash
Once the IAVL proof has been verified, the substore proof can be verified against the AppHash. First, iterate over the MultiStoreCommitInfo and find the substore's commitID using the proof's StoreName. Verify that the hash in the commitID equals the RootHash of the proof; if it does not, the proof is invalid. Then sort the substore commitInfo array by the hash of the substore names. Finally, build a simple Merkle tree from the entire substore commitInfo array and verify that the Merkle root hash equals the AppHash.
![substore proof](./pics/substoreProof.png)
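A hedged, self-contained sketch of the verification just described; the type names and hashing details below are assumptions, not the actual light-client API:

```go
import (
	"bytes"
	"crypto/sha256"
	"errors"
	"sort"
)

// StoreCommitInfo is an assumed shape for per-substore commit information.
type StoreCommitInfo struct {
	Name       string
	CommitHash []byte
}

// simpleMerkleRoot folds hashes into a Merkle root by splitting the list
// in half recursively (Tendermint's real simple Merkle tree additionally
// prefixes and leaf-hashes its inputs).
func simpleMerkleRoot(hashes [][]byte) []byte {
	switch len(hashes) {
	case 0:
		return nil
	case 1:
		return hashes[0]
	default:
		k := len(hashes) / 2
		h := sha256.Sum256(append(simpleMerkleRoot(hashes[:k]), simpleMerkleRoot(hashes[k:])...))
		return h[:]
	}
}

// verifySubstoreProof checks the proof root against the named substore's
// commit hash, then rebuilds the root over all substores (sorted by the
// hash of their names) and compares it with the AppHash.
func verifySubstoreProof(infos []StoreCommitInfo, storeName string, proofRoot, appHash []byte) error {
	found := false
	for _, ci := range infos {
		if ci.Name == storeName && bytes.Equal(ci.CommitHash, proofRoot) {
			found = true
		}
	}
	if !found {
		return errors.New("substore commit hash does not match proof root hash")
	}

	sort.Slice(infos, func(i, j int) bool {
		hi, hj := sha256.Sum256([]byte(infos[i].Name)), sha256.Sum256([]byte(infos[j].Name))
		return bytes.Compare(hi[:], hj[:]) < 0
	})

	hashes := make([][]byte, len(infos))
	for i, ci := range infos {
		hashes[i] = ci.CommitHash
	}
	if !bytes.Equal(simpleMerkleRoot(hashes), appHash) {
		return errors.New("reconstructed Merkle root does not match AppHash")
	}
	return nil
}
```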


@ -8,7 +8,7 @@
The following three items must be considered:
- Full nodes: interacting with the blockchain.
- REST server: relaying HTTP calls.
- REST API: defining the available endpoints of the REST server.
@ -36,7 +36,6 @@ gaiacli keys add <your_key_name>
You will then be asked to create a password (at least 8 characters) for the key pair. The command returns the following four pieces of information:
- `NAME`: the name of your key
- `ADDRESS`: the address (used to receive token transfers)
- `PUBKEY`: the public key (used by validators)
@ -67,6 +66,7 @@ gaiacli send --amount=10faucetToken --chain-id=<name_of_testnet_chain> --from=<k
```
Flags:
- `--amount`: the coin name/amount of coin in the format `<value|coinName>`.
- `--chain-id`: this flag allows you to set the ID of a specific chain. Going forward, testnet chains and mainnet chains will carry different IDs.
- `--from`: the key name of the sending account.
@ -77,7 +77,7 @@ gaiacli send --amount=10faucetToken --chain-id=<name_of_testnet_chain> --from=<k
To use other features, use the following command:
```bash
gaiacli
```
This displays all available commands; for each command, you can use the `--help` flag to get more detailed information.
@ -86,17 +86,17 @@ gaiacli
REST 서버는 풀노드와 프론트엔드 사이의 중계역할을 합니다. REST 서버는 풀노드와 다른 머신에서도 운영이 가능합니다.
REST 서버를 시작하시려면:
REST 서버를 시작하시려면:
```bash
gaiacli advanced rest-server --node=<full_node_address:full_node_port>
```
Flags:
- `--node`: the address and port of your full node. If the full node and the REST server run on the same machine, the address should be set to `tcp://localhost:26657`.
- `--laddr`: this flag sets the address and port of the REST server (default `1317`). In most cases it is used to specify the port; in that case just enter "localhost" for the address. The format is `<rest_server_address:port>`.
### Monitoring incoming transactions
The recommended way to monitor incoming transactions is to periodically query the following LCD endpoint:
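For illustration, a hypothetical polling loop in Go might look as follows; the endpoint path is a placeholder you would replace with the route documented for your LCD version, and `localhost:1317` assumes the default `--laddr`.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Placeholder URL: substitute the transaction-query endpoint of your LCD.
	const url = "http://localhost:1317/<endpoint>"
	for {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Println(string(body)) // inspect for new incoming transactions
		}
		time.Sleep(5 * time.Second) // poll periodically
	}
}
```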

View File

@ -39,15 +39,19 @@
In addition to `CheckTx` and `DeliverTx`, baseapp handles the following ABCI messages.
### Info
TODO complete description (to be updated)
### SetOption
TODO complete description (to be updated)
### Query
TODO complete description (to be updated)
### InitChain
TODO complete description (to be updated)
During chain initialization, `InitChain` runs the initialization logic that is directly assigned to the `CommitMultiStore`. The check state and the deliver state are initialized with the defined ChainID.
@ -55,14 +59,16 @@ TODO complete description (to be updated)
Note that no commit happens after InitChain, so the BeginBlock of block 1 starts from the deliver state exactly as InitChain left it.
### BeginBlock
TODO complete description (to be updated)
### EndBlock
TODO complete description (to be updated)
### Commit
TODO complete description (to be updated)
## Gas Management
@ -72,7 +78,6 @@ During InitChain execution, the block gas meter is set to accommodate the genesis transactions
In addition, the InitChain request message includes the ConsensusParams defined in the genesis.json file.
### Gas: BeginBlock
The block gas meter is reset within the deliver state during BeginBlock. If no maximum block gas is set in baseapp, the gas meter is set to infinite. If a maximum block gas is set, the gas meter is configured through `ConsensusParam.BlockSize.MaxGas`.
@ -82,4 +87,3 @@ During InitChain execution, the block gas meter is set to accommodate the genesis transactions
Before a given transaction is executed, the `BlockGasMeter` is first checked to see whether any gas remains. If no gas remains, `DeliverTx` immediately returns an error.
After the transaction has been processed, the gas used (up to the configured gas limit) is deducted from the `BlockGasMeter`. If the remaining amount exceeds the gas meter's limit, `DeliverTx` returns an error and the transaction is not committed.
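The accounting above can be sketched in Go as follows; this is an illustrative model only, and the gas meter types and out-of-gas semantics in the actual SDK differ in detail.

```go
package gas

import "errors"

var errOutOfBlockGas = errors.New("block gas meter: out of gas")

// blockGasMeter models the per-block gas accounting described above.
type blockGasMeter struct {
	limit    uint64 // 0 stands in for "no maximum block gas", i.e. infinite
	consumed uint64
}

// consume deducts the gas used by one transaction and fails, as DeliverTx
// does, when the block limit would be exceeded.
func (m *blockGasMeter) consume(gas uint64) error {
	if m.limit == 0 {
		return nil // infinite meter: nothing to enforce
	}
	if m.consumed+gas > m.limit {
		return errOutOfBlockGas
	}
	m.consumed += gas
	return nil
}

// reset is called at BeginBlock, restoring the full block gas budget.
func (m *blockGasMeter) reset() { m.consumed = 0 }
```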

View File

@ -13,4 +13,4 @@ parent:
3. [Architecture of an SDK Application](./sdk-app-architecture.md)
4. [Introduction to Cosmos SDK Design](./sdk-design.md)
After reading this foundational material, we recommend reading the material in the [basics](../basics/README.md) folder.

View File

@ -8,13 +8,14 @@
The Cosmos SDK solves these problems by serving as the foundation for an object-capability system.
> The structural properties of object-capability systems favor modularity in code design and reliable encapsulation.
>
> These structural properties make it possible to analyze the security properties of an object-capable program or operating system. Some properties, such as information flow properties, can be analyzed solely at the level of object references and connectivity, without any knowledge or analysis of the code that determines the behavior of the objects.
>
> As a result, these security properties can be maintained even when new objects that may contain malicious code are introduced.
>
> These structural properties are upheld by two rules governing the objects:
>
> 1. Object 'A' can send a message to 'B' only if it holds a reference to 'B'.
> 2. For object 'A' to obtain a reference to 'C', object 'A' must receive a message containing a reference to 'C'.
>
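These two rules can be illustrated with a toy Go example (all names here are invented for illustration): an object may only message objects it already references, and it acquires new references only by receiving them in a message.

```go
package main

import "fmt"

type printer struct{ name string }

func (p *printer) print(msg string) { fmt.Println(p.name+":", msg) }

type worker struct {
	out *printer // rule 1: worker may message out only via this reference
}

// receive models rule 2: the only way worker obtains the capability is by
// being sent a message that carries the reference.
func (w *worker) receive(p *printer) { w.out = p }

func main() {
	p := &printer{name: "console"}
	w := &worker{}
	w.receive(p)         // the capability arrives inside a message
	w.out.print("hello") // only now may the worker use it
}
```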
@ -64,5 +65,3 @@ app.Router().
AddRoute(slashing.RouterKey, slashing.NewHandler(app.slashingKeeper)).
AddRoute(gov.RouterKey, gov.NewHandler(app.govKeeper))
```

View File

@ -2,14 +2,13 @@
order: 3
-->
# SDK Application Architecture
## State machine
A blockchain application is fundamentally a [replicated deterministic state machine](https://ko.wikipedia.org/wiki/%EC%83%81%ED%83%9C_%EA%B8%B0%EA%B3%84_%EB%B3%B5%EC%A0%9C).
A state machine is a computer science concept in which a machine holds exactly one state at any given time. The 'state machine' concept involves a 'state', which describes the current state of the system, and 'transactions', which trigger state transitions.
Given a state `S` and a transaction `T`, the state machine returns a new state `S'`.
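As a minimal illustration (a toy balance-transfer state, not SDK code), applying a transaction `T` to a state `S` deterministically yields the new state `S'`:

```go
package main

import "fmt"

// State is a toy state: account name -> balance.
type State map[string]int

// Tx is a toy transaction that moves Amount from one account to another.
type Tx struct {
	From, To string
	Amount   int
}

// apply returns the new state S' that results from applying T to S.
func apply(s State, t Tx) State {
	next := State{}
	for k, v := range s {
		next[k] = v
	}
	next[t.From] -= t.Amount
	next[t.To] += t.Amount
	return next
}

func main() {
	s := State{"alice": 10, "bob": 0}
	fmt.Println(apply(s, Tx{From: "alice", To: "bob", Amount: 3})) // map[alice:7 bob:3]
}
```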
@ -40,7 +39,6 @@ order: 3
With the Cosmos SDK, developers only need to define the state machine; [*Tendermint*](https://tendermint.com/docs/introduction/introduction.html) handles replicating it over the network.
```
^ +-------------------------------+ ^
| | | | Built with the Cosmos SDK
@ -63,7 +61,7 @@ order: 3
## ABCI
Tendermint passes transactions to the application through an interface called the ABCI, which the application must implement.
```
+---------------------+
@ -83,18 +81,18 @@ order: 3
+---------------------+
```
**Tendermint only handles transaction bytes; it has no knowledge of what these bytes actually mean.** All Tendermint does is order these transaction bytes deterministically. Tendermint passes the bytes to the application via the ABCI and waits for a return code that tells it whether the transactions contained in the messages were processed successfully or not.
Here are the most important ABCI messages:
- `CheckTx`: When a transaction is received from Tendermint Core, it is passed to the application to check whether it satisfies a few basic requirements. `CheckTx` is used to protect the mempool of full nodes against spam. A special handler called the "Ante Handler" is used to run a series of validation checks, for example verifying that the fees are sufficient and that the signatures are valid. If the checks pass, the transaction is added to the [mempool](https://tendermint.com/docs/spec/reactors/mempool/functionality.html#mempool-functionality) and relayed to peer nodes. Note that transactions are not processed by `CheckTx` (i.e. no state change occurs), since they have not yet been included in a block.
- `DeliverTx`: When Tendermint Core receives a [valid block](https://tendermint.com/docs/spec/blockchain/blockchain.html#validation), each transaction in the block is passed to the application via `DeliverTx`. This is the stage where state changes occur. The `AnteHandler` is executed again to validate each message contained in the transaction.
- `BeginBlock`/`EndBlock`: These messages are executed at the beginning and the end of each block, whether or not the block contains transactions. They are useful for triggering the automatic execution of logic. But be careful: computationally expensive loops can slow the blockchain down, and an infinite loop can halt it entirely.
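Schematically, the split between `CheckTx` and `DeliverTx` looks like the Go sketch below; the real ABCI interface uses different method signatures and request/response types, so this only models the flow.

```go
package main

import "fmt"

type app struct {
	deliverState map[string]int // canonical state, mutated only by DeliverTx
}

// CheckTx performs lightweight validation (the "Ante Handler" role): it
// gates the mempool but never changes the canonical state.
func (a *app) CheckTx(tx []byte) uint32 {
	if len(tx) == 0 {
		return 1 // non-zero return code: reject, never enters the mempool
	}
	return 0
}

// DeliverTx executes a transaction from a committed block; this is where
// state transitions actually happen.
func (a *app) DeliverTx(tx []byte) uint32 {
	a.deliverState[string(tx)]++
	return 0
}

func main() {
	a := &app{deliverState: map[string]int{}}
	fmt.Println(a.CheckTx([]byte("send 1atom")), a.DeliverTx([]byte("send 1atom")))
}
```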
For more details on ABCI methods and types, see the [Tendermint documentation](https://tendermint.com/docs/spec/abci/abci.html#overview).
Any application built on Tendermint must implement the ABCI interface in order to communicate with the underlying Tendermint engine. Fortunately, when you use the Cosmos SDK, you don't have to do this yourself: the Cosmos SDK provides a boilerplate implementation in the form of [baseapp](https://cosmos.network/docs/intro/sdk-design.html#baseapp).
### Next, learn about the [design principles of the SDK](https://cosmos.network/docs/intro/sdk-design.html#baseapp)

View File

@ -61,7 +61,7 @@ order: 2
- Because an application-specific blockchain runs only a single application, that application does not compete with other applications for storage and computing power. This is the opposite of traditional (non-sharded) virtual-machine blockchains, where applications must compete with one another for compute and storage.
- Even if a virtual-machine blockchain offered application-based sharding together with an efficient consensus algorithm, application performance would still be limited by the virtual machine itself. The real constraint on throughput is the state machine, and requiring transactions to be processed by a virtual machine substantially increases the computational complexity of processing them.
### Security
Security is hard to quantify and differs from platform to platform. That said, application-specific blockchains do provide certain security advantages:

View File

@ -6,34 +6,36 @@ Documentation has been translated for **reference use only** and may contain typ
Please refer to the official English version of the documentation for the latest and most accurate information.
## Cosmos SDK Documentation Translation (Korean)
This document tracks the translation progress of the official Cosmos documentation.
The translated documents are provided **for reference only**. They may contain numerous typos and errors, and the translation may lag behind updates to the English source.
For the most accurate information about Cosmos, please refer to the original English documentation.
## Progress by directory
### [`concepts`](../concepts/)
- Synced until commit [14ebc65](https://github.com/cosmos/cosmos-sdk/commit/14ebc65daffd63e1adf17995c103aac9380207ef#diff-f874f370376bf359320af0543de53fcf)
### [`spec`](../spec/)
- Redacted until completion
### [`gaia`](../gaia/)
- Synced until commit [288df6f](https://github.com/cosmos/cosmos-sdk/commit/288df6fe69dcef8fa95aca022039f92ba1e98c11#diff-3302fe357e01f0996ddb0f10adec85f0)
### [`intro`](../intro/)
- Synced until commit [0043912](https://github.com/cosmos/cosmos-sdk/commit/0043912548808b4cfd6ab84ec49ba73bd5f65b5b#diff-e518eaec0d99787e6f75682d54751821)
### [`modules`](../modules/)
- Synced until commit [78a2135](https://github.com/cosmos/cosmos-sdk/commit/78a21353da978d6c2a9b711f29b3874ff9ca14ae#diff-449cc65858e8929d15f4a170950e7758)
### [`clients`](../clients/)
- Synced until commit [857a65d](https://github.com/cosmos/cosmos-sdk/commit/857a65dc610cd736a47980b5d4778e5123206a3d#diff-93dd988c16d20a1bce170b86ad89425a)

View File

@ -12,7 +12,6 @@
The related specification is available [here](https://github.com/cosmos/cosmos-sdk/tree/master/docs/spec/staking).
# Slashing
The `x/slashing` module is used in the Cosmos Delegated-Proof-of-Stake system.

View File

@ -1,9 +1,10 @@
<!--
order: 4
-->
# Keyring Migrate Quick Start
`keyring` is the Cosmos SDK mechanism to manage the public/private keypair. Cosmos SDK v0.42 (Stargate) introduced breaking changes in the keyring.
To upgrade your chain from v0.39 (Launchpad) and earlier to Stargate, you must migrate your keys inside the keyring to the latest version. For details on configuring and using the keyring, see [Setting up the keyring](../run-node/keyring.md).
@ -18,14 +19,15 @@ simd keys migrate <old_home_dir>
The migration process moves key information from the legacy db-based Keybase to the [keyring](https://github.com/99designs/keyring)-based Keyring. The legacy Keybase persists keys in a LevelDB database in a 'keys' sub-directory of the client application home directory (`old_home_dir`). For example, `$HOME/.gaiacli/keys/` for [Gaia](https://github.com/cosmos/gaia).
You can migrate or skip the migration for each key entry found in the specified `old_home_dir` directory. Each key migration requires a valid passphrase. If an invalid passphrase is entered, the command exits. Run the command again to restart the keyring migration.
The `migrate` command takes the following flags:
- `--dry-run` boolean
    - true - run the migration but do not persist changes to the new Keybase.
    - false - run the migration and persist keys to the new Keybase.
    Recommended: Use `--dry-run true` to test the migration without persisting changes before you migrate and persist keys.
- `--keyring-backend` string flag. It allows you to select a backend. For more detailed information about the available backends, you can read [the keyring guide](../run-node/keyring.md).

View File

@ -32,4 +32,4 @@ Cosmos SDK является наиболее полным фреймворком
- Read about the [application architecture](./sdk-app-architecture.md) of an SDK-based application.
- Learn how to build a blockchain for your application from scratch in the [step-by-step tutorial](https://cosmos.network/docs/tutorial).

View File

@ -21,6 +21,7 @@ order: 3
```
In practice, transactions are grouped into blocks to make the process more efficient. Given a state `S` and a block of transactions `B`, the state machine returns a new state `S'`.
```
+--------+ +--------+
| | | |
@ -93,4 +94,4 @@ Tendermint passes transactions to the application over the network
A more detailed overview of ABCI methods and types can be found on [the following page](https://tendermint.com/docs/spec/abci/abci.html#overview).
An application built on Tendermint must implement the ABCI interface in order to communicate with the locally running Tendermint instance. Fortunately, you do not need to implement the interface yourself: the Cosmos SDK already ships with an implementation in the form of [baseapp](./sdk-design.md#baseapp).

View File

@ -20,7 +20,7 @@ The Cosmos SDK is a framework that makes it easier to develop
## `baseapp`
`baseApp` is the boilerplate implementation of ABCI in the Cosmos SDK. It comes with a `router` module for routing transactions to the appropriate module. Your application's `app.go` file will define your `app` type, which will embed `baseapp`. This way, your custom `app` type automatically inherits all of `baseapp`'s ABCI methods. An example of this is in the [SDK application tutorial](https://github.com/cosmos/sdk-application-tutorial/blob/master/app.go#L27).
The goal of `baseapp` is to provide a secure interface between the store and the extensible state machine, while defining as little as possible about that state machine (staying true to ABCI).
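In Go, the embedding pattern described here is just the following (a minimal sketch; a real application also wires up stores, keepers, and routes in its constructor):

```go
package app

import "github.com/cosmos/cosmos-sdk/baseapp"

// App embeds *baseapp.BaseApp and therefore automatically inherits all of
// its ABCI methods (CheckTx, DeliverTx, Commit, ...).
type App struct {
	*baseapp.BaseApp
}
```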
@ -38,7 +38,6 @@ The Cosmos SDK is a framework that makes it easier to develop
Here is a simplified view of how a transaction is processed by the application of each full node when the transaction is received in a valid block:
```
+
|
@ -96,5 +95,4 @@ The Cosmos SDK is a framework that makes it easier to develop
In addition to the modules that already exist in `x/`, which anyone can use in their application, the SDK lets you [build your own modules](https://cosmos.network/docs/tutorial/keeper.html).
### Next, learn more about the Cosmos SDK security model, [ocap](./ocap.md)

View File

@ -86,4 +86,4 @@ order: 2
- Learn more about the [architecture](./sdk-app-architecture.md) of an application built with the SDK.
- Learn how to build a blockchain for a specific application from scratch with the [SDK tutorial](https://cosmos.network/docs/tutorial)

View File

@ -29,7 +29,7 @@ if there was an error.
## Data Folder Layout
`$DAEMON_HOME/cosmovisor` is expected to belong completely to `cosmovisor` and
subprocesses that are controlled by it. The folder content is organised as follows:
```
@ -67,6 +67,7 @@ directory layout:
## Usage
The system administrator is responsible for:
* installing the `cosmovisor` binary and configuring the host's init system (e.g. `systemd`, `launchd`, etc.) along with the appropriate environment variables;
* installing the `genesis` folder manually;
* installing the `upgrades/<name>` folders manually.
@ -96,6 +97,7 @@ valid format to specify a download in such a message:
1. Store an os/architecture -> binary URI map in the upgrade plan info field
as JSON under the `"binaries"` key, eg:
```json
{
"binaries": {
@ -103,12 +105,13 @@ as JSON under the `"binaries"` key, eg:
}
}
```
2. Store a link to a file that contains all information in the above format (e.g. if you want
to specify lots of binaries, changelog info, etc. without filling up the blockchain).
e.g. `https://example.com/testnet-1001-info.json?checksum=sha256:deaaa99fda9407c4dbe1d04bd49bab0cc3c1dd76fa392cd55a9425be074af01e`
The file pointed to by the link will be retrieved by [go-getter](https://github.com/hashicorp/go-getter)
and the `"binaries"` field will be parsed as above.
If there is no local binary, `DAEMON_ALLOW_DOWNLOAD_BINARIES=on`, and we can access a canonical url for the new binary,
@ -121,7 +124,7 @@ or hijacks the DNS. go-getter will always ensure the downloaded file matches the
is provided. go-getter will also handle unpacking archives into directories (so these download links should be
a zip of all data in the `bin` directory).
To properly create a checksum on Linux, you can use the `sha256sum` utility, e.g.
`sha256sum ./testdata/repo/zip_directory/autod.zip`
which should return `29139e1381b8177aec909fab9a75d11381cab5adf7d3af0c05ff1c9c117743a7`.
You can also use `sha512sum` if you like longer hashes, or `md5sum` if you like to use broken hashes.
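For reference, the same SHA-256 checksum can also be computed programmatically; this small Go program mirrors what go-getter effectively verifies after a download (the file path is just the example archive mentioned above).

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

func main() {
	f, err := os.Open("./testdata/repo/zip_directory/autod.zip")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil { // stream the file through the hash
		panic(err)
	}
	fmt.Printf("%x\n", h.Sum(nil)) // hex-encoded SHA-256 checksum
}
```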
@ -175,13 +178,13 @@ Submit a software upgrade proposal:
```
./build/simd tx gov submit-proposal software-upgrade test1 --title "upgrade-demo" --description "upgrade" --from validator --upgrade-height 100 --deposit 10000000stake --chain-id test --keyring-backend test -y
```
Query the proposal to ensure it was correctly broadcast and added to a block:
```
./build/simd query gov proposal 1
```
Submit a `Yes` vote for the upgrade proposal:
```

View File

@ -25,8 +25,8 @@ is a list of the most popular operating systems and their respective passwords m
- macOS (since Mac OS 8.6): [Keychain](https://support.apple.com/en-gb/guide/keychain-access/welcome/mac)
- Windows: [Credentials Management API](https://docs.microsoft.com/en-us/windows/win32/secauthn/credentials-management)
- GNU/Linux:
    - [libsecret](https://gitlab.gnome.org/GNOME/libsecret)
    - [kwallet](https://api.kde.org/frameworks/kwallet/html/index.html)
GNU/Linux distributions that use GNOME as their default desktop environment typically come with
[Seahorse](https://wiki.gnome.org/Apps/Seahorse). Users of KDE-based distributions are
@ -78,7 +78,7 @@ passphrase expiration.
The password store must be set up prior to first use:
```sh
pass init <GPG_KEY_ID>
```
Replace `<GPG_KEY_ID>` with your GPG key ID. You can use your personal GPG key or an alternative

View File

@ -8,14 +8,15 @@ There are two ways in which you can customize and extend the implementation with
### Message extension
In order to make an `sdk.Msg` understandable by rosetta, the only thing required is to add the methods to your message that satisfy the `rosetta.Msg` interface.
Examples of how to do so can be found in the staking types, such as `MsgDelegate`, or in the bank types, such as `MsgSend`.
### Client interface override
In case more customization is required, it's possible to embed the Client type and override the methods which require customizations.
Example:
```go
package custom_client
import (
@ -56,7 +57,7 @@ Note: errors must be registered before cosmos-rosetta-gateway's `Server`.`Start`
## Integration in app.go
To integrate rosetta as a command in your application, simply use the `server.RosettaCommand` method in the root command in app.go.
Example:
@ -78,4 +79,4 @@ func buildAppCommand(rootCmd *cobra.Command) {
A full implementation example can be found in the `simapp` package.
NOTE: when using a customized client, the command cannot be used as-is, since the required constructors **may** differ; you need to create a new one. We intend to provide a way to initialize a customized client without writing extra code in the future.

View File

@ -1,3 +1,3 @@
# Cosmos ICS
- [ICS030 - Signed Messages](./ics-030-signed-messages.md)

View File

@ -19,7 +19,7 @@ Proposed.
## Abstract
Having the ability to sign messages off-chain has proven to be a fundamental aspect
of nearly any blockchain. The notion of signing messages off-chain has many
added benefits such as saving on computational costs and reducing transaction
throughput and overhead. Within the context of the Cosmos, some of the major
applications of signing such data include, but are not limited to, providing a
@ -42,13 +42,13 @@ This specification is only concerned with the rationale and the standardized
implementation of Cosmos signed messages. It does **not** concern itself with the
concept of replay attacks as that will be left up to the higher-level application
implementation. If you view signed messages in the means of authorizing some
action or data, then such an application would have to either treat this as
idempotent or have mechanisms in place to reject known signed messages.
## Preliminary
The Cosmos message signing protocol will be parameterized with a cryptographically
secure hashing algorithm `SHA-256` and a signing algorithm `S` that contains
the operations `sign` and `verify` which provide a digital signature over a set
of bytes and verification of a signature respectively.
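As an illustration of this parameterization, the Go sketch below hashes a set of bytes with SHA-256 and then runs `sign` and `verify` on the digest. Ed25519 stands in as an example scheme `S`; the choice of scheme (and the example chain id) is an assumption of this sketch, not part of the specification.

```go
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"fmt"
)

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil) // nil reader falls back to crypto/rand
	msg := []byte(`{"@chain_id":"example-chain","@type":"message"}`)
	digest := sha256.Sum256(msg)                     // hash the canonical bytes
	sig := ed25519.Sign(priv, digest[:])             // sign
	fmt.Println(ed25519.Verify(pub, digest[:], sig)) // verify -> true
}
```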
@ -85,7 +85,7 @@ in lexicographically ascending order.
For the purposes of signing Cosmos messages, the `@chain_id` field must correspond
to the Cosmos chain identifier. The user-agent should **refuse** signing if the
`@chain_id` field does not match the currently active chain! The `@type` field
must equal the constant `"message"`. The `@type` field corresponds to the type of
structure the user will be signing in an application. For now, a user is only
allowed to sign bytes of valid ASCII text ([see here](https://github.com/tendermint/tendermint/blob/master/libs/common/string.go#L61-L74)).
However, this will change and evolve to support additional application-specific
@ -154,7 +154,6 @@ know exactly what they are signing (opposed to signing a bunch of arbitrary byte
Thus, in the future, the Cosmos signing message specification will be expected
to expand upon its canonical JSON structure to include such functionality.
## API
Application developers and designers should formalize a standard set of APIs that

View File

@ -1,3 +1,3 @@
# Addresses spec
- [Bech32](./bech32.md)

View File

@ -5,13 +5,12 @@ running network which maintains network liveness. This can be achieved through
selectively "pausing" functionality of specific modules on a running network.
The circuit breaker is intended to be enabled through either:
- governance
- for emergencies, a special subset of accounts selected by the state machine
- a transaction which proves the expected behaviour is broken
## Pause state
The basic pause state of any module simply disables all message routes to
that module. Beyond that, it may be appropriate for different modules to
process begin-block/end-block in an altered "safe" way.
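A minimal Go sketch of this pause behaviour (the names are invented; the actual circuit-breaker design is left open here) simply refuses to route messages to a paused module:

```go
package circuit

import "errors"

var errPaused = errors.New("module is paused by the circuit breaker")

// router dispatches messages to module handlers unless the module is paused.
type router struct {
	paused map[string]bool // module name -> circuit broken?
}

// route refuses to dispatch a message to a module in the pause state.
func (r *router) route(module string, handle func() error) error {
	if r.paused[module] {
		return errPaused
	}
	return handle()
}
```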

Some files were not shown because too many files have changed in this diff