style: lint go and markdown (#10060)

## Description

+ fixing `x/bank/migrations/v44.migrateDenomMetadata` - we could potentially put wrong data in a new key if the old keys have variable lengths (see the sketch below).
+ linting the code

Putting these in the same PR because I found the issue while running the linter.
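
To make the first bullet concrete, here is a minimal, hypothetical Go sketch of the failure mode (illustrative names only, not the actual SDK migration code): when the new-key buffer is allocated once outside the loop, a shorter old key leaves stale bytes from the previous, longer key in the buffer.

```go
package main

import "fmt"

// buggyRekey copies each old key behind a new prefix, but reuses one buffer
// sized for the first key. When a later key is shorter, the tail of the
// buffer still holds bytes of the previous key, producing a corrupted new key.
func buggyRekey(prefix []byte, oldKeys [][]byte) []string {
	out := make([]string, 0, len(oldKeys))
	buf := make([]byte, len(prefix)+len(oldKeys[0])) // shared buffer: the bug
	for _, old := range oldKeys {
		copy(buf, prefix)
		copy(buf[len(prefix):], old)
		out = append(out, string(buf)) // stale tail bytes leak in
	}
	return out
}

// fixedRekey allocates a fresh, correctly sized key on every iteration.
func fixedRekey(prefix []byte, oldKeys [][]byte) []string {
	out := make([]string, 0, len(oldKeys))
	for _, old := range oldKeys {
		newKey := make([]byte, 0, len(prefix)+len(old))
		newKey = append(newKey, prefix...)
		newKey = append(newKey, old...)
		out = append(out, string(newKey))
	}
	return out
}

func main() {
	prefix := []byte{0x01}
	keys := [][]byte{[]byte("atomatom"), []byte("uosmo")}
	fmt.Println(buggyRekey(prefix, keys)) // second key ends with leftover "tom"
	fmt.Println(fixedRekey(prefix, keys))
}
```

Allocating the key per iteration (or slicing the buffer to the exact length written) avoids the stale-byte leak.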

Depends on: #10112

---

### Author Checklist

*All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow up issues.*

I have...

- [x] included the correct [type prefix](https://github.com/commitizen/conventional-commit-types/blob/v3.0.0/index.json) in the PR title
- [x] added `!` to the type prefix if API or client breaking change
- [x] targeted the correct branch (see [PR Targeting](https://github.com/cosmos/cosmos-sdk/blob/master/CONTRIBUTING.md#pr-targeting))
- [ ] provided a link to the relevant issue or specification
- [x] followed the guidelines for [building modules](https://github.com/cosmos/cosmos-sdk/blob/master/docs/building-modules)
- [ ] included the necessary unit and integration [tests](https://github.com/cosmos/cosmos-sdk/blob/master/CONTRIBUTING.md#testing)
- [ ] added a changelog entry to `CHANGELOG.md`
- [ ] included comments for [documenting Go code](https://blog.golang.org/godoc)
- [ ] updated the relevant documentation or specification
- [ ] reviewed "Files changed" and left comments if necessary
- [ ] confirmed all CI checks have passed

### Reviewers Checklist

*All items are required. Please add a note if the item is not applicable and please add
your handle next to the items reviewed if you only reviewed selected items.*

I have...

- [ ] confirmed the correct [type prefix](https://github.com/commitizen/conventional-commit-types/blob/v3.0.0/index.json) in the PR title
- [ ] confirmed `!` in the type prefix if API or client breaking change
- [ ] confirmed all author checklist items have been addressed 
- [ ] reviewed state machine logic
- [ ] reviewed API design and naming
- [ ] reviewed documentation is accurate
- [ ] reviewed tests and test coverage
- [ ] manually tested (if applicable)
Robert Zaremba 2021-10-30 14:43:04 +01:00 committed by GitHub
parent bc3cda69f8
commit 479485f95d
59 changed files with 198 additions and 179 deletions

View File

@ -5,9 +5,9 @@ This document is an extension to [CONTRIBUTING](./CONTRIBUTING.md) and provides
## API & Design
+ Code must be well structured:
+ packages must have a limited responsibility (different concerns can go to different packages),
+ types must be easy to compose,
+ think about maintainability and testability.
+ packages must have a limited responsibility (different concerns can go to different packages),
+ types must be easy to compose,
+ think about maintainability and testability.
+ "Depend upon abstractions, [not] concretions".
+ Try to limit the number of methods you are exposing. It's easier to expose something later than to hide it.
+ Take advantage of `internal` package concept.
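
To make the "depend upon abstractions" point concrete, here is a minimal hypothetical Go sketch (names are illustrative, not SDK types): the keeper accepts a small interface rather than a concrete implementation, which keeps packages composable and easy to test.

```go
package keeper

// Gas is a hypothetical unit type for this example.
type Gas uint64

// GasMeter is the narrow abstraction the keeper needs; any concrete meter
// (real, infinite, or a test fake) can satisfy it.
type GasMeter interface {
	ConsumeGas(amount Gas, descriptor string)
	GasConsumed() Gas
}

// Keeper depends on the interface, not on a concrete implementation,
// so tests can inject a fake meter and other packages can swap in theirs.
type Keeper struct {
	meter GasMeter
}

func NewKeeper(m GasMeter) Keeper { return Keeper{meter: m} }

// Charge consumes gas through the abstraction only.
func (k Keeper) Charge(cost Gas, why string) {
	k.meter.ConsumeGas(cost, why)
}
```
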
@ -19,20 +19,22 @@ This document is an extension to [CONTRIBUTING](./CONTRIBUTING.md) and provides
+ Limit third-party dependencies.
Performance:
+ Avoid unnecessary operations or memory allocations.
Security:
+ Pay proper attention to exploits involving:
+ gas usage
+ transaction verification and signatures
+ malleability
+ code must be always deterministic
+ Thread safety. If some functionality is not thread-safe, or uses something that is not thread-safe, then clearly indicate the risk on each level.
+ Pay proper attention to exploits involving:
+ gas usage
+ transaction verification and signatures
+ malleability
+ code must be always deterministic
+ Thread safety. If some functionality is not thread-safe, or uses something that is not thread-safe, then clearly indicate the risk on each level.
## Testing
Make sure your code is well tested:
+ Provide unit tests for every unit of your code if possible. Unit tests are expected to comprise 70%-80% of your tests.
+ Describe the test scenarios you are implementing for integration tests.
+ Create integration tests for queries and msgs.
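
For the unit-test bullets above, a short, hypothetical table-driven example in the style referenced by the `for tcIndex, tc := range cases` context line in the next hunk (the function under test is made up for illustration):

```go
package calc

import "testing"

// Add is the hypothetical unit under test.
func Add(a, b int) int { return a + b }

// TestAdd shows the table-driven style favored by the guidelines.
func TestAdd(t *testing.T) {
	cases := []struct {
		name string
		a, b int
		want int
	}{
		{"zero", 0, 0, 0},
		{"positive", 2, 3, 5},
		{"negative", -2, -3, -5},
	}
	for tcIndex, tc := range cases {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			if got := Add(tc.a, tc.b); got != tc.want {
				t.Errorf("tc #%d: got %d, want %d", tcIndex, got, tc.want)
			}
		})
	}
}
```
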
@ -64,6 +66,7 @@ for tcIndex, tc := range cases {
## Quality Assurance
We are forming a QA team that will support the core Cosmos SDK team and collaborators by:
- Improving the Cosmos SDK QA Processes
- Improving automation in QA and testing
- Defining high-quality metrics
@ -83,5 +86,4 @@ Desired outcomes:
- Releases are more predictable.
- QA reports. The goal is to provide guidance on new tasks and to be one of the QA measures.
As a developer, you must help the QA team by providing instructions for User Experience (UX) and functional testing.

View File

@ -23,11 +23,13 @@ discussion or proposing code changes. To ensure a smooth workflow for all
contributors, the general procedure for contributing has been established:
1. Start by browsing [new issues](https://github.com/cosmos/cosmos-sdk/issues) and [discussions](https://github.com/cosmos/cosmos-sdk/discussions). If you are looking for something interesting or if you have something in mind, there is a chance it has already been discussed.
- Looking for a good place to start contributing? How about checking out some [good first issues](https://github.com/cosmos/cosmos-sdk/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)?
- Looking for a good place to start contributing? How about checking out some [good first issues](https://github.com/cosmos/cosmos-sdk/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)?
2. Determine whether a GitHub issue or discussion is more appropriate for your needs:
1. If you want to propose something new that requires a specification or additional design, or you would like to change a process, start with a [new discussion](https://github.com/cosmos/cosmos-sdk/discussions/new). With discussions, we can better handle the design process using discussion threads. A discussion usually leads to one or more issues.
2. If the issue you want addressed is a specific proposal or a bug, then open a [new issue](https://github.com/cosmos/cosmos-sdk/issues/new/choose).
3. Review existing [issues](https://github.com/cosmos/cosmos-sdk/issues) to find an issue you'd like to help with.
1. If you want to propose something new that requires a specification or additional design, or you would like to change a process, start with a [new discussion](https://github.com/cosmos/cosmos-sdk/discussions/new). With discussions, we can better handle the design process using discussion threads. A discussion usually leads to one or more issues.
2. If the issue you want addressed is a specific proposal or a bug, then open a [new issue](https://github.com/cosmos/cosmos-sdk/issues/new/choose).
3. Review existing [issues](https://github.com/cosmos/cosmos-sdk/issues) to find an issue you'd like to help with.
3. Participate in thoughtful discussion on that issue.
4. If you would like to contribute:
1. Ensure that the proposal has been accepted.
@ -38,7 +40,7 @@ contributors, the general procedure for contributing has been established:
to begin work.
5. To submit your work as a contribution to the repository follow standard GitHub best practices. See [pull request guideline](#pull-requests) below.
**Note: ** For very small or blatantly obvious problems such as typos, you are
**Note:** For very small or blatantly obvious problems such as typos, you are
not required to open an issue to submit a PR, but be aware that for more complex
problems/features, if a PR is opened before an adequate design discussion has
taken place in a GitHub issue, that PR runs a high likelihood of being rejected.
@ -52,6 +54,7 @@ The developers are organized in working groups which are listed on a ["Working G
The important development announcements are shared on [Discord](https://discord.com/invite/cosmosnetwork) in the \#dev-announcements channel.
To synchronize, we have a few major meetings:
+ Architecture calls: bi-weekly on Fridays at 14:00 UTC (alternating with the grooming meeting below).
+ Grooming / Planning: bi-weekly on Fridays at 14:00 UTC (alternating with the architecture meeting above).
+ Cosmos Community SDK Development Call on the last Wednesday of every month at 17:00 UTC.
@ -59,7 +62,6 @@ To synchronize we have few major meetings:
If you would like to join one of those calls, then please contact us on [Discord](https://discord.com/invite/cosmosnetwork) or reach out directly to Cory Levinson from Regen Network (cory@regen.network).
## Architecture Decision Records (ADR)
When proposing an architecture decision for the Cosmos SDK, please start by opening an [issue](https://github.com/cosmos/cosmos-sdk/issues/new/choose) or a [discussion](https://github.com/cosmos/cosmos-sdk/discussions/new) with a summary of the proposal. Once the proposal has been discussed and there is rough alignment on a high-level approach to the design, the [ADR creation process](https://github.com/cosmos/cosmos-sdk/blob/master/docs/architecture/PROCESS.md) can begin. We are following this process to ensure all involved parties are in agreement before any party begins coding the proposed implementation. If you would like to see examples of how these are written, please refer to the current [ADRs](https://github.com/cosmos/cosmos-sdk/tree/master/docs/architecture).
@ -70,11 +72,11 @@ When proposing an architecture decision for the Cosmos SDK, please start by open
- `master` must never fail `make lint test test-race`.
- No `--force` onto `master` (except when reverting a broken commit, which should seldom happen).
- Create a branch to start work:
- Fork the repo (core developers must create a branch directly in the Cosmos SDK repo),
- Fork the repo (core developers must create a branch directly in the Cosmos SDK repo),
branch from the HEAD of `master`, make some commits, and submit a PR to `master`.
- For core developers working within the `cosmos-sdk` repo, follow branch name conventions to ensure a clear
- For core developers working within the `cosmos-sdk` repo, follow branch name conventions to ensure a clear
ownership of branches: `{moniker}/{issue#}-branch-name`.
- See [Branching Model](#branching-model-and-release) for more details.
- See [Branching Model](#branching-model-and-release) for more details.
- Be sure to run `make format` before every commit. The easiest way
to do this is have your editor run it for you upon saving a file (most of the editors
will do it anyway using a pre-configured setup of the programming language mode).
@ -92,10 +94,12 @@ Tests can be executed by running `make test` at the top level of the Cosmos SDK
### Pull Requests
Before submitting a pull request:
- merge the latest master `git merge origin/master`,
- run `make lint test` to ensure that all checks and tests pass.
Then:
1. If you have something to show, **start with a `Draft` PR**. It's good to have early validation of your work and we highly recommend this practice. A Draft PR also indicates to the community that the work is in progress.
Draft PRs also help the core team provide early feedback and ensure the work is headed in the right direction.
2. When the code is complete, change your PR from `Draft` to `Ready for Review`.

View File

@ -7,16 +7,17 @@ This document outlines the process for releasing a new version of Cosmos SDK, wh
A _major release_ is an increment of the first number (eg: `v1.2` → `v2.0.0`) or the _point number_ (eg: `v1.1` → `v1.2.0`, also called _point release_). Each major release opens a _stable release series_ and receives updates outlined in the [Major Release Maintenance](#major-release-maintenance) section.
Before making a new _major_ release we do beta and release candidate releases. For example, for release 1.0.0:
```
v1.0.0-beta1 → v1.0.0-beta2 → ... → v1.0.0-rc1 → v1.0.0-rc2 → ... → v1.0.0
```
- Release a first beta version on the `master` branch and freeze `master` from receiving any new features. After beta is released, we focus on releasing the release candidate:
- finish audits and reviews
- kick off a large round of simulation testing (e.g. 400 seeds for 2k blocks)
- perform functional tests
- add more tests
- release new beta version as the bugs are discovered and fixed.
- finish audits and reviews
- kick off a large round of simulation testing (e.g. 400 seeds for 2k blocks)
- perform functional tests
- add more tests
- release new beta version as the bugs are discovered and fixed.
- After the team feels that `master` works fine, we create a `release/vY` branch (going forward known as a release branch), where `Y` is the version number with the patch part substituted by `x` (eg: 0.42.x, 1.0.x). Ensure the release branch is protected so that pushes against the release branch are permitted only by the release manager or release coordinator.
- **PRs targeting this branch can be merged _only_ when exceptional circumstances arise**
- update the GitHub mergify integration by adding instructions for automatically backporting commits from `master` to the `release/vY` using the `backport/Y` label.
@ -24,16 +25,17 @@ v1.0.0-beta1 → v1.0.0-beta2 → ... → v1.0.0-rc1 → v1.0.0-rc2 → ... →
- All links must be link-ified: `$ python ./scripts/linkify_changelog.py CHANGELOG.md`
- Copy the entries into a `RELEASE_CHANGELOG.md`, this is needed so the bot knows which entries to add to the release page on GitHub.
- Create a new annotated git tag for a release candidate (eg: `git tag -a v1.1.0-rc1`) in the release branch.
- from this point we unfreeze master.
- the SDK teams collaborate and do their best to run testnets in order to validate the release.
- when bugs are found, create a PR for `master`, and backport fixes to the release branch.
- create new release candidate tags after bugs are fixed.
- from this point we unfreeze master.
- the SDK teams collaborate and do their best to run testnets in order to validate the release.
- when bugs are found, create a PR for `master`, and backport fixes to the release branch.
- create new release candidate tags after bugs are fixed.
- After the team feels the release branch is stable and everything works, create a full release:
- update `CHANGELOG.md`.
- create a new annotated git tag (eg `git tag -a v1.1.0`) in the release branch.
- Create a GitHub release.
- update `CHANGELOG.md`.
- create a new annotated git tag (eg `git tag -a v1.1.0`) in the release branch.
- Create a GitHub release.
Following _semver_ philosophy, point releases after `v1.0`:
- must not break API
- can break consensus
@ -54,11 +56,11 @@ Lastly, it is core team's responsibility to ensure that the PR meets all the SRU
Point Release must follow the [Stable Release Policy](#stable-release-policy).
After the release branch has all commits required for the next patch release:
- update `CHANGELOG.md`.
- create a new annotated git tag (eg `git tag -a v1.1.0`) in the release branch.
- Create a GitHub release.
## Major Release Maintenance
Major Release series continue to receive bug fixes (released as a Patch Release) until they reach **End Of Life**.
@ -70,7 +72,6 @@ Only the following major release series have a stable release status:
* **0.42 «Stargate»** will be supported until 6 months after **0.43.0** is published. A fairly strict **bugfix-only** rule applies to pull requests that are requested to be included into a stable point-release.
* **0.44** is the latest major release.
## Stable Release Policy
### Patch Releases

View File

@ -20,6 +20,7 @@ Contains the required files to set up rosetta cli and make it work against its w
Contains the files for a deterministic network, with fixed keys and some actions on there, to test parsing of msgs and historical balances. This image is used to run a simapp node and to run the rosetta server.
## Rosetta-cli
The docker image for ./rosetta-cli/Dockerfile is on [docker hub](https://hub.docker.com/r/tendermintdev/rosetta-cli). Whenever rosetta-cli releases a new version, rosetta-cli/Dockerfile should be updated to reflect the new version and pushed to docker hub.
## Notes

View File

@ -5,6 +5,7 @@
#### Design
Cosmovisor is designed to be used as a wrapper for a `Cosmos SDK` app:
* it will pass arguments to the associated app (configured by `DAEMON_NAME` env variable).
Running `cosmovisor run arg1 arg2 ....` will run `app arg1 arg2 ...`;
* it will manage an app by restarting and upgrading if needed;
@ -47,6 +48,7 @@ git checkout cosmovisor/vx.x.x
cd cosmovisor
make
```
This will build cosmovisor in your current directory. Afterwards, you may want to put it into your machine's PATH as follows:
```
@ -58,6 +60,7 @@ cp cosmovisor ~/go/bin/cosmovisor
### Command Line Arguments And Environment Variables
The first argument passed to `cosmovisor` is the action for `cosmovisor` to take. Options are:
* `help`, `--help`, or `-h` - Output `cosmovisor` help information and check your `cosmovisor` configuration.
* `run` - Run the configured binary using the rest of the provided arguments.
* `version`, or `--version` - Output the `cosmovisor` version and also run the binary with the `version` argument.
@ -124,12 +127,14 @@ The `DAEMON` specific code and operations (e.g. tendermint config, the applicati
`cosmovisor` is polling the `$DAEMON_HOME/data/upgrade-info.json` file for new upgrade instructions. The file is created by the x/upgrade module in `BeginBlocker` when an upgrade is detected and the blockchain reaches the upgrade height.
The following heuristic is applied to detect the upgrade:
+ When starting, `cosmovisor` doesn't know much about the currently running upgrade, except for the binary in `current/bin/`. It tries to read the `current/upgrade-info.json` file to get information about the current upgrade name.
+ If neither `cosmovisor/current/upgrade-info.json` nor `data/upgrade-info.json` exist, then `cosmovisor` will wait for `data/upgrade-info.json` file to trigger an upgrade.
+ If `cosmovisor/current/upgrade-info.json` doesn't exist but `data/upgrade-info.json` exists, then `cosmovisor` assumes that whatever is in `data/upgrade-info.json` is a valid upgrade request. In this case `cosmovisor` tries immediately to make an upgrade according to the `name` attribute in `data/upgrade-info.json`.
+ Otherwise, `cosmovisor` waits for changes in `upgrade-info.json`. As soon as a new upgrade name is recorded in the file, `cosmovisor` will trigger an upgrade mechanism.
When the upgrade mechanism is triggered, `cosmovisor` will:
1. if `DAEMON_ALLOW_DOWNLOAD_BINARIES` is enabled, start by auto-downloading a new binary into `cosmovisor/<name>/bin` (where `<name>` is the `upgrade-info.json:name` attribute);
2. update the `current` symbolic link to point to the new directory and save `data/upgrade-info.json` to `cosmovisor/current/upgrade-info.json`.
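
A rough Go sketch of the polling heuristic described above (the file layout follows the docs, but the struct and helper names are assumptions, not the real cosmovisor implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

type upgradeInfo struct {
	Name   string `json:"name"`
	Height int64  `json:"height"`
}

// waitForUpgrade polls data/upgrade-info.json until it names an upgrade that
// differs from the one currently linked under cosmovisor/current.
func waitForUpgrade(home, currentName string, interval time.Duration) (upgradeInfo, error) {
	path := filepath.Join(home, "data", "upgrade-info.json")
	for {
		raw, err := os.ReadFile(path)
		switch {
		case err == nil:
			var ui upgradeInfo
			if err := json.Unmarshal(raw, &ui); err != nil {
				return upgradeInfo{}, err
			}
			if ui.Name != "" && ui.Name != currentName {
				return ui, nil // new upgrade detected: switch the current symlink, restart the app
			}
		case !os.IsNotExist(err):
			return upgradeInfo{}, err // unexpected read error
		}
		time.Sleep(interval)
	}
}

func main() {
	ui, err := waitForUpgrade(os.Getenv("DAEMON_HOME"), "v1", time.Second)
	fmt.Println(ui, err)
}
```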

View File

@ -496,7 +496,7 @@ func (ks keystore) List() ([]*Record, error) {
return nil, err
}
var res []*Record
var res []*Record //nolint:prealloc
sort.Strings(keys)
for _, key := range keys {
if strings.Contains(key, addressSuffix) {

View File

@ -64,7 +64,7 @@ func tmToProto(tmPk tmMultisig) (*LegacyAminoPubKey, error) {
}
// MarshalAminoJSON overrides amino JSON unmarshaling.
func (m LegacyAminoPubKey) MarshalAminoJSON() (tmMultisig, error) { //nolint:golint
func (m LegacyAminoPubKey) MarshalAminoJSON() (tmMultisig, error) { //nolint:revive
return protoToTm(&m)
}

View File

@ -1,3 +1,4 @@
//go:build !libsecp256k1
// +build !libsecp256k1
package secp256k1

View File

@ -1,4 +1,6 @@
//go:build !cgo || !ledger
// +build !cgo !ledger
// test_ledger_mock
package ledger

View File

@ -9,23 +9,25 @@ The database interface types consist of objects to encapsulate the singular conn
### `DBConnection`
This interface represents a connection to a versioned key-value database. All versioning operations are performed using methods on this type.
* The `Versions` method returns a `VersionSet` which represents an immutable view of the version history at the current state.
* Version history is modified via the `{Save,Delete}Version` methods.
* Operations on version history do not modify any database contents.
* The `Versions` method returns a `VersionSet` which represents an immutable view of the version history at the current state.
* Version history is modified via the `{Save,Delete}Version` methods.
* Operations on version history do not modify any database contents.
### `DBReader`, `DBWriter`, and `DBReadWriter`
These types represent transactions on the database contents. Their methods provide CRUD operations as well as iteration.
* Calling `Commit` on a writeable transaction flushes its operations to the source DB.
* All open transactions must be closed with `Discard` or `Commit` before a new version can be saved on the source DB.
* The maximum number of safely concurrent transactions is dependent on the backend implementation.
* A single transaction object is not safe for concurrent use.
* Write conflicts on concurrent transactions will cause an error at commit time (optimistic concurrency control).
* Calling `Commit` on a writeable transaction flushes its operations to the source DB.
* All open transactions must be closed with `Discard` or `Commit` before a new version can be saved on the source DB.
* The maximum number of safely concurrent transactions is dependent on the backend implementation.
* A single transaction object is not safe for concurrent use.
* Write conflicts on concurrent transactions will cause an error at commit time (optimistic concurrency control).
#### `Iterator`
* An iterator is invalidated by any writes within its `Domain` to the source transaction while it is open.
* An iterator must call `Close` before its source transaction is closed.
* An iterator is invalidated by any writes within its `Domain` to the source transaction while it is open.
* An iterator must call `Close` before its source transaction is closed.
### `VersionSet`
@ -36,7 +38,8 @@ This represents a self-contained and immutable view of a database's version hist
### In-memory DB
The in-memory DB in the `db/memdb` package cannot be persisted to disk. It is implemented using the Google [btree](https://pkg.go.dev/github.com/google/btree) library.
* This currently does not perform write conflict detection, so it only supports a single open write-transaction at a time. Multiple and concurrent read-transactions are supported.
* This currently does not perform write conflict detection, so it only supports a single open write-transaction at a time. Multiple and concurrent read-transactions are supported.
### BadgerDB
@ -54,6 +57,7 @@ err := tx2.Commit() // err is non-nil
```
But this will not:
```go
tx1, tx2 := db.Writer(), db.ReadWriter()
key := []byte("key")
@ -62,6 +66,7 @@ tx2.Set(key, []byte("b"))
tx1.Commit() // ok
tx2.Commit() // ok
```
### RocksDB
A [RocksDB](https://github.com/facebook/rocksdb)-based backend. Internally this uses [`OptimisticTransactionDB`](https://github.com/facebook/rocksdb/wiki/Transactions#optimistictransactiondb) to allow concurrent transactions with write conflict detection. Historical versioning is internally implemented with [Checkpoints](https://github.com/facebook/rocksdb/wiki/Checkpoints).

View File

@ -44,6 +44,4 @@ footer:
aside: false
-->
# 404 - Lost in space, this is just an empty void...
# 404 - Lost in space, this is just an empty void

View File

@ -6,14 +6,14 @@ If you want to open a PR in Cosmos SDK to update the documentation, please follo
- Translations for documentation live in a `docs/<locale>/` folder, where `<locale>` is the language code for a specific language. For example, `zh` for Chinese, `ko` for Korean, `ru` for Russian, etc.
- Each `docs/<locale>/` folder must follow the same folder structure within `docs/`, but only content in the following folders needs to be translated and included in the respective `docs/<locale>/` folder:
- `docs/basics/`
- `docs/building-modules/`
- `docs/core/`
- `docs/ibc/`
- `docs/intro/`
- `docs/migrations/`
- `docs/run-node/`
- Each `docs/<locale>/` folder must also have a `README.md` that includes a translated version of both the layout and content within the root-level [`README.md`](https://github.com/cosmos/cosmos-sdk/tree/master/docs/README.md). The layout defined in the `README.md` is used to build the homepage.
- `docs/basics/`
- `docs/building-modules/`
- `docs/core/`
- `docs/ibc/`
- `docs/intro/`
- `docs/migrations/`
- `docs/run-node/`
- Each `docs/<locale>/` folder must also have a `README.md` that includes a translated version of both the layout and content within the root-level [`README.md`](https://github.com/cosmos/cosmos-sdk/tree/master/docs/README.md). The layout defined in the `README.md` is used to build the homepage.
- Always translate content living on `master` unless you are revising documentation for a specific release. Translated documentation like the root-level documentation is semantically versioned.
- For additional configuration options, please see [VuePress Internationalization](https://vuepress.vuejs.org/guide/i18n.html).

View File

@ -278,10 +278,6 @@ Cons:
1. Decorator pattern may have a deeply nested structure that is hard to understand, this is mitigated by having the decorator order explicitly listed in the `ChainAnteDecorators` function.
2. Does not make use of the ModuleManager design. Since this is already being used for BeginBlocker/EndBlocker, this proposal seems unaligned with that design pattern.
## Status
> Accepted Simple Decorators approach
## Consequences
Since pros and cons are written for each approach, it is omitted from this section

View File

@ -192,10 +192,6 @@ func (app *BaseApp) AddRunTxRecoveryHandler(handlers ...RecoveryHandler) {
This method would prepend handlers to an existing chain.
## Status
Proposed
## Consequences
### Positive

View File

@ -211,23 +211,23 @@ and relays ABCI requests and responses so that the service can group the state c
```go
// ABCIListener interface used to hook into the ABCI message processing of the BaseApp
type ABCIListener interface {
// ListenBeginBlock updates the streaming service with the latest BeginBlock messages
ListenBeginBlock(ctx types.Context, req abci.RequestBeginBlock, res abci.ResponseBeginBlock) error
// ListenEndBlock updates the streaming service with the latest EndBlock messages
ListenEndBlock(ctx types.Context, req abci.RequestEndBlock, res abci.ResponseEndBlock) error
// ListenDeliverTx updates the streaming service with the latest DeliverTx messages
// ListenBeginBlock updates the streaming service with the latest BeginBlock messages
ListenBeginBlock(ctx types.Context, req abci.RequestBeginBlock, res abci.ResponseBeginBlock) error
// ListenEndBlock updates the streaming service with the latest EndBlock messages
ListenEndBlock(ctx types.Context, req abci.RequestEndBlock, res abci.ResponseEndBlock) error
// ListenDeliverTx updates the streaming service with the latest DeliverTx messages
ListenDeliverTx(ctx types.Context, req abci.RequestDeliverTx, res abci.ResponseDeliverTx) error
}
// StreamingService interface for registering WriteListeners with the BaseApp and updating the service with the ABCI messages using the hooks
type StreamingService interface {
// Stream is the streaming service loop, awaits kv pairs and writes them to some destination stream or file
// Stream is the streaming service loop, awaits kv pairs and writes them to some destination stream or file
Stream(wg *sync.WaitGroup) error
// Listeners returns the streaming service's listeners for the BaseApp to register
Listeners() map[types.StoreKey][]store.WriteListener
// ABCIListener interface for hooking into the ABCI messages from inside the BaseApp
ABCIListener
// Closer interface
// Listeners returns the streaming service's listeners for the BaseApp to register
Listeners() map[types.StoreKey][]store.WriteListener
// ABCIListener interface for hooking into the ABCI messages from inside the BaseApp
ABCIListener
// Closer interface
io.Closer
}
```
@ -598,10 +598,10 @@ func NewSimApp(
// configure state listening capabilities using AppOptions
listeners := cast.ToStringSlice(appOpts.Get("store.streamers"))
for _, listenerName := range listeners {
// get the store keys allowed to be exposed for this streaming service
// get the store keys allowed to be exposed for this streaming service
exposeKeyStrs := cast.ToStringSlice(appOpts.Get(fmt.Sprintf("streamers.%s.keys", streamerName)))
var exposeStoreKeys []sdk.StoreKey
if exposeAll(exposeKeyStrs) { // if list contains `*`, expose all StoreKeys
if exposeAll(exposeKeyStrs) { // if list contains `*`, expose all StoreKeys
exposeStoreKeys = make([]sdk.StoreKey, 0, len(keys))
for _, storeKey := range keys {
exposeStoreKeys = append(exposeStoreKeys, storeKey)
@ -614,7 +614,7 @@ func NewSimApp(
}
}
}
if len(exposeStoreKeys) == 0 { // short circuit if we are not exposing anything
if len(exposeStoreKeys) == 0 { // short circuit if we are not exposing anything
continue
}
// get the constructor for this listener name

View File

@ -119,7 +119,6 @@ We need to be able to process transactions and roll-back state updates if a tran
We identified use cases where modules will need to save an object commitment without storing the object itself. Sometimes clients receive complex objects, and they have no way to prove the correctness of such an object without knowing the storage layout. For those use cases it would be easier to commit to the object without storing it directly.
### Refactor MultiStore
The Stargate `/store` implementation (store/v1) adds an additional layer in the SDK store construction - the `MultiStore` structure. The multistore exists to support the modularity of the Cosmos SDK - each module uses its own instance of IAVL, but in the current implementation, all instances share the same database. The latter indicates, however, that the implementation doesn't provide true modularity. Instead, it causes problems related to race conditions and atomic DB commits (see: [\#6370](https://github.com/cosmos/cosmos-sdk/issues/6370) and [discussion](https://github.com/cosmos/cosmos-sdk/discussions/8297#discussioncomment-757043)).
@ -192,6 +191,7 @@ The presented workaround can be used until the IBC module is fully upgraded to s
### Optimization: compress module key prefixes
We are considering compressing prefix keys by creating a mapping from module key to an integer and serializing the integer using varint coding. Varint coding assures that different values don't share a common byte prefix. For Merkle Proofs we can't use prefix compression - so it should only apply to the `SS` keys. Moreover, the prefix compression should only be applied to the module namespace. More precisely:
+ each module has its own namespace;
+ when accessing a module namespace we create a KVStore with embedded prefix;
+ that prefix will be compressed only when accessing and managing `SS`.
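
A small Go sketch of the proposed compression (the module-to-integer mapping and the key layout are assumptions for illustration, not the final ADR-040 design):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// moduleIDs is a hypothetical mapping from module key to a small integer.
var moduleIDs = map[string]uint64{"bank": 1, "staking": 2, "gov": 25}

// ssKey builds a state-storage (SS) key: varint(moduleID) || key.
// Varint encoding guarantees that encodings of different IDs are never
// prefixes of one another, so module namespaces cannot collide.
func ssKey(module string, key []byte) []byte {
	prefix := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(prefix, moduleIDs[module])
	return append(prefix[:n], key...)
}

func main() {
	fmt.Printf("%x\n", ssKey("bank", []byte("balances/addr1")))
	fmt.Printf("%x\n", ssKey("gov", []byte("proposals/7")))
}
```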

View File

@ -88,9 +88,11 @@ message NFT {
- `class_id` is the identifier of the NFT class where the NFT belongs; _required_
- `id` is an alphanumeric identifier of the NFT, unique within the scope of its class. It is specified by the creator of the NFT and may be expanded to use DID in the future. `class_id` combined with `id` uniquely identifies an NFT and is used as the primary index for storing the NFT; _required_
```
{class_id}/{id} --> NFT (bytes)
```
- `uri` is a URL pointing to an off-chain JSON file that contains metadata about this NFT (Ref: [ERC721 standard and OpenSea extension](https://docs.opensea.io/docs/metadata-standards)); _required_
- `uri_hash` is a hash of the `uri`;
- `data` is a field that CAN be used by composing modules to specify additional properties for the NFT; _optional_
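
A tiny hypothetical Go helper showing how the `{class_id}/{id}` primary index shown above could be assembled (illustrative only; the x/nft module's actual key encoding may differ):

```go
package main

import "fmt"

// nftPrimaryKey joins class_id and id into the primary store index
// described above: "{class_id}/{id}".
func nftPrimaryKey(classID, id string) []byte {
	return []byte(classID + "/" + id)
}

func main() {
	key := nftPrimaryKey("crypto-kitties", "kitty-7")
	fmt.Printf("%s\n", key) // crypto-kitties/kitty-7
}
```
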
@ -291,7 +293,6 @@ message QueryClassesResponse {
}
```
### Interoperability
Interoperability is all about reusing assets between modules and chains. The former one is achieved by ADR-33: Protobuf client - server communication. At the time of writing ADR-33 is not finalized. The latter is achieved by IBC. Here we will focus on the IBC side.
@ -299,7 +300,6 @@ IBC is implemented per module. Here, we aligned that NFTs will be recorded and m
For IBC interoperability, NFT custom modules MUST use the NFT object type understood by the IBC client. So, for x/nft interoperability, custom NFT implementations (example: x/cryptokitty) should use the canonical x/nft module and proxy all NFT balance keeping functionality to x/nft or else re-implement all functionality using the NFT object type understood by the IBC client. In other words: x/nft becomes the standard NFT registry for all Cosmos NFTs (example: x/cryptokitty will register a kitty NFT in x/nft and use x/nft for book keeping). This was [discussed](https://github.com/cosmos/cosmos-sdk/discussions/9065#discussioncomment-873206) in the context of using x/bank as a general asset balance book. Not using x/nft will require implementing another module for IBC.
## Consequences
### Backward Compatibility

View File

@ -37,7 +37,7 @@ On top of Buf's recommendations we add the following guidelines that are specifi
### Updating Protobuf Definition Without Bumping Version
#### 1. `Msg`s MUST NOT have new fields.
#### 1. `Msg`s MUST NOT have new fields
When processing `Msg`s, the Cosmos SDK's antehandlers are strict and don't allow unknown fields in `Msg`s. This is checked by the unknown field rejection in the [`codec/unknownproto` package](https://github.com/cosmos/cosmos-sdk/blob/master/codec/unknownproto).
@ -47,11 +47,11 @@ For this reason, module developers MUST NOT add new fields to existing `Msg`s.
It is worth mentioning that this does not limit adding fields to a `Msg`, but also to all nested structs and `Any`s inside a `Msg`.
#### 2. Non-`Msg`-related Protobuf definitions MAY have new fields.
#### 2. Non-`Msg`-related Protobuf definitions MAY have new fields
On the other hand, module developers MAY add new fields to Protobuf definitions related to the `Query` service or to objects which are saved in the store. This recommendation follows the Protobuf specification, but is added in this document for clarity.
#### 3. Fields MAY be marked as `deprecated`, and nodes MAY implement a protocol-breaking change for handling these fields.
#### 3. Fields MAY be marked as `deprecated`, and nodes MAY implement a protocol-breaking change for handling these fields
Protobuf supports the [`deprecated` field option](https://developers.google.com/protocol-buffers/docs/proto#options), and this option MAY be used on any field, including `Msg` fields. If a node handles a Protobuf message with a non-empty deprecated field, the node MAY change its behavior upon processing it, even in a protocol-breaking way. When possible, the node MUST handle backwards compatibility without breaking the consensus (unless we increment the proto version).
@ -60,7 +60,7 @@ As an example, the Cosmos SDK v0.42 to v0.43 update contained two Protobuf-break
- The Cosmos SDK recently removed support for [time-based software upgrades](https://github.com/cosmos/cosmos-sdk/pull/8849). As such, the `time` field has been marked as deprecated in `cosmos.upgrade.v1beta1.Plan`. Moreover, the node will reject any proposal containing an upgrade Plan whose `time` field is non-empty.
- The Cosmos SDK now supports [governance split votes](./adr-037-gov-split-vote.md). When querying for votes, the returned `cosmos.gov.v1beta1.Vote` message has its `option` field (used for 1 vote option) deprecated in favor of its `options` field (allowing multiple vote options). Whenever possible, the SDK still populates the deprecated `option` field, that is, if and only if the `len(options) == 1` and `options[0].Weight == 1.0`.
#### 4. Fields MUST NOT be renamed.
#### 4. Fields MUST NOT be renamed
Whereas the official Protobuf recommendations do not prohibit renaming fields, as it does not break the Protobuf binary representation, the SDK explicitly forbids renaming fields in Protobuf structs. The main reason for this choice is to avoid introducing breaking changes for clients, which often rely on hard-coded fields from generated types. Moreover, renaming fields will lead to client-breaking JSON representations of Protobuf definitions, used in REST endpoints and in the CLI.
@ -70,7 +70,7 @@ TODO, needs architecture review. Some topics:
- Bumping versions frequency
- When bumping versions, should the Cosmos SDK support both versions?
- i.e. v1beta1 -> v1, should we have two folders in the Cosmos SDK, and handlers for both versions?
- i.e. v1beta1 -> v1, should we have two folders in the Cosmos SDK, and handlers for both versions?
- mention ADR-023 Protobuf naming
## Consequences

View File

@ -43,9 +43,9 @@ We will build off of the alignment of `x/gov` and `x/authz` work per
[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810). Namely, module developers
will create one or more unique parameter data structures that must be serialized
to state. The Param data structures must implement `sdk.Msg` interface with respective
Protobuf Msg service method which will validate and update the parameters with all
necessary changes. The `x/gov` module via the work done in
[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810), will dispatch Param
Protobuf Msg service method which will validate and update the parameters with all
necessary changes. The `x/gov` module via the work done in
[#9810](https://github.com/cosmos/cosmos-sdk/pull/9810), will dispatch Param
messages, which will be handled by Protobuf Msg services.
Note, it is up to developers to decide how to structure their parameters and
@ -134,11 +134,11 @@ message QueryParamsResponse {
As a result of implementing the module parameter methodology, we gain the ability
for module parameter changes to be stateful and extensible to fit nearly every
application's use case. We will be able to emit events (and trigger hooks registered
application's use case. We will be able to emit events (and trigger hooks registered
to those events using the work proposed in [event hooks](https://github.com/cosmos/cosmos-sdk/discussions/9656)),
call other Msg service methods or perform migration.
In addition, there will be significant gains in performance when it comes to reading
and writing parameters from and to state, especially if a specific set of parameters
In addition, there will be significant gains in performance when it comes to reading
and writing parameters from and to state, especially if a specific set of parameters
are read on a consistent basis.
However, this methodology will require developers to implement more types and

View File

@ -10,7 +10,6 @@ Before the application binary is upgraded, Cosmovisor calls a `pre-upgrade` comm
The `pre-upgrade` command does not take in any command-line arguments and is expected to terminate with the following exit codes:
| Exit status code | How it is handled in Cosmovisor |
|------------------|---------------------------------------------------------------------------------------------------------------------|
| `0` | Assumes `pre-upgrade` command executed successfully and continues the upgrade. |
@ -18,7 +17,6 @@ The `pre-upgrade` command does not take in any command-line arguments and is exp
| `30` | `pre-upgrade` command was executed but failed. This fails the entire upgrade. |
| `31` | `pre-upgrade` command was executed but failed. But the command is retried until exit code `1` or `30` are returned. |
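
A hypothetical Go sketch of how a supervisor could branch on these exit codes (this is not the Cosmovisor source; the rows elided by the diff hunk above are treated generically here):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runPreUpgrade mirrors the table above: 0 continues the upgrade,
// 30 fails it, and 31 is retried until a terminal code is returned.
func runPreUpgrade(appBin string) error {
	for {
		err := exec.Command(appBin, "pre-upgrade").Run()
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode()
		} else if err != nil {
			return err // binary missing, not executable, etc.
		}
		switch code {
		case 0:
			return nil // success: continue the upgrade
		case 30:
			return fmt.Errorf("pre-upgrade failed (exit code 30): aborting the upgrade")
		case 31:
			fmt.Println("pre-upgrade failed (exit code 31), retrying...")
			continue
		default:
			// codes elided by the diff hunk above (e.g. 1) have their own handling
			// in the full table; treat them as terminal in this simplified sketch
			return fmt.Errorf("pre-upgrade exited with code %d", code)
		}
	}
}

func main() {
	if err := runPreUpgrade("simd"); err != nil {
		fmt.Println(err)
	}
}
```
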
## Sample
Here is a sample structure of the `pre-upgrade` command:
@ -46,8 +44,8 @@ func preUpgradeCommand() *cobra.Command {
}
```
Ensure that the pre-upgrade command has been registered in the application:
```go
rootCmd.AddCommand(
// ..

View File

@ -82,4 +82,4 @@ Following the Protocol Buffers migration in v0.40, Cosmos SDK has been set to ta
## Migrating to gRPC
Instead of hitting REST endpoints as described above, the Cosmos SDK also exposes a gRPC server. Any client can use gRPC instead of REST to interact with the node. An overview of different ways to communicate with a node can be found [here](../core/grpc_rest.md), and a concrete tutorial for setting up a gRPC client can be found [here](../run-node/txs.md#programmatically-with-go).
Instead of hitting REST endpoints as described above, the Cosmos SDK also exposes a gRPC server. Any client can use gRPC instead of REST to interact with the node. An overview of different ways to communicate with a node can be found [here](../core/grpc_rest.md), and a concrete tutorial for setting up a gRPC client can be found [here](../run-node/txs.md#programmatically-with-go).

View File

@ -1,3 +1,3 @@
# Cosmos SDK Documentation (Russian)
A Russian translation of the Cosmos SDK documentation is not available for this version. If you would like to help with translating, please see [Internationalization](https://github.com/cosmos/cosmos-sdk/blob/master/docs/DOCS_README.md#internationalization). A `v0.39` version of the documentation can be found [here](https://github.com/cosmos/cosmos-sdk/tree/v0.39.3/docs/ru).
A Russian translation of the Cosmos SDK documentation is not available for this version. If you would like to help with translating, please see [Internationalization](https://github.com/cosmos/cosmos-sdk/blob/master/docs/DOCS_README.md#internationalization). A `v0.39` version of the documentation can be found [here](https://github.com/cosmos/cosmos-sdk/tree/v0.39.3/docs/ru).

View File

@ -8,7 +8,7 @@ The `rosetta` package implements Coinbase's [Rosetta API](https://www.rosetta-ap
## Add Rosetta Command
The Rosetta API server is a stand-alone server that connects to a node of a chain developed with Cosmos SDK.
The Rosetta API server is a stand-alone server that connects to a node of a chain developed with Cosmos SDK.
To enable Rosetta API support, it's required to add the `RosettaCommand` to your application's root command file (e.g. `appd/cmd/root.go`).

View File

@ -42,6 +42,7 @@ The `~/.simapp` folder has the following structure:
## Updating Some Default Settings
If you want to change any field values in configuration files (for example: `genesis.json`), you can use `jq` ([installation](https://stedolan.github.io/jq/download/) & [docs](https://stedolan.github.io/jq/manual/#Assignment)) & `sed` commands to do that. A few examples are listed here.
```bash
# to change the chain-id
jq '.chain_id = "testing"' genesis.json > temp.json && mv temp.json genesis.json
@ -57,6 +58,7 @@ jq '.app_state.mint.minter.inflation = "0.300000000000000000"' genesis.json > te
```
## Adding Genesis Accounts
Before starting the chain, you need to populate the state with at least one account. To do so, first [create a new account in the keyring](./keyring.md#adding-keys-to-the-keyring) named `my_validator` under the `test` keyring backend (feel free to choose another name and another backend).
Now that you have created a local account, go ahead and grant it some `stake` tokens in your chain's genesis file. Doing so will also make sure your chain is aware of this account's existence:

View File

@ -6,7 +6,7 @@ order: 7
The `simd testnet` subcommand makes it easy to initialize and start a simulated test network for testing purposes. {synopsis}
In addition to the commands for [running a node](./run-node.html), the `simd` binary also includes a `testnet` command that allows you to start a simulated test network in-process or to initialize files for a simulated test network that runs in a separate process.
In addition to the commands for [running a node](./run-node.html), the `simd` binary also includes a `testnet` command that allows you to start a simulated test network in-process or to initialize files for a simulated test network that runs in a separate process.
## Initialize Files

go.mod
View File

@ -92,7 +92,7 @@ require (
github.com/jmhodges/levigo v1.0.0 // indirect
github.com/keybase/go-keychain v0.0.0-20190712205309-48d3d31d256d // indirect
github.com/klauspost/compress v1.12.3 // indirect
github.com/lib/pq v1.2.0 // indirect
github.com/lib/pq v1.10.2 // indirect
github.com/libp2p/go-buffer-pool v0.0.2 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/mimoo/StrobeGo v0.0.0-20181016162300-f8f6d4d2b643 // indirect

go.sum
View File

@ -545,8 +545,9 @@ github.com/lazyledger/smt v0.2.1-0.20210709230900-03ea40719554 h1:nDOkLO7klmnEw1
github.com/lazyledger/smt v0.2.1-0.20210709230900-03ea40719554/go.mod h1:9+Pb2/tg1PvEgW7aFx4bFhDE4bvbI03zuJ8kb7nJ9Jc=
github.com/leodido/go-urn v1.2.0 h1:hpXL4XnriNwQ/ABnpepYM/1vCLWNDfUNts8dX3xTG6Y=
github.com/leodido/go-urn v1.2.0/go.mod h1:+8+nEpDfqqsY+g338gtMEUOtuK+4dEMhiQEgxpxOKII=
github.com/lib/pq v1.2.0 h1:LXpIM/LZ5xGFhOpXAQUIMM1HdyqzVYM13zNdjCEEcA0=
github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.10.2 h1:AqzbZs4ZoCBp+GtejcpCpcxM3zlSMx29dXbUSeVtJb8=
github.com/lib/pq v1.10.2/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/libp2p/go-buffer-pool v0.0.2 h1:QNK2iAFa8gjAe1SPz6mHSMuCcjs+X1wlHzeOSqcmlfs=
github.com/libp2p/go-buffer-pool v0.0.2/go.mod h1:MvaB6xw5vOrDl8rYZGLFdKAuk/hRoRZd1Vi32+RXyFM=
github.com/lightstep/lightstep-tracer-common/golang/gogo v0.0.0-20190605223551-bc2310a04743/go.mod h1:qklhhLq1aX+mtWk9cPHPzaBjWImj5ULL6C7HFJtXQMM=

View File

@ -15,11 +15,11 @@ execute_mod_tests() {
# TODO: in the future we will need to disable it once we go into multi module setup, because
# we will have cross module dependencies.
if [ -n "$GIT_DIFF" ] && ! grep $mod_dir <<< $GIT_DIFF; then
echo "ignoring module $mod_dir - no changes in the module";
echo ">>> ignoring module $mod_dir - no changes in the module";
return;
fi;
echo "executing $go_mod tests"
echo ">>> running $go_mod tests"
cd $mod_dir;
go test -mod=readonly -timeout 30m -coverprofile=${root_dir}/${coverage_file}.tmp -covermode=atomic -tags='norace ledger test_ledger_mock' ./...
local ret=$?
@ -31,12 +31,14 @@ execute_mod_tests() {
return $ret;
}
GIT_DIFF=`git status --porcelain`
# GIT_DIFF=`git status --porcelain`
echo "GIT_DIFF: " $GIT_DIFF
coverage_file=coverage-go-submod-profile.out
return_val=0;
for f in $(find -name go.mod -not -path "./go.mod") "./container/go.mod"; do
for f in $(find -name go.mod -not -path "./go.mod"); do
execute_mod_tests $f;
if [[ $? -ne 0 ]] ; then
return_val=2;

View File

@ -507,7 +507,7 @@ func extractInitialHeightFromGenesisChunk(genesisChunk string) (int64, error) {
return 0, err
}
re, err := regexp.Compile("\"initial_height\":\"(\\d+)\"")
re, err := regexp.Compile("\"initial_height\":\"(\\d+)\"") //nolint:gocritic
if err != nil {
return 0, err
}

View File

@ -51,7 +51,7 @@ func (o OnlineNetwork) AccountCoins(_ context.Context, _ *types.AccountCoinsRequ
// networkOptionsFromClient builds network options given the client
func networkOptionsFromClient(client crgtypes.Client, genesisBlock *types.BlockIdentifier) *types.NetworkOptionsResponse {
var tsi *int64 = nil
var tsi *int64
if genesisBlock != nil {
tsi = &(genesisBlock.Index)
}

View File

@ -39,7 +39,6 @@ contain valid denominations. Accounts may optionally be supplied with vesting pa
Args: cobra.ExactArgs(2),
RunE: func(cmd *cobra.Command, args []string) error {
clientCtx := client.GetClientContextFromCmd(cmd)
serverCtx := server.GetServerContextFromCmd(cmd)
config := serverCtx.Config

View File

@ -215,7 +215,7 @@ metadata in turn to the local application via the `OfferSnapshot` ABCI call.
`BaseApp.OfferSnapshot()` attempts to start a restore operation by calling
`snapshots.Manager.Restore()`. This may fail, e.g. if the snapshot format is
unknown (it may have been generated by a different version of the Cosmos SDK),
in which case Tendermint will offer other discovered snapshots.
in which case Tendermint will offer other discovered snapshots.
If the snapshot is accepted, `Manager.Restore()` will record that a restore
operation is in progress, and spawn a separate goroutine that runs a synchronous

View File

@ -1,4 +1,5 @@
# State Streaming Service
This package contains the constructors for the `StreamingService`s used to write state changes out from individual KVStores to a
file or stream, as described in [ADR-038](../../docs/architecture/adr-038-state-listening.md) and defined in [types/streaming.go](../../baseapp/streaming.go).
The child directories contain the implementations for specific output destinations.
@ -24,7 +25,6 @@ The `StreamingService` is configured from within an App using the `AppOptions` l
`store.streamers` contains a list of the names of the `StreamingService` implementations to employ which are used by `ServiceTypeFromString`
to return the `ServiceConstructor` for that particular implementation:
```go
listeners := cast.ToStringSlice(appOpts.Get("store.streamers"))
for _, listenerName := range listeners {

View File

@ -1,4 +1,5 @@
# File Streaming Service
This package contains an implementation of the [StreamingService](../../../baseapp/streaming.go) that writes
the data stream out to files on the local filesystem. This process is performed synchronously with the message processing
of the state machine.
@ -23,7 +24,8 @@ The `file.StreamingService` is configured from within an App using the `AppOptio
We turn the service on by adding its name, "file", to `store.streamers`- the list of streaming services for this App to employ.
In `streamers.file` we include three configuration parameters for the file streaming service:
1. `streamers.x.keys` contains the list of `StoreKey` names for the KVStores to expose using this service.
1. `streamers.x.keys` contains the list of `StoreKey` names for the KVStores to expose using this service.
In order to expose *all* KVStores, we can include `*` in this list. An empty list is equivalent to turning the service off.
2. `streamers.file.write_dir` contains the path to the directory to write the files to.
3. `streamers.file.prefix` contains an optional prefix to prepend to the output files to prevent potential collisions
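
A hedged Go sketch of reading these three options through the app options (it mirrors the `cast.ToStringSlice(appOpts.Get(...))` pattern shown elsewhere in this PR; the option keys follow the list above, while the types and helpers are illustrative):

```go
package main

import (
	"fmt"

	"github.com/spf13/cast"
)

// appOptions is a stand-in for the SDK's AppOptions interface (Get by key).
type appOptions interface {
	Get(key string) interface{}
}

// fileStreamerConfig collects the three options listed above.
type fileStreamerConfig struct {
	Keys      []string
	WriteDir  string
	Prefix    string
	ExposeAll bool
}

func readFileStreamerConfig(opts appOptions) fileStreamerConfig {
	cfg := fileStreamerConfig{
		Keys:     cast.ToStringSlice(opts.Get("streamers.file.keys")),
		WriteDir: cast.ToString(opts.Get("streamers.file.write_dir")),
		Prefix:   cast.ToString(opts.Get("streamers.file.prefix")),
	}
	for _, k := range cfg.Keys {
		if k == "*" { // "*" means expose every registered StoreKey
			cfg.ExposeAll = true
		}
	}
	return cfg
}

type mapOpts map[string]interface{}

func (m mapOpts) Get(key string) interface{} { return m[key] }

func main() {
	opts := mapOpts{
		"streamers.file.keys":      []string{"*"},
		"streamers.file.write_dir": "/tmp/streams",
	}
	fmt.Printf("%+v\n", readFileStreamerConfig(opts))
}
```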

View File

@ -102,7 +102,7 @@ func NewStore(db dbm.DBConnection, opts StoreConfig) (ret *Store, err error) {
}
// Version sets of each DB must match
if !versions.Equal(mversions) {
err = fmt.Errorf("Storage and Merkle DB have different version history")
err = fmt.Errorf("Storage and Merkle DB have different version history") //nolint:stylecheck
return
}
err = opts.MerkleDB.Revert()
@ -424,12 +424,12 @@ func (s *Store) Query(req abci.RequestQuery) (res abci.ResponseQuery) {
return sdkerrors.QueryResult(err, false)
}
if root == nil {
return sdkerrors.QueryResult(errors.New("Merkle root hash not found"), false)
return sdkerrors.QueryResult(errors.New("Merkle root hash not found"), false) //nolint:stylecheck
}
merkleStore := loadSMT(dbm.ReaderAsReadWriter(merkleView), root)
res.ProofOps, err = merkleStore.GetProof(res.Key)
if err != nil {
return sdkerrors.QueryResult(fmt.Errorf("Merkle proof creation failed for key: %v", res.Key), false)
return sdkerrors.QueryResult(fmt.Errorf("Merkle proof creation failed for key: %v", res.Key), false) //nolint:stylecheck
}
case "/subspace":
@ -466,14 +466,14 @@ func loadSMT(merkleTxn dbm.DBReadWriter, root []byte) *smt.Store {
return smt.LoadStore(merkleNodes, merkleValues, root)
}
func (st *Store) CacheWrap() types.CacheWrap {
return cachekv.NewStore(st)
func (s *Store) CacheWrap() types.CacheWrap {
return cachekv.NewStore(s)
}
func (st *Store) CacheWrapWithTrace(w io.Writer, tc types.TraceContext) types.CacheWrap {
return cachekv.NewStore(tracekv.NewStore(st, w, tc))
func (s *Store) CacheWrapWithTrace(w io.Writer, tc types.TraceContext) types.CacheWrap {
return cachekv.NewStore(tracekv.NewStore(s, w, tc))
}
func (st *Store) CacheWrapWithListeners(storeKey types.StoreKey, listeners []types.WriteListener) types.CacheWrap {
return cachekv.NewStore(listenkv.NewStore(st, storeKey, listeners))
func (s *Store) CacheWrapWithListeners(storeKey types.StoreKey, listeners []types.WriteListener) types.CacheWrap {
return cachekv.NewStore(listenkv.NewStore(s, storeKey, listeners))
}

View File

@ -15,8 +15,8 @@ var (
)
var (
errKeyEmpty error = errors.New("key is empty or nil")
errValueNil error = errors.New("value is nil")
errKeyEmpty = errors.New("key is empty or nil")
errValueNil = errors.New("value is nil")
)
// Store Implements types.KVStore and CommitKVStore.

View File

@ -9,7 +9,7 @@ import (
var denomUnits = map[string]Dec{}
// baseDenom is the denom of smallest unit registered
var baseDenom string = ""
var baseDenom string
// RegisterDenom registers a denomination with a corresponding unit. If the
// denomination is already registered, an error will be returned.

View File

@ -295,7 +295,7 @@ func (cgts consumeTxSizeGasTxHandler) simulateSigGasCost(ctx context.Context, tx
return nil
}
func (cgts consumeTxSizeGasTxHandler) consumeTxSizeGas(ctx context.Context, tx sdk.Tx, txBytes []byte, simulate bool) error {
func (cgts consumeTxSizeGasTxHandler) consumeTxSizeGas(ctx context.Context, _ sdk.Tx, txBytes []byte, simulate bool) error {
sdkCtx := sdk.UnwrapSDKContext(ctx)
params := cgts.ak.GetParams(sdkCtx)
sdkCtx.GasMeter().ConsumeGas(params.TxSizeCostPerByte*sdk.Gas(len(txBytes)), "txSize")

View File

@ -429,7 +429,7 @@ func OnlyLegacyAminoSigners(sigData signing.SignatureData) bool {
}
}
func (svm sigVerificationTxHandler) sigVerify(ctx context.Context, tx sdk.Tx, isReCheckTx, simulate bool) error {
func (svd sigVerificationTxHandler) sigVerify(ctx context.Context, tx sdk.Tx, isReCheckTx, simulate bool) error {
sdkCtx := sdk.UnwrapSDKContext(ctx)
// no need to verify signatures on recheck tx
if isReCheckTx {
@ -455,7 +455,7 @@ func (svm sigVerificationTxHandler) sigVerify(ctx context.Context, tx sdk.Tx, is
}
for i, sig := range sigs {
acc, err := GetSignerAcc(sdkCtx, svm.ak, signerAddrs[i])
acc, err := GetSignerAcc(sdkCtx, svd.ak, signerAddrs[i])
if err != nil {
return err
}
@ -491,7 +491,7 @@ func (svm sigVerificationTxHandler) sigVerify(ctx context.Context, tx sdk.Tx, is
}
if !simulate {
err := authsigning.VerifySignature(pubKey, signerData, sig.Data, svm.signModeHandler, tx)
err := authsigning.VerifySignature(pubKey, signerData, sig.Data, svd.signModeHandler, tx)
if err != nil {
var errMsg string
if OnlyLegacyAminoSigners(sig.Data) {

View File

@ -7,6 +7,7 @@ order: 1
**Note:** The auth module is different from the [authz module](../modules/authz/).
The differences are:
* `auth` - authentication of accounts and transactions for Cosmos SDK applications and is responsible for specifying the base transaction and account types.
* `authz` - authorization for accounts to perform actions on behalf of other accounts and enables a granter to grant authorizations to a grantee that allows the grantee to execute messages on behalf of the granter.

View File

@ -621,4 +621,4 @@ all coins at a given time.
- PeriodicVestingAccount: A vesting account implementation that vests coins
according to a custom vesting schedule.
- PermanentLockedAccount: It does not ever release coins, locking them indefinitely.
Coins in this account can still be used for delegating and for governance votes even while locked.
Coins in this account can still be used for delegating and for governance votes even while locked.

View File

@ -320,7 +320,6 @@ Example Output:
}
```
### Params
The `params` endpoint allows users to query the current auth parameters.
@ -419,4 +418,4 @@ Example:
```bash
simd tx vesting create-vesting-account cosmos1.. 100stake 2592000
```
```

View File

@ -169,4 +169,4 @@ Example Output:
],
"pagination": null
}
```
```

View File

@ -102,4 +102,4 @@ The available permissions are:
5. **[Parameters](05_params.md)**
6. **[Client](06_client.md)**
- [CLI](06_client.md#cli)
- [gRPC](06_client.md#grpc)
- [gRPC](06_client.md#grpc)

View File

@ -1,24 +1,31 @@
<!--
order: 5
-->
# Client
## CLI
A user can query and interact with the `crisis` module using the CLI.
### Transactions
The `tx` commands allow users to interact with the `crisis` module.
```bash
simd tx crisis --help
```
#### invariant-broken
The `invariant-broken` command submits proof when an invariant was broken to halt the chain
```bash
simd tx crisis invariant-broken [module-name] [invariant-route] [flags]
```
Example:
```bash
simd tx crisis invariant-broken bank total-supply --from=[keyname or address]
```
```

View File

@ -19,7 +19,7 @@ func MigratePrefixAddress(store sdk.KVStore, prefixBz []byte) {
for ; oldStoreIter.Valid(); oldStoreIter.Next() {
addr := oldStoreIter.Key()
var newStoreKey []byte = prefixBz
var newStoreKey = prefixBz
newStoreKey = append(newStoreKey, address.MustLengthPrefix(addr)...)
// Set new key on store. Values don't change.

View File

@ -103,4 +103,4 @@ to set up a script to periodically withdraw and rebond rewards.
7. **[Parameters](07_params.md)**
8. **[Client](08_client.md)**
- [CLI](08_client.md#cli)
- [gRPC](08_client.md#grpc)
- [gRPC](08_client.md#grpc)

View File

@ -107,7 +107,7 @@ func (k Keeper) GetEpochMsg(ctx sdk.Context, epochNumber int64, actionID uint64)
// GetEpochActions get all actions
func (k Keeper) GetEpochActions(ctx sdk.Context) []sdk.Msg {
actions := []sdk.Msg{}
iterator := sdk.KVStorePrefixIterator(ctx.KVStore(k.storeKey), []byte(EpochActionQueuePrefix))
iterator := k.GetEpochActionsIterator(ctx)
defer iterator.Close()
for ; iterator.Valid(); iterator.Next() {
@ -122,13 +122,13 @@ func (k Keeper) GetEpochActions(ctx sdk.Context) []sdk.Msg {
// GetEpochActionsIterator returns iterator for EpochActions
func (k Keeper) GetEpochActionsIterator(ctx sdk.Context) db.Iterator {
return sdk.KVStorePrefixIterator(ctx.KVStore(k.storeKey), []byte(EpochActionQueuePrefix))
return sdk.KVStorePrefixIterator(ctx.KVStore(k.storeKey), EpochActionQueuePrefix)
}
// DequeueEpochActions dequeue all the actions store on epoch
func (k Keeper) DequeueEpochActions(ctx sdk.Context) {
store := ctx.KVStore(k.storeKey)
iterator := sdk.KVStorePrefixIterator(store, []byte(EpochActionQueuePrefix))
iterator := sdk.KVStorePrefixIterator(store, EpochActionQueuePrefix)
defer iterator.Close()
for ; iterator.Valid(); iterator.Next() {
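
The refactor above reuses `GetEpochActionsIterator` instead of building the iterator inline, and drops a redundant `[]byte` conversion (which assumes `EpochActionQueuePrefix` is already a byte slice). A minimal sketch of the same prefix-iteration pattern, with illustrative names:

```go
package keeper

import (
	sdk "github.com/cosmos/cosmos-sdk/types"
)

// collectPrefixed is a sketch only: it gathers the raw values stored under a
// key prefix, using the same iterate-then-close pattern as the keeper above.
func collectPrefixed(store sdk.KVStore, queuePrefix []byte) [][]byte {
	iterator := sdk.KVStorePrefixIterator(store, queuePrefix)
	defer iterator.Close() // release the iterator's resources when done

	var values [][]byte
	for ; iterator.Valid(); iterator.Next() {
		values = append(values, iterator.Value())
	}
	return values
}
```

Extracting the iterator construction into a single helper, as the hunk does with `GetEpochActionsIterator`, keeps the queue prefix defined in one place.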

View File

@ -29,7 +29,7 @@ In this case, unbonding should start instantly.
// — BufferedMsgCreateValidatorQueue, BufferedMsgEditValidatorQueue
// — BufferedMsgUnjailQueue, BufferedMsgDelegateQueue, BufferedMsgRedelegationQueue, BufferedMsgUndelegateQueue
// Write epoch related tests with new scenarios
// — Simulation test is important for finding bugs [Ask Dev for questions)
// — Simulation test is important for finding bugs [Ask Dev for questions)
// — Can easily add a simulator check to make sure all delegation amounts in queue add up to the same amount that's in the EpochUnbondedPool
// — I'd like it added as an invariant test for the simulator
// — the simulator should check that the sum of all the queued delegations always equals the amount kept track of in the data

View File

@ -11,6 +11,7 @@ The `query` commands allows users to query `evidence` state.
```bash
simd query evidence --help
```
### evidence
The `evidence` command allows users to list all evidence or evidence by hash.
@ -20,6 +21,7 @@ Usage:
```bash
simd query evidence [flags]
```
To query evidence by hash
Example:
@ -45,6 +47,7 @@ Example:
```bash
simd query evidence
```
Example Output:
```bash
@ -75,6 +78,7 @@ Example:
```bash
curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence/DF0C23E8634E480F84B9D5674A7CDC9816466DEC28A3358F73260F68D28D7660"
```
Example Output:
```bash
@ -101,6 +105,7 @@ Example:
```bash
curl -X GET "http://localhost:1317/cosmos/evidence/v1beta1/evidence"
```
Example Output:
```bash
@ -163,6 +168,7 @@ Example:
```bash
grpcurl -plaintext localhost:9090 cosmos.evidence.v1beta1.Query/AllEvidence
```
Example Output:
```bash
@ -179,4 +185,4 @@ Example Output:
"total": "1"
}
}
```
```

View File

@ -32,4 +32,4 @@ This module allows accounts to grant fee allowances and to use fees from their a
- [Exec fee allowance](04_events.md#exec-fee-allowance)
5. **[Client](05_client.md)**
- [CLI](05_client.md#cli)
- [gRPC](05_client.md#grpc)
- [gRPC](05_client.md#grpc)

View File

@ -69,7 +69,7 @@ The deposit is kept in escrow and held by the governance `ModuleAccount` until t
When a proposal is finalized, the coins from the deposit are either refunded or burned, according to the final tally of the proposal:
- If the proposal is approved or rejected but _not_ vetoed, each deposit will be automatically refunded to its respective depositor (transferred from the governance `ModuleAccount`).
- If the proposal is approved or rejected but _not_ vetoed, each deposit will be automatically refunded to its respective depositor (transferred from the governance `ModuleAccount`).
- When the proposal is vetoed with a supermajority, deposits will be burned from the governance `ModuleAccount` and the proposal information along with its deposit information will be removed from state.
- All refunded or burned deposits are removed from the state. Events are issued when burning or refunding a deposit.
- NOTE: Proposals which have completed the voting period cannot return their deposits when queried (see the sketch below).
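
A hedged sketch of the refund-or-burn decision described in the list above; the names are hypothetical and this is not the actual `x/gov` keeper code:

```go
package main

import "fmt"

type deposit struct {
	depositor string
	amount    int64
}

// settleDeposits is illustrative only: on a supermajority veto the deposits
// are burned from the module account, otherwise each deposit is refunded to
// its depositor. Either way, the deposit record is then removed from state.
func settleDeposits(vetoed bool, deposits []deposit) {
	for _, d := range deposits {
		if vetoed {
			fmt.Printf("burn %d from the module account (deposited by %s)\n", d.amount, d.depositor)
		} else {
			fmt.Printf("refund %d to %s\n", d.amount, d.depositor)
		}
	}
}

func main() {
	settleDeposits(false, []deposit{{depositor: "cosmos1...", amount: 100}})
}
```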

View File

@ -666,7 +666,7 @@ Example Output:
}
```
### Deposits
### deposits
The `Deposits` endpoint allows users to query all deposits for a given proposal.
@ -739,7 +739,7 @@ Example Output:
A user can query the `gov` module using REST endpoints.
### proposals
### proposal
The `proposals` endpoint allows users to query a given proposal.
@ -787,7 +787,7 @@ Example Output:
### proposals
The `proposals` endpoint allows users to query all proposals with optional filters.
The `proposals` endpoint also allows users to query all proposals with optional filters.
```bash
/cosmos/gov/v1beta1/proposals
@ -858,7 +858,7 @@ Example Output:
}
```
### votes
### voter vote
The `votes` endpoint allows users to query a vote for a given proposal.
@ -995,7 +995,7 @@ Example Output:
}
```
### deposits
### proposal deposits
The `deposits` endpoint allows users to query all deposits for a given proposal.
@ -1057,4 +1057,4 @@ Example Output:
"no_with_veto": "0"
}
}
```
```

View File

@ -8,10 +8,11 @@ In the prefix store, entities should be stored by an unique identifier called `R
Regular CRUD operations can be performed on a table; these methods take an `sdk.KVStore` as a parameter to access the table's prefix store.
The `table` struct does not:
- enforce uniqueness of the `RowID`
- enforce prefix uniqueness of keys, i.e. not allowing one key to be a prefix
- enforce uniqueness of the `RowID`
- enforce prefix uniqueness of keys, i.e. not allowing one key to be a prefix
of another
- optimize Gas usage conditions
- optimize Gas usage conditions
The `table` struct is private, so that only custom tables built on top of it, which do satisfy these requirements, are exposed.
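
To make the `RowID` idea concrete, here is a hypothetical sketch (not the orm package's actual API) of storing and loading a row under its `RowID` in a prefix store; marshalling, secondary indexes and validation are deliberately omitted:

```go
package ormsketch

import (
	"github.com/cosmos/cosmos-sdk/store/prefix"
	sdk "github.com/cosmos/cosmos-sdk/types"
)

// setRow writes a raw value under its RowID inside the table's prefix store.
// Nothing here enforces RowID uniqueness; that is left to the custom tables.
func setRow(store sdk.KVStore, tablePrefix, rowID, value []byte) {
	prefix.NewStore(store, tablePrefix).Set(rowID, value)
}

// getRow reads the raw value stored under a RowID, or nil if it is absent.
func getRow(store sdk.KVStore, tablePrefix, rowID []byte) []byte {
	return prefix.NewStore(store, tablePrefix).Get(rowID)
}
```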
## AutoUInt64Table
@ -31,8 +32,9 @@ The model provided for creating a `PrimaryKeyTable` should implement the `Primar
+++ https://github.com/cosmos/cosmos-sdk/blob/9f78f16ae75cc42fc5fe636bde18a453ba74831f/x/group/internal/orm/primary_key.go#L28-L41
`PrimaryKeyFields()` method returns the list of key parts for a given object.
The primary key parts can be []byte, string, and `uint64` types.
The primary key parts can be []byte, string, and `uint64` types.
Key parts, except the last part, follow these rules (see the sketch after this list):
- []byte is encoded with a single byte length prefix
- strings are null-terminated
- `uint64` are encoded using 8 byte big endian.
- []byte is encoded with a single byte length prefix
- strings are null-terminated
- `uint64` are encoded using 8 byte big endian.
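
A sketch of those rules as the simplest possible encoder; this is illustrative only, not the orm package's actual implementation:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeKeyParts joins primary-key parts. Every part except the last is made
// prefix-free: []byte gets a single-byte length prefix, strings are
// null-terminated, and uint64 is fixed-width big endian (so it never needs a
// terminator). No error handling for oversized parts in this sketch.
func encodeKeyParts(parts ...interface{}) []byte {
	var key []byte
	for i, p := range parts {
		last := i == len(parts)-1
		switch v := p.(type) {
		case []byte:
			if !last {
				key = append(key, byte(len(v))) // single byte length prefix
			}
			key = append(key, v...)
		case string:
			key = append(key, v...)
			if !last {
				key = append(key, 0) // null terminator
			}
		case uint64:
			buf := make([]byte, 8)
			binary.BigEndian.PutUint64(buf, v)
			key = append(key, buf...)
		}
	}
	return key
}

func main() {
	fmt.Printf("%x\n", encodeKeyParts("group", uint64(7), []byte("member")))
}
```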

View File

@ -4,7 +4,6 @@ order: 6
# Client
## CLI
A user can query and interact with the `mint` module using the CLI.
@ -28,7 +27,7 @@ simd query mint annual-provisions [flags]
Example:
```
simd query mint annual-provisions
simd query mint annual-provisions
```
Example Output:
@ -37,7 +36,6 @@ Example Output:
22268504368893.612100895088410693
```
#### inflation
The `inflation` command allows users to query the current minting inflation value
@ -49,7 +47,7 @@ simd query mint inflation [flags]
Example:
```
simd query mint inflation
simd query mint inflation
```
Example Output:
@ -62,7 +60,6 @@ Example Output:
The `params` command allows users to query the current minting parameters
```
simd query mint params [flags]
```
@ -90,7 +87,7 @@ The `AnnualProvisions` endpoint allow users to query the current minting annual
/cosmos.mint.v1beta1.Query/AnnualProvisions
```
Example:
Example:
```
grpcurl -plaintext localhost:9090 cosmos.mint.v1beta1.Query/AnnualProvisions
@ -130,7 +127,6 @@ Example Output:
The `Params` endpoint allows users to query the current minting parameters
```
/cosmos.mint.v1beta1.Query/Params
```
@ -166,7 +162,7 @@ A user can query the `mint` module using REST endpoints.
/cosmos/mint/v1beta1/annual_provisions
```
Example:
Example:
```
curl "localhost:1317/cosmos/mint/v1beta1/annual_provisions"

View File

@ -38,7 +38,6 @@ slash_fraction_double_sign: "0.050000000000000000"
slash_fraction_downtime: "0.010000000000000000"
```
#### signing-info
The `signing-info` command allows users to query the signing info of a validator using its consensus public key.
@ -209,7 +208,6 @@ Example Output:
A user can query the `slashing` module using REST endpoints.
### Params
```bash

View File

@ -46,4 +46,4 @@ This module will be used by the Cosmos Hub, the first hub in the Cosmos ecosyste
9. **[Client](09_client.md)**
- [CLI](09_client.md#cli)
- [gRPC](09_client.md#grpc)
- [REST](09_client.md#rest)
- [REST](09_client.md#rest)

View File

@ -915,7 +915,7 @@ Example Output:
```bash
{
"delegation_response":
"delegation_response":
{
"delegation":
{
@ -1348,7 +1348,6 @@ The `DelegtaorDelegations` REST endpoint queries all delegations of a given dele
/cosmos/staking/v1beta1/delegations/{delegatorAddr}
```
Example:
```bash
@ -1886,7 +1885,7 @@ Example Output:
### ValidatorDelegations
The `ValidatorDelegations` REST endpoint queries delegate information for given validator.
The `ValidatorDelegations` REST endpoint queries delegate information for given validator.
```bash
/cosmos/staking/v1beta1/validators/{validatorAddr}/delegations
@ -2037,7 +2036,7 @@ Example Output:
### ValidatorUnbondingDelegations
The `ValidatorUnbondingDelegations` REST endpoint queries unbonding delegations of a validator.
The `ValidatorUnbondingDelegations` REST endpoint queries unbonding delegations of a validator.
```bash
/cosmos/staking/v1beta1/validators/{validatorAddr}/unbonding_delegations
@ -2087,4 +2086,3 @@ Example Output:
}
}
```

View File

@ -16,7 +16,7 @@ The `query` commands allow users to query `upgrade` state.
simd query upgrade --help
```
#### applied
#### applied
The `applied` command allows users to query the block header for the height at which a completed upgrade was applied.
@ -72,7 +72,7 @@ Example Output:
}
```
#### module versions
#### module versions
The `module_versions` command gets a list of module names and their respective consensus versions.
@ -82,6 +82,7 @@ that module's information.
```bash
simd query upgrade module_versions [optional module_name] [flags]
```
Example:
```bash
@ -89,6 +90,7 @@ simd query upgrade module_versions
```
Example Output:
```bash
module_versions:
- name: auth
@ -167,7 +169,6 @@ time: "0001-01-01T00:00:00Z"
upgraded_client_state: null
```
## REST
A user can query the `upgrade` module using REST endpoints.
@ -216,7 +217,6 @@ Example Output:
}
```
### Module versions
`ModuleVersions` queries the list of module versions from state.
@ -363,7 +363,6 @@ Example Output:
}
```
### Module versions
`ModuleVersions` queries the list of module versions from state.
@ -458,5 +457,3 @@ Example Output:
]
}
```