merged in master

This commit is contained in: commit 1f99aa3fb2
@@ -0,0 +1,40 @@
+---
+name: Module Readiness Checklist
+about: Pre-flight checklist that modules must pass in order to be included in a release of the Cosmos SDK
+labels: 'module-readiness-checklist'
+---
+
+## x/{MODULE_NAME} Module Readiness Checklist
+
+This checklist is to be used for tracking the final internal audit of new Cosmos SDK modules prior to inclusion in a published release.
+
+### Release Candidate Checklist
+
+The following checklist should be completed once the module has been fully implemented. This audit should be performed directly on `master`, or preferably on an `alpha` or `beta` release tag that includes the module.
+
+The module **should not** be included in any Release Candidate tag until it has passed this checklist.
+
+- [ ] API audit (at least 1 person) (@assignee)
+  - [ ] Are Msg and Query methods and types well-named and organized?
+  - [ ] Is everything well documented (inline godoc as well as the [`/spec/` folder](https://github.com/cosmos/cosmos-sdk/blob/master/docs/spec/SPEC-SPEC.md) in the module directory)?
+- [ ] State machine audit (at least 2 people) (@assignee1, @assignee2)
+  - [ ] Read through the MsgServer code and verify its correctness upon visual inspection
+  - [ ] Ensure all state machine code which could be confusing is properly commented
+  - [ ] Make sure state machine logic matches the Msg method documentation
+  - [ ] Ensure that all state machine edge cases are covered with tests and that test coverage is sufficient (at least 90% coverage on module code)
+  - [ ] Assess potential threats for each method, including spam attacks, and ensure that threats have been addressed sufficiently. This should be done by writing up a threat assessment for each method
+  - [ ] Assess potential risks of any new third-party dependencies and decide whether a dependency audit is needed
+- [ ] Completeness audit, fully implemented with tests (at least 1 person) (@assignee)
+  - [ ] Genesis import and export of all state
+  - [ ] Query services
+  - [ ] CLI methods
+  - [ ] All necessary migration scripts are present (if this is an upgrade of an existing module)
+
+### Published Release Checklist
+
+After the above checks have been audited and the module is included in a tagged Release Candidate, the following additional checklist should be undertaken for live testing, and potentially a 3rd-party audit (if deemed necessary):
+
+- [ ] Testnet / devnet testing (2-3 people) (@assignee1, @assignee2, @assignee3)
+  - [ ] All Msg methods have been tested, especially in light of any potential threats identified
+  - [ ] Genesis import and export has been tested
+- [ ] Nice to have (and needed in some cases if threats could be high): Official 3rd-party audit
@@ -0,0 +1,58 @@
+name: Atlas
+# Atlas checks whether a module's atlas manifest has been touched; if so, it publishes the updated version
+on:
+  push:
+    branches:
+      - master
+    paths:
+      - "x/**/atlas/*"
+  pull_request:
+    paths:
+      - "x/**/atlas/*"
+
+jobs:
+  auth:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - uses: technote-space/get-diff-action@v4
+        id: git_diff
+        with:
+          PATTERNS: |
+            x/auth/atlas/**
+      - uses: marbar3778/atlas_action@main
+        with:
+          token: ${{ secrets.ATLAS_TOKEN }}
+          path: ./x/auth/atlas/atlas.toml
+          dry-run: ${{ github.event_name != 'pull_request' }}
+        if: env.GIT_DIFF
+  bank:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - uses: technote-space/get-diff-action@v4
+        id: git_diff
+        with:
+          PATTERNS: |
+            x/bank/atlas/**
+      - uses: marbar3778/atlas_action@main
+        with:
+          token: ${{ secrets.ATLAS_TOKEN }}
+          path: ./x/bank/atlas/atlas.toml
+          dry-run: ${{ github.event_name != 'pull_request' }}
+        if: env.GIT_DIFF
+  evidence:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - uses: technote-space/get-diff-action@v4
+        id: git_diff
+        with:
+          PATTERNS: |
+            x/evidence/atlas/**
+      - uses: marbar3778/atlas_action@main
+        with:
+          token: ${{ secrets.ATLAS_TOKEN }}
+          path: ./x/evidence/atlas/manifest.toml
+          dry-run: ${{ github.event_name != 'pull_request' }}
+        if: env.GIT_DIFF
@@ -24,7 +24,7 @@ jobs:
          make build-docs LEDGER_ENABLED=false

      - name: Deploy 🚀
-        uses: JamesIves/github-pages-deploy-action@3.7.1
+        uses: JamesIves/github-pages-deploy-action@4.1.0
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          BRANCH: gh-pages
@@ -23,7 +23,7 @@ jobs:
      - uses: golangci/golangci-lint-action@master
        with:
          # Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
-          version: v1.28
+          version: v1.37
          args: --timeout 10m
          github-token: ${{ secrets.github_token }}
        if: env.GIT_DIFF
@@ -27,6 +27,12 @@ jobs:
            fi
            TAGS="${DOCKER_IMAGE}:${VERSION}"
            echo ::set-output name=tags::${TAGS}
+
+      - name: Set up QEMU
+        uses: docker/setup-qemu-action@master
+        with:
+          platforms: all
+
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

@@ -42,5 +48,6 @@ jobs:
        with:
          context: ./contrib/devtools
          file: ./contrib/devtools/dockerfile
+          platforms: linux/amd64,linux/arm64
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.prep.outputs.tags }}
@@ -3,19 +3,28 @@ name: Protobuf
# This workflow is only run when a .proto file has been changed
on:
  pull_request:
    paths:
      - "**.proto"

jobs:
  lint:
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - uses: actions/checkout@master
      - uses: technote-space/get-diff-action@v4
        with:
          PATTERNS: |
            **/**.proto
      - name: lint
        run: make proto-lint
        if: env.GIT_DIFF
  breakage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - uses: technote-space/get-diff-action@v4
        with:
          PATTERNS: |
            **/**.proto
      - name: check-breakage
        run: make proto-check-breaking
        if: env.GIT_DIFF
@@ -30,7 +30,7 @@ jobs:
      - name: install runsim
        run: |
          export GO111MODULE="on" && go get github.com/cosmos/tools/cmd/runsim@v1.0.0
-      - uses: actions/cache@v2.1.3
+      - uses: actions/cache@v2.1.4
        with:
          path: ~/go/bin
          key: ${{ runner.os }}-go-runsim-binary

@@ -40,7 +40,7 @@ jobs:
    needs: [build, install-runsim]
    steps:
      - uses: actions/checkout@v2
-      - uses: actions/cache@v2.1.3
+      - uses: actions/cache@v2.1.4
        with:
          path: ~/go/bin
          key: ${{ runner.os }}-go-runsim-binary
@@ -23,7 +23,7 @@ jobs:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2.1.3
        with:
-          go-version: 1.15
+          go-version: 1.16
      - name: Display go version
        run: go version
      - run: make build

@@ -34,12 +34,12 @@ jobs:
    steps:
      - uses: actions/setup-go@v2.1.3
        with:
-          go-version: 1.15
+          go-version: 1.16
      - name: Display go version
        run: go version
      - name: Install runsim
        run: export GO111MODULE="on" && go get github.com/cosmos/tools/cmd/runsim@v1.0.0
-      - uses: actions/cache@v2.1.3
+      - uses: actions/cache@v2.1.4
        with:
          path: ~/go/bin
          key: ${{ runner.os }}-go-runsim-binary

@@ -51,7 +51,7 @@ jobs:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2.1.3
        with:
-          go-version: 1.15
+          go-version: 1.16
      - name: Display go version
        run: go version
      - uses: technote-space/get-diff-action@v4

@@ -60,7 +60,7 @@ jobs:
            **/**.go
            go.mod
            go.sum
-      - uses: actions/cache@v2.1.3
+      - uses: actions/cache@v2.1.4
        with:
          path: ~/go/bin
          key: ${{ runner.os }}-go-runsim-binary

@@ -77,7 +77,7 @@ jobs:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2.1.3
        with:
-          go-version: 1.15
+          go-version: 1.16
      - name: Display go version
        run: go version
      - uses: technote-space/get-diff-action@v4

@@ -88,7 +88,7 @@ jobs:
            go.sum
          SET_ENV_NAME_INSERTIONS: 1
          SET_ENV_NAME_LINES: 1
-      - uses: actions/cache@v2.1.3
+      - uses: actions/cache@v2.1.4
        with:
          path: ~/go/bin
          key: ${{ runner.os }}-go-runsim-binary

@@ -105,7 +105,7 @@ jobs:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2.1.3
        with:
-          go-version: 1.15
+          go-version: 1.16
      - name: Display go version
        run: go version
      - uses: technote-space/get-diff-action@v4

@@ -116,7 +116,7 @@ jobs:
            go.sum
          SET_ENV_NAME_INSERTIONS: 1
          SET_ENV_NAME_LINES: 1
-      - uses: actions/cache@v2.1.3
+      - uses: actions/cache@v2.1.4
        with:
          path: ~/go/bin
          key: ${{ runner.os }}-go-runsim-binary

@@ -133,7 +133,7 @@ jobs:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2.1.3
        with:
-          go-version: 1.15
+          go-version: 1.16
      - name: Display go version
        run: go version
      - uses: technote-space/get-diff-action@v4

@@ -144,7 +144,7 @@ jobs:
            go.sum
          SET_ENV_NAME_INSERTIONS: 1
          SET_ENV_NAME_LINES: 1
-      - uses: actions/cache@v2.1.3
+      - uses: actions/cache@v2.1.4
        with:
          path: ~/go/bin
          key: ${{ runner.os }}-go-runsim-binary
@@ -15,7 +15,7 @@ jobs:
      - name: Install Go
        uses: actions/setup-go@v2.1.3
        with:
-          go-version: 1.15
+          go-version: 1.16
      - name: Unshallow
        run: git fetch --prune --unshallow
      - name: Create release
@@ -20,13 +20,13 @@ jobs:
    steps:
      - uses: actions/setup-go@v2.1.3
        with:
-          go-version: 1.15
+          go-version: 1.16
      - name: Display go version
        run: go version
      - name: install tparse
        run: |
          export GO111MODULE="on" && go get github.com/mfridman/tparse@v0.8.3
-      - uses: actions/cache@v2.1.3
+      - uses: actions/cache@v2.1.4
        with:
          path: ~/go/bin
          key: ${{ runner.os }}-go-tparse-binary

@@ -40,7 +40,7 @@ jobs:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2.1.3
        with:
-          go-version: 1.15
+          go-version: 1.16
      - uses: technote-space/get-diff-action@v4
        id: git_diff
        with:

@@ -57,7 +57,7 @@ jobs:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2.1.3
        with:
-          go-version: 1.15
+          go-version: 1.16
      - name: Display go version
        run: go version
      - uses: technote-space/get-diff-action@v4

@@ -110,7 +110,7 @@ jobs:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2.1.3
        with:
-          go-version: 1.15
+          go-version: 1.16
      - uses: technote-space/get-diff-action@v4
        with:
          PATTERNS: |

@@ -172,7 +172,7 @@ jobs:
            sed -i.bak "/$(echo $filename | sed 's/\//\\\//g')/d" coverage.txt
          done
        if: env.GIT_DIFF
-      - uses: codecov/codecov-action@v1.2.1
+      - uses: codecov/codecov-action@v1.3.1
        with:
          file: ./coverage.txt
        if: env.GIT_DIFF

@@ -188,7 +188,7 @@ jobs:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2.1.3
        with:
-          go-version: 1.15
+          go-version: 1.16
      - uses: technote-space/get-diff-action@v4
        with:
          PATTERNS: |

@@ -201,7 +201,7 @@ jobs:
        if: env.GIT_DIFF
      - name: test & coverage report creation
        run: |
-          cat pkgs.txt.part.${{ matrix.part }} | xargs go test -mod=readonly -json -timeout 30m -race -tags='cgo ledger test_ledger_mock' > ${{ matrix.part }}-race-output.txt
+          xargs --arg-file=pkgs.txt.part.${{ matrix.part }} go test -mod=readonly -timeout 30m -race -tags='cgo ledger test_ledger_mock'
        if: env.GIT_DIFF
      - uses: actions/upload-artifact@v2
        with:

@@ -225,44 +225,6 @@ jobs:
          make test-rosetta
        # if: env.GIT_DIFF
-
-  race-detector-report:
-    runs-on: ubuntu-latest
-    needs: [test-race, install-tparse]
-    timeout-minutes: 5
-    steps:
-      - uses: actions/checkout@v2
-      - uses: technote-space/get-diff-action@v4
-        id: git_diff
-        with:
-          PATTERNS: |
-            **/**.go
-            go.mod
-            go.sum
-      - uses: actions/download-artifact@v2
-        with:
-          name: "${{ github.sha }}-00-race-output"
-        if: env.GIT_DIFF
-      - uses: actions/download-artifact@v2
-        with:
-          name: "${{ github.sha }}-01-race-output"
-        if: env.GIT_DIFF
-      - uses: actions/download-artifact@v2
-        with:
-          name: "${{ github.sha }}-02-race-output"
-        if: env.GIT_DIFF
-      - uses: actions/download-artifact@v2
-        with:
-          name: "${{ github.sha }}-03-race-output"
-        if: env.GIT_DIFF
-      - uses: actions/cache@v2.1.3
-        with:
-          path: ~/go/bin
-          key: ${{ runner.os }}-go-tparse-binary
-        if: env.GIT_DIFF
-      - name: Generate test report (go test -race)
-        run: cat ./*-race-output.txt | ~/go/bin/tparse
-        if: env.GIT_DIFF
-
  liveness-test:
    runs-on: ubuntu-latest
    timeout-minutes: 10

@@ -270,7 +232,7 @@ jobs:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2.1.3
        with:
-          go-version: 1.15
+          go-version: 1.16
      - uses: technote-space/get-diff-action@v4
        id: git_diff
        with:
16 .mergify.yml

@@ -8,3 +8,19 @@ pull_request_rules:
      merge:
        method: squash
        strict: true
+  - name: backport patches to v0.42.x branch
+    conditions:
+      - base=master
+      - label=backport/0.42.x (Stargate)
+    actions:
+      backport:
+        branches:
+          - release/v0.42.x
+  - name: backport patches to v0.39.x branch
+    conditions:
+      - base=master
+      - label=backport/0.39.x (Launchpad)
+    actions:
+      backport:
+        branches:
+          - launchpad/backports
121 CHANGELOG.md

@@ -36,36 +36,137 @@ Ref: https://keepachangelog.com/en/1.0.0/

## [Unreleased]

### Features

* [\#8559](https://github.com/cosmos/cosmos-sdk/pull/8559) Added Protobuf-compatible secp256r1 ECDSA signatures.
* [\#8786](https://github.com/cosmos/cosmos-sdk/pull/8786) Enabled secp256r1 in x/auth.
* (rosetta) [\#8729](https://github.com/cosmos/cosmos-sdk/pull/8729) Data API fully supports balance tracking. Construction API can now construct any message supported by the application.

### Client Breaking Changes

-* [\#8363](https://github.com/cosmos/cosmos-sdk/issues/8363) Addresses no longer have a fixed 20-byte length. From the SDK modules' point of view, any byte array between 1 and 255 bytes long is a valid address.
+* [\#8363](https://github.com/cosmos/cosmos-sdk/pull/8363) Addresses no longer have a fixed 20-byte length. From the SDK modules' point of view, any byte array between 1 and 255 bytes long is a valid address.
* [\#8346](https://github.com/cosmos/cosmos-sdk/pull/8346) All CLI `tx` commands generate ServiceMsgs by default. Graceful Amino support has been added to ServiceMsgs to support signing legacy Msgs.
* (crypto/ed25519) [\#8690] Adopt ZIP215 ed25519 verification rules.
* [\#8849](https://github.com/cosmos/cosmos-sdk/pull/8849) Upgrade module no longer supports time-based upgrades.

### API Breaking Changes

* (keyring) [\#8662](https://github.com/cosmos/cosmos-sdk/pull/8662) `NewMnemonic` now receives an additional `passphrase` argument to secure the key generated by the bip39 mnemonic.
* (x/bank) [\#8473](https://github.com/cosmos/cosmos-sdk/pull/8473) Bank keeper does not expose unsafe balance-changing methods such as `SetBalance`, `SetSupply` etc.
* (x/staking) [\#8473](https://github.com/cosmos/cosmos-sdk/pull/8473) On genesis init, if the non-bonded pool and bonded pool balances, coming from the bank module, do not match what is saved in the staking state, the initialization will panic.
* (x/gov) [\#8473](https://github.com/cosmos/cosmos-sdk/pull/8473) On genesis init, if the gov module account balance, coming from bank module state, does not match the one in gov module state, the initialization will panic.
* (x/distribution) [\#8473](https://github.com/cosmos/cosmos-sdk/pull/8473) On genesis init, if the distribution module account balance, coming from bank module state, does not match the one in distribution module state, the initialization will panic.
* (client/keys) [\#8500](https://github.com/cosmos/cosmos-sdk/pull/8500) `InfoImporter` interface is removed from legacy keybase.
* [\#8629](https://github.com/cosmos/cosmos-sdk/pull/8629) Deprecated `SetFullFundraiserPath` from `Config` in favor of `SetPurpose` and `SetCoinType`.
* (x/upgrade) [\#8673](https://github.com/cosmos/cosmos-sdk/pull/8673) Remove IBC logic from x/upgrade. Deprecates IBC fields in an Upgrade Plan. IBC upgrade logic moved to 02-client and an IBC UpgradeProposal is added.
* (x/bank) [\#8517](https://github.com/cosmos/cosmos-sdk/pull/8517) `SupplyI` interface and `Supply` are removed; `sdk.Coins` is used for supply tracking.
* (x/upgrade) [\#8743](https://github.com/cosmos/cosmos-sdk/pull/8743) `UpgradeHandler` includes a new argument `VersionMap` which helps facilitate in-place migrations.
* (x/auth) [\#8828](https://github.com/cosmos/cosmos-sdk/pull/8828) Updated `SigVerifiableTx.GetPubKeys` method signature to return an error.

### State Machine Breaking

* (x/{bank,distrib,gov,slashing,staking}) [\#8363](https://github.com/cosmos/cosmos-sdk/issues/8363) Store keys have been modified to allow for variable-length addresses.
* (x/ibc) [\#8266](https://github.com/cosmos/cosmos-sdk/issues/8266) Add amino JSON for IBC messages in order to support Ledger text signing.
* (x/evidence) [\#8502](https://github.com/cosmos/cosmos-sdk/pull/8502) `HandleEquivocationEvidence` persists the evidence to state.
* (x/gov) [\#7733](https://github.com/cosmos/cosmos-sdk/pull/7733) ADR 037 Implementation: Governance Split Votes
* (x/bank) [\#8656](https://github.com/cosmos/cosmos-sdk/pull/8656) Balance and supply are now correctly tracked via `coin_spent`, `coin_received`, `coinbase` and `burn` events.
* (x/bank) [\#8517](https://github.com/cosmos/cosmos-sdk/pull/8517) Supply is now stored and tracked as `sdk.Coins`
* (store) [\#8790](https://github.com/cosmos/cosmos-sdk/pull/8790) Reduce gas costs by 10x for transient store operations.

### Improvements

* (x/bank) [\#8614](https://github.com/cosmos/cosmos-sdk/issues/8614) Add `Name` and `Symbol` fields to denom metadata
* (x/auth) [\#8522](https://github.com/cosmos/cosmos-sdk/pull/8522) Allow querying all stored accounts
* (crypto/types) [\#8600](https://github.com/cosmos/cosmos-sdk/pull/8600) `CompactBitArray`: optimize the `NumTrueBitsBefore` method and add an `Equal` method.
* (grpc) [\#8815](https://github.com/cosmos/cosmos-sdk/pull/8815) Add orderBy parameter to `TxsByEvents` endpoint.
* (x/upgrade) [\#8743](https://github.com/cosmos/cosmos-sdk/pull/8743) Add tracking of module versions as per ADR-041

### Bug Fixes

* (keyring) [\#8635](https://github.com/cosmos/cosmos-sdk/issues/8635) Remove hardcoded default passphrase value on `NewMnemonic`
* (x/bank) [\#8434](https://github.com/cosmos/cosmos-sdk/pull/8434) Fix legacy REST API `GET /bank/total` and `GET /bank/total/{denom}` in swagger
* (x/slashing) [\#8427](https://github.com/cosmos/cosmos-sdk/pull/8427) Fix query signing infos command
* (server) [\#8399](https://github.com/cosmos/cosmos-sdk/pull/8399) Fix gRPC-web flag default value
* (crypto) [\#8841](https://github.com/cosmos/cosmos-sdk/pull/8841) Fix legacy multisig amino marshaling, allowing migrations to work between v0.39 and v0.40+.

## [v0.42.0](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.42.0) - 2021-03-08

**IMPORTANT**: This release contains an important security fix for all non-Cosmos Hub chains running the Stargate version of the Cosmos SDK (>0.40). Non-hub chains should not be using any version of the SDK in the v0.40.x or v0.41.x release series. See [#8461](https://github.com/cosmos/cosmos-sdk/pull/8461) for more details.

### Improvements

* (x/ibc) [\#8624](https://github.com/cosmos/cosmos-sdk/pull/8624) Emit full header in IBC UpdateClient message.
* (x/crisis) [\#8621](https://github.com/cosmos/cosmos-sdk/issues/8621) Crisis invariant names are now printed to loggers.

### Bug fixes

* (x/evidence) [\#8461](https://github.com/cosmos/cosmos-sdk/pull/8461) Fix bech32 prefix in evidence validator address conversion
* (x/gov) [\#8806](https://github.com/cosmos/cosmos-sdk/issues/8806) Fix `q gov proposals` command's mishandling of the `--status` parameter's values.

## [v0.41.4](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.41.4) - 2021-03-02

**IMPORTANT**: Due to a bug in the v0.41.x series with how evidence handles validator consensus addresses (#8461), SDK-based chains that are not using the default bech32 prefix (cosmos, i.e. all chains except for the Cosmos Hub) should not use this release or any release in the v0.41.x series. Please see #8668 for tracking & timeline for the v0.42.0 release, which will include a fix for this issue.

### Features

* [\#7787](https://github.com/cosmos/cosmos-sdk/pull/7787) Add multisign-batch command.

### Bug fixes

* [\#8730](https://github.com/cosmos/cosmos-sdk/pull/8730) Allow REST endpoint to query txs with multisig addresses.
* [\#8680](https://github.com/cosmos/cosmos-sdk/issues/8680) Fix missing timestamp in GetTxsEvent response [\#8732](https://github.com/cosmos/cosmos-sdk/pull/8732).
* [\#8681](https://github.com/cosmos/cosmos-sdk/issues/8681) Fix missing error message when calling GetTxsEvent [\#8732](https://github.com/cosmos/cosmos-sdk/pull/8732)
* (server) [\#8641](https://github.com/cosmos/cosmos-sdk/pull/8641) Fix Tendermint and application configuration reading from file
* (client/keys) [\#8639](https://github.com/cosmos/cosmos-sdk/pull/8639) Fix keys migrate for multisig, offline, and ledger keys. The migrate command now takes a positional old_home_dir argument.

### Improvements

* (store/cachekv), (x/bank/types) [\#8719](https://github.com/cosmos/cosmos-sdk/pull/8719) Algorithmically fix pathologically slow code
* [\#8701](https://github.com/cosmos/cosmos-sdk/pull/8701) Upgrade to tendermint v0.34.8.
* [\#8714](https://github.com/cosmos/cosmos-sdk/pull/8714) Allow accounts to have a balance of 0 at genesis.

## [v0.41.3](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.41.3) - 2021-02-18

### Bug Fixes

* [\#8617](https://github.com/cosmos/cosmos-sdk/pull/8617) Fix build failures caused by a small API breakage introduced in tendermint v0.34.7.

## [v0.41.2](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.41.2) - 2021-02-18

### Improvements

* Bump tendermint dependency to v0.34.7.

## [v0.41.1](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.41.1) - 2021-02-17

### Bug Fixes

* (grpc) [\#8549](https://github.com/cosmos/cosmos-sdk/pull/8549) Make gRPC requests go through ABCI and disallow concurrency.
* (x/staking) [\#8546](https://github.com/cosmos/cosmos-sdk/pull/8546) Fix caching bug where concurrent calls to GetValidator could cause a node to crash
* (server) [\#8481](https://github.com/cosmos/cosmos-sdk/pull/8481) Don't create files when running `{appd} tendermint show-*` subcommands.
* (client/keys) [\#8436](https://github.com/cosmos/cosmos-sdk/pull/8436) Fix keybase->keyring keys migration.
* (crypto/hd) [\#8607](https://github.com/cosmos/cosmos-sdk/pull/8607) Make DerivePrivateKeyForPath error and not panic on trailing slashes.

### Improvements

* (x/ibc) [\#8458](https://github.com/cosmos/cosmos-sdk/pull/8458) Add `packet_connection` attribute to ibc events to enable relayer filtering
* (x/bank) [\#8479](https://github.com/cosmos/cosmos-sdk/pull/8479) Additional client denom metadata validation for `base` and `display` denoms.
* (x/ibc) [\#8404](https://github.com/cosmos/cosmos-sdk/pull/8404) Reorder IBC `ChanOpenAck` and `ChanOpenConfirm` handler execution to perform core handler first, followed by application callbacks.
* [\#8396](https://github.com/cosmos/cosmos-sdk/pull/8396) Add support for ARM platform
* (codec/types) [\#8605](https://github.com/cosmos/cosmos-sdk/pull/8605) Avoid unnecessary allocations for NewAnyWithCustomTypeURL on error.

## [v0.41.0](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.41.0) - 2021-01-26

### State Machine Breaking

* (x/ibc) [\#8266](https://github.com/cosmos/cosmos-sdk/issues/8266) Add amino JSON support for IBC MsgTransfer in order to support Ledger text signing transfer transactions.
* (x/ibc) [\#8404](https://github.com/cosmos/cosmos-sdk/pull/8404) Reorder IBC `ChanOpenAck` and `ChanOpenConfirm` handler execution to perform core handler first, followed by application callbacks.

### Bug Fixes

* (x/evidence) [#8461](https://github.com/cosmos/cosmos-sdk/pull/8461) Fix bech32 prefix in evidence validator address conversion
* (x/slashing) [\#8427](https://github.com/cosmos/cosmos-sdk/pull/8427) Fix query signing infos command
* (simapp) [\#8418](https://github.com/cosmos/cosmos-sdk/pull/8418) Add balance coin to supply when adding a new genesis account
* (x/bank) [\#8417](https://github.com/cosmos/cosmos-sdk/pull/8417) Validate balances and coin denom metadata on genesis
* (server) [\#8399](https://github.com/cosmos/cosmos-sdk/pull/8399) Fix gRPC-web flag default value
* (client/keys) [\#8436](https://github.com/cosmos/cosmos-sdk/pull/8436) Fix key migration issue
* (server) [\#8481](https://github.com/cosmos/cosmos-sdk/pull/8481) Don't create files when running `{appd} tendermint show-*` subcommands

## [v0.40.1](https://github.com/cosmos/cosmos-sdk/releases/tag/v0.40.1) - 2021-01-19
12 Makefile

@@ -3,6 +3,7 @@
PACKAGES_NOSIMULATION=$(shell go list ./... | grep -v '/simulation')
PACKAGES_SIMTEST=$(shell go list ./... | grep '/simulation')
VERSION := $(shell echo $(shell git describe --always) | sed 's/^v//')
+TMVERSION := $(shell go list -m github.com/tendermint/tendermint | sed 's:.* ::')
COMMIT := $(shell git log -1 --format='%H')
LEDGER_ENABLED ?= true
BINDIR ?= $(GOPATH)/bin

@@ -44,8 +45,6 @@ endif
ifeq (cleveldb,$(findstring cleveldb,$(COSMOS_BUILD_OPTIONS)))
  build_tags += gcc
endif
-build_tags += $(BUILD_TAGS)
-build_tags := $(strip $(build_tags))

whitespace :=
whitespace += $(whitespace)

@@ -58,7 +57,8 @@ ldflags = -X github.com/cosmos/cosmos-sdk/version.Name=sim \
		  -X github.com/cosmos/cosmos-sdk/version.AppName=simd \
		  -X github.com/cosmos/cosmos-sdk/version.Version=$(VERSION) \
		  -X github.com/cosmos/cosmos-sdk/version.Commit=$(COMMIT) \
-		  -X "github.com/cosmos/cosmos-sdk/version.BuildTags=$(build_tags_comma_sep)"
+		  -X "github.com/cosmos/cosmos-sdk/version.BuildTags=$(build_tags_comma_sep)" \
+		  -X github.com/tendermint/tendermint/version.TMCoreSemVer=$(TMVERSION)

# DB backend selection
ifeq (cleveldb,$(findstring cleveldb,$(COSMOS_BUILD_OPTIONS)))

@@ -66,6 +66,7 @@ ifeq (cleveldb,$(findstring cleveldb,$(COSMOS_BUILD_OPTIONS)))
endif
ifeq (badgerdb,$(findstring badgerdb,$(COSMOS_BUILD_OPTIONS)))
  ldflags += -X github.com/cosmos/cosmos-sdk/types.DBBackend=badgerdb
+  BUILD_TAGS += badgerdb
endif
# handle rocksdb
ifeq (rocksdb,$(findstring rocksdb,$(COSMOS_BUILD_OPTIONS)))

@@ -85,6 +86,9 @@ endif
ldflags += $(LDFLAGS)
ldflags := $(strip $(ldflags))

+build_tags += $(BUILD_TAGS)
+build_tags := $(strip $(build_tags))
+
BUILD_FLAGS := -tags "$(build_tags)" -ldflags '$(ldflags)'
# check for nostrip option
ifeq (,$(findstring nostrip,$(COSMOS_BUILD_OPTIONS)))

@@ -369,7 +373,7 @@ proto-all: proto-format proto-lint proto-gen

proto-gen:
	@echo "Generating Protobuf files"
-	$(DOCKER) run --rm -v $(CURDIR):/workspace --workdir /workspace tendermintdev/sdk-proto-gen sh ./scripts/protocgen.sh
+	$(DOCKER) run --rm -v $(CURDIR):/workspace --workdir /workspace tendermintdev/sdk-proto-gen:v0.1 sh ./scripts/protocgen.sh

proto-format:
	@echo "Formatting Protobuf files"
@@ -63,6 +63,10 @@ For more, please go to the [Cosmos SDK Docs](./docs/).

The Cosmos Hub application, `gaia`, has moved to its [own repository](https://github.com/cosmos/gaia). Go there to join the Cosmos Hub mainnet and more.

+## Interblockchain Communication (IBC)
+
+The IBC module for the SDK has moved to its [own repository](https://github.com/cosmos/ibc-go). Go there to build and integrate with the IBC module.
+
## Starport

If you are starting a new app or a new module you can use [Starport](https://github.com/tendermint/starport) to help you get started and speed up development. If you have any questions or find a bug, feel free to open an issue in the repo.
@@ -2,6 +2,7 @@ package baseapp

import (
	"fmt"
	"reflect"

	gogogrpc "github.com/gogo/protobuf/grpc"
	abci "github.com/tendermint/tendermint/abci/types"

@@ -12,13 +13,19 @@ import (
	"github.com/cosmos/cosmos-sdk/client/grpc/reflection"
	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
)

var protoCodec = encoding.GetCodec(proto.Name)

// GRPCQueryRouter routes ABCI Query requests to GRPC handlers
type GRPCQueryRouter struct {
	routes map[string]GRPCQueryHandler
	routes map[string]GRPCQueryHandler
	// returnTypes is a map of FQ method name => its return type. It is used
	// for cache purposes: the first time a method handler is run, we save its
	// return type in this map. Then, on subsequent method handler calls, we
	// decode the ABCI response bytes using the cached return type.
	returnTypes map[string]reflect.Type

	interfaceRegistry codectypes.InterfaceRegistry
	serviceData       []serviceData
}

@@ -34,7 +41,8 @@ var _ gogogrpc.Server = &GRPCQueryRouter{}
// NewGRPCQueryRouter creates a new GRPCQueryRouter
func NewGRPCQueryRouter() *GRPCQueryRouter {
	return &GRPCQueryRouter{
		routes: map[string]GRPCQueryHandler{},
		returnTypes: map[string]reflect.Type{},
		routes:      map[string]GRPCQueryHandler{},
	}
}

@@ -89,8 +97,17 @@ func (qrt *GRPCQueryRouter) RegisterService(sd *grpc.ServiceDesc, handler interf
		if qrt.interfaceRegistry != nil {
			return codectypes.UnpackInterfaces(i, qrt.interfaceRegistry)
		}

		return nil
	}, nil)

	// If it's the first time we call this handler, then we save
	// the return type of the handler in the `returnTypes` map.
	// The return type will be used for decoding subsequent requests.
	if _, found := qrt.returnTypes[fqName]; !found {
		qrt.returnTypes[fqName] = reflect.TypeOf(res)
	}

	if err != nil {
		return abci.ResponseQuery{}, err
	}

@@ -127,3 +144,16 @@ func (qrt *GRPCQueryRouter) SetInterfaceRegistry(interfaceRegistry codectypes.In
		reflection.NewReflectionServiceServer(interfaceRegistry),
	)
}

// returnTypeOf returns the return type of a gRPC method handler. With the way the
// `returnTypes` cache map is set up, the return type of a method handler is
// guaranteed to be found if it's retrieved **after** the method handler has run at
// least once. If not, then a logic error is returned.
func (qrt *GRPCQueryRouter) returnTypeOf(method string) (reflect.Type, error) {
	returnType, found := qrt.returnTypes[method]
	if !found {
		return nil, sdkerrors.Wrapf(sdkerrors.ErrLogic, "cannot find %s return type", method)
	}

	return returnType, nil
}
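The `returnTypes` cache above is a small but easy-to-miss pattern: the router records the concrete response type the first time a handler runs, then uses `reflect.New` to allocate fresh response values for decoding later calls. A minimal standalone sketch of that pattern follows; the names (`typeCache`, `QueryBalanceResponse`, etc.) are hypothetical stand-ins, not the SDK's API:

```go
package main

import (
	"fmt"
	"reflect"
)

// QueryBalanceResponse is a hypothetical response type standing in for
// a generated protobuf message.
type QueryBalanceResponse struct{ Amount int64 }

// typeCache maps a fully-qualified method name to the reflect.Type of
// its response, recorded the first time the handler runs.
var typeCache = map[string]reflect.Type{}

// record saves the response type for a method if not already cached.
func record(method string, res interface{}) {
	if _, found := typeCache[method]; !found {
		typeCache[method] = reflect.TypeOf(res)
	}
}

// newResponse allocates a fresh pointer-to-struct for a cached method,
// suitable for unmarshalling response bytes into.
func newResponse(method string) (interface{}, error) {
	t, found := typeCache[method]
	if !found {
		return nil, fmt.Errorf("cannot find %s return type", method)
	}
	// t is a pointer type (e.g. *QueryBalanceResponse); allocate its element.
	return reflect.New(t.Elem()).Interface(), nil
}

func main() {
	record("/cosmos.bank.v1beta1.Query/Balance", &QueryBalanceResponse{})
	res, err := newResponse("/cosmos.bank.v1beta1.Query/Balance")
	fmt.Printf("%T %v\n", res, err)
}
```

The trade-off is the one the comment in `returnTypeOf` calls out: the cache is only guaranteed to be populated after the handler has run at least once, so lookups before that must be treated as a logic error.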
@@ -2,67 +2,78 @@ package baseapp

import (
	"context"
	"strconv"
	"reflect"

	gogogrpc "github.com/gogo/protobuf/grpc"
	grpcmiddleware "github.com/grpc-ecosystem/go-grpc-middleware"
	grpcrecovery "github.com/grpc-ecosystem/go-grpc-middleware/recovery"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"

	sdk "github.com/cosmos/cosmos-sdk/types"
	"github.com/cosmos/cosmos-sdk/client"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
	grpctypes "github.com/cosmos/cosmos-sdk/types/grpc"
	"github.com/cosmos/cosmos-sdk/types/tx"
)

// GRPCQueryRouter returns the GRPCQueryRouter of a BaseApp.
func (app *BaseApp) GRPCQueryRouter() *GRPCQueryRouter { return app.grpcQueryRouter }

// RegisterGRPCServer registers gRPC services directly with the gRPC server.
func (app *BaseApp) RegisterGRPCServer(server gogogrpc.Server) {
	// Define an interceptor for all gRPC queries: this interceptor will create
	// a new sdk.Context, and pass it into the query handler.
	interceptor := func(grpcCtx context.Context, req interface{}, _ *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp interface{}, err error) {
		// If there's some metadata in the context, retrieve it.
		md, ok := metadata.FromIncomingContext(grpcCtx)
		if !ok {
			return nil, status.Error(codes.Internal, "unable to retrieve metadata")
func (app *BaseApp) RegisterGRPCServer(clientCtx client.Context, server gogogrpc.Server) {
	// Define an interceptor for all gRPC queries: this interceptor will route
	// the query through the `clientCtx`, which itself queries Tendermint.
	interceptor := func(grpcCtx context.Context, req interface{}, info *grpc.UnaryServerInfo, _ grpc.UnaryHandler) (interface{}, error) {
		// Two things can happen here:
		// 1. either we're broadcasting a Tx, in which case we call Tendermint's broadcast endpoint directly,
		// 2. or we are querying for state, in which case we call ABCI's Query.

		// Case 1. Broadcasting a Tx.
		if reqProto, ok := req.(*tx.BroadcastTxRequest); ok {
			if !ok {
				return nil, sdkerrors.Wrapf(sdkerrors.ErrInvalidRequest, "expected %T, got %T", (*tx.BroadcastTxRequest)(nil), req)
			}

			return client.TxServiceBroadcast(grpcCtx, clientCtx, reqProto)
		}

		// Get height header from the request context, if present.
		var height int64
		if heightHeaders := md.Get(grpctypes.GRPCBlockHeightHeader); len(heightHeaders) > 0 {
			height, err = strconv.ParseInt(heightHeaders[0], 10, 64)
			if err != nil {
				return nil, sdkerrors.Wrapf(
					sdkerrors.ErrInvalidRequest,
					"Baseapp.RegisterGRPCServer: invalid height header %q: %v", grpctypes.GRPCBlockHeightHeader, err)
			}
			if err := checkNegativeHeight(height); err != nil {
				return nil, err
			}
		}

		// Create the sdk.Context. Passing false as 2nd arg, as we can't
		// actually support proofs with gRPC right now.
		sdkCtx, err := app.createQueryContext(height, false)
		// Case 2. Querying state.
		inMd, _ := metadata.FromIncomingContext(grpcCtx)
		abciRes, outMd, err := client.RunGRPCQuery(clientCtx, grpcCtx, info.FullMethod, req, inMd)
		if err != nil {
			return nil, err
		}

		// Attach the sdk.Context into the gRPC's context.Context.
		grpcCtx = context.WithValue(grpcCtx, sdk.SdkContextKey, sdkCtx)

		// Add relevant gRPC headers
		if height == 0 {
			height = sdkCtx.BlockHeight() // If height was not set in the request, set it to the latest
		// We need to know the return type of the grpc method for
		// unmarshalling abciRes.Value.
		//
		// When we call each method handler for the first time, we save its
		// return type in the `returnTypes` map (see the method handler in
		// `grpcrouter.go`). By this time, the method handler has already run
		// at least once (in the RunGRPCQuery call), so we're sure the
		// returnTypes map is populated for this method. We're retrieving it
		// for decoding.
		returnType, err := app.GRPCQueryRouter().returnTypeOf(info.FullMethod)
		if err != nil {
			return nil, err
		}
		md = metadata.Pairs(grpctypes.GRPCBlockHeightHeader, strconv.FormatInt(height, 10))
		grpc.SetHeader(grpcCtx, md)

		return handler(grpcCtx, req)
		// returnType is a pointer to a struct. Here, we're creating res which
		// is a new pointer to the underlying struct.
		res := reflect.New(returnType.Elem()).Interface()

		err = protoCodec.Unmarshal(abciRes.Value, res)
		if err != nil {
			return nil, err
		}

		// Send the metadata header back. The metadata currently includes:
		// - block height.
		err = grpc.SendHeader(grpcCtx, outMd)
		if err != nil {
			return nil, err
		}

		return res, nil
	}

	// Loop through all services and methods, add the interceptor, and register
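The interceptor's two-way dispatch (broadcast vs. query) boils down to a type check on the incoming request. A toy sketch of that control flow, using stand-in types rather than the SDK's actual signatures:

```go
package main

import "fmt"

// BroadcastTxRequest stands in for tx.BroadcastTxRequest.
type BroadcastTxRequest struct{ TxBytes []byte }

// dispatch mirrors the interceptor's two cases: a broadcast request is
// sent straight to Tendermint's broadcast endpoint; any other request
// is routed as an ABCI state query.
func dispatch(req interface{}) string {
	if _, ok := req.(*BroadcastTxRequest); ok {
		return "broadcast" // Case 1: forward to the Tx broadcast service
	}
	return "query" // Case 2: run through the ABCI Query path
}

func main() {
	fmt.Println(dispatch(&BroadcastTxRequest{TxBytes: []byte{0x01}}))
	fmt.Println(dispatch("any other request type"))
}
```

Keeping the dispatch in a single unary interceptor means every registered gRPC service gets the same routing behavior without per-service wiring.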
buf.yaml

@@ -26,6 +26,8 @@ lint:
breaking:
  use:
    - FILE
  except:
    - FIELD_NO_DELETE
  ignore:
    - tendermint
    - gogoproto
@@ -99,6 +99,11 @@ func ReadPersistentCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Cont
		clientCtx = clientCtx.WithHomeDir(homeDir)
	}

	if !clientCtx.Simulate || flagSet.Changed(flags.FlagDryRun) {
		dryRun, _ := flagSet.GetBool(flags.FlagDryRun)
		clientCtx = clientCtx.WithSimulation(dryRun)
	}

	if clientCtx.KeyringDir == "" || flagSet.Changed(flags.FlagKeyringDir) {
		keyringDir, _ := flagSet.GetString(flags.FlagKeyringDir)

@@ -191,11 +196,6 @@ func readTxCommandFlags(clientCtx Context, flagSet *pflag.FlagSet) (Context, err
		clientCtx = clientCtx.WithGenerateOnly(genOnly)
	}

	if !clientCtx.Simulate || flagSet.Changed(flags.FlagDryRun) {
		dryRun, _ := flagSet.GetBool(flags.FlagDryRun)
		clientCtx = clientCtx.WithSimulation(dryRun)
	}

	if !clientCtx.Offline || flagSet.Changed(flags.FlagOffline) {
		offline, _ := flagSet.GetBool(flags.FlagOffline)
		clientCtx = clientCtx.WithOffline(offline)
@@ -27,6 +27,7 @@ type Context struct {
	InterfaceRegistry codectypes.InterfaceRegistry
	Input             io.Reader
	Keyring           keyring.Keyring
	KeyringOptions    []keyring.Option
	Output            io.Writer
	OutputFormat      string
	Height            int64

@@ -56,6 +57,12 @@ func (ctx Context) WithKeyring(k keyring.Keyring) Context {
	return ctx
}

// WithKeyringOptions returns a copy of the context with updated keyring options.
func (ctx Context) WithKeyringOptions(opts ...keyring.Option) Context {
	ctx.KeyringOptions = opts
	return ctx
}

// WithInput returns a copy of the context with an updated input.
func (ctx Context) WithInput(r io.Reader) Context {
	ctx.Input = r

@@ -324,9 +331,9 @@ func GetFromFields(kr keyring.Keyring, from string, genOnly bool) (sdk.AccAddres
}

func newKeyringFromFlags(ctx Context, backend string) (keyring.Keyring, error) {
	if ctx.GenerateOnly {
		return keyring.New(sdk.KeyringServiceName(), keyring.BackendMemory, ctx.KeyringDir, ctx.Input)
	if ctx.GenerateOnly || ctx.Simulate {
		return keyring.New(sdk.KeyringServiceName(), keyring.BackendMemory, ctx.KeyringDir, ctx.Input, ctx.KeyringOptions...)
	}

	return keyring.New(sdk.KeyringServiceName(), backend, ctx.KeyringDir, ctx.Input)
	return keyring.New(sdk.KeyringServiceName(), backend, ctx.KeyringDir, ctx.Input, ctx.KeyringOptions...)
}
@@ -1,3 +1,3 @@
package statik

//This just for fixing the error in importing empty github.com/cosmos/cosmos-sdk/client/docs/statik
// This just for fixing the error in importing empty github.com/cosmos/cosmos-sdk/client/docs/statik
@@ -446,6 +446,45 @@ paths:
          description: Invalid request
        500:
          description: Server internal error
  /bank/total:
    get:
      deprecated: true
      summary: Total supply of coins in the chain
      tags:
        - Bank
      produces:
        - application/json
      responses:
        200:
          description: OK
          schema:
            $ref: "#/definitions/Supply"
        500:
          description: Internal Server Error
  /bank/total/{denomination}:
    parameters:
      - in: path
        name: denomination
        description: Coin denomination
        required: true
        type: string
        x-example: uatom
    get:
      deprecated: true
      summary: Total supply of a single coin denomination
      tags:
        - Bank
      produces:
        - application/json
      responses:
        200:
          description: OK
          schema:
            type: string
        400:
          description: Invalid coin denomination
        500:
          description: Internal Server Error
  /auth/accounts/{address}:
    get:
      deprecated: true

@@ -1940,45 +1979,6 @@ paths:
            type: string
        500:
          description: Internal Server Error
  /supply/total:
    get:
      deprecated: true
      summary: Total supply of coins in the chain
      tags:
        - Supply
      produces:
        - application/json
      responses:
        200:
          description: OK
          schema:
            $ref: "#/definitions/Supply"
        500:
          description: Internal Server Error
  /supply/total/{denomination}:
    parameters:
      - in: path
        name: denomination
        description: Coin denomination
        required: true
        type: string
        x-example: uatom
    get:
      deprecated: true
      summary: Total supply of a single coin denomination
      tags:
        - Supply
      produces:
        - application/json
      responses:
        200:
          description: OK
          schema:
            type: string
        400:
          description: Invalid coin denomination
        500:
          description: Internal Server Error
definitions:
  CheckTxResult:
    type: object
@@ -110,7 +110,7 @@ func AddTxFlagsToCmd(cmd *cobra.Command) {
	cmd.Flags().Bool(FlagGenerateOnly, false, "Build an unsigned transaction and write it to STDOUT (when enabled, the local Keybase is not accessible)")
	cmd.Flags().Bool(FlagOffline, false, "Offline mode (does not allow any online functionality)")
	cmd.Flags().BoolP(FlagSkipConfirmation, "y", false, "Skip tx broadcasting prompt confirmation")
	cmd.Flags().String(FlagKeyringBackend, DefaultKeyringBackend, "Select keyring's backend (os|file|kwallet|pass|test)")
	cmd.Flags().String(FlagKeyringBackend, DefaultKeyringBackend, "Select keyring's backend (os|file|kwallet|pass|test|memory)")
	cmd.Flags().String(FlagSignMode, "", "Choose sign mode (direct|amino-json), this is an advanced feature")
	cmd.Flags().Uint64(FlagTimeoutHeight, 0, "Set a block timeout height to prevent the tx from being committed past a certain height")
	cmd.Flags().String(FlagFeeAccount, "", "Fee account pays fees for the transaction instead of deducting from the signer")
@@ -234,9 +234,9 @@ func RegisterReflectionServiceHandlerClient(ctx context.Context, mux *runtime.Se
}

var (
	pattern_ReflectionService_ListAllInterfaces_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4}, []string{"cosmos", "base", "reflection", "v1beta1", "interfaces"}, "", runtime.AssumeColonVerbOpt(true)))
	pattern_ReflectionService_ListAllInterfaces_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4}, []string{"cosmos", "base", "reflection", "v1beta1", "interfaces"}, "", runtime.AssumeColonVerbOpt(false)))

	pattern_ReflectionService_ListImplementations_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4, 1, 0, 4, 1, 5, 5, 2, 6}, []string{"cosmos", "base", "reflection", "v1beta1", "interfaces", "interface_name", "implementations"}, "", runtime.AssumeColonVerbOpt(true)))
	pattern_ReflectionService_ListImplementations_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4, 1, 0, 4, 1, 5, 5, 2, 6}, []string{"cosmos", "base", "reflection", "v1beta1", "interfaces", "interface_name", "implementations"}, "", runtime.AssumeColonVerbOpt(false)))
)

var (
@@ -538,17 +538,17 @@ func RegisterServiceHandlerClient(ctx context.Context, mux *runtime.ServeMux, cl
}

var (
	pattern_Service_GetNodeInfo_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4}, []string{"cosmos", "base", "tendermint", "v1beta1", "node_info"}, "", runtime.AssumeColonVerbOpt(true)))
	pattern_Service_GetNodeInfo_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4}, []string{"cosmos", "base", "tendermint", "v1beta1", "node_info"}, "", runtime.AssumeColonVerbOpt(false)))

	pattern_Service_GetSyncing_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4}, []string{"cosmos", "base", "tendermint", "v1beta1", "syncing"}, "", runtime.AssumeColonVerbOpt(true)))
	pattern_Service_GetSyncing_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4}, []string{"cosmos", "base", "tendermint", "v1beta1", "syncing"}, "", runtime.AssumeColonVerbOpt(false)))

	pattern_Service_GetLatestBlock_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4, 2, 5}, []string{"cosmos", "base", "tendermint", "v1beta1", "blocks", "latest"}, "", runtime.AssumeColonVerbOpt(true)))
	pattern_Service_GetLatestBlock_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4, 2, 5}, []string{"cosmos", "base", "tendermint", "v1beta1", "blocks", "latest"}, "", runtime.AssumeColonVerbOpt(false)))

	pattern_Service_GetBlockByHeight_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4, 1, 0, 4, 1, 5, 5}, []string{"cosmos", "base", "tendermint", "v1beta1", "blocks", "height"}, "", runtime.AssumeColonVerbOpt(true)))
	pattern_Service_GetBlockByHeight_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4, 1, 0, 4, 1, 5, 5}, []string{"cosmos", "base", "tendermint", "v1beta1", "blocks", "height"}, "", runtime.AssumeColonVerbOpt(false)))

	pattern_Service_GetLatestValidatorSet_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4, 2, 5}, []string{"cosmos", "base", "tendermint", "v1beta1", "validatorsets", "latest"}, "", runtime.AssumeColonVerbOpt(true)))
	pattern_Service_GetLatestValidatorSet_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4, 2, 5}, []string{"cosmos", "base", "tendermint", "v1beta1", "validatorsets", "latest"}, "", runtime.AssumeColonVerbOpt(false)))

	pattern_Service_GetValidatorSetByHeight_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4, 1, 0, 4, 1, 5, 5}, []string{"cosmos", "base", "tendermint", "v1beta1", "validatorsets", "height"}, "", runtime.AssumeColonVerbOpt(true)))
	pattern_Service_GetValidatorSetByHeight_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3, 2, 4, 1, 0, 4, 1, 5, 5}, []string{"cosmos", "base", "tendermint", "v1beta1", "validatorsets", "height"}, "", runtime.AssumeColonVerbOpt(false)))
)

var (
@@ -104,6 +104,9 @@ func (s queryServer) GetLatestValidatorSet(ctx context.Context, req *GetLatestVa
	outputValidatorsRes := &GetLatestValidatorSetResponse{
		BlockHeight: validatorsRes.BlockHeight,
		Validators:  make([]*Validator, len(validatorsRes.Validators)),
		Pagination: &qtypes.PageResponse{
			Total: validatorsRes.Total,
		},
	}

	for i, validator := range validatorsRes.Validators {

@@ -156,6 +159,7 @@ func (s queryServer) GetValidatorSetByHeight(ctx context.Context, req *GetValida
	outputValidatorsRes := &GetValidatorSetByHeightResponse{
		BlockHeight: validatorsRes.BlockHeight,
		Validators:  make([]*Validator, len(validatorsRes.Validators)),
		Pagination:  &qtypes.PageResponse{Total: validatorsRes.Total},
	}

	for i, validator := range validatorsRes.Validators {
@@ -131,32 +131,126 @@ func (s IntegrationTestSuite) TestQueryLatestValidatorSet() {
	s.Require().Equal(validatorSetRes.Validators[0].PubKey, anyPub)
}

func (s IntegrationTestSuite) TestQueryValidatorSetByHeight() {
	val := s.network.Validators[0]
func (s IntegrationTestSuite) TestLatestValidatorSet_GRPC() {
	vals := s.network.Validators
	testCases := []struct {
		name      string
		req       *tmservice.GetLatestValidatorSetRequest
		expErr    bool
		expErrMsg string
	}{
		{"nil request", nil, true, "cannot be nil"},
		{"no pagination", &tmservice.GetLatestValidatorSetRequest{}, false, ""},
		{"with pagination", &tmservice.GetLatestValidatorSetRequest{Pagination: &qtypes.PageRequest{Offset: 0, Limit: uint64(len(vals))}}, false, ""},
	}
	for _, tc := range testCases {
		tc := tc
		s.Run(tc.name, func() {
			grpcRes, err := s.queryClient.GetLatestValidatorSet(context.Background(), tc.req)
			if tc.expErr {
				s.Require().Error(err)
				s.Require().Contains(err.Error(), tc.expErrMsg)
			} else {
				s.Require().NoError(err)
				s.Require().Len(grpcRes.Validators, len(vals))
				s.Require().Equal(grpcRes.Pagination.Total, uint64(len(vals)))
				content, ok := grpcRes.Validators[0].PubKey.GetCachedValue().(cryptotypes.PubKey)
				s.Require().Equal(true, ok)
				s.Require().Equal(content, vals[0].PubKey)
			}
		})
	}
}

	// nil pagination
	_, err := s.queryClient.GetValidatorSetByHeight(context.Background(), &tmservice.GetValidatorSetByHeightRequest{
		Height:     1,
		Pagination: nil,
	})
	s.Require().NoError(err)
func (s IntegrationTestSuite) TestLatestValidatorSet_GRPCGateway() {
	vals := s.network.Validators
	testCases := []struct {
		name      string
		url       string
		expErr    bool
		expErrMsg string
	}{
		{"no pagination", fmt.Sprintf("%s/cosmos/base/tendermint/v1beta1/validatorsets/latest", vals[0].APIAddress), false, ""},
		{"pagination invalid fields", fmt.Sprintf("%s/cosmos/base/tendermint/v1beta1/validatorsets/latest?pagination.offset=-1&pagination.limit=-2", vals[0].APIAddress), true, "strconv.ParseUint"},
		{"with pagination", fmt.Sprintf("%s/cosmos/base/tendermint/v1beta1/validatorsets/latest?pagination.offset=0&pagination.limit=2", vals[0].APIAddress), false, ""},
	}
	for _, tc := range testCases {
		tc := tc
		s.Run(tc.name, func() {
			res, err := rest.GetRequest(tc.url)
			s.Require().NoError(err)
			if tc.expErr {
				s.Require().Contains(string(res), tc.expErrMsg)
			} else {
				var result tmservice.GetLatestValidatorSetResponse
				err = vals[0].ClientCtx.JSONMarshaler.UnmarshalJSON(res, &result)
				s.Require().NoError(err)
				s.Require().Equal(uint64(len(vals)), result.Pagination.Total)
				anyPub, err := codectypes.NewAnyWithValue(vals[0].PubKey)
				s.Require().NoError(err)
				s.Require().Equal(result.Validators[0].PubKey, anyPub)
			}
		})
	}
}

	_, err = s.queryClient.GetValidatorSetByHeight(context.Background(), &tmservice.GetValidatorSetByHeightRequest{
		Height: 1,
		Pagination: &qtypes.PageRequest{
			Offset: 0,
			Limit:  10,
		}})
	s.Require().NoError(err)
func (s IntegrationTestSuite) TestValidatorSetByHeight_GRPC() {
	vals := s.network.Validators
	testCases := []struct {
		name      string
		req       *tmservice.GetValidatorSetByHeightRequest
		expErr    bool
		expErrMsg string
	}{
		{"nil request", nil, true, "request cannot be nil"},
		{"empty request", &tmservice.GetValidatorSetByHeightRequest{}, true, "height must be greater than 0"},
		{"no pagination", &tmservice.GetValidatorSetByHeightRequest{Height: 1}, false, ""},
		{"with pagination", &tmservice.GetValidatorSetByHeightRequest{Height: 1, Pagination: &qtypes.PageRequest{Offset: 0, Limit: 1}}, false, ""},
	}
	for _, tc := range testCases {
		tc := tc
		s.Run(tc.name, func() {
			grpcRes, err := s.queryClient.GetValidatorSetByHeight(context.Background(), tc.req)
			if tc.expErr {
				s.Require().Error(err)
				s.Require().Contains(err.Error(), tc.expErrMsg)
			} else {
				s.Require().NoError(err)
				s.Require().Len(grpcRes.Validators, len(vals))
				s.Require().Equal(grpcRes.Pagination.Total, uint64(len(vals)))
			}
		})
	}
}

	// no pagination rest
	_, err = rest.GetRequest(fmt.Sprintf("%s/cosmos/base/tendermint/v1beta1/validatorsets/%d", val.APIAddress, 1))
	s.Require().NoError(err)

	// rest query with pagination
	restRes, err := rest.GetRequest(fmt.Sprintf("%s/cosmos/base/tendermint/v1beta1/validatorsets/%d?pagination.offset=%d&pagination.limit=%d", val.APIAddress, 1, 0, 1))
	var validatorSetRes tmservice.GetValidatorSetByHeightResponse
	s.Require().NoError(val.ClientCtx.JSONMarshaler.UnmarshalJSON(restRes, &validatorSetRes))
func (s IntegrationTestSuite) TestValidatorSetByHeight_GRPCGateway() {
	vals := s.network.Validators
	testCases := []struct {
		name      string
		url       string
		expErr    bool
		expErrMsg string
	}{
		{"invalid height", fmt.Sprintf("%s/cosmos/base/tendermint/v1beta1/validatorsets/%d", vals[0].APIAddress, -1), true, "height must be greater than 0"},
		{"no pagination", fmt.Sprintf("%s/cosmos/base/tendermint/v1beta1/validatorsets/%d", vals[0].APIAddress, 1), false, ""},
		{"pagination invalid fields", fmt.Sprintf("%s/cosmos/base/tendermint/v1beta1/validatorsets/%d?pagination.offset=-1&pagination.limit=-2", vals[0].APIAddress, 1), true, "strconv.ParseUint"},
		{"with pagination", fmt.Sprintf("%s/cosmos/base/tendermint/v1beta1/validatorsets/%d?pagination.offset=0&pagination.limit=2", vals[0].APIAddress, 1), false, ""},
	}
	for _, tc := range testCases {
		tc := tc
		s.Run(tc.name, func() {
			res, err := rest.GetRequest(tc.url)
			s.Require().NoError(err)
			if tc.expErr {
				s.Require().Contains(string(res), tc.expErrMsg)
			} else {
				var result tmservice.GetValidatorSetByHeightResponse
				err = vals[0].ClientCtx.JSONMarshaler.UnmarshalJSON(res, &result)
				s.Require().NoError(err)
				s.Require().Equal(uint64(len(vals)), result.Pagination.Total)
			}
		})
	}
}

func TestIntegrationTestSuite(t *testing.T) {
@@ -24,86 +24,54 @@ var _ gogogrpc.ClientConn = Context{}
var protoCodec = encoding.GetCodec(proto.Name)

// Invoke implements the grpc ClientConn.Invoke method
func (ctx Context) Invoke(grpcCtx gocontext.Context, method string, args, reply interface{}, opts ...grpc.CallOption) (err error) {
func (ctx Context) Invoke(grpcCtx gocontext.Context, method string, req, reply interface{}, opts ...grpc.CallOption) (err error) {
	// Two things can happen here:
	// 1. either we're broadcasting a Tx, in which case we call Tendermint's broadcast endpoint directly,
	// 2. or we are querying for state, in which case we call ABCI's Query.

	// In both cases, we don't allow empty request args (it will panic unexpectedly).
	if reflect.ValueOf(args).IsNil() {
	// In both cases, we don't allow empty request req (it will panic unexpectedly).
	if reflect.ValueOf(req).IsNil() {
		return sdkerrors.Wrap(sdkerrors.ErrInvalidRequest, "request cannot be nil")
	}

	// Case 1. Broadcasting a Tx.
	if isBroadcast(method) {
		req, ok := args.(*tx.BroadcastTxRequest)
	if reqProto, ok := req.(*tx.BroadcastTxRequest); ok {
		if !ok {
			return sdkerrors.Wrapf(sdkerrors.ErrInvalidRequest, "expected %T, got %T", (*tx.BroadcastTxRequest)(nil), args)
			return sdkerrors.Wrapf(sdkerrors.ErrInvalidRequest, "expected %T, got %T", (*tx.BroadcastTxRequest)(nil), req)
		}
		res, ok := reply.(*tx.BroadcastTxResponse)
		resProto, ok := reply.(*tx.BroadcastTxResponse)
		if !ok {
			return sdkerrors.Wrapf(sdkerrors.ErrInvalidRequest, "expected %T, got %T", (*tx.BroadcastTxResponse)(nil), args)
			return sdkerrors.Wrapf(sdkerrors.ErrInvalidRequest, "expected %T, got %T", (*tx.BroadcastTxResponse)(nil), req)
		}

		broadcastRes, err := TxServiceBroadcast(grpcCtx, ctx, req)
		broadcastRes, err := TxServiceBroadcast(grpcCtx, ctx, reqProto)
		if err != nil {
			return err
		}
		*res = *broadcastRes
		*resProto = *broadcastRes

		return err
	}

	// Case 2. Querying state.
	reqBz, err := protoCodec.Marshal(args)
	inMd, _ := metadata.FromOutgoingContext(grpcCtx)
	abciRes, outMd, err := RunGRPCQuery(ctx, grpcCtx, method, req, inMd)
	if err != nil {
		return err
	}

	// parse height header
	md, _ := metadata.FromOutgoingContext(grpcCtx)
	if heights := md.Get(grpctypes.GRPCBlockHeightHeader); len(heights) > 0 {
		height, err := strconv.ParseInt(heights[0], 10, 64)
		if err != nil {
			return err
		}
		if height < 0 {
			return sdkerrors.Wrapf(
				sdkerrors.ErrInvalidRequest,
				"client.Context.Invoke: height (%d) from %q must be >= 0", height, grpctypes.GRPCBlockHeightHeader)
		}

		ctx = ctx.WithHeight(height)
	}

	req := abci.RequestQuery{
		Path: method,
		Data: reqBz,
	}

	res, err := ctx.QueryABCI(req)
	err = protoCodec.Unmarshal(abciRes.Value, reply)
	if err != nil {
		return err
	}

	err = protoCodec.Unmarshal(res.Value, reply)
	if err != nil {
		return err
	}

	// Create header metadata. For now the headers contain:
	// - block height
	// We then parse all the call options; if a call option is a
	// HeaderCallOption, then we manually set the value of that header to the
	// metadata.
	md = metadata.Pairs(grpctypes.GRPCBlockHeightHeader, strconv.FormatInt(res.Height, 10))
	for _, callOpt := range opts {
		header, ok := callOpt.(grpc.HeaderCallOption)
		if !ok {
			continue
		}

		*header.HeaderAddr = md
		*header.HeaderAddr = outMd
	}

	if ctx.InterfaceRegistry != nil {

@@ -118,6 +86,47 @@ func (Context) NewStream(gocontext.Context, *grpc.StreamDesc, string, ...grpc.Ca
	return nil, fmt.Errorf("streaming rpc not supported")
}

func isBroadcast(method string) bool {
	return method == "/cosmos.tx.v1beta1.Service/BroadcastTx"
// RunGRPCQuery runs a gRPC query from the clientCtx, given all necessary
// arguments for the gRPC method, and returns the ABCI response. It is used
// to factorize code between client (Invoke) and server (RegisterGRPCServer)
// gRPC handlers.
func RunGRPCQuery(ctx Context, grpcCtx gocontext.Context, method string, req interface{}, md metadata.MD) (abci.ResponseQuery, metadata.MD, error) {
	reqBz, err := protoCodec.Marshal(req)
	if err != nil {
		return abci.ResponseQuery{}, nil, err
	}

	// parse height header
	if heights := md.Get(grpctypes.GRPCBlockHeightHeader); len(heights) > 0 {
		height, err := strconv.ParseInt(heights[0], 10, 64)
		if err != nil {
			return abci.ResponseQuery{}, nil, err
		}
		if height < 0 {
			return abci.ResponseQuery{}, nil, sdkerrors.Wrapf(
				sdkerrors.ErrInvalidRequest,
				"client.Context.Invoke: height (%d) from %q must be >= 0", height, grpctypes.GRPCBlockHeightHeader)
		}

		ctx = ctx.WithHeight(height)
	}

	abciReq := abci.RequestQuery{
		Path: method,
		Data: reqBz,
	}

	abciRes, err := ctx.QueryABCI(abciReq)
	if err != nil {
		return abci.ResponseQuery{}, nil, err
	}

	// Create header metadata. For now the headers contain:
	// - block height
	// We then parse all the call options; if a call option is a
	// HeaderCallOption, then we manually set the value of that header to the
	// metadata.
	md = metadata.Pairs(grpctypes.GRPCBlockHeightHeader, strconv.FormatInt(abciRes.Height, 10))

	return abciRes, md, nil
}
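The height-header handling in `RunGRPCQuery` is the part most worth getting right: take the first value of the block-height metadata key, parse it as a base-10 int64, and reject negative heights. A stdlib-only sketch of that logic, with `blockHeightHeader` standing in for `grpctypes.GRPCBlockHeightHeader` (the exact key string is an assumption here):

```go
package main

import (
	"fmt"
	"strconv"
)

// blockHeightHeader stands in for grpctypes.GRPCBlockHeightHeader;
// treat the exact key name as an assumption for this sketch.
const blockHeightHeader = "x-cosmos-block-height"

// parseHeightHeader mirrors the height handling in RunGRPCQuery: take
// the first value for the header, parse it as a base-10 int64, and
// reject negative heights. A missing header means "query latest state".
func parseHeightHeader(md map[string][]string) (int64, error) {
	heights := md[blockHeightHeader]
	if len(heights) == 0 {
		return 0, nil // no header: query the latest state
	}
	height, err := strconv.ParseInt(heights[0], 10, 64)
	if err != nil {
		return 0, err
	}
	if height < 0 {
		return 0, fmt.Errorf("height (%d) from %q must be >= 0", height, blockHeightHeader)
	}
	return height, nil
}

func main() {
	h, err := parseHeightHeader(map[string][]string{blockHeightHeader: {"42"}})
	fmt.Println(h, err)
}
```

Centralizing this in `RunGRPCQuery` is what lets the client (`Invoke`) and the server (`RegisterGRPCServer`) share one implementation instead of duplicating the parsing and validation.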
@@ -90,21 +90,7 @@ func runAddCmd(cmd *cobra.Command, args []string) error {
		return err
	}

	var kr keyring.Keyring

	dryRun, _ := cmd.Flags().GetBool(flags.FlagDryRun)
	if dryRun {
		kr, err = keyring.New(sdk.KeyringServiceName(), keyring.BackendMemory, clientCtx.KeyringDir, buf)
	} else {
		backend, _ := cmd.Flags().GetString(flags.FlagKeyringBackend)
		kr, err = keyring.New(sdk.KeyringServiceName(), backend, clientCtx.KeyringDir, buf)
	}

	if err != nil {
		return err
	}

	return RunAddCmd(cmd, args, kr, buf)
	return RunAddCmd(cmd, args, clientCtx.Keyring, buf)
}

/*
@@ -29,8 +29,8 @@ func Test_runAddCmdLedgerWithCustomCoinType(t *testing.T) {
 	bech32PrefixConsAddr := "terravalcons"
 	bech32PrefixConsPub := "terravalconspub"

+	config.SetPurpose(44)
 	config.SetCoinType(330)
-	config.SetFullFundraiserPath("44'/330'/0'/0/0")
 	config.SetBech32PrefixForAccount(bech32PrefixAccAddr, bech32PrefixAccPub)
 	config.SetBech32PrefixForValidator(bech32PrefixValAddr, bech32PrefixValPub)
 	config.SetBech32PrefixForConsensusNode(bech32PrefixConsAddr, bech32PrefixConsPub)

@@ -77,8 +77,8 @@ func Test_runAddCmdLedgerWithCustomCoinType(t *testing.T) {
 		"terrapub1addwnpepqvpg7r26nl2pvqqern00m6s9uaax3hauu2rzg8qpjzq9hy6xve7sw0d84m6",
 		sdk.MustBech32ifyPubKey(sdk.Bech32PubKeyTypeAccPub, key1.GetPubKey()))

+	config.SetPurpose(44)
 	config.SetCoinType(118)
-	config.SetFullFundraiserPath("44'/118'/0'/0/0")
 	config.SetBech32PrefixForAccount(sdk.Bech32PrefixAccAddr, sdk.Bech32PrefixAccPub)
 	config.SetBech32PrefixForValidator(sdk.Bech32PrefixValAddr, sdk.Bech32PrefixValPub)
 	config.SetBech32PrefixForConsensusNode(sdk.Bech32PrefixConsAddr, sdk.Bech32PrefixConsPub)
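For context on the `SetPurpose`/`SetCoinType` changes above: the BIP44 derivation path these setters feed is just a formatted string. A minimal sketch, assuming the conventional account/change/index of 0 (the `fullBIP44Path` helper name is hypothetical, not an SDK API):

```go
package main

import "fmt"

// fullBIP44Path composes a BIP44 HD derivation path from the configured
// purpose and coin type, with hardened purpose/coin-type/account segments
// and account/change/index fixed at 0.
func fullBIP44Path(purpose, coinType uint32) string {
	return fmt.Sprintf("%d'/%d'/0'/0/0", purpose, coinType)
}

func main() {
	fmt.Println(fullBIP44Path(44, 330)) // 44'/330'/0'/0/0
	fmt.Println(fullBIP44Path(44, 118)) // 44'/118'/0'/0/0
}
```

This is why the tests above can drop the hard-coded `SetFullFundraiserPath("44'/330'/0'/0/0")` call: the same path falls out of purpose 44 and coin type 330.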
@@ -31,7 +31,7 @@ func Test_runDeleteCmd(t *testing.T) {
 	fakeKeyName1 := "runDeleteCmd_Key1"
 	fakeKeyName2 := "runDeleteCmd_Key2"

-	path := sdk.GetConfig().GetFullFundraiserPath()
+	path := sdk.GetConfig().GetFullBIP44Path()

 	kb, err := keyring.New(sdk.KeyringServiceName(), keyring.BackendTest, kbHome, mockIn)
 	require.NoError(t, err)

@@ -39,7 +39,7 @@ func Test_runDeleteCmd(t *testing.T) {
 	_, err = kb.NewAccount(fakeKeyName1, testutil.TestMnemonic, "", path, hd.Secp256k1)
 	require.NoError(t, err)

-	_, _, err = kb.NewMnemonic(fakeKeyName2, keyring.English, sdk.FullFundraiserPath, hd.Secp256k1)
+	_, _, err = kb.NewMnemonic(fakeKeyName2, keyring.English, sdk.FullFundraiserPath, keyring.DefaultBIP39Passphrase, hd.Secp256k1)
 	require.NoError(t, err)

 	cmd.SetArgs([]string{"blah", fmt.Sprintf("--%s=%s", flags.FlagHome, kbHome)})
@@ -31,7 +31,7 @@ func Test_runExportCmd(t *testing.T) {
 		kb.Delete("keyname1") // nolint:errcheck
 	})

-	path := sdk.GetConfig().GetFullFundraiserPath()
+	path := sdk.GetConfig().GetFullBIP44Path()
 	_, err = kb.NewAccount("keyname1", testutil.TestMnemonic, "", path, hd.Secp256k1)
 	require.NoError(t, err)
@@ -30,7 +30,7 @@ func Test_runListCmd(t *testing.T) {
 	clientCtx := client.Context{}.WithKeyring(kb)
 	ctx := context.WithValue(context.Background(), client.ClientContextKey, &clientCtx)

-	path := "" //sdk.GetConfig().GetFullFundraiserPath()
+	path := "" //sdk.GetConfig().GetFullBIP44Path()
 	_, err = kb.NewAccount("something", testutil.TestMnemonic, "", path, hd.Secp256k1)
 	require.NoError(t, err)
@@ -22,16 +22,18 @@ const migratePassphrase = "NOOP_PASSPHRASE"
 // MigrateCommand migrates key information from legacy keybase to OS secret store.
 func MigrateCommand() *cobra.Command {
 	cmd := &cobra.Command{
-		Use:   "migrate",
+		Use:   "migrate <old_home_dir>",
 		Short: "Migrate keys from the legacy (db-based) Keybase",
-		Long: `Migrate key information from the legacy (db-based) Keybase to the new keyring-based Keybase.
+		Long: `Migrate key information from the legacy (db-based) Keybase to the new keyring-based Keyring.
 The legacy Keybase used to persist keys in a LevelDB database stored in a 'keys' sub-directory of
 the old client application's home directory, e.g. $HOME/.gaiacli/keys/.
 For each key material entry, the command will prompt if the key should be skipped or not. If the key
 is not to be skipped, the passphrase must be entered. The key will only be migrated if the passphrase
 is correct. Otherwise, the command will exit and migration must be repeated.

 It is recommended to run in 'dry-run' mode first to verify all key migration material.
 `,
-		Args: cobra.ExactArgs(0),
+		Args: cobra.ExactArgs(1),
 		RunE: runMigrateCmd,
 	}

@@ -44,12 +46,12 @@ func runMigrateCmd(cmd *cobra.Command, args []string) error {

 	// instantiate legacy keybase
 	var legacyKb keyring.LegacyKeybase
-	legacyKb, err := NewLegacyKeyBaseFromDir(rootDir)
+	legacyKb, err := NewLegacyKeyBaseFromDir(args[0])
 	if err != nil {
 		return err
 	}

-	defer legacyKb.Close()
+	defer func() { _ = legacyKb.Close() }()

 	// fetch list of keys from legacy keybase
 	oldKeys, err := legacyKb.List()

@@ -71,7 +73,7 @@ func runMigrateCmd(cmd *cobra.Command, args []string) error {
 		return errors.Wrap(err, "failed to create temporary directory for dryrun migration")
 	}

-	defer os.RemoveAll(tmpDir)
+	defer func() { _ = os.RemoveAll(tmpDir) }()

 	migrator, err = keyring.New(keyringServiceName, keyring.BackendTest, tmpDir, buf)
 } else {

@@ -91,11 +93,11 @@ func runMigrateCmd(cmd *cobra.Command, args []string) error {
 		return nil
 	}

-	for _, key := range oldKeys {
-		keyName := key.GetName()
-		keyType := key.GetType()
+	for _, oldInfo := range oldKeys {
+		keyName := oldInfo.GetName()
+		keyType := oldInfo.GetType()

-		cmd.PrintErrf("Migrating key: '%s (%s)' ...\n", key.GetName(), keyType)
+		cmd.PrintErrf("Migrating key: '%s (%s)' ...\n", keyName, keyType)

 		// allow user to skip migrating specific keys
 		ok, err := input.GetConfirmation("Skip key migration?", buf, cmd.ErrOrStderr())

@@ -106,13 +108,15 @@ func runMigrateCmd(cmd *cobra.Command, args []string) error {
 			continue
 		}

+		// TypeLocal needs an additional step to ask password.
+		// The other keyring types are handled by ImportInfo.
 		if keyType != keyring.TypeLocal {
-			pubkeyArmor, err := legacyKb.ExportPubKey(keyName)
-			if err != nil {
-				return err
+			infoImporter, ok := migrator.(keyring.LegacyInfoImporter)
+			if !ok {
+				return fmt.Errorf("the Keyring implementation does not support import operations of Info types")
 			}

-			if err := migrator.ImportPubKey(keyName, pubkeyArmor); err != nil {
+			if err = infoImporter.ImportInfo(oldInfo); err != nil {
 				return err
 			}

@@ -135,8 +139,9 @@ func runMigrateCmd(cmd *cobra.Command, args []string) error {
 		if err := migrator.ImportPrivKey(keyName, armoredPriv, migratePassphrase); err != nil {
 			return err
 		}
+
 	}
-	cmd.Print("Migration Complete")
+	cmd.PrintErrln("Migration complete.")

 	return err
 }
@@ -5,44 +5,38 @@ import (
 	"fmt"
 	"testing"

-	"github.com/cosmos/cosmos-sdk/client"
-
 	"github.com/stretchr/testify/require"

 	"github.com/otiai10/copy"
 	"github.com/stretchr/testify/assert"
-	"github.com/tendermint/tendermint/libs/cli"

+	"github.com/cosmos/cosmos-sdk/client"
 	"github.com/cosmos/cosmos-sdk/client/flags"
 	"github.com/cosmos/cosmos-sdk/crypto/keyring"
 	"github.com/cosmos/cosmos-sdk/testutil"
 )

 func Test_runMigrateCmd(t *testing.T) {
-	cmd := AddKeyCommand()
-	_ = testutil.ApplyMockIODiscardOutErr(cmd)
-	cmd.Flags().AddFlagSet(Commands("home").PersistentFlags())
-
 	kbHome := t.TempDir()

 	clientCtx := client.Context{}.WithKeyringDir(kbHome)
 	ctx := context.WithValue(context.Background(), client.ClientContextKey, &clientCtx)

-	copy.Copy("testdata", kbHome)
-	cmd.SetArgs([]string{
-		"keyname1",
-		fmt.Sprintf("--%s=%s", cli.OutputFlag, OutputFormatText),
-		fmt.Sprintf("--%s=%s", flags.FlagKeyringBackend, keyring.BackendTest),
-	})
-	assert.NoError(t, cmd.ExecuteContext(ctx))
+	require.NoError(t, copy.Copy("testdata", kbHome))

-	cmd = MigrateCommand()
+	cmd := MigrateCommand()
 	cmd.Flags().AddFlagSet(Commands("home").PersistentFlags())
-	mockIn := testutil.ApplyMockIODiscardOutErr(cmd)
+	//mockIn := testutil.ApplyMockIODiscardOutErr(cmd)
+	mockIn, mockOut := testutil.ApplyMockIO(cmd)

 	cmd.SetArgs([]string{
-		fmt.Sprintf("--%s=%s", flags.FlagHome, kbHome),
+		kbHome,
+		//fmt.Sprintf("--%s=%s", flags.FlagHome, kbHome),
 		fmt.Sprintf("--%s=true", flags.FlagDryRun),
 		fmt.Sprintf("--%s=%s", flags.FlagKeyringBackend, keyring.BackendTest),
 	})

-	mockIn.Reset("test1234\ntest1234\n")
+	mockIn.Reset("\n12345678\n\n\n\n\n")
+	t.Log(mockOut.String())
 	assert.NoError(t, cmd.ExecuteContext(ctx))
 }
Binary file not shown.
Binary file not shown.
@@ -1 +1 @@
-MANIFEST-000005
+MANIFEST-000167
@@ -1 +1 @@
-MANIFEST-000003
+MANIFEST-000165
@ -1,30 +1,876 @@
|
|||
=============== Feb 2, 2021 (IST) ===============
|
||||
00:03:25.348369 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
00:03:25.350695 db@open opening
|
||||
00:03:25.350888 version@stat F·[] S·0B[] Sc·[]
|
||||
00:03:25.351864 db@janitor F·2 G·0
|
||||
00:03:25.351881 db@open done T·1.169825ms
|
||||
00:03:25.351895 db@close closing
|
||||
00:03:25.351929 db@close done T·33.042µs
|
||||
=============== Feb 2, 2021 (IST) ===============
|
||||
00:03:34.450638 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
00:03:34.450722 version@stat F·[] S·0B[] Sc·[]
|
||||
00:03:34.450737 db@open opening
|
||||
00:03:34.450765 journal@recovery F·1
|
||||
00:03:34.450851 journal@recovery recovering @1
|
||||
00:03:34.451173 version@stat F·[] S·0B[] Sc·[]
|
||||
00:03:34.454278 db@janitor F·2 G·0
|
||||
00:03:34.454298 db@open done T·3.548046ms
|
||||
00:03:34.454307 db@close closing
|
||||
00:03:34.454327 db@close done T·19.017µs
|
||||
=============== Feb 2, 2021 (IST) ===============
|
||||
00:03:42.025705 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
00:03:42.025892 version@stat F·[] S·0B[] Sc·[]
|
||||
00:03:42.025907 db@open opening
|
||||
00:03:42.025943 journal@recovery F·1
|
||||
00:03:42.026790 journal@recovery recovering @2
|
||||
00:03:42.026946 version@stat F·[] S·0B[] Sc·[]
|
||||
00:03:42.031645 db@janitor F·2 G·0
|
||||
00:03:42.031661 db@open done T·5.750008ms
|
||||
00:03:42.283102 db@close closing
|
||||
00:03:42.283162 db@close done T·58.775µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
14:56:38.444867 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
14:56:38.447630 db@open opening
|
||||
14:56:38.447826 version@stat F·[] S·0B[] Sc·[]
|
||||
14:56:38.449162 db@janitor F·2 G·0
|
||||
14:56:38.449180 db@open done T·1.537964ms
|
||||
14:56:38.449193 db@close closing
|
||||
14:56:38.449264 db@close done T·69.313µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
14:56:49.081871 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
14:56:49.081975 version@stat F·[] S·0B[] Sc·[]
|
||||
14:56:49.081994 db@open opening
|
||||
14:56:49.082040 journal@recovery F·1
|
||||
14:56:49.082399 journal@recovery recovering @1
|
||||
14:56:49.083134 version@stat F·[] S·0B[] Sc·[]
|
||||
14:56:49.088411 db@janitor F·2 G·0
|
||||
14:56:49.088430 db@open done T·6.428462ms
|
||||
14:56:49.088440 db@close closing
|
||||
14:56:49.088491 db@close done T·48.589µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
14:56:55.214003 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
14:56:55.214144 version@stat F·[] S·0B[] Sc·[]
|
||||
14:56:55.214165 db@open opening
|
||||
14:56:55.214215 journal@recovery F·1
|
||||
14:56:55.214329 journal@recovery recovering @2
|
||||
14:56:55.214750 version@stat F·[] S·0B[] Sc·[]
|
||||
14:56:55.221347 db@janitor F·2 G·0
|
||||
14:56:55.221365 db@open done T·7.194565ms
|
||||
14:56:55.608587 db@close closing
|
||||
14:56:55.608644 db@close done T·54.685µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
14:57:07.211101 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
14:57:07.211224 version@stat F·[] S·0B[] Sc·[]
|
||||
14:57:07.211243 db@open opening
|
||||
14:57:07.211287 journal@recovery F·1
|
||||
14:57:07.211388 journal@recovery recovering @4
|
||||
14:57:07.213734 memdb@flush created L0@6 N·2 S·470B "cos..ess,v2":"val..nfo,v1"
|
||||
14:57:07.214142 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
14:57:07.218723 db@janitor F·3 G·0
|
||||
14:57:07.218743 db@open done T·7.488657ms
|
||||
14:57:07.218804 db@close closing
|
||||
14:57:07.218842 db@close done T·36.603µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
14:57:16.418006 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
14:57:16.418133 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
14:57:16.418153 db@open opening
|
||||
14:57:16.418199 journal@recovery F·1
|
||||
14:57:16.418508 journal@recovery recovering @7
|
||||
14:57:16.418891 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
14:57:16.425395 db@janitor F·3 G·0
|
||||
14:57:16.425423 db@open done T·7.257565ms
|
||||
14:57:16.425482 db@close closing
|
||||
14:57:16.425522 db@close done T·38.172µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
14:57:16.425854 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
14:57:16.425965 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
14:57:16.425983 db@open opening
|
||||
14:57:16.426027 journal@recovery F·1
|
||||
14:57:16.426133 journal@recovery recovering @9
|
||||
14:57:16.426324 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
14:57:16.431088 db@janitor F·3 G·0
|
||||
14:57:16.431103 db@open done T·5.115335ms
|
||||
14:57:16.431142 db@close closing
|
||||
14:57:16.431179 db@close done T·35.705µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
14:57:16.431287 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
14:57:16.431376 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
14:57:16.431394 db@open opening
|
||||
14:57:16.431437 journal@recovery F·1
|
||||
14:57:16.431721 journal@recovery recovering @11
|
||||
14:57:16.432205 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
14:57:16.437468 db@janitor F·3 G·0
|
||||
14:57:16.437486 db@open done T·6.087128ms
|
||||
14:57:16.437529 db@close closing
|
||||
14:57:16.437571 db@close done T·40.188µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
14:57:16.437907 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
14:57:16.438006 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
14:57:16.438024 db@open opening
|
||||
14:57:16.438067 journal@recovery F·1
|
||||
14:57:16.438573 journal@recovery recovering @13
|
||||
14:57:16.439155 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
14:57:16.443451 db@janitor F·3 G·0
|
||||
14:57:16.443466 db@open done T·5.437579ms
|
||||
14:57:16.443511 db@close closing
|
||||
14:57:16.443634 db@close done T·118.642µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
14:57:16.443733 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
14:57:16.443847 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
14:57:16.443864 db@open opening
|
||||
14:57:16.443915 journal@recovery F·1
|
||||
14:57:16.444629 journal@recovery recovering @15
|
||||
14:57:16.445570 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
14:57:16.450978 db@janitor F·3 G·0
|
||||
14:57:16.451001 db@open done T·7.132193ms
|
||||
14:57:16.451050 db@close closing
|
||||
14:57:16.451089 db@close done T·37.371µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
14:57:19.439656 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
14:57:19.439775 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
14:57:19.439793 db@open opening
|
||||
14:57:19.439845 journal@recovery F·1
|
||||
14:57:19.440199 journal@recovery recovering @17
|
||||
14:57:19.440624 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
14:57:19.445819 db@janitor F·3 G·0
|
||||
14:57:19.445837 db@open done T·6.03822ms
|
||||
14:57:19.828985 db@close closing
|
||||
14:57:19.829058 db@close done T·71.028µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
15:07:04.002859 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:07:04.002990 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:04.003010 db@open opening
|
||||
15:07:04.003081 journal@recovery F·1
|
||||
15:07:04.003191 journal@recovery recovering @19
|
||||
15:07:04.003591 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:04.008917 db@janitor F·3 G·0
|
||||
15:07:04.008942 db@open done T·5.916433ms
|
||||
15:07:04.009005 db@close closing
|
||||
15:07:04.009050 db@close done T·42.762µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
15:07:15.240666 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:07:15.240802 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:15.240825 db@open opening
|
||||
15:07:15.240871 journal@recovery F·1
|
||||
15:07:15.241288 journal@recovery recovering @21
|
||||
15:07:15.241702 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:15.249270 db@janitor F·3 G·0
|
||||
15:07:15.249299 db@open done T·8.459432ms
|
||||
15:07:15.249363 db@close closing
|
||||
15:07:15.249404 db@close done T·39.294µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
15:07:15.249761 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:07:15.249850 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:15.249868 db@open opening
|
||||
15:07:15.249911 journal@recovery F·1
|
||||
15:07:15.250026 journal@recovery recovering @23
|
||||
15:07:15.250195 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:15.254923 db@janitor F·3 G·0
|
||||
15:07:15.254943 db@open done T·5.069716ms
|
||||
15:07:15.254987 db@close closing
|
||||
15:07:15.255026 db@close done T·37.365µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
15:07:15.255136 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:07:15.255218 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:15.255235 db@open opening
|
||||
15:07:15.255277 journal@recovery F·1
|
||||
15:07:15.255617 journal@recovery recovering @25
|
||||
15:07:15.256091 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:15.262240 db@janitor F·3 G·0
|
||||
15:07:15.262260 db@open done T·7.018813ms
|
||||
15:07:15.262310 db@close closing
|
||||
15:07:15.262353 db@close done T·41.276µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
15:07:15.262707 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:07:15.262808 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:15.262829 db@open opening
|
||||
15:07:15.262874 journal@recovery F·1
|
||||
15:07:15.263408 journal@recovery recovering @27
|
||||
15:07:15.263994 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:15.268793 db@janitor F·3 G·0
|
||||
15:07:15.268810 db@open done T·5.975152ms
|
||||
15:07:15.268861 db@close closing
|
||||
15:07:15.268900 db@close done T·37.419µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
15:07:15.268989 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:07:15.269096 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:15.269117 db@open opening
|
||||
15:07:15.269165 journal@recovery F·1
|
||||
15:07:15.269858 journal@recovery recovering @29
|
||||
15:07:15.270587 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:15.275935 db@janitor F·3 G·0
|
||||
15:07:15.275951 db@open done T·6.828156ms
|
||||
15:07:15.275999 db@close closing
|
||||
15:07:15.276033 db@close done T·32.757µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
15:07:21.660414 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:07:21.660547 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:21.660568 db@open opening
|
||||
15:07:21.660655 journal@recovery F·1
|
||||
15:07:21.660960 journal@recovery recovering @31
|
||||
15:07:21.661682 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:21.667796 db@janitor F·3 G·0
|
||||
15:07:21.667813 db@open done T·7.237366ms
|
||||
15:07:21.667869 db@close closing
|
||||
15:07:21.667914 db@close done T·43.496µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
15:07:21.668253 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:07:21.668354 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:21.668372 db@open opening
|
||||
15:07:21.668418 journal@recovery F·1
|
||||
15:07:21.668529 journal@recovery recovering @33
|
||||
15:07:21.668930 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:21.674796 db@janitor F·3 G·0
|
||||
15:07:21.674817 db@open done T·6.440491ms
|
||||
15:07:21.674861 db@close closing
|
||||
15:07:21.674898 db@close done T·35.584µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
15:07:21.675013 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:07:21.675115 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:21.675131 db@open opening
|
||||
15:07:21.675179 journal@recovery F·1
|
||||
15:07:21.675707 journal@recovery recovering @35
|
||||
15:07:21.676833 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:21.681212 db@janitor F·3 G·0
|
||||
15:07:21.681226 db@open done T·6.089677ms
|
||||
15:07:21.681270 db@close closing
|
||||
15:07:21.681299 db@close done T·27.867µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
15:07:21.681691 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:07:21.681799 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:21.681817 db@open opening
|
||||
15:07:21.681882 journal@recovery F·1
|
||||
15:07:21.683119 journal@recovery recovering @37
|
||||
15:07:21.684000 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:21.689926 db@janitor F·3 G·0
|
||||
15:07:21.689940 db@open done T·8.117662ms
|
||||
15:07:21.689984 db@close closing
|
||||
15:07:21.690027 db@close done T·42.379µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
15:07:21.690104 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:07:21.690189 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:21.690205 db@open opening
|
||||
15:07:21.690247 journal@recovery F·1
|
||||
15:07:21.690536 journal@recovery recovering @39
|
||||
15:07:21.690899 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:21.695207 db@janitor F·3 G·0
|
||||
15:07:21.695223 db@open done T·5.013121ms
|
||||
15:07:21.695265 db@close closing
|
||||
15:07:21.695320 db@close done T·53.965µs
|
||||
=============== Sep 12, 2020 (BST) ===============
|
||||
15:07:24.335083 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:07:24.335214 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:24.335233 db@open opening
|
||||
15:07:24.335282 journal@recovery F·1
|
||||
15:07:24.336367 journal@recovery recovering @41
|
||||
15:07:24.336786 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:07:24.342965 db@janitor F·3 G·0
|
||||
15:07:24.342984 db@open done T·7.745647ms
|
||||
15:07:24.725175 db@close closing
|
||||
15:07:24.725234 db@close done T·57.895µs
|
||||
=============== Nov 2, 2020 (GMT) ===============
|
||||
00:08:43.299526 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
00:08:43.299860 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
00:08:43.299875 db@open opening
|
||||
00:08:43.299900 journal@recovery F·1
|
||||
00:08:43.300467 journal@recovery recovering @43
|
||||
00:08:43.301378 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
00:08:43.307882 db@janitor F·3 G·0
|
||||
00:08:43.307911 db@open done T·8.03178ms
|
||||
00:08:43.308144 db@close closing
|
||||
00:08:43.308231 db@close done T·85.824µs
|
||||
=============== Nov 2, 2020 (GMT) ===============
|
||||
00:09:14.493119 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
00:09:14.493237 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
00:09:14.493272 db@open opening
|
||||
00:09:14.493296 journal@recovery F·1
|
||||
00:09:14.493370 journal@recovery recovering @45
|
||||
00:09:14.493648 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
00:09:14.499436 db@janitor F·3 G·0
|
||||
00:09:14.499452 db@open done T·6.170984ms
|
||||
00:09:14.499537 db@close closing
|
||||
00:09:14.499592 db@close done T·52.707µs
|
||||
=============== Jan 22, 2021 (GMT) ===============
|
||||
12:47:15.935887 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
12:47:15.937333 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
12:47:15.937343 db@open opening
|
||||
12:47:15.937370 journal@recovery F·1
|
||||
12:47:15.937642 journal@recovery recovering @47
|
||||
12:47:15.937942 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
12:47:15.944262 db@janitor F·3 G·0
|
||||
12:47:15.944270 db@open done T·6.922789ms
|
||||
12:47:15.944460 db@close closing
|
||||
12:47:15.944492 db@close done T·30.723µs
|
||||
=============== Jan 22, 2021 (GMT) ===============
|
||||
15:23:04.060521 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:23:04.060694 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:23:04.060708 db@open opening
|
||||
15:23:04.060734 journal@recovery F·1
|
||||
15:23:04.061045 journal@recovery recovering @49
|
||||
15:23:04.061463 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:23:04.067352 db@janitor F·3 G·0
|
||||
15:23:04.067386 db@open done T·6.675171ms
|
||||
15:23:11.819265 db@close closing
|
||||
15:23:11.819317 db@close done T·51.057µs
|
||||
=============== Jan 22, 2021 (GMT) ===============
|
||||
15:23:14.037455 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:23:14.037524 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:23:14.037535 db@open opening
|
||||
15:23:14.037560 journal@recovery F·1
|
||||
15:23:14.037629 journal@recovery recovering @51
|
||||
15:23:14.037951 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:23:14.045002 db@janitor F·3 G·0
|
||||
15:23:14.045020 db@open done T·7.475686ms
|
||||
15:23:22.065063 db@close closing
|
||||
15:23:22.065111 db@close done T·47.074µs
|
||||
=============== Jan 22, 2021 (GMT) ===============
|
||||
15:23:43.145956 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:23:43.146094 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:23:43.146107 db@open opening
|
||||
15:23:43.146132 journal@recovery F·1
|
||||
15:23:43.146447 journal@recovery recovering @53
|
||||
15:23:43.146912 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:23:43.153059 db@janitor F·3 G·0
|
||||
15:23:43.153108 db@open done T·6.977141ms
|
||||
15:23:43.153245 db@close closing
|
||||
15:23:43.153290 db@close done T·43.663µs
|
||||
=============== Jan 22, 2021 (GMT) ===============
|
||||
15:25:14.027169 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:25:14.027240 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:25:14.027250 db@open opening
|
||||
15:25:14.027274 journal@recovery F·1
|
||||
15:25:14.027627 journal@recovery recovering @55
|
||||
15:25:14.028059 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:25:14.033292 db@janitor F·3 G·0
|
||||
15:25:14.033304 db@open done T·6.047911ms
|
||||
15:25:19.981971 db@close closing
|
||||
15:25:19.982011 db@close done T·39.165µs
|
||||
=============== Jan 22, 2021 (GMT) ===============
|
||||
15:25:51.137523 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:25:51.138542 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:25:51.138553 db@open opening
|
||||
15:25:51.138579 journal@recovery F·1
|
||||
15:25:51.138632 journal@recovery recovering @57
|
||||
15:25:51.138981 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:25:51.144970 db@janitor F·3 G·0
|
||||
15:25:51.144983 db@open done T·6.422769ms
|
||||
15:25:51.145031 db@close closing
|
||||
15:25:51.145071 db@close done T·39.108µs
|
||||
=============== Jan 22, 2021 (GMT) ===============
|
||||
15:25:56.504732 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:25:56.504809 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:25:56.504824 db@open opening
|
||||
15:25:56.504872 journal@recovery F·1
|
||||
15:25:56.505474 journal@recovery recovering @59
|
||||
15:25:56.505571 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:25:56.512054 db@janitor F·3 G·0
|
||||
15:25:56.512061 db@open done T·7.232269ms
|
||||
15:25:56.710823 db@close closing
|
||||
15:25:56.710860 db@close done T·36.326µs
|
||||
=============== Jan 22, 2021 (GMT) ===============
|
||||
15:26:02.847640 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
15:26:02.847733 version@stat F·[1] S·470B[470B] Sc·[0.25]
|
||||
15:26:02.847745 db@open opening
|
||||
15:26:02.847771 journal@recovery F·1
|
||||
15:26:02.848002 journal@recovery recovering @61
|
||||
15:26:02.850382 memdb@flush created L0@63 N·2 S·472B "cos..ess,v5":"tes..nfo,v4"
|
||||
15:26:02.850491 version@stat F·[2] S·942B[942B] Sc·[0.50]
|
||||
15:26:02.854544 db@janitor F·4 G·0
|
||||
15:26:02.854552 db@open done T·6.802972ms
|
||||
15:26:09.729296 db@close closing
|
||||
15:26:09.729392 db@close done T·95.18µs
|
||||
=============== Feb 6, 2021 (GMT) ===============
12:21:53.904083 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:21:53.904380 version@stat F·[2] S·942B[942B] Sc·[0.50]
12:21:53.904391 db@open opening
12:21:53.904417 journal@recovery F·1
12:21:53.905225 journal@recovery recovering @64
12:21:53.905589 version@stat F·[2] S·942B[942B] Sc·[0.50]
12:21:53.910965 db@janitor F·4 G·0
12:21:53.910976 db@open done T·6.578518ms
12:21:53.911304 db@close closing
12:21:53.911387 db@close done T·82.205µs
=============== Feb 6, 2021 (GMT) ===============
12:22:02.353974 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:22:02.354077 version@stat F·[2] S·942B[942B] Sc·[0.50]
12:22:02.354089 db@open opening
12:22:02.354116 journal@recovery F·1
12:22:02.354419 journal@recovery recovering @66
12:22:02.354608 version@stat F·[2] S·942B[942B] Sc·[0.50]
12:22:02.359491 db@janitor F·4 G·0
12:22:02.359504 db@open done T·5.408186ms
12:22:02.359514 db@close closing
12:22:02.359542 db@close done T·27.662µs
=============== Feb 6, 2021 (GMT) ===============
12:22:07.888198 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:22:07.888300 version@stat F·[2] S·942B[942B] Sc·[0.50]
12:22:07.888310 db@open opening
12:22:07.888338 journal@recovery F·1
12:22:07.888397 journal@recovery recovering @68
12:22:07.888494 version@stat F·[2] S·942B[942B] Sc·[0.50]
12:22:07.895048 db@janitor F·4 G·0
12:22:07.895060 db@open done T·6.746979ms
12:22:08.093013 db@close closing
12:22:08.093057 db@close done T·43.222µs
=============== Feb 18, 2021 (GMT) ===============
07:32:13.660053 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
07:32:13.661098 version@stat F·[2] S·942B[942B] Sc·[0.50]
07:32:13.661111 db@open opening
07:32:13.661140 journal@recovery F·1
07:32:13.661439 journal@recovery recovering @70
07:32:13.663498 memdb@flush created L0@72 N·2 S·465B "cia..nfo,v7":"cos..ess,v8"
07:32:13.663598 version@stat F·[3] S·1KiB[1KiB] Sc·[0.75]
07:32:13.668369 db@janitor F·5 G·0
07:32:13.668400 db@open done T·7.285777ms
07:32:13.668491 db@close closing
07:32:13.668557 db@close done T·65.011µs
=============== Feb 18, 2021 (GMT) ===============
07:32:20.349460 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
07:32:20.349568 version@stat F·[3] S·1KiB[1KiB] Sc·[0.75]
07:32:20.349618 db@open opening
07:32:20.349691 journal@recovery F·1
07:32:20.349769 journal@recovery recovering @73
07:32:20.349867 version@stat F·[3] S·1KiB[1KiB] Sc·[0.75]
07:32:20.355997 db@janitor F·5 G·0
07:32:20.356005 db@open done T·6.383828ms
07:32:20.553221 db@close closing
07:32:20.553251 db@close done T·28.713µs
=============== Feb 18, 2021 (GMT) ===============
07:32:30.022753 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
07:32:30.022830 version@stat F·[3] S·1KiB[1KiB] Sc·[0.75]
07:32:30.022842 db@open opening
07:32:30.022870 journal@recovery F·1
07:32:30.023106 journal@recovery recovering @75
07:32:30.025727 memdb@flush created L0@77 N·2 S·462B "cos..ess,v11":"foo.info,v10"
07:32:30.025896 version@stat F·[4] S·1KiB[1KiB] Sc·[1.00]
07:32:30.031203 db@janitor F·6 G·0
07:32:30.031214 db@open done T·8.368455ms
07:32:30.031222 db@close closing
07:32:30.031249 db@close done T·26.625µs
=============== Feb 18, 2021 (GMT) ===============
07:32:36.137856 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
07:32:36.137945 version@stat F·[4] S·1KiB[1KiB] Sc·[1.00]
07:32:36.137955 db@open opening
07:32:36.137988 journal@recovery F·1
07:32:36.138053 journal@recovery recovering @78
07:32:36.138160 version@stat F·[4] S·1KiB[1KiB] Sc·[1.00]
07:32:36.144271 db@janitor F·6 G·0
07:32:36.144281 db@open done T·6.322633ms
07:32:36.144342 table@compaction L0·4 -> L1·0 S·1KiB Q·12
07:32:36.145937 table@build created L1@82 N·8 S·1KiB "cia..nfo,v7":"val..nfo,v1"
07:32:36.145957 version@stat F·[0 1] S·1KiB[0B 1KiB] Sc·[0.00 0.00]
07:32:36.147223 table@compaction committed F-3 S-606B Ke·0 D·0 T·2.864358ms
07:32:36.147251 table@remove removed @77
07:32:36.147265 table@remove removed @72
07:32:36.147280 table@remove removed @63
07:32:36.147394 table@remove removed @6
07:32:36.341754 db@close closing
07:32:36.341789 db@close done T·34.217µs
=============== Feb 23, 2021 (GMT) ===============
11:59:56.652297 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
11:59:56.653267 version@stat F·[0 1] S·1KiB[0B 1KiB] Sc·[0.00 0.00]
11:59:56.653279 db@open opening
11:59:56.653333 journal@recovery F·1
11:59:56.653684 journal@recovery recovering @80
11:59:56.655439 memdb@flush created L0@83 N·2 S·491B "bar.info,v13":"cos..ess,v14"
11:59:56.655563 version@stat F·[1 1] S·1KiB[491B 1KiB] Sc·[0.25 0.00]
11:59:56.659803 db@janitor F·4 G·0
11:59:56.659812 db@open done T·6.529102ms
11:59:56.659952 db@close closing
11:59:56.660013 db@close done T·59.126µs
=============== Feb 23, 2021 (GMT) ===============
12:01:34.578182 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:01:34.578308 version@stat F·[1 1] S·1KiB[491B 1KiB] Sc·[0.25 0.00]
12:01:34.578348 db@open opening
12:01:34.578422 journal@recovery F·1
12:01:34.578796 journal@recovery recovering @84
12:01:34.579157 version@stat F·[1 1] S·1KiB[491B 1KiB] Sc·[0.25 0.00]
12:01:34.583888 db@janitor F·4 G·0
12:01:34.583925 db@open done T·5.547338ms
12:01:34.583962 db@close closing
12:01:34.584011 db@close done T·46.636µs
=============== Feb 23, 2021 (GMT) ===============
12:01:34.584060 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:01:34.584136 version@stat F·[1 1] S·1KiB[491B 1KiB] Sc·[0.25 0.00]
12:01:34.584166 db@open opening
12:01:34.584195 journal@recovery F·1
12:01:34.584799 journal@recovery recovering @86
12:01:34.584896 version@stat F·[1 1] S·1KiB[491B 1KiB] Sc·[0.25 0.00]
12:01:34.590435 db@janitor F·4 G·0
12:01:34.590445 db@open done T·6.275747ms
12:01:44.922399 db@close closing
12:01:44.922453 db@close done T·53.361µs
=============== Feb 23, 2021 (GMT) ===============
12:01:53.346191 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:01:53.346299 version@stat F·[1 1] S·1KiB[491B 1KiB] Sc·[0.25 0.00]
12:01:53.346310 db@open opening
12:01:53.346427 journal@recovery F·1
12:01:53.346591 journal@recovery recovering @88
12:01:53.350436 memdb@flush created L0@90 N·2 S·259B "cos..ess,v17":"led..nfo,v16"
12:01:53.350863 version@stat F·[2 1] S·1KiB[750B 1KiB] Sc·[0.50 0.00]
12:01:53.356998 db@janitor F·5 G·0
12:01:53.357009 db@open done T·10.694071ms
12:01:53.357177 db@close closing
12:01:53.357258 db@close done T·79.894µs
=============== Feb 23, 2021 (GMT) ===============
12:01:57.771688 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:01:57.771807 version@stat F·[2 1] S·1KiB[750B 1KiB] Sc·[0.50 0.00]
12:01:57.771818 db@open opening
12:01:57.771844 journal@recovery F·1
12:01:57.771911 journal@recovery recovering @91
12:01:57.772211 version@stat F·[2 1] S·1KiB[750B 1KiB] Sc·[0.50 0.00]
12:01:57.777712 db@janitor F·5 G·0
12:01:57.777726 db@open done T·5.899191ms
12:01:57.777794 db@close closing
12:01:57.777821 db@close done T·26.301µs
=============== Feb 23, 2021 (GMT) ===============
12:02:01.179234 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:02:01.179444 version@stat F·[2 1] S·1KiB[750B 1KiB] Sc·[0.50 0.00]
12:02:01.179471 db@open opening
12:02:01.179568 journal@recovery F·1
12:02:01.180395 journal@recovery recovering @93
12:02:01.180499 version@stat F·[2 1] S·1KiB[750B 1KiB] Sc·[0.50 0.00]
12:02:01.186898 db@janitor F·5 G·0
12:02:01.186908 db@open done T·7.433758ms
12:02:01.376649 db@close closing
12:02:01.376744 db@close done T·94.311µs
=============== Feb 23, 2021 (GMT) ===============
12:02:08.325782 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:02:08.325880 version@stat F·[2 1] S·1KiB[750B 1KiB] Sc·[0.50 0.00]
12:02:08.325892 db@open opening
12:02:08.325919 journal@recovery F·1
12:02:08.326096 journal@recovery recovering @95
12:02:08.328874 memdb@flush created L0@97 N·2 S·189B "cos..ess,d19":"tes..nfo,d20"
12:02:08.329781 version@stat F·[3 1] S·2KiB[939B 1KiB] Sc·[0.75 0.00]
12:02:08.335685 db@janitor F·6 G·0
12:02:08.335726 db@open done T·9.800531ms
12:02:08.335812 db@close closing
12:02:08.335913 db@close done T·98.185µs
=============== Feb 23, 2021 (GMT) ===============
12:02:10.989199 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:02:10.989372 version@stat F·[3 1] S·2KiB[939B 1KiB] Sc·[0.75 0.00]
12:02:10.989381 db@open opening
12:02:10.989413 journal@recovery F·1
12:02:10.989493 journal@recovery recovering @98
12:02:10.989823 version@stat F·[3 1] S·2KiB[939B 1KiB] Sc·[0.75 0.00]
12:02:10.997764 db@janitor F·6 G·0
12:02:10.997775 db@open done T·8.391051ms
12:02:11.186825 db@close closing
12:02:11.186873 db@close done T·46.355µs
=============== Feb 23, 2021 (GMT) ===============
12:02:13.779564 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:02:13.779705 version@stat F·[3 1] S·2KiB[939B 1KiB] Sc·[0.75 0.00]
12:02:13.779716 db@open opening
12:02:13.779766 journal@recovery F·1
12:02:13.780050 journal@recovery recovering @100
12:02:13.782794 memdb@flush created L0@102 N·2 S·186B "cia..nfo,d23":"cos..ess,d22"
12:02:13.782888 version@stat F·[4 1] S·2KiB[1KiB 1KiB] Sc·[1.00 0.00]
12:02:13.787114 db@janitor F·7 G·0
12:02:13.787129 db@open done T·7.382544ms
12:02:13.787201 table@compaction L0·4 -> L1·1 S·2KiB Q·24
12:02:13.787271 db@close closing
12:02:13.789006 table@build created L1@105 N·8 S·1KiB "bar.info,v13":"val..nfo,v1"
12:02:13.789011 table@build exiting
12:02:13.789013 table@build revert @105
12:02:13.789055 db@close done T·1.783005ms
=============== Feb 23, 2021 (GMT) ===============
12:02:19.245131 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:02:19.245285 version@stat F·[4 1] S·2KiB[1KiB 1KiB] Sc·[1.00 0.00]
12:02:19.245315 db@open opening
12:02:19.245368 journal@recovery F·1
12:02:19.245465 journal@recovery recovering @103
12:02:19.245858 version@stat F·[4 1] S·2KiB[1KiB 1KiB] Sc·[1.00 0.00]
12:02:19.251449 db@janitor F·7 G·0
12:02:19.251465 db@open done T·6.140479ms
12:02:19.251485 table@compaction L0·4 -> L1·1 S·2KiB Q·24
12:02:19.251521 db@close closing
12:02:19.251592 db@close done T·70.226µs
=============== Feb 23, 2021 (GMT) ===============
12:02:21.580113 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:02:21.580210 version@stat F·[4 1] S·2KiB[1KiB 1KiB] Sc·[1.00 0.00]
12:02:21.580222 db@open opening
12:02:21.580272 journal@recovery F·1
12:02:21.580647 journal@recovery recovering @105
12:02:21.580747 version@stat F·[4 1] S·2KiB[1KiB 1KiB] Sc·[1.00 0.00]
12:02:21.587123 db@janitor F·7 G·0
12:02:21.587130 db@open done T·6.905846ms
12:02:21.587221 table@compaction L0·4 -> L1·1 S·2KiB Q·24
12:02:21.589889 table@build created L1@109 N·8 S·1KiB "bar.info,v13":"val..nfo,v1"
12:02:21.589929 version@stat F·[0 1] S·1KiB[0B 1KiB] Sc·[0.00 0.00]
12:02:21.591275 table@compaction committed F-4 S-1KiB Ke·0 D·8 T·4.039289ms
12:02:21.591357 table@remove removed @102
12:02:21.591414 table@remove removed @97
12:02:21.591428 table@remove removed @90
12:02:21.591440 table@remove removed @83
12:02:21.591472 table@remove removed @82
12:02:21.777758 db@close closing
12:02:21.777800 db@close done T·40.787µs
=============== Feb 23, 2021 (GMT) ===============
12:02:22.900722 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:02:22.900859 version@stat F·[0 1] S·1KiB[0B 1KiB] Sc·[0.00 0.00]
12:02:22.900892 db@open opening
12:02:22.900963 journal@recovery F·1
12:02:22.901083 journal@recovery recovering @107
12:02:22.904868 memdb@flush created L0@110 N·2 S·193B "cos..ess,d25":"val..nfo,d26"
12:02:22.905267 version@stat F·[1 1] S·1KiB[193B 1KiB] Sc·[0.25 0.00]
12:02:22.909786 db@janitor F·4 G·0
12:02:22.909799 db@open done T·8.899965ms
12:02:22.909931 db@close closing
12:02:22.910008 db@close done T·74.647µs
=============== Feb 23, 2021 (GMT) ===============
12:02:53.139966 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:02:53.140102 version@stat F·[1 1] S·1KiB[193B 1KiB] Sc·[0.25 0.00]
12:02:53.140135 db@open opening
12:02:53.140206 journal@recovery F·1
12:02:53.140586 journal@recovery recovering @111
12:02:53.141053 version@stat F·[1 1] S·1KiB[193B 1KiB] Sc·[0.25 0.00]
12:02:53.147675 db@janitor F·4 G·0
12:02:53.147687 db@open done T·7.546001ms
12:02:53.147750 db@close closing
12:02:53.147818 db@close done T·67.754µs
=============== Feb 23, 2021 (GMT) ===============
12:02:53.147913 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:02:53.147982 version@stat F·[1 1] S·1KiB[193B 1KiB] Sc·[0.25 0.00]
12:02:53.147993 db@open opening
12:02:53.148043 journal@recovery F·1
12:02:53.148101 journal@recovery recovering @113
12:02:53.148192 version@stat F·[1 1] S·1KiB[193B 1KiB] Sc·[0.25 0.00]
12:02:53.152906 db@janitor F·4 G·0
12:02:53.152912 db@open done T·4.91707ms
12:02:53.156922 db@close closing
12:02:53.156949 db@close done T·25.968µs
=============== Feb 23, 2021 (GMT) ===============
12:03:24.147022 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:03:24.147113 version@stat F·[1 1] S·1KiB[193B 1KiB] Sc·[0.25 0.00]
12:03:24.147123 db@open opening
12:03:24.147195 journal@recovery F·1
12:03:24.147542 journal@recovery recovering @115
12:03:24.150459 memdb@flush created L0@117 N·2 S·244B "cos..ess,v29":"pub..nfo,v28"
12:03:24.150556 version@stat F·[2 1] S·1KiB[437B 1KiB] Sc·[0.50 0.00]
12:03:24.156079 db@janitor F·5 G·0
12:03:24.156116 db@open done T·8.964543ms
12:03:24.156215 db@close closing
12:03:24.156330 db@close done T·113.154µs
=============== Feb 23, 2021 (GMT) ===============
12:03:33.230269 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:03:33.230428 version@stat F·[2 1] S·1KiB[437B 1KiB] Sc·[0.50 0.00]
12:03:33.230456 db@open opening
12:03:33.230505 journal@recovery F·1
12:03:33.230859 journal@recovery recovering @118
12:03:33.231123 version@stat F·[2 1] S·1KiB[437B 1KiB] Sc·[0.50 0.00]
12:03:33.237886 db@janitor F·5 G·0
12:03:33.237932 db@open done T·7.464889ms
12:03:33.238009 db@close closing
12:03:33.238077 db@close done T·67.991µs
=============== Feb 23, 2021 (GMT) ===============
12:03:33.238135 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:03:33.238190 version@stat F·[2 1] S·1KiB[437B 1KiB] Sc·[0.50 0.00]
12:03:33.238200 db@open opening
12:03:33.238226 journal@recovery F·1
12:03:33.238295 journal@recovery recovering @120
12:03:33.238459 version@stat F·[2 1] S·1KiB[437B 1KiB] Sc·[0.50 0.00]
12:03:33.242714 db@janitor F·5 G·0
12:03:33.242723 db@open done T·4.520893ms
12:03:33.246526 db@close closing
12:03:33.246576 db@close done T·49.286µs
=============== Feb 23, 2021 (GMT) ===============
12:03:36.732039 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:03:36.732132 version@stat F·[2 1] S·1KiB[437B 1KiB] Sc·[0.50 0.00]
12:03:36.732143 db@open opening
12:03:36.732193 journal@recovery F·1
12:03:36.732321 journal@recovery recovering @122
12:03:36.734960 memdb@flush created L0@124 N·2 S·244B "cos..ess,v32":"pub..nfo,v31"
12:03:36.735282 version@stat F·[3 1] S·1KiB[681B 1KiB] Sc·[0.75 0.00]
12:03:36.740852 db@janitor F·6 G·0
12:03:36.740890 db@open done T·8.717358ms
12:03:36.741044 db@close closing
12:03:36.741134 db@close done T·87.869µs
=============== Feb 23, 2021 (GMT) ===============
12:03:56.009876 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:03:56.009989 version@stat F·[3 1] S·1KiB[681B 1KiB] Sc·[0.75 0.00]
12:03:56.010002 db@open opening
12:03:56.010034 journal@recovery F·1
12:03:56.010178 journal@recovery recovering @125
12:03:56.011128 version@stat F·[3 1] S·1KiB[681B 1KiB] Sc·[0.75 0.00]
12:03:56.018052 db@janitor F·6 G·0
12:03:56.018064 db@open done T·8.05417ms
12:03:56.018173 db@close closing
12:03:56.018224 db@close done T·49.879µs
=============== Feb 23, 2021 (GMT) ===============
12:03:58.983153 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:03:58.983257 version@stat F·[3 1] S·1KiB[681B 1KiB] Sc·[0.75 0.00]
12:03:58.983268 db@open opening
12:03:58.983297 journal@recovery F·1
12:03:58.983885 journal@recovery recovering @127
12:03:58.983986 version@stat F·[3 1] S·1KiB[681B 1KiB] Sc·[0.75 0.00]
12:03:58.991844 db@janitor F·6 G·0
12:03:58.991851 db@open done T·8.580014ms
12:03:59.181560 db@close closing
12:03:59.181637 db@close done T·76.045µs
=============== Feb 23, 2021 (GMT) ===============
12:04:10.259722 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:04:10.259852 version@stat F·[3 1] S·1KiB[681B 1KiB] Sc·[0.75 0.00]
12:04:10.259869 db@open opening
12:04:10.259919 journal@recovery F·1
12:04:10.260104 journal@recovery recovering @129
12:04:10.264224 memdb@flush created L0@131 N·2 S·187B "cos..ess,d34":"foo.info,d35"
12:04:10.264492 version@stat F·[4 1] S·1KiB[868B 1KiB] Sc·[1.00 0.00]
12:04:10.268582 db@janitor F·7 G·0
12:04:10.268595 db@open done T·8.720601ms
12:04:10.268655 table@compaction L0·4 -> L1·1 S·1KiB Q·36
12:04:10.268669 db@close closing
12:04:10.268830 db@close done T·159.948µs
=============== Feb 23, 2021 (GMT) ===============
12:04:10.268891 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:04:10.269025 version@stat F·[4 1] S·1KiB[868B 1KiB] Sc·[1.00 0.00]
12:04:10.269034 db@open opening
12:04:10.269089 journal@recovery F·1
12:04:10.269152 journal@recovery recovering @132
12:04:10.269259 version@stat F·[4 1] S·1KiB[868B 1KiB] Sc·[1.00 0.00]
12:04:10.274436 db@janitor F·7 G·0
12:04:10.274466 db@open done T·5.404186ms
12:04:10.274543 table@compaction L0·4 -> L1·1 S·1KiB Q·36
12:04:10.277245 table@build created L1@136 N·8 S·825B "bar.info,v13":"pub..nfo,v31"
12:04:10.277287 version@stat F·[0 1] S·825B[0B 825B] Sc·[0.00 0.00]
12:04:10.278388 db@close closing
12:04:10.280880 table@commit exiting
12:04:10.280907 db@close done T·2.542424ms
=============== Feb 23, 2021 (GMT) ===============
12:04:12.868499 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
12:04:12.868628 version@stat F·[0 1] S·825B[0B 825B] Sc·[0.00 0.00]
12:04:12.868640 db@open opening
12:04:12.868670 journal@recovery F·1
12:04:12.868785 journal@recovery recovering @134
12:04:12.870434 memdb@flush created L0@137 N·2 S·244B "cos..ess,v38":"pub..nfo,v37"
12:04:12.871017 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
12:04:12.876243 db@janitor F·9 G·5
12:04:12.876251 db@janitor removing table-124
12:04:12.876290 db@janitor removing table-110
12:04:12.876302 db@janitor removing table-109
12:04:12.876330 db@janitor removing table-117
12:04:12.876340 db@janitor removing table-131
12:04:12.876381 db@open done T·7.712682ms
12:04:12.876440 db@close closing
12:04:12.876498 db@close done T·55.873µs
=============== Feb 23, 2021 (GMT) ===============
|
||||
12:09:38.966259 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
12:09:38.966450 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:09:38.966463 db@open opening
|
||||
12:09:38.966490 journal@recovery F·1
|
||||
12:09:38.966746 journal@recovery recovering @138
|
||||
12:09:38.967252 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:09:38.974464 db@janitor F·4 G·0
|
||||
12:09:38.974477 db@open done T·8.005768ms
|
||||
12:09:56.196454 db@close closing
|
||||
12:09:56.196575 db@close done T·142.606µs
|
||||
=============== Feb 23, 2021 (GMT) ===============
|
||||
12:10:09.568902 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
12:10:09.568981 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:10:09.568993 db@open opening
|
||||
12:10:09.569022 journal@recovery F·1
|
||||
12:10:09.569291 journal@recovery recovering @140
|
||||
12:10:09.569781 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:10:09.575840 db@janitor F·4 G·0
|
||||
12:10:09.575848 db@open done T·6.851269ms
|
||||
12:10:23.290522 db@close closing
|
||||
12:10:23.290590 db@close done T·66.518µs
|
||||
=============== Feb 23, 2021 (GMT) ===============
|
||||
12:11:01.674005 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
12:11:01.674086 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:11:01.674098 db@open opening
|
||||
12:11:01.674128 journal@recovery F·1
|
||||
12:11:01.674359 journal@recovery recovering @142
|
||||
12:11:01.674814 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:11:01.680965 db@janitor F·4 G·0
|
||||
12:11:01.680980 db@open done T·6.874747ms
|
||||
12:11:06.655715 db@close closing
|
||||
12:11:06.655759 db@close done T·43.852µs
|
||||
=============== Feb 23, 2021 (GMT) ===============
|
||||
12:19:52.269690 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
12:19:52.269780 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:19:52.269792 db@open opening
|
||||
12:19:52.269826 journal@recovery F·1
|
||||
12:19:52.270051 journal@recovery recovering @144
|
||||
12:19:52.270585 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:19:52.276899 db@janitor F·4 G·0
|
||||
12:19:52.276939 db@open done T·7.116495ms
|
||||
12:19:59.249868 db@close closing
|
||||
12:19:59.249968 db@close done T·99.117µs
|
||||
=============== Feb 23, 2021 (GMT) ===============
|
||||
12:20:30.569407 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
12:20:30.569504 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:20:30.569516 db@open opening
|
||||
12:20:30.569545 journal@recovery F·1
|
||||
12:20:30.569730 journal@recovery recovering @146
|
||||
12:20:30.570245 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:20:30.577100 db@janitor F·4 G·0
|
||||
12:20:30.577111 db@open done T·7.591098ms
|
||||
=============== Feb 23, 2021 (GMT) ===============
|
||||
12:20:35.223490 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
12:20:35.223588 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:20:35.223601 db@open opening
|
||||
12:20:35.223630 journal@recovery F·1
|
||||
12:20:35.223986 journal@recovery recovering @148
|
||||
12:20:35.224401 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:20:35.229848 db@janitor F·4 G·0
|
||||
12:20:35.229856 db@open done T·6.250812ms
|
||||
12:20:41.049391 db@close closing
|
||||
12:20:41.049441 db@close done T·49.18µs
|
||||
=============== Feb 23, 2021 (GMT) ===============
|
||||
12:21:45.804793 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
12:21:45.804915 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:21:45.804928 db@open opening
|
||||
12:21:45.804961 journal@recovery F·1
|
||||
12:21:45.805201 journal@recovery recovering @150
|
||||
12:21:45.805681 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
12:21:45.810888 db@janitor F·4 G·0
|
||||
12:21:45.810920 db@open done T·5.985873ms
|
||||
12:21:49.489917 db@close closing
|
||||
12:21:49.490008 db@close done T·89.528µs
|
||||
=============== Feb 26, 2021 (GMT) ===============
|
||||
11:30:44.083018 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
11:30:44.084062 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
11:30:44.084075 db@open opening
|
||||
11:30:44.084102 journal@recovery F·1
|
||||
11:30:44.084383 journal@recovery recovering @152
|
||||
11:30:44.084768 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
11:30:44.090432 db@janitor F·4 G·0
|
||||
11:30:44.090476 db@open done T·6.381184ms
|
||||
11:30:44.090566 db@close closing
|
||||
11:30:44.090613 db@close done T·44.34µs
|
||||
=============== Feb 26, 2021 (GMT) ===============
|
||||
11:32:36.352559 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
11:32:36.352641 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
11:32:36.352653 db@open opening
|
||||
11:32:36.352681 journal@recovery F·1
|
||||
11:32:36.352756 journal@recovery recovering @154
|
||||
11:32:36.353034 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
11:32:36.360804 db@janitor F·4 G·0
|
||||
11:32:36.360816 db@open done T·8.15837ms
|
||||
11:32:36.360904 db@close closing
|
||||
11:32:36.360960 db@close done T·54.048µs
|
||||
=============== Feb 26, 2021 (GMT) ===============
|
||||
11:32:48.449675 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
11:32:48.449787 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
11:32:48.449820 db@open opening
|
||||
11:32:48.449847 journal@recovery F·1
|
||||
11:32:48.449955 journal@recovery recovering @156
|
||||
11:32:48.450282 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
11:32:48.456194 db@janitor F·4 G·0
|
||||
11:32:48.456235 db@open done T·6.384513ms
|
||||
11:32:48.456367 db@close closing
|
||||
11:32:48.456478 db@close done T·109.034µs
|
||||
=============== Feb 26, 2021 (GMT) ===============
|
||||
11:34:15.269223 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
|
||||
11:34:15.269382 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
11:34:15.269414 db@open opening
|
||||
11:34:15.269464 journal@recovery F·1
|
||||
11:34:15.269563 journal@recovery recovering @158
|
||||
11:34:15.269872 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
|
||||
11:34:15.275610 db@janitor F·4 G·0
|
||||
11:34:15.275622 db@open done T·6.200818ms
|
||||
11:34:15.275707 db@close closing
11:34:15.275752 db@close done T·44.471µs
=============== Feb 26, 2021 (GMT) ===============
11:34:32.038701 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
11:34:32.038798 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
11:34:32.038810 db@open opening
11:34:32.038837 journal@recovery F·1
11:34:32.039081 journal@recovery recovering @160
11:34:32.039560 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
11:34:32.045125 db@janitor F·4 G·0
11:34:32.045132 db@open done T·6.318174ms
11:34:52.928799 db@close closing
11:34:52.928908 db@close done T·94.101µs
=============== Feb 26, 2021 (GMT) ===============
19:42:33.585125 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
19:42:33.585220 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
19:42:33.585232 db@open opening
19:42:33.585283 journal@recovery F·1
19:42:33.585544 journal@recovery recovering @162
19:42:33.585964 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
19:42:33.592890 db@janitor F·4 G·0
19:42:33.592928 db@open done T·7.666705ms
19:42:33.592996 db@close closing
19:42:33.593063 db@close done T·63.906µs
=============== Feb 27, 2021 (GMT) ===============
17:05:01.817733 log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed
17:05:01.817819 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
17:05:01.817830 db@open opening
17:05:01.817855 journal@recovery F·1
17:05:01.818108 journal@recovery recovering @164
17:05:01.818567 version@stat F·[1 1] S·1KiB[244B 825B] Sc·[0.25 0.00]
17:05:01.824986 db@janitor F·4 G·0
17:05:01.825024 db@open done T·7.162696ms
17:05:01.825107 db@close closing
17:05:01.825221 db@close done T·111.618µs
Binary file not shown.
Binary file not shown.

@@ -25,6 +25,7 @@ func addHTTPDeprecationHeaders(h http.Handler) http.Handler {

// WithHTTPDeprecationHeaders returns a new *mux.Router, identical to its input
// but with the addition of HTTP Deprecation headers. This is used to mark legacy
// amino REST endpoints as deprecated in the REST API.
// nolint: gocritic
func WithHTTPDeprecationHeaders(r *mux.Router) *mux.Router {
	subRouter := r.NewRoute().Subrouter()
	subRouter.Use(addHTTPDeprecationHeaders)

@@ -15,7 +15,7 @@ import (
	"github.com/cosmos/cosmos-sdk/types/rest"
)

//BlockCommand returns the verified block data for a given heights
// BlockCommand returns the verified block data for a given heights
func BlockCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use: "block [height]",

@@ -22,7 +22,7 @@ import (

// TODO these next two functions feel kinda hacky based on their placement

//ValidatorCommand returns the validator set for a given height
// ValidatorCommand returns the validator set for a given height
func ValidatorCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use: "tendermint-validator-set [height]",
@@ -79,12 +79,14 @@ type ValidatorOutput struct {
type ResultValidatorsOutput struct {
	BlockHeight int64             `json:"block_height"`
	Validators  []ValidatorOutput `json:"validators"`
	Total       uint64            `json:"total"`
}

func (rvo ResultValidatorsOutput) String() string {
	var b strings.Builder

	b.WriteString(fmt.Sprintf("block height: %d\n", rvo.BlockHeight))
	b.WriteString(fmt.Sprintf("total count: %d\n", rvo.Total))

	for _, val := range rvo.Validators {
		b.WriteString(
@@ -129,9 +131,15 @@ func GetValidators(ctx context.Context, clientCtx client.Context, height *int64,
		return ResultValidatorsOutput{}, err
	}

	total := validatorsRes.Total
	if validatorsRes.Total < 0 {
		total = 0
	}

	outputValidatorsRes := ResultValidatorsOutput{
		BlockHeight: validatorsRes.BlockHeight,
		Validators:  make([]ValidatorOutput, len(validatorsRes.Validators)),
		Total:       uint64(total),
	}

	for i := 0; i < len(validatorsRes.Validators); i++ {

@@ -13,6 +13,7 @@ import (
	"github.com/cosmos/cosmos-sdk/simapp/params"
	"github.com/cosmos/cosmos-sdk/testutil/testdata"
	"github.com/cosmos/cosmos-sdk/types"
	sdk "github.com/cosmos/cosmos-sdk/types"
	signing2 "github.com/cosmos/cosmos-sdk/types/tx/signing"
	"github.com/cosmos/cosmos-sdk/x/auth/legacy/legacytx"
	"github.com/cosmos/cosmos-sdk/x/auth/signing"
@@ -23,14 +24,13 @@ import (
const (
	memo          = "waboom"
	gas           = uint64(10000)
	timeoutHeight = 5
	timeoutHeight = uint64(5)
)

var (
	fee            = types.NewCoins(types.NewInt64Coin("bam", 100))
	_, pub1, addr1 = testdata.KeyTestPubAddr()
	_, _, addr2    = testdata.KeyTestPubAddr()
	msg            = banktypes.NewMsgSend(addr1, addr2, types.NewCoins(types.NewInt64Coin("wack", 10000)))
	sig            = signing2.SignatureV2{
		PubKey: pub1,
		Data: &signing2.SingleSignatureData{
@@ -38,13 +38,18 @@ var (
			Signature: []byte("dummy"),
		},
	}
	msg0 = banktypes.NewMsgSend(addr1, addr2, types.NewCoins(types.NewInt64Coin("wack", 1)))
	msg1 = sdk.ServiceMsg{
		MethodName: "/cosmos.bank.v1beta1.Msg/Send",
		Request:    banktypes.NewMsgSend(addr1, addr2, types.NewCoins(types.NewInt64Coin("wack", 2))),
	}
)

func buildTestTx(t *testing.T, builder client.TxBuilder) {
	builder.SetMemo(memo)
	builder.SetGasLimit(gas)
	builder.SetFeeAmount(fee)
	err := builder.SetMsgs(msg)
	err := builder.SetMsgs(msg0, msg1)
	require.NoError(t, err)
	err = builder.SetSignatures(sig)
	require.NoError(t, err)
@@ -75,11 +80,15 @@ func (s *TestSuite) TestCopyTx() {
	protoBuilder2 := s.protoCfg.NewTxBuilder()
	err = tx2.CopyTx(aminoBuilder.GetTx(), protoBuilder2, false)
	s.Require().NoError(err)
	bz, err := s.protoCfg.TxEncoder()(protoBuilder.GetTx())
	// Check sigs, signers and msgs.
	sigsV2_1, err := protoBuilder.GetTx().GetSignaturesV2()
	s.Require().NoError(err)
	bz2, err := s.protoCfg.TxEncoder()(protoBuilder2.GetTx())
	sigsV2_2, err := protoBuilder2.GetTx().GetSignaturesV2()
	s.Require().NoError(err)
	s.Require().Equal(bz, bz2)
	s.Require().Equal(sigsV2_1, sigsV2_2)
	s.Require().Equal(protoBuilder.GetTx().GetSigners(), protoBuilder2.GetTx().GetSigners())
	s.Require().Equal(protoBuilder.GetTx().GetMsgs()[0], protoBuilder2.GetTx().GetMsgs()[0])
	s.Require().Equal(protoBuilder.GetTx().GetMsgs()[1].(sdk.ServiceMsg).Request, protoBuilder2.GetTx().GetMsgs()[1]) // We lose the "ServiceMsg" information

	// amino -> proto -> amino
	aminoBuilder = s.aminoCfg.NewTxBuilder()
@@ -90,11 +99,15 @@ func (s *TestSuite) TestCopyTx() {
	aminoBuilder2 := s.aminoCfg.NewTxBuilder()
	err = tx2.CopyTx(protoBuilder.GetTx(), aminoBuilder2, false)
	s.Require().NoError(err)
	bz, err = s.aminoCfg.TxEncoder()(aminoBuilder.GetTx())
	// Check sigs, signers, and msgs
	sigsV2_1, err = aminoBuilder.GetTx().GetSignaturesV2()
	s.Require().NoError(err)
	bz2, err = s.aminoCfg.TxEncoder()(aminoBuilder2.GetTx())
	sigsV2_2, err = aminoBuilder2.GetTx().GetSignaturesV2()
	s.Require().NoError(err)
	s.Require().Equal(bz, bz2)
	s.Require().Equal(sigsV2_1, sigsV2_2)
	s.Require().Equal(aminoBuilder.GetTx().GetSigners(), aminoBuilder2.GetTx().GetSigners())
	s.Require().Equal(aminoBuilder.GetTx().GetMsgs()[0], aminoBuilder2.GetTx().GetMsgs()[0])
	s.Require().Equal(aminoBuilder.GetTx().GetMsgs()[1], aminoBuilder2.GetTx().GetMsgs()[1]) // We lose the "ServiceMsg" information
}

func (s *TestSuite) TestConvertTxToStdTx() {
@@ -106,7 +119,8 @@ func (s *TestSuite) TestConvertTxToStdTx() {
	s.Require().Equal(memo, stdTx.Memo)
	s.Require().Equal(gas, stdTx.Fee.Gas)
	s.Require().Equal(fee, stdTx.Fee.Amount)
	s.Require().Equal(msg, stdTx.Msgs[0])
	s.Require().Equal(msg0, stdTx.Msgs[0])
	s.Require().Equal(msg1.Request, stdTx.Msgs[1])
	s.Require().Equal(timeoutHeight, stdTx.TimeoutHeight)
	s.Require().Equal(sig.PubKey, stdTx.Signatures[0].PubKey)
	s.Require().Equal(sig.Data.(*signing2.SingleSignatureData).Signature, stdTx.Signatures[0].Signature)
@@ -125,7 +139,8 @@ func (s *TestSuite) TestConvertTxToStdTx() {
	s.Require().Equal(memo, stdTx.Memo)
	s.Require().Equal(gas, stdTx.Fee.Gas)
	s.Require().Equal(fee, stdTx.Fee.Amount)
	s.Require().Equal(msg, stdTx.Msgs[0])
	s.Require().Equal(msg0, stdTx.Msgs[0])
	s.Require().Equal(msg1.Request, stdTx.Msgs[1])
	s.Require().Equal(timeoutHeight, stdTx.TimeoutHeight)
	s.Require().Empty(stdTx.Signatures)
@@ -158,3 +173,7 @@ func (s *TestSuite) TestConvertAndEncodeStdTx() {
	s.Require().NoError(err)
	s.Require().Equal(stdTx, decodedTx)
}

func TestTestSuite(t *testing.T) {
	suite.Run(t, new(TestSuite))
}

@@ -132,10 +132,10 @@ func TestSign(t *testing.T) {
	var from2 = "test_key2"

	// create a new key using a mnemonic generator and test if we can reuse seed to recreate that account
	_, seed, err := kr.NewMnemonic(from1, keyring.English, path, hd.Secp256k1)
	_, seed, err := kr.NewMnemonic(from1, keyring.English, path, keyring.DefaultBIP39Passphrase, hd.Secp256k1)
	requireT.NoError(err)
	requireT.NoError(kr.Delete(from1))
	info1, _, err := kr.NewMnemonic(from1, keyring.English, path, hd.Secp256k1)
	info1, _, err := kr.NewMnemonic(from1, keyring.English, path, keyring.DefaultBIP39Passphrase, hd.Secp256k1)
	requireT.NoError(err)

	info2, err := kr.NewAccount(from2, seed, "", path, hd.Secp256k1)

@@ -13,8 +13,8 @@ import (
	"github.com/cosmos/cosmos-sdk/codec/types"
)

// deprecated: LegacyAmino defines a wrapper for an Amino codec that properly handles protobuf
// types with Any's
// LegacyAmino defines a wrapper for an Amino codec that properly
// handles protobuf types with Any's. Deprecated.
type LegacyAmino struct {
	Amino *amino.Codec
}

@@ -1,6 +1,8 @@
package types

import (
	fmt "fmt"

	"github.com/gogo/protobuf/proto"

	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
@@ -71,11 +73,14 @@ func NewAnyWithValue(v proto.Message) (*Any, error) {
// into the protobuf Any serialization. For simple marshaling you should use NewAnyWithValue.
func NewAnyWithCustomTypeURL(v proto.Message, typeURL string) (*Any, error) {
	bz, err := proto.Marshal(v)
	if err != nil {
		return nil, err
	}
	return &Any{
		TypeUrl:     typeURL,
		Value:       bz,
		cachedValue: v,
	}, err
	}, nil
}

// UnsafePackAny packs the value x in the Any and instead of returning the error
@@ -113,3 +118,26 @@ func (any *Any) pack(x proto.Message) error {
func (any *Any) GetCachedValue() interface{} {
	return any.cachedValue
}

// GoString returns a string representing valid go code to reproduce the current state of
// the struct.
func (any *Any) GoString() string {
	if any == nil {
		return "nil"
	}
	extra := ""
	if any.XXX_unrecognized != nil {
		extra = fmt.Sprintf(",\n  XXX_unrecognized: %#v,\n", any.XXX_unrecognized)
	}
	return fmt.Sprintf("&Any{TypeUrl: %#v,\n  Value: %#v%s\n}",
		any.TypeUrl, any.Value, extra)
}

// String implements the stringer interface
func (any *Any) String() string {
	if any == nil {
		return "nil"
	}
	return fmt.Sprintf("&Any{TypeUrl:%v,Value:%v,XXX_unrecognized:%v}",
		any.TypeUrl, any.Value, any.XXX_unrecognized)
}

@@ -11,8 +11,6 @@ import (
	io "io"
	math "math"
	math_bits "math/bits"
	reflect "reflect"
	strings "strings"
)

// Reference imports to suppress errors if they are not otherwise used.
@@ -82,22 +80,23 @@ func init() {
func init() { proto.RegisterFile("google/protobuf/any.proto", fileDescriptor_b53526c13ae22eb4) }

var fileDescriptor_b53526c13ae22eb4 = []byte{
	// 235 bytes of a gzipped FileDescriptorProto
	// 248 bytes of a gzipped FileDescriptorProto
	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x4c, 0xcf, 0xcf, 0x4f,
	0xcf, 0x49, 0xd5, 0x2f, 0x28, 0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x4f, 0xcc, 0xab, 0xd4,
	0x03, 0x73, 0x84, 0xf8, 0x21, 0x52, 0x7a, 0x30, 0x29, 0x29, 0x91, 0xf4, 0xfc, 0xf4, 0x7c, 0x30,
	0x4f, 0x1f, 0xc4, 0x82, 0x48, 0x28, 0xd9, 0x70, 0x31, 0x3b, 0xe6, 0x55, 0x0a, 0x49, 0x72, 0x71,
	0x4f, 0x1f, 0xc4, 0x82, 0x48, 0x28, 0x79, 0x70, 0x31, 0x3b, 0xe6, 0x55, 0x0a, 0x49, 0x72, 0x71,
	0x94, 0x54, 0x16, 0xa4, 0xc6, 0x97, 0x16, 0xe5, 0x48, 0x30, 0x2a, 0x30, 0x6a, 0x70, 0x06, 0xb1,
	0x83, 0xf8, 0xa1, 0x45, 0x39, 0x42, 0x22, 0x5c, 0xac, 0x65, 0x89, 0x39, 0xa5, 0xa9, 0x12, 0x4c,
	0x0a, 0x8c, 0x1a, 0x3c, 0x41, 0x10, 0x8e, 0x15, 0xcb, 0x87, 0x85, 0xf2, 0x0c, 0x4e, 0xcd, 0x8c,
	0x37, 0x1e, 0xca, 0x31, 0x7c, 0x78, 0x28, 0xc7, 0xf8, 0xe3, 0xa1, 0x1c, 0x63, 0xc3, 0x23, 0x39,
	0xc6, 0x15, 0x8f, 0xe4, 0x18, 0x4f, 0x3c, 0x92, 0x63, 0xbc, 0xf0, 0x48, 0x8e, 0xf1, 0xc1, 0x23,
	0x39, 0xc6, 0x17, 0x8f, 0xe4, 0x18, 0x3e, 0x80, 0xc4, 0x1f, 0xcb, 0x31, 0x1e, 0x78, 0x2c, 0xc7,
	0x70, 0xe2, 0xb1, 0x1c, 0x23, 0x97, 0x70, 0x72, 0x7e, 0xae, 0x1e, 0x9a, 0xfb, 0x9c, 0x38, 0x1c,
	0xf3, 0x2a, 0x03, 0x40, 0x9c, 0x00, 0xc6, 0x28, 0x56, 0x90, 0xe5, 0xc5, 0x8b, 0x98, 0x98, 0xdd,
	0x03, 0x9c, 0x56, 0x31, 0xc9, 0xb9, 0x43, 0x94, 0x06, 0x40, 0x95, 0xea, 0x85, 0xa7, 0xe6, 0xe4,
	0x78, 0xe7, 0xe5, 0x97, 0xe7, 0x85, 0x80, 0x94, 0x25, 0xb1, 0x81, 0xcd, 0x30, 0x06, 0x04, 0x00,
	0x00, 0xff, 0xff, 0xe6, 0xfb, 0xa0, 0x21, 0x0e, 0x01, 0x00, 0x00,
	0x0a, 0x8c, 0x1a, 0x3c, 0x41, 0x10, 0x8e, 0x95, 0xc0, 0x8c, 0x05, 0xf2, 0x0c, 0x1b, 0x16, 0xc8,
	0x33, 0x7c, 0x58, 0x28, 0xcf, 0xd0, 0x70, 0x47, 0x81, 0xc1, 0xa9, 0x99, 0xf1, 0xc6, 0x43, 0x39,
	0x86, 0x0f, 0x0f, 0xe5, 0x18, 0x7f, 0x3c, 0x94, 0x63, 0x6c, 0x78, 0x24, 0xc7, 0xb8, 0xe2, 0x91,
	0x1c, 0xe3, 0x89, 0x47, 0x72, 0x8c, 0x17, 0x1e, 0xc9, 0x31, 0x3e, 0x78, 0x24, 0xc7, 0xf8, 0xe2,
	0x91, 0x1c, 0xc3, 0x07, 0x90, 0xf8, 0x63, 0x39, 0xc6, 0x03, 0x8f, 0xe5, 0x18, 0x4e, 0x3c, 0x96,
	0x63, 0xe4, 0x12, 0x4e, 0xce, 0xcf, 0xd5, 0x43, 0x73, 0xab, 0x13, 0x87, 0x63, 0x5e, 0x65, 0x00,
	0x88, 0x13, 0xc0, 0x18, 0xc5, 0x0a, 0x72, 0x48, 0xf1, 0x22, 0x26, 0x66, 0xf7, 0x00, 0xa7, 0x55,
	0x4c, 0x72, 0xee, 0x10, 0xa5, 0x01, 0x50, 0xa5, 0x7a, 0xe1, 0xa9, 0x39, 0x39, 0xde, 0x79, 0xf9,
	0xe5, 0x79, 0x21, 0x20, 0x65, 0x49, 0x6c, 0x60, 0x33, 0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff,
	0x4d, 0x91, 0x00, 0xa0, 0x1a, 0x01, 0x00, 0x00,
}

func (this *Any) Compare(that interface{}) int {
@@ -169,28 +168,6 @@ func (this *Any) Equal(that interface{}) bool {
	}
	return true
}
func (this *Any) GoString() string {
	if this == nil {
		return "nil"
	}
	s := make([]string, 0, 6)
	s = append(s, "&types.Any{")
	s = append(s, "TypeUrl: "+fmt.Sprintf("%#v", this.TypeUrl)+",\n")
	s = append(s, "Value: "+fmt.Sprintf("%#v", this.Value)+",\n")
	if this.XXX_unrecognized != nil {
		s = append(s, "XXX_unrecognized:"+fmt.Sprintf("%#v", this.XXX_unrecognized)+",\n")
	}
	s = append(s, "}")
	return strings.Join(s, "")
}
func valueToGoStringAny(v interface{}, typ string) string {
	rv := reflect.ValueOf(v)
	if rv.IsNil() {
		return "nil"
	}
	pv := reflect.Indirect(rv).Interface()
	return fmt.Sprintf("func(v %v) *%v { return &v } ( %#v )", typ, typ, pv)
}
func (m *Any) Marshal() (dAtA []byte, err error) {
	size := m.Size()
	dAtA = make([]byte, size)
@@ -355,26 +332,6 @@ func sovAny(x uint64) (n int) {
func sozAny(x uint64) (n int) {
	return sovAny(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (this *Any) String() string {
	if this == nil {
		return "nil"
	}
	s := strings.Join([]string{`&Any{`,
		`TypeUrl:` + fmt.Sprintf("%v", this.TypeUrl) + `,`,
		`Value:` + fmt.Sprintf("%v", this.Value) + `,`,
		`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
		`}`,
	}, "")
	return s
}
func valueToStringAny(v interface{}) string {
	rv := reflect.ValueOf(v)
	if rv.IsNil() {
		return "nil"
	}
	pv := reflect.Indirect(rv).Interface()
	return fmt.Sprintf("*%v", pv)
}
func (m *Any) Unmarshal(dAtA []byte) error {
	l := len(dAtA)
	iNdEx := 0
|
|||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthAny
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
if (skippy < 0) || (iNdEx+skippy) < 0 {
|
||||
return ErrInvalidLengthAny
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
|
|
|
@@ -51,3 +51,15 @@ func TestAnyPackUnpack(t *testing.T) {
	require.NoError(t, err)
	require.Equal(t, spot, animal)
}

func TestString(t *testing.T) {
	require := require.New(t)
	spot := &Dog{Name: "Spot"}
	any, err := NewAnyWithValue(spot)
	require.NoError(err)

	require.Equal("&Any{TypeUrl:/tests/dog,Value:[10 4 83 112 111 116],XXX_unrecognized:[]}", any.String())
	require.Equal(`&Any{TypeUrl: "/tests/dog",
  Value: []byte{0xa, 0x4, 0x53, 0x70, 0x6f, 0x74}
}`, any.GoString())
}

@@ -0,0 +1,68 @@
package types_test

import (
	"fmt"
	"runtime"
	"testing"

	"github.com/gogo/protobuf/proto"

	"github.com/cosmos/cosmos-sdk/codec/types"
	"github.com/cosmos/cosmos-sdk/testutil/testdata"
)

type errOnMarshal struct {
	testdata.Dog
}

var _ proto.Message = (*errOnMarshal)(nil)

var errAlways = fmt.Errorf("always erroring")

func (eom *errOnMarshal) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	return nil, errAlways
}

const fauxURL = "/anyhere"

var eom = &errOnMarshal{}

// Ensure that returning an error doesn't suddenly allocate and waste bytes.
// See https://github.com/cosmos/cosmos-sdk/issues/8537
func TestNewAnyWithCustomTypeURLWithErrorNoAllocation(t *testing.T) {
	var ms1, ms2 runtime.MemStats
	runtime.ReadMemStats(&ms1)
	any, err := types.NewAnyWithCustomTypeURL(eom, fauxURL)
	runtime.ReadMemStats(&ms2)
	// Ensure that no fresh allocation was made.
	if diff := ms2.HeapAlloc - ms1.HeapAlloc; diff > 0 {
		t.Errorf("Unexpected allocation of %d bytes", diff)
	}
	if err == nil {
		t.Fatal("err wasn't returned")
	}
	if any != nil {
		t.Fatalf("Unexpectedly got a non-nil Any value: %v", any)
	}
}

var sink interface{}

func BenchmarkNewAnyWithCustomTypeURLWithErrorReturned(b *testing.B) {
	b.ResetTimer()
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		any, err := types.NewAnyWithCustomTypeURL(eom, fauxURL)
		if err == nil {
			b.Fatal("err wasn't returned")
		}
		if any != nil {
			b.Fatalf("Unexpectedly got a non-nil Any value: %v", any)
		}
		sink = any
	}
	if sink == nil {
		b.Fatal("benchmark didn't run")
	}
	sink = (interface{})(nil)
}

@@ -1,7 +1,7 @@
[
	{
		"account_identifier": {
			"address":"cosmos158nkd0l9tyemv2crp579rmj8dg37qty8lzff88"
			"address":"cosmos1ujtnemf6jmfm995j000qdry064n5lq854gfe3j"
		},
		"currency":{
			"symbol":"stake",

@@ -45,7 +45,7 @@ sleep 10

# send transaction to deterministic address
echo sending transaction with addr $addr
simd tx bank send "$addr" cosmos1wjmt63j4fv9nqda92nsrp2jp2vsukcke4va3pt 100stake --yes --keyring-backend=test --broadcast-mode=block --chain-id=testing
simd tx bank send "$addr" cosmos19g9cm8ymzchq2qkcdv3zgqtwayj9asv3hjv5u5 100stake --yes --keyring-backend=test --broadcast-mode=block --chain-id=testing

sleep 10

@@ -25,7 +25,7 @@
		"constructor_dsl_file": "transfer.ros",
		"end_conditions": {
			"create_account": 1,
			"transfer": 3
			"transfer": 1
		}
	},
	"data": {

@@ -2,16 +2,6 @@

set -e

addr="abcd"

send_tx() {
	echo '12345678' | simd tx bank send $addr "$1" "$2"
}

detect_account() {
	line=$1
}

wait_for_rosetta() {
	timeout 30 sh -c 'until nc -z $0 $1; do sleep 1; done' rosetta 8080
}

@@ -25,5 +15,3 @@ rosetta-cli check:data --configuration-file ./config/rosetta.json
echo "checking construction API"
rosetta-cli check:construction --configuration-file ./config/rosetta.json

echo "checking staking API"
rosetta-cli check:construction --configuration-file ./config/staking.json

@@ -1,30 +0,0 @@
{
	"network": {
		"blockchain": "app",
		"network": "network"
	},
	"online_url": "http://rosetta:8080",
	"data_directory": "",
	"http_timeout": 300,
	"max_retries": 5,
	"retry_elapsed_time": 0,
	"max_online_connections": 0,
	"max_sync_concurrency": 0,
	"tip_delay": 60,
	"log_configuration": true,
	"construction": {
		"offline_url": "http://rosetta:8080",
		"max_offline_connections": 0,
		"stale_depth": 0,
		"broadcast_limit": 0,
		"ignore_broadcast_failures": false,
		"clear_broadcasts": false,
		"broadcast_behind_tip": false,
		"block_broadcast_limit": 0,
		"rebroadcast_all": false,
		"constructor_dsl_file": "staking.ros",
		"end_conditions": {
			"staking": 3
		}
	}
}

@@ -1,147 +0,0 @@
request_funds(1){
	find_account{
		currency = {"symbol":"stake", "decimals":0};
		random_account = find_balance({
			"minimum_balance":{
				"value": "0",
				"currency": {{currency}}
			},
			"create_limit":1
		});
	},
	send_funds{
		account_identifier = {{random_account.account_identifier}};
		address = {{account_identifier.address}};
		idk = http_request({
			"method": "POST",
			"url": "http:\/\/faucet:8000",
			"timeout": 10,
			"body": {{random_account.account_identifier.address}}
		});
	},
	// Create a separate scenario to request funds so that
	// the address we are using to request funds does not
	// get rolled back if funds do not yet exist.
	request{
		loaded_account = find_balance({
			"account_identifier": {{random_account.account_identifier}},
			"minimum_balance":{
				"value": "100",
				"currency": {{currency}}
			}
		});
	}
}
create_account(1){
	create{
		network = {"network":"network", "blockchain":"app"};
		key = generate_key({"curve_type": "secp256k1"});
		account = derive({
			"network_identifier": {{network}},
			"public_key": {{key.public_key}}
		});
		// If the account is not saved, the key will be lost!
		save_account({
			"account_identifier": {{account.account_identifier}},
			"keypair": {{key}}
		});
	}
}

staking(1){
	stake{
		stake.network = {"network":"network", "blockchain":"app"};
		currency = {"symbol":"stake", "decimals":0};
		sender = find_balance({
			"minimum_balance":{
				"value": "100",
				"currency": {{currency}}
			}
		});
		// Set the recipient_amount as some value <= sender.balance-max_fee
		max_fee = "0";
		fee_amount = "1";
		fee_value = 0 - {{fee_amount}};
		available_amount = {{sender.balance.value}} - {{max_fee}};
		recipient_amount = "1";
		print_message({"recipient_amount":{{recipient_amount}}});
		// Find recipient and construct operations
		recipient = {{sender.account_identifier}};
		sender_amount = 0 - {{recipient_amount}};
		stake.confirmation_depth = "1";
		stake.operations = [
			{
				"operation_identifier":{"index":0},
				"type":"fee",
				"account":{{sender.account_identifier}},
				"amount":{
					"value":{{fee_value}},
					"currency":{{currency}}
				}
			},
			{
				"operation_identifier":{"index":1},
				"type":"cosmos.staking.v1beta1.MsgDelegate",
				"account":{{sender.account_identifier}},
				"amount":{
					"value":{{sender_amount}},
					"currency":{{currency}}
				}
			},
			{
				"operation_identifier":{"index":2},
				"type":"cosmos.staking.v1beta1.MsgDelegate",
				"account": {
					"address": "staking_account",
					"sub_account": {
						"address" : "cosmosvaloper158nkd0l9tyemv2crp579rmj8dg37qty86kaut5"
					}
				},
				"amount":{
					"value":{{recipient_amount}},
					"currency":{{currency}}
				}
			}
		];
	},
	undelegate{
		print_message({"undelegate":{{sender}}});

		undelegate.network = {"network":"network", "blockchain":"app"};
		undelegate.confirmation_depth = "1";
		undelegate.operations = [
			{
				"operation_identifier":{"index":0},
				"type":"fee",
				"account":{{sender.account_identifier}},
				"amount":{
					"value":{{fee_value}},
					"currency":{{currency}}
				}
			},
			{
				"operation_identifier":{"index":1},
				"type":"cosmos.staking.v1beta1.MsgUndelegate",
				"account":{{sender.account_identifier}},
				"amount":{
					"value":{{recipient_amount}},
					"currency":{{currency}}
				}
			},
			{
				"operation_identifier":{"index":2},
				"type":"cosmos.staking.v1beta1.MsgUndelegate",
				"account": {
					"address": "staking_account",
					"sub_account": {
						"address" : "cosmosvaloper158nkd0l9tyemv2crp579rmj8dg37qty86kaut5"
					}
				},
				"amount":{
					"value":{{sender_amount}},
					"currency":{{currency}}
				}
			}
		];
	}
}

@@ -26,7 +26,7 @@ request_funds(1){
		loaded_account = find_balance({
			"account_identifier": {{random_account.account_identifier}},
			"minimum_balance":{
				"value": "100",
				"value": "50",
				"currency": {{currency}}
			}
		});
@@ -57,6 +57,8 @@ transfer(3){
				"currency": {{currency}}
			}
		});
		acc_identifier = {{sender.account_identifier}};
		sender_address = {{acc_identifier.address}};
		// Set the recipient_amount as some value <= sender.balance-max_fee
		max_fee = "0";
		fee_amount = "1";
@@ -76,34 +78,28 @@ transfer(3){
			"create_probability": 50
		});
		transfer.confirmation_depth = "1";
		recipient_account_identifier = {{recipient.account_identifier}};
		recipient_address = {{recipient_account_identifier.address}};
		transfer.operations = [
			{
				"operation_identifier":{"index":0},
				"type":"fee",
				"account":{{sender.account_identifier}},
				"amount":{
					"value":{{fee_value}},
					"currency":{{currency}}
				}
			},
			{
				"operation_identifier":{"index":1},
				"type":"cosmos.bank.v1beta1.MsgSend",
				"account":{{sender.account_identifier}},
				"amount":{
					"value":{{sender_amount}},
					"currency":{{currency}}
				}
			},
			{
				"operation_identifier":{"index":2},
				"type":"cosmos.bank.v1beta1.MsgSend",
				"account":{{recipient.account_identifier}},
				"amount":{
					"value":{{recipient_amount}},
					"currency":{{currency}}
				"metadata": {
					"amount": [
						{
							"amount": {{recipient_amount}},
							"denom": {{currency.symbol}}
						}
					],
					"from_address": {{sender_address}},
					"to_address": {{recipient_address}}
				}
			}
		];
		transfer.preprocess_metadata = {
			"gas_price": "1stake",
			"gas_limit": 250000
		};
	}
}

Binary file not shown.

@@ -1,148 +0,0 @@
# Cosmovisor

This is a tiny shim around Cosmos SDK binaries that use the upgrade
module. It allows for smooth and configurable management of upgrading
binaries as a live chain is upgraded, and can be used to simplify validator
devops while doing upgrades or to make syncing a full node for genesis
simple. The `cosmovisor` will monitor the stdout of the daemon to look
for messages from the upgrade module indicating a pending or required upgrade,
and act appropriately. (Better integrations are possible in the future.)

## Arguments

`cosmovisor` is a shim around a native binary. All arguments passed to the `cosmovisor`
command will be passed to the current daemon binary (as a subprocess).
It will return stdout and stderr of the subprocess as
its own. Because of that, it cannot accept any command line arguments, nor
print anything to output (unless it dies before executing a binary).

Configuration will be passed in the following environmental variables:

* `DAEMON_HOME` is the location where upgrade binaries should be kept (can
  be `$HOME/.gaiad` or `$HOME/.xrnd`)
* `DAEMON_NAME` is the name of the binary itself (eg. `xrnd`, `gaiad`, `simd`)
* `DAEMON_ALLOW_DOWNLOAD_BINARIES` (optional) if set to `true` will enable auto-downloading of new binaries
  (for security reasons, this is intended for fullnodes rather than validators)
* `DAEMON_RESTART_AFTER_UPGRADE` (optional) if set to `true` it will restart the sub-process with the same args
  (but new binary) after a successful upgrade. By default, the `cosmovisor` dies afterward and allows the supervisor
  to restart it if needed. Note that this will not auto-restart the child if there was an error.

## Folder Layout

`$DAEMON_HOME/cosmovisor` is expected to belong completely to the cosmovisor and
subprocesses controlled by it. Under this folder, we will see the following:

```
.
├── current -> genesis or upgrades/<name>
├── genesis
│   └── bin
│       └── $DAEMON_NAME
└── upgrades
    └── <name>
        └── bin
            └── $DAEMON_NAME
```

Each version of the chain is stored under either `genesis` or `upgrades/<name>`, which holds `bin/$DAEMON_NAME`
along with any other needed files (maybe the cli client? maybe some dlls?). `current` is a symlink to the currently
active folder (so `current/bin/$DAEMON_NAME` is the binary).

Note: the `<name>` after `upgrades` is the URI-encoded name of the upgrade as specified in the upgrade module plan.

Please note that `$DAEMON_HOME/cosmovisor` just stores the *binaries* and associated *program code*.
The `cosmovisor` binary can be stored in any typical location (eg `/usr/local/bin`). The actual blockchain
program will store its data under `$GAIA_HOME` etc, which is independent of the `$DAEMON_HOME`. You can
choose to export `GAIA_HOME=$DAEMON_HOME` and then end up with a configuration like the following, but this
is left as a choice to the admin for the best directory layout.

```
.gaiad
├── config
├── data
└── cosmovisor
```

## Usage

Basic usage:

* The admin is responsible for installing the `cosmovisor` and setting it up (eg. as a systemd service) to auto-restart, along with the proper environmental variables
* The admin is responsible for installing the `genesis` folder manually
* The `cosmovisor` will set the `current` link to point to `genesis` at first start (when no `current` link exists)
* The admin is (generally) responsible for installing the `upgrades/<name>` folders manually
* The `cosmovisor` handles switching over the binaries at the correct points, so the admin can prepare days in advance and relax at upgrade time

Note that chains that wish to support upgrades may package up a genesis `cosmovisor` tar file with this info, just as they
prepare the genesis binary tar file. In fact, they may offer a tar file with all upgrades up to the current point for easy download
for those who wish to sync a fullnode from start.

The `DAEMON`-specific work, like the tendermint config, the application db, syncing blocks, etc., is done as normal.
The same eg. `GAIA_HOME` directives and command-line flags work; just the binary name is different.

## Upgradeable Binary Specification

In the basic version, the `cosmovisor` will read the stdout log messages
to determine when an upgrade is needed. We are considering more complex solutions
via signaling of some sort, but starting with the simple design:

* when an upgrade is needed the binary will print a line that matches this
  regular expression: `UPGRADE "(.*)" NEEDED at height (\d+):(.*)`.
* the second match in the above regular expression can be a JSON object with
  a `binaries` key as described above

The name (first regexp match) will be used to select the new binary to run. If it is present,
the current subprocess will be killed, `current` will be upgraded to the new directory,
and the new binary will be launched.
|
||||
|
||||
**Question** should we just kill the `cosmovisor` after it does the updates?
|
||||
so it gets a clean restart and just runs the new binary (under `current`).
|
||||
it should be safe to restart (as a service).
|
||||
|
||||
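The log-line contract above can be exercised with Go's `regexp` package. `parseUpgrade` is an illustrative helper, not cosmovisor's actual parser:

```go
package main

import (
	"fmt"
	"regexp"
)

// upgradeRe is the exact pattern the spec above requires the binary to print.
var upgradeRe = regexp.MustCompile(`UPGRADE "(.*)" NEEDED at height (\d+):(.*)`)

// parseUpgrade extracts the upgrade name, height, and trailing info payload
// from a single log line, reporting ok=false when the line does not match.
func parseUpgrade(line string) (name, height, info string, ok bool) {
	m := upgradeRe.FindStringSubmatch(line)
	if m == nil {
		return "", "", "", false
	}
	return m[1], m[2], m[3], true
}

func main() {
	line := `UPGRADE "chain2" NEEDED at height 350: {"binaries":{}}`
	name, height, info, ok := parseUpgrade(line)
	fmt.Println(ok, name, height, info)
}
```

The name submatch selects the `upgrades/<name>` directory; the trailing submatch carries the optional JSON info payload.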
## Auto-Download

Generally, the system requires that the administrator place all relevant binaries
on disk before the upgrade happens. However, for people who don't need such
control and want an easier setup (maybe they are syncing a non-validating fullnode
and want to do little maintenance), there is another option.

If you set `DAEMON_ALLOW_DOWNLOAD_BINARIES=true`, then when an upgrade is triggered and no local binary
can be found, `cosmovisor` will attempt to download and install the binary itself.
The plan stored in the upgrade module has an info field for arbitrary JSON.
This info is expected to be output on the halt log message. There are two
valid formats to specify a download in such a message:

1. Store an os/architecture -> binary URI map in the upgrade plan info field
as JSON under the `"binaries"` key, e.g.:

```json
{
  "binaries": {
    "linux/amd64": "https://example.com/gaia.zip?checksum=sha256:aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f"
  }
}
```

The `"any"` key, if it exists, will be used as a default if there is no specific os/architecture key.

2. Store a link to a file that contains all information in the above format (e.g. if you want
to specify lots of binaries, changelog info, etc. without filling up the blockchain), e.g.
`https://example.com/testnet-1001-info.json?checksum=sha256:deaaa99fda9407c4dbe1d04bd49bab0cc3c1dd76fa392cd55a9425be074af01e`.

The file at that link will be retrieved by [go-getter](https://github.com/hashicorp/go-getter)
and the `"binaries"` field will be parsed as above.

If there is no local binary, `DAEMON_ALLOW_DOWNLOAD_BINARIES=true`, and we can access a canonical URL for the new binary,
then `cosmovisor` will download it with [go-getter](https://github.com/hashicorp/go-getter) and
unpack it into the `upgrades/<name>` folder to be run as if it had been installed manually.

Note that for this mechanism to provide strong security guarantees, all URLs should include a
SHA-256 (or SHA-512) checksum. This ensures that no false binary is run, even if someone hacks the server
or hijacks the DNS. go-getter will always ensure the downloaded file matches the checksum if one
is provided. go-getter also handles unpacking archives into directories (so these download links should be
a zip of all data in the `bin` directory).

To properly create a checksum on Linux, you can use the `sha256sum` utility, e.g.
`sha256sum ./testdata/repo/zip_directory/autod.zip`,
which should return `29139e1381b8177aec909fab9a75d11381cab5adf7d3af0c05ff1c9c117743a7`.
You can also use `sha512sum` if you prefer longer hashes, or `md5sum` if you like to use broken hashes.
Make sure to set the hash algorithm properly in the checksum argument to the URL.
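The check go-getter performs against that checksum can be approximated with the standard library. `verifySHA256` is a simplified sketch of the idea, not go-getter's API:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// verifySHA256 reports whether data hashes to the hex digest taken from the
// URL's checksum query parameter (e.g. ?checksum=sha256:<hex>).
func verifySHA256(data []byte, wantHex string) bool {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:]) == wantHex
}

func main() {
	archive := []byte("abc") // stand-in for the downloaded zip bytes
	// Well-known SHA-256 test vector for the string "abc".
	want := "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
	if !verifySHA256(archive, want) {
		panic("checksum mismatch: refusing to install downloaded binary")
	}
	fmt.Println("checksum verified")
}
```

Refusing to install on mismatch is what makes a hijacked server or DNS entry harmless: the attacker cannot produce different bytes with the same digest.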
@ -0,0 +1 @@
../docs/run-node/cosmovisor.md
@ -73,7 +73,7 @@ func TestArmorUnarmorPubKey(t *testing.T) {
cstore := keyring.NewInMemory()

// Add keys and see they return in alphabetical order
info, _, err := cstore.NewMnemonic("Bob", keyring.English, types.FullFundraiserPath, hd.Secp256k1)
info, _, err := cstore.NewMnemonic("Bob", keyring.English, types.FullFundraiserPath, keyring.DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)
armored := crypto.ArmorPubKeyBytes(legacy.Cdc.Amino.MustMarshalBinaryBare(info.GetPubKey()), "")
pubBytes, algo, err := crypto.UnarmorPubKeyBytes(armored)

@ -158,12 +158,11 @@ func TestUnarmorInfoBytesErrors(t *testing.T) {
}

func BenchmarkBcryptGenerateFromPassword(b *testing.B) {
b.ReportAllocs()

passphrase := []byte("passphrase")
for securityParam := 9; securityParam < 16; securityParam++ {
param := securityParam
b.Run(fmt.Sprintf("benchmark-security-param-%d", param), func(b *testing.B) {
b.ReportAllocs()
saltBytes := tmcrypto.CRandBytes(16)
b.ResetTimer()
for i := 0; i < b.N; i++ {
@ -26,7 +26,7 @@ func RegisterCrypto(cdc *codec.LegacyAmino) {
cdc.RegisterInterface((*cryptotypes.PrivKey)(nil), nil)
cdc.RegisterConcrete(sr25519.PrivKey{},
sr25519.PrivKeyName, nil)
cdc.RegisterConcrete(&ed25519.PrivKey{},
cdc.RegisterConcrete(&ed25519.PrivKey{}, //nolint:staticcheck
ed25519.PrivKeyName, nil)
cdc.RegisterConcrete(&secp256k1.PrivKey{},
secp256k1.PrivKeyName, nil)

@ -5,13 +5,16 @@ import (
"github.com/cosmos/cosmos-sdk/crypto/keys/ed25519"
"github.com/cosmos/cosmos-sdk/crypto/keys/multisig"
"github.com/cosmos/cosmos-sdk/crypto/keys/secp256k1"
"github.com/cosmos/cosmos-sdk/crypto/keys/secp256r1"
cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
)

// RegisterInterfaces registers the sdk.Tx interface.
func RegisterInterfaces(registry codectypes.InterfaceRegistry) {
registry.RegisterInterface("cosmos.crypto.PubKey", (*cryptotypes.PubKey)(nil))
registry.RegisterImplementations((*cryptotypes.PubKey)(nil), &ed25519.PubKey{})
registry.RegisterImplementations((*cryptotypes.PubKey)(nil), &secp256k1.PubKey{})
registry.RegisterImplementations((*cryptotypes.PubKey)(nil), &multisig.LegacyAminoPubKey{})
var pk *cryptotypes.PubKey
registry.RegisterInterface("cosmos.crypto.PubKey", pk)
registry.RegisterImplementations(pk, &ed25519.PubKey{})
registry.RegisterImplementations(pk, &secp256k1.PubKey{})
registry.RegisterImplementations(pk, &multisig.LegacyAminoPubKey{})
secp256r1.RegisterInterfaces(registry)
}
@ -6,6 +6,7 @@ import (
"encoding/binary"
"fmt"
"math/big"
"path/filepath"
"strconv"
"strings"

@ -177,6 +178,9 @@ func ComputeMastersFromSeed(seed []byte) (secret [32]byte, chainCode [32]byte) {
// DerivePrivateKeyForPath derives the private key by following the BIP 32/44 path from privKeyBytes,
// using the given chainCode.
func DerivePrivateKeyForPath(privKeyBytes, chainCode [32]byte, path string) ([]byte, error) {
// First step is to trim the right end path separator lest we panic.
// See issue https://github.com/cosmos/cosmos-sdk/issues/8557
path = strings.TrimRightFunc(path, func(r rune) bool { return r == filepath.Separator })
data := privKeyBytes
parts := strings.Split(path, "/")

@ -187,7 +191,10 @@ func DerivePrivateKeyForPath(privKeyBytes, chainCode [32]byte, path string) ([]b
parts = parts[1:]
}

for _, part := range parts {
for i, part := range parts {
if part == "" {
return nil, fmt.Errorf("path %q with split element #%d is an empty string", part, i)
}
// do we have an apostrophe?
harden := part[len(part)-1:] == "'"
// harden == private derivation, else public derivation:

@ -281,3 +281,26 @@ func ExampleSomeBIP32TestVecs() {
//
// c4c11d8c03625515905d7e89d25dfc66126fbc629ecca6db489a1a72fc4bda78
}

// Ensuring that we don't crash if values have trailing slashes
// See issue https://github.com/cosmos/cosmos-sdk/issues/8557.
func TestDerivePrivateKeyForPathDoNotCrash(t *testing.T) {
paths := []string{
"m/5/",
"m/5",
"/44",
"m//5",
"m/0/7",
"/",
" m /0/7", // Test case from fuzzer
" / ", // Test case from fuzzer
"m///7//////",
}

for _, path := range paths {
path := path
t.Run(path, func(t *testing.T) {
hd.DerivePrivateKeyForPath([32]byte{}, [32]byte{}, path)
})
}
}
@ -64,14 +64,17 @@ type Keyring interface {
Delete(uid string) error
DeleteByAddress(address sdk.Address) error

// NewMnemonic generates a new mnemonic, derives a hierarchical deterministic
// key from that, and persists it to the storage. Returns the generated mnemonic and the key
// Info. It returns an error if it fails to generate a key for the given algo type, or if
// another key is already stored under the same name.
NewMnemonic(uid string, language Language, hdPath string, algo SignatureAlgo) (Info, string, error)
// NewMnemonic generates a new mnemonic, derives a hierarchical deterministic key from it, and
// persists the key to storage. Returns the generated mnemonic and the key Info.
// It returns an error if it fails to generate a key for the given algo type, or if
// another key is already stored under the same name or address.
//
// A passphrase set to the empty string will set the passphrase to the DefaultBIP39Passphrase value.
NewMnemonic(uid string, language Language, hdPath, bip39Passphrase string, algo SignatureAlgo) (Info, string, error)

// NewAccount converts a mnemonic to a private key and BIP-39 HD Path and persists it.
NewAccount(uid, mnemonic, bip39Passwd, hdPath string, algo SignatureAlgo) (Info, error)
// It fails if there is an existing key Info with the same address.
NewAccount(uid, mnemonic, bip39Passphrase, hdPath string, algo SignatureAlgo) (Info, error)

// SaveLedgerKey retrieves a public key reference from a Ledger device and persists it.
SaveLedgerKey(uid string, algo SignatureAlgo, hrp string, coinType, account, index uint32) (Info, error)

@ -113,13 +116,20 @@ type Importer interface {
ImportPubKey(uid string, armor string) error
}

// LegacyInfoImporter is implemented by key stores that support import of Info types.
type LegacyInfoImporter interface {
// ImportInfo import a keyring.Info into the current keyring.
// It is used to migrate multisig, ledger, and public key Info structure.
ImportInfo(oldInfo Info) error
}

// Exporter is implemented by key stores that support export of public and private keys.
type Exporter interface {
// Export public key
ExportPubKeyArmor(uid string) (string, error)
ExportPubKeyArmorByAddress(address sdk.Address) (string, error)

// ExportPrivKey returns a private key in ASCII armored format.
// ExportPrivKeyArmor returns a private key in ASCII armored format.
// It returns an error if the key does not exist or a wrong encryption passphrase is supplied.
ExportPrivKeyArmor(uid, encryptPassphrase string) (armor string, err error)
ExportPrivKeyArmorByAddress(address sdk.Address, encryptPassphrase string) (armor string, err error)
@ -318,6 +328,15 @@ func (ks keystore) ImportPubKey(uid string, armor string) error {
return nil
}

// ImportInfo implements Importer.MigrateInfo.
func (ks keystore) ImportInfo(oldInfo Info) error {
if _, err := ks.Key(oldInfo.GetName()); err == nil {
return fmt.Errorf("cannot overwrite key: %s", oldInfo.GetName())
}

return ks.writeInfo(oldInfo)
}

func (ks keystore) Sign(uid string, msg []byte) ([]byte, types.PubKey, error) {
info, err := ks.Key(uid)
if err != nil {

@ -477,7 +496,7 @@ func (ks keystore) List() ([]Info, error) {
return res, nil
}

func (ks keystore) NewMnemonic(uid string, language Language, hdPath string, algo SignatureAlgo) (Info, string, error) {
func (ks keystore) NewMnemonic(uid string, language Language, hdPath, bip39Passphrase string, algo SignatureAlgo) (Info, string, error) {
if language != English {
return nil, "", ErrUnsupportedLanguage
}

@ -498,12 +517,16 @@ func (ks keystore) NewMnemonic(uid string, language Language, hdPath string, alg
return nil, "", err
}

info, err := ks.NewAccount(uid, mnemonic, DefaultBIP39Passphrase, hdPath, algo)
if bip39Passphrase == "" {
bip39Passphrase = DefaultBIP39Passphrase
}

info, err := ks.NewAccount(uid, mnemonic, bip39Passphrase, hdPath, algo)
if err != nil {
return nil, "", err
}

return info, mnemonic, err
return info, mnemonic, nil
}

func (ks keystore) NewAccount(uid string, mnemonic string, bip39Passphrase string, hdPath string, algo SignatureAlgo) (Info, error) {
@ -519,6 +542,13 @@ func (ks keystore) NewAccount(uid string, mnemonic string, bip39Passphrase strin

privKey := algo.Generate()(derivedPriv)

// check if the a key already exists with the same address and return an error
// if found
address := sdk.AccAddress(privKey.PubKey().Address())
if _, err := ks.KeyByAddress(address); err == nil {
return nil, fmt.Errorf("account with address %s already exists in keyring, delete the key first if you want to recreate it", address)
}

return ks.writeLocalKey(uid, privKey, algo.Name())
}

@ -577,9 +607,10 @@ func SignWithLedger(info Info, msg []byte) (sig []byte, pub types.PubKey, err er

func newOSBackendKeyringConfig(appName, dir string, buf io.Reader) keyring.Config {
return keyring.Config{
ServiceName: appName,
FileDir: dir,
FilePasswordFunc: newRealPrompt(dir, buf),
ServiceName: appName,
FileDir: dir,
KeychainTrustApplication: true,
FilePasswordFunc: newRealPrompt(dir, buf),
}
}
@ -80,7 +80,7 @@ func TestSignVerifyKeyRingWithLedger(t *testing.T) {
require.True(t, i1.GetPubKey().VerifySignature(d1, s1))
require.True(t, bytes.Equal(s1, s2))

localInfo, _, err := kb.NewMnemonic("test", English, types.FullFundraiserPath, hd.Secp256k1)
localInfo, _, err := kb.NewMnemonic("test", English, types.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)
_, _, err = SignWithLedger(localInfo, d1)
require.Error(t, err)

@ -42,7 +42,7 @@ func TestNewKeyring(t *testing.T) {
require.Equal(t, "unknown keyring backend fuzzy", err.Error())

mockIn.Reset("password\npassword\n")
info, _, err := kr.NewMnemonic("foo", English, sdk.FullFundraiserPath, hd.Secp256k1)
info, _, err := kr.NewMnemonic("foo", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)
require.Equal(t, "foo", info.GetName())
}
@ -59,17 +59,17 @@ func TestKeyManagementKeyRing(t *testing.T) {
require.Nil(t, err)
require.Empty(t, l)

_, _, err = kb.NewMnemonic(n1, English, sdk.FullFundraiserPath, notSupportedAlgo{})
_, _, err = kb.NewMnemonic(n1, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, notSupportedAlgo{})
require.Error(t, err, "ed25519 keys are currently not supported by keybase")

// create some keys
_, err = kb.Key(n1)
require.Error(t, err)
i, _, err := kb.NewMnemonic(n1, English, sdk.FullFundraiserPath, algo)
i, _, err := kb.NewMnemonic(n1, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, algo)

require.NoError(t, err)
require.Equal(t, n1, i.GetName())
_, _, err = kb.NewMnemonic(n2, English, sdk.FullFundraiserPath, algo)
_, _, err = kb.NewMnemonic(n2, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, algo)
require.NoError(t, err)

// we can get these keys

@ -137,10 +137,10 @@ func TestSignVerifyKeyRing(t *testing.T) {
n1, n2, n3 := "some dude", "a dudette", "dude-ish"

// create two users and get their info
i1, _, err := kb.NewMnemonic(n1, English, sdk.FullFundraiserPath, algo)
i1, _, err := kb.NewMnemonic(n1, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, algo)
require.Nil(t, err)

i2, _, err := kb.NewMnemonic(n2, English, sdk.FullFundraiserPath, algo)
i2, _, err := kb.NewMnemonic(n2, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, algo)
require.Nil(t, err)

// let's try to sign some messages
@ -209,7 +209,7 @@ func TestExportImportKeyRing(t *testing.T) {
kb, err := New("keybasename", "test", t.TempDir(), nil)
require.NoError(t, err)

info, _, err := kb.NewMnemonic("john", English, sdk.FullFundraiserPath, hd.Secp256k1)
info, _, err := kb.NewMnemonic("john", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)
require.Equal(t, info.GetName(), "john")

@ -243,7 +243,7 @@ func TestExportImportPubKeyKeyRing(t *testing.T) {
algo := hd.Secp256k1

// CreateMnemonic a private-public key pair and ensure consistency
info, _, err := kb.NewMnemonic("john", English, sdk.FullFundraiserPath, algo)
info, _, err := kb.NewMnemonic("john", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, algo)
require.Nil(t, err)
require.NotEqual(t, info, "")
require.Equal(t, info.GetName(), "john")

@ -285,7 +285,7 @@ func TestAdvancedKeyManagementKeyRing(t *testing.T) {
n1, n2 := "old-name", "new name"

// make sure key works with initial password
_, _, err = kb.NewMnemonic(n1, English, sdk.FullFundraiserPath, algo)
_, _, err = kb.NewMnemonic(n1, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, algo)
require.Nil(t, err, "%+v", err)

_, err = kb.ExportPubKeyArmor(n1 + ".notreal")

@ -320,7 +320,7 @@ func TestSeedPhraseKeyRing(t *testing.T) {
n1, n2 := "lost-key", "found-again"

// make sure key works with initial password
info, mnemonic, err := kb.NewMnemonic(n1, English, sdk.FullFundraiserPath, algo)
info, mnemonic, err := kb.NewMnemonic(n1, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, algo)
require.Nil(t, err, "%+v", err)
require.Equal(t, n1, info.GetName())
require.NotEmpty(t, mnemonic)
@ -345,7 +345,7 @@ func TestKeyringKeybaseExportImportPrivKey(t *testing.T) {
kb, err := New("keybasename", "test", t.TempDir(), nil)
require.NoError(t, err)

_, _, err = kb.NewMnemonic("john", English, sdk.FullFundraiserPath, hd.Secp256k1)
_, _, err = kb.NewMnemonic("john", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

keystr, err := kb.ExportPrivKeyArmor("john", "somepassword")

@ -372,7 +372,7 @@ func TestKeyringKeybaseExportImportPrivKey(t *testing.T) {

func TestInMemoryLanguage(t *testing.T) {
kb := NewInMemory()
_, _, err := kb.NewMnemonic("something", Japanese, sdk.FullFundraiserPath, hd.Secp256k1)
_, _, err := kb.NewMnemonic("something", Japanese, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.Error(t, err)
require.Equal(t, "unsupported language: only english is supported", err.Error())
}

@ -412,17 +412,17 @@ func TestInMemoryKeyManagement(t *testing.T) {
require.Nil(t, err)
require.Empty(t, l)

_, _, err = cstore.NewMnemonic(n1, English, sdk.FullFundraiserPath, notSupportedAlgo{})
_, _, err = cstore.NewMnemonic(n1, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, notSupportedAlgo{})
require.Error(t, err, "ed25519 keys are currently not supported by keybase")

// create some keys
_, err = cstore.Key(n1)
require.Error(t, err)
i, _, err := cstore.NewMnemonic(n1, English, sdk.FullFundraiserPath, algo)
i, _, err := cstore.NewMnemonic(n1, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, algo)

require.NoError(t, err)
require.Equal(t, n1, i.GetName())
_, _, err = cstore.NewMnemonic(n2, English, sdk.FullFundraiserPath, algo)
_, _, err = cstore.NewMnemonic(n2, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, algo)
require.NoError(t, err)

// we can get these keys
@ -492,10 +492,10 @@ func TestInMemorySignVerify(t *testing.T) {
n1, n2, n3 := "some dude", "a dudette", "dude-ish"

// create two users and get their info
i1, _, err := cstore.NewMnemonic(n1, English, sdk.FullFundraiserPath, algo)
i1, _, err := cstore.NewMnemonic(n1, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, algo)
require.Nil(t, err)

i2, _, err := cstore.NewMnemonic(n2, English, sdk.FullFundraiserPath, algo)
i2, _, err := cstore.NewMnemonic(n2, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, algo)
require.Nil(t, err)

// let's try to sign some messages

@ -566,7 +566,7 @@ func TestInMemoryExportImport(t *testing.T) {
// make the storage with reasonable defaults
cstore := NewInMemory()

info, _, err := cstore.NewMnemonic("john", English, sdk.FullFundraiserPath, hd.Secp256k1)
info, _, err := cstore.NewMnemonic("john", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)
require.Equal(t, info.GetName(), "john")

@ -596,7 +596,7 @@ func TestInMemoryExportImport(t *testing.T) {
func TestInMemoryExportImportPrivKey(t *testing.T) {
kb := NewInMemory()

info, _, err := kb.NewMnemonic("john", English, sdk.FullFundraiserPath, hd.Secp256k1)
info, _, err := kb.NewMnemonic("john", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)
require.Equal(t, info.GetName(), "john")
priv1, err := kb.Key("john")

@ -624,7 +624,7 @@ func TestInMemoryExportImportPubKey(t *testing.T) {
cstore := NewInMemory()

// CreateMnemonic a private-public key pair and ensure consistency
info, _, err := cstore.NewMnemonic("john", English, sdk.FullFundraiserPath, hd.Secp256k1)
info, _, err := cstore.NewMnemonic("john", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.Nil(t, err)
require.NotEqual(t, info, "")
require.Equal(t, info.GetName(), "john")
@ -663,7 +663,7 @@ func TestInMemoryAdvancedKeyManagement(t *testing.T) {
n1, n2 := "old-name", "new name"

// make sure key works with initial password
_, _, err := cstore.NewMnemonic(n1, English, sdk.FullFundraiserPath, algo)
_, _, err := cstore.NewMnemonic(n1, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, algo)
require.Nil(t, err, "%+v", err)

// exporting requires the proper name and passphrase

@ -698,7 +698,7 @@ func TestInMemorySeedPhrase(t *testing.T) {
n1, n2 := "lost-key", "found-again"

// make sure key works with initial password
info, mnemonic, err := cstore.NewMnemonic(n1, English, sdk.FullFundraiserPath, algo)
info, mnemonic, err := cstore.NewMnemonic(n1, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, algo)
require.Nil(t, err, "%+v", err)
require.Equal(t, n1, info.GetName())
require.NotEmpty(t, mnemonic)

@ -724,7 +724,7 @@ func TestKeyChain_ShouldFailWhenAddingSameGeneratedAccount(t *testing.T) {
require.NoError(t, err)

// Given we create a mnemonic
_, seed, err := kr.NewMnemonic("test", English, "", hd.Secp256k1)
_, seed, err := kr.NewMnemonic("test", English, "", DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

require.NoError(t, kr.Delete("test"))

@ -745,7 +745,7 @@ func ExampleNew() {
sec := hd.Secp256k1

// Add keys and see they return in alphabetical order
bob, _, err := cstore.NewMnemonic("Bob", English, sdk.FullFundraiserPath, sec)
bob, _, err := cstore.NewMnemonic("Bob", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, sec)
if err != nil {
// this should never happen
fmt.Println(err)

@ -753,8 +753,8 @@ func ExampleNew() {
// return info here just like in List
fmt.Println(bob.GetName())
}
_, _, _ = cstore.NewMnemonic("Alice", English, sdk.FullFundraiserPath, sec)
_, _, _ = cstore.NewMnemonic("Carl", English, sdk.FullFundraiserPath, sec)
_, _, _ = cstore.NewMnemonic("Alice", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, sec)
_, _, _ = cstore.NewMnemonic("Carl", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, sec)
info, _ := cstore.List()
for _, i := range info {
fmt.Println(i.GetName())
@ -799,16 +799,16 @@ func TestAltKeyring_List(t *testing.T) {
require.Empty(t, list)

// Fails on creating unsupported pubKeyType
_, _, err = keyring.NewMnemonic("failing", English, sdk.FullFundraiserPath, notSupportedAlgo{})
_, _, err = keyring.NewMnemonic("failing", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, notSupportedAlgo{})
require.EqualError(t, err, ErrUnsupportedSigningAlgo.Error())

// Create 3 keys
uid1, uid2, uid3 := "Zkey", "Bkey", "Rkey"
_, _, err = keyring.NewMnemonic(uid1, English, sdk.FullFundraiserPath, hd.Secp256k1)
_, _, err = keyring.NewMnemonic(uid1, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)
_, _, err = keyring.NewMnemonic(uid2, English, sdk.FullFundraiserPath, hd.Secp256k1)
_, _, err = keyring.NewMnemonic(uid2, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)
_, _, err = keyring.NewMnemonic(uid3, English, sdk.FullFundraiserPath, hd.Secp256k1)
_, _, err = keyring.NewMnemonic(uid3, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

list, err = keyring.List()

@ -852,7 +852,7 @@ func TestAltKeyring_Get(t *testing.T) {
require.NoError(t, err)

uid := someKey
mnemonic, _, err := keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, hd.Secp256k1)
mnemonic, _, err := keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

key, err := keyring.Key(uid)

@ -865,7 +865,7 @@ func TestAltKeyring_KeyByAddress(t *testing.T) {
require.NoError(t, err)

uid := someKey
mnemonic, _, err := keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, hd.Secp256k1)
mnemonic, _, err := keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

key, err := keyring.KeyByAddress(mnemonic.GetAddress())

@ -878,7 +878,7 @@ func TestAltKeyring_Delete(t *testing.T) {
require.NoError(t, err)

uid := someKey
_, _, err = keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, hd.Secp256k1)
_, _, err = keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

list, err := keyring.List()

@ -898,7 +898,7 @@ func TestAltKeyring_DeleteByAddress(t *testing.T) {
require.NoError(t, err)

uid := someKey
mnemonic, _, err := keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, hd.Secp256k1)
mnemonic, _, err := keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

list, err := keyring.List()
@ -940,9 +940,9 @@ func TestAltKeyring_SaveMultisig(t *testing.T) {
keyring, err := New(t.Name(), BackendTest, t.TempDir(), nil)
require.NoError(t, err)

mnemonic1, _, err := keyring.NewMnemonic("key1", English, sdk.FullFundraiserPath, hd.Secp256k1)
mnemonic1, _, err := keyring.NewMnemonic("key1", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)
mnemonic2, _, err := keyring.NewMnemonic("key2", English, sdk.FullFundraiserPath, hd.Secp256k1)
mnemonic2, _, err := keyring.NewMnemonic("key2", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

key := "multi"

@ -969,7 +969,7 @@ func TestAltKeyring_Sign(t *testing.T) {
require.NoError(t, err)

uid := "jack"
_, _, err = keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, hd.Secp256k1)
_, _, err = keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

msg := []byte("some message")

@ -985,7 +985,7 @@ func TestAltKeyring_SignByAddress(t *testing.T) {
require.NoError(t, err)

uid := "jack"
mnemonic, _, err := keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, hd.Secp256k1)
mnemonic, _, err := keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

msg := []byte("some message")

@ -1001,7 +1001,7 @@ func TestAltKeyring_ImportExportPrivKey(t *testing.T) {
require.NoError(t, err)

uid := theID
_, _, err = keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, hd.Secp256k1)
_, _, err = keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

passphrase := "somePass"

@ -1027,7 +1027,7 @@ func TestAltKeyring_ImportExportPrivKey_ByAddress(t *testing.T) {
require.NoError(t, err)

uid := theID
mnemonic, _, err := keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, hd.Secp256k1)
mnemonic, _, err := keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

passphrase := "somePass"
@ -1054,7 +1054,7 @@ func TestAltKeyring_ImportExportPubKey(t *testing.T) {
require.NoError(t, err)

uid := theID
_, _, err = keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, hd.Secp256k1)
_, _, err = keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

armor, err := keyring.ExportPubKeyArmor(uid)

@ -1076,7 +1076,7 @@ func TestAltKeyring_ImportExportPubKey_ByAddress(t *testing.T) {
require.NoError(t, err)

uid := theID
mnemonic, _, err := keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, hd.Secp256k1)
mnemonic, _, err := keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

armor, err := keyring.ExportPubKeyArmorByAddress(mnemonic.GetAddress())

@ -1099,7 +1099,7 @@ func TestAltKeyring_UnsafeExportPrivKeyHex(t *testing.T) {

uid := theID

_, _, err = keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, hd.Secp256k1)
_, _, err = keyring.NewMnemonic(uid, English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

unsafeKeyring := NewUnsafe(keyring)

@ -1121,11 +1121,11 @@ func TestAltKeyring_ConstructorSupportedAlgos(t *testing.T) {
require.NoError(t, err)

// should fail when using unsupported signing algorythm.
_, _, err = keyring.NewMnemonic("test", English, sdk.FullFundraiserPath, notSupportedAlgo{})
_, _, err = keyring.NewMnemonic("test", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, notSupportedAlgo{})
require.EqualError(t, err, "unsupported signing algo")

// but works with default signing algo.
_, _, err = keyring.NewMnemonic("test", English, sdk.FullFundraiserPath, hd.Secp256k1)
_, _, err = keyring.NewMnemonic("test", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, hd.Secp256k1)
require.NoError(t, err)

// but we can create a new keybase with our provided algos.

@ -1137,7 +1137,7 @@ func TestAltKeyring_ConstructorSupportedAlgos(t *testing.T) {
require.NoError(t, err)

// now this new keyring does not fail when signing with provided algo
|
||||
_, _, err = keyring2.NewMnemonic("test", English, sdk.FullFundraiserPath, notSupportedAlgo{})
|
||||
_, _, err = keyring2.NewMnemonic("test", English, sdk.FullFundraiserPath, DefaultBIP39Passphrase, notSupportedAlgo{})
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
|
|
|
@@ -42,7 +42,7 @@ var _ LegacyKeybase = dbKeybase{}
 // dbKeybase combines encryption and storage implementation to provide a
 // full-featured key manager.
 //
-// NOTE: dbKeybase will be deprecated in favor of keyringKeybase.
+// Deprecated: dbKeybase will be removed in favor of keyringKeybase.
 type dbKeybase struct {
 	db dbm.DB
 }

@@ -0,0 +1,11 @@
+package ed25519
+
+/*
+This package contains a wrapper around crypto/ed25519 to make it comply with the crypto interfaces.
+
+This package employs zip215 rules. We use the https://github.com/hdevalence/ed25519consensus verification function. This is done in order to keep compatibility with Tendermint's ed25519 implementation.
+- https://github.com/tendermint/tendermint/blob/master/crypto/ed25519/ed25519.go#L155
+
+This package works with correctly generated signatures. To read more about what this means see https://hdevalence.ca/blog/2020-10-04-its-25519am
+
+*/
@@ -6,6 +6,7 @@ import (
 	"fmt"
 	"io"

+	"github.com/hdevalence/ed25519consensus"
 	"github.com/tendermint/tendermint/crypto"
 	"github.com/tendermint/tendermint/crypto/tmhash"


@@ -116,7 +117,8 @@ func (privKey *PrivKey) UnmarshalAminoJSON(bz []byte) error {
 	return privKey.UnmarshalAmino(bz)
 }

-// GenPrivKey generates a new ed25519 private key.
+// GenPrivKey generates a new ed25519 private key. These ed25519 keys must not
+// be used in SDK apps except in a tendermint validator context.
 // It uses OS randomness in conjunction with the current global random seed
 // in tendermint/libs/common to generate the private key.
 func GenPrivKey() *PrivKey {

@@ -137,6 +139,7 @@ func genPrivKey(rand io.Reader) *PrivKey {

 // GenPrivKeyFromSecret hashes the secret with SHA2, and uses
 // that 32 byte output to create the private key.
+// NOTE: ed25519 keys must not be used in SDK apps except in a tendermint validator context.
 // NOTE: secret should be the output of a KDF like bcrypt,
 // if it's derived from user input.
 func GenPrivKeyFromSecret(secret []byte) *PrivKey {

@@ -151,10 +154,14 @@ var _ cryptotypes.PubKey = &PubKey{}
 var _ codec.AminoMarshaler = &PubKey{}

 // Address is the SHA256-20 of the raw pubkey bytes.
+// It doesn't implement ADR-28 addresses and it must not be used
+// in SDK except in a tendermint validator context.
 func (pubKey *PubKey) Address() crypto.Address {
 	if len(pubKey.Key) != PubKeySize {
 		panic("pubkey is incorrect size")
 	}
+	// For ADR-28 compatible address we would need to
+	// return address.Hash(proto.MessageName(pubKey), pubKey.Key)
 	return crypto.Address(tmhash.SumTruncated(pubKey.Key))
 }


@@ -169,7 +176,8 @@ func (pubKey *PubKey) VerifySignature(msg []byte, sig []byte) bool {
 		return false
 	}

-	return ed25519.Verify(pubKey.Key, msg, sig)
+	// uses https://github.com/hdevalence/ed25519consensus.Verify to comply with zip215 verification rules
+	return ed25519consensus.Verify(pubKey.Key, msg, sig)
 }

 func (pubKey *PubKey) String() string {

@@ -84,6 +84,12 @@ func TestPubKeyEquals(t *testing.T) {
 	}
 }

+func TestAddressEd25519(t *testing.T) {
+	pk := ed25519.PubKey{[]byte{125, 80, 29, 208, 159, 53, 119, 198, 73, 53, 187, 33, 199, 144, 62, 255, 1, 235, 117, 96, 128, 211, 17, 45, 34, 64, 189, 165, 33, 182, 54, 206}}
+	addr := pk.Address()
+	require.Len(t, addr, 20, "Address must be 20 bytes long")
+}
+
 func TestPrivKeyEquals(t *testing.T) {
 	ed25519PrivKey := ed25519.GenPrivKey()

@@ -24,11 +24,11 @@ var _ = math.Inf
 // proto package needs to be updated.
 const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package

-// PubKey defines a ed25519 public key
-// Key is the compressed form of the pubkey. The first byte depends is a 0x02 byte
-// if the y-coordinate is the lexicographically largest of the two associated with
-// the x-coordinate. Otherwise the first byte is a 0x03.
-// This prefix is followed with the x-coordinate.
+// PubKey is an ed25519 public key for handling Tendermint keys in SDK.
+// It's needed for Any serialization and SDK compatibility.
+// It must not be used in a non Tendermint key context because it doesn't implement
+// ADR-28. Nevertheless, if you want to use ed25519 at the app user level,
+// then you must create a new proto message and follow ADR-28 for Address construction.
 type PubKey struct {
 	Key crypto_ed25519.PublicKey `protobuf:"bytes,1,opt,name=key,proto3,casttype=crypto/ed25519.PublicKey" json:"key,omitempty"`
 }

@@ -72,7 +72,8 @@ func (m *PubKey) GetKey() crypto_ed25519.PublicKey {
 	return nil
 }

-// PrivKey defines a ed25519 private key.
+// Deprecated: PrivKey defines a ed25519 private key.
+// NOTE: ed25519 keys must not be used in SDK apps except in a tendermint validator context.
 type PrivKey struct {
 	Key crypto_ed25519.PrivateKey `protobuf:"bytes,1,opt,name=key,proto3,casttype=crypto/ed25519.PrivateKey" json:"key,omitempty"`
 }

@@ -1,6 +1,7 @@
 package benchmarking

 import (
+	"crypto/rand"
 	"io"
 	"testing"


@@ -13,22 +14,12 @@ import (
 // Use of this source code is governed by a BSD-style
 // license that can be found at the bottom of this file.

-type zeroReader struct{}
-
-func (zeroReader) Read(buf []byte) (int, error) {
-	for i := range buf {
-		buf[i] = 0
-	}
-	return len(buf), nil
-}
-
 // BenchmarkKeyGeneration benchmarks the given key generation algorithm using
 // a dummy reader.
 func BenchmarkKeyGeneration(b *testing.B, generateKey func(reader io.Reader) types.PrivKey) {
 	b.ReportAllocs()
-	var zero zeroReader
 	for i := 0; i < b.N; i++ {
-		generateKey(zero)
+		generateKey(rand.Reader)
 	}
 }

@@ -0,0 +1,3 @@
+// Package ECDSA implements Cosmos-SDK compatible ECDSA public and private key. The keys
+// can be serialized.
+package ecdsa
@@ -0,0 +1,69 @@
+package ecdsa
+
+import (
+	"crypto/ecdsa"
+	"crypto/elliptic"
+	"crypto/rand"
+	"crypto/sha256"
+	"fmt"
+	"math/big"
+)
+
+// GenPrivKey generates a new ECDSA private key on the given curve. It uses operating system randomness.
+func GenPrivKey(curve elliptic.Curve) (PrivKey, error) {
+	key, err := ecdsa.GenerateKey(curve, rand.Reader)
+	if err != nil {
+		return PrivKey{}, err
+	}
+	return PrivKey{*key}, nil
+}
+
+type PrivKey struct {
+	ecdsa.PrivateKey
+}
+
+// PubKey returns the ECDSA public key associated with this private key.
+func (sk *PrivKey) PubKey() PubKey {
+	return PubKey{sk.PublicKey, nil}
+}
+
+// Bytes serializes the private key using big-endian.
+func (sk *PrivKey) Bytes() []byte {
+	if sk == nil {
+		return nil
+	}
+	fieldSize := (sk.Curve.Params().BitSize + 7) / 8
+	bz := make([]byte, fieldSize)
+	sk.D.FillBytes(bz)
+	return bz
+}
+
+// Sign hashes and signs the message using ECDSA. Implements the SDK PrivKey interface.
+func (sk *PrivKey) Sign(msg []byte) ([]byte, error) {
+	digest := sha256.Sum256(msg)
+	return sk.PrivateKey.Sign(rand.Reader, digest[:], nil)
+}
+
+// String returns a string representation of the private key based on the curve name.
+func (sk *PrivKey) String(name string) string {
+	return name + "{-}"
+}
+
+// MarshalTo implements proto.Marshaler interface.
+func (sk *PrivKey) MarshalTo(dAtA []byte) (int, error) {
+	bz := sk.Bytes()
+	copy(dAtA, bz)
+	return len(bz), nil
+}
+
+// Unmarshal implements proto.Marshaler interface.
+func (sk *PrivKey) Unmarshal(bz []byte, curve elliptic.Curve, expectedSize int) error {
+	if len(bz) != expectedSize {
+		return fmt.Errorf("wrong ECDSA SK bytes, expecting %d bytes", expectedSize)
+	}
+
+	sk.Curve = curve
+	sk.D = new(big.Int).SetBytes(bz)
+	sk.X, sk.Y = curve.ScalarBaseMult(bz)
+	return nil
+}
@@ -0,0 +1,66 @@
+package ecdsa
+
+import (
+	"testing"
+
+	"github.com/tendermint/tendermint/crypto"
+
+	"github.com/stretchr/testify/suite"
+)
+
+func TestSKSuite(t *testing.T) {
+	suite.Run(t, new(SKSuite))
+}
+
+type SKSuite struct{ CommonSuite }
+
+func (suite *SKSuite) TestString() {
+	const prefix = "abc"
+	suite.Require().Equal(prefix+"{-}", suite.sk.String(prefix))
+}
+
+func (suite *SKSuite) TestPubKey() {
+	pk := suite.sk.PubKey()
+	suite.True(suite.sk.PublicKey.Equal(&pk.PublicKey))
+}
+
+func (suite *SKSuite) Bytes() {
+	bz := suite.sk.Bytes()
+	suite.Len(bz, 32)
+	var sk *PrivKey
+	suite.Nil(sk.Bytes())
+}
+
+func (suite *SKSuite) TestMarshal() {
+	require := suite.Require()
+	const size = 32
+
+	var buffer = make([]byte, size)
+	suite.sk.MarshalTo(buffer)
+
+	var sk = new(PrivKey)
+	err := sk.Unmarshal(buffer, secp256r1, size)
+	require.NoError(err)
+	require.True(sk.Equal(&suite.sk.PrivateKey))
+}
+
+func (suite *SKSuite) TestSign() {
+	require := suite.Require()
+
+	msg := crypto.CRandBytes(1000)
+	sig, err := suite.sk.Sign(msg)
+	require.NoError(err)
+	sigCpy := make([]byte, len(sig))
+	copy(sigCpy, sig)
+	require.True(suite.pk.VerifySignature(msg, sigCpy))
+
+	// Mutate the signature
+	for i := range sig {
+		sigCpy[i] ^= byte(i + 1)
+		require.False(suite.pk.VerifySignature(msg, sigCpy))
+	}
+
+	// Mutate the message
+	msg[1] ^= byte(2)
+	require.False(suite.pk.VerifySignature(msg, sig))
+}
@@ -0,0 +1,83 @@
+package ecdsa
+
+import (
+	"crypto/ecdsa"
+	"crypto/elliptic"
+	"crypto/sha256"
+	"encoding/asn1"
+	"fmt"
+	"math/big"
+
+	tmcrypto "github.com/tendermint/tendermint/crypto"
+
+	"github.com/cosmos/cosmos-sdk/types/address"
+	"github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+// signature holds the r and s values of an ECDSA signature.
+type signature struct {
+	R, S *big.Int
+}
+
+type PubKey struct {
+	ecdsa.PublicKey
+
+	// cache
+	address tmcrypto.Address
+}
+
+// Address creates an ADR-28 address for ECDSA keys. protoName is a concrete proto structure id.
+func (pk *PubKey) Address(protoName string) tmcrypto.Address {
+	if pk.address == nil {
+		pk.address = address.Hash(protoName, pk.Bytes())
+	}
+	return pk.address
+}
+
+// Bytes returns the byte representation of the public key using a compressed form
+// specified in section 4.3.6 of ANSI X9.62 with first byte being the curve type.
+func (pk *PubKey) Bytes() []byte {
+	if pk == nil {
+		return nil
+	}
+	return elliptic.MarshalCompressed(pk.Curve, pk.X, pk.Y)
+}
+
+// VerifySignature checks if sig is a valid ECDSA signature for msg.
+func (pk *PubKey) VerifySignature(msg []byte, sig []byte) bool {
+	s := new(signature)
+	if _, err := asn1.Unmarshal(sig, s); err != nil || s == nil {
+		return false
+	}
+
+	h := sha256.Sum256(msg)
+	return ecdsa.Verify(&pk.PublicKey, h[:], s.R, s.S)
+}
+
+// String returns a string representation of the public key based on the curveName.
+func (pk *PubKey) String(curveName string) string {
+	return fmt.Sprintf("%s{%X}", curveName, pk.Bytes())
+}
+
+// **** Proto Marshaler ****
+
+// MarshalTo implements proto.Marshaler interface.
+func (pk *PubKey) MarshalTo(dAtA []byte) (int, error) {
+	bz := pk.Bytes()
+	copy(dAtA, bz)
+	return len(bz), nil
+}
+
+// Unmarshal implements proto.Marshaler interface.
+func (pk *PubKey) Unmarshal(bz []byte, curve elliptic.Curve, expectedSize int) error {
+	if len(bz) != expectedSize {
+		return errors.Wrapf(errors.ErrInvalidPubKey, "wrong ECDSA PK bytes, expecting %d bytes, got %d", expectedSize, len(bz))
+	}
+	cpk := ecdsa.PublicKey{Curve: curve}
+	cpk.X, cpk.Y = elliptic.UnmarshalCompressed(curve, bz)
+	if cpk.X == nil || cpk.Y == nil {
+		return errors.Wrapf(errors.ErrInvalidPubKey, "wrong ECDSA PK bytes, unknown curve type: %d", bz[0])
+	}
+	pk.PublicKey = cpk
+	return nil
+}
@@ -0,0 +1,69 @@
+package ecdsa
+
+import (
+	"crypto/elliptic"
+	"encoding/hex"
+	"testing"
+
+	"github.com/stretchr/testify/suite"
+)
+
+var secp256r1 = elliptic.P256()
+
+func GenSecp256r1() (PrivKey, error) {
+	return GenPrivKey(secp256r1)
+}
+
+func TestPKSuite(t *testing.T) {
+	suite.Run(t, new(PKSuite))
+}
+
+type CommonSuite struct {
+	suite.Suite
+	pk PubKey
+	sk PrivKey
+}
+
+func (suite *CommonSuite) SetupSuite() {
+	sk, err := GenSecp256r1()
+	suite.Require().NoError(err)
+	suite.sk = sk
+	suite.pk = sk.PubKey()
+}
+
+type PKSuite struct{ CommonSuite }
+
+func (suite *PKSuite) TestString() {
+	assert := suite.Assert()
+	require := suite.Require()
+
+	prefix := "abc"
+	pkStr := suite.pk.String(prefix)
+	assert.Equal(prefix+"{", pkStr[:len(prefix)+1])
+	assert.EqualValues('}', pkStr[len(pkStr)-1])
+
+	bz, err := hex.DecodeString(pkStr[len(prefix)+1 : len(pkStr)-1])
+	require.NoError(err)
+	assert.EqualValues(suite.pk.Bytes(), bz)
+}
+
+func (suite *PKSuite) TestBytes() {
+	require := suite.Require()
+	var pk *PubKey
+	require.Nil(pk.Bytes())
+}
+
+func (suite *PKSuite) TestMarshal() {
+	require := suite.Require()
+	const size = 33 // secp256r1 size
+
+	var buffer = make([]byte, size)
+	n, err := suite.pk.MarshalTo(buffer)
+	require.NoError(err)
+	require.Equal(size, n)
+
+	var pk = new(PubKey)
+	err = pk.Unmarshal(buffer, secp256r1, size)
+	require.NoError(err)
+	require.True(pk.PublicKey.Equal(&suite.pk.PublicKey))
+}
@@ -0,0 +1,88 @@
+package multisig
+
+import (
+	types "github.com/cosmos/cosmos-sdk/codec/types"
+	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
+	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
+)
+
+// tmMultisig implements a K of N threshold multisig. It is used for
+// Amino JSON marshaling of LegacyAminoPubKey (see below for details).
+//
+// This struct is copy-pasted from:
+// https://github.com/tendermint/tendermint/blob/v0.33.9/crypto/multisig/threshold_pubkey.go
+//
+// This struct was used in the SDK <=0.39. In 0.40 and the switch to protobuf,
+// it has been converted to LegacyAminoPubKey. However, there's one difference:
+// the threshold field was an `uint` before, and an `uint32` after. This caused
+// amino marshaling to be breaking: amino marshals `uint32` as a JSON number,
+// and `uint` as a JSON string.
+//
+// In this file, we're overriding LegacyAminoPubKey's default JSON Amino
+// marshaling by using this struct. Please note that we are NOT overriding the
+// Amino binary marshaling, as that _might_ introduce breaking changes in the
+// keyring, where multisigs are amino-binary-encoded.
+//
+// ref: https://github.com/cosmos/cosmos-sdk/issues/8776
+type tmMultisig struct {
+	K       uint                 `json:"threshold"`
+	PubKeys []cryptotypes.PubKey `json:"pubkeys"`
+}
+
+// protoToTm converts a LegacyAminoPubKey into a tmMultisig.
+func protoToTm(protoPk *LegacyAminoPubKey) (tmMultisig, error) {
+	var ok bool
+	pks := make([]cryptotypes.PubKey, len(protoPk.PubKeys))
+	for i, pk := range protoPk.PubKeys {
+		pks[i], ok = pk.GetCachedValue().(cryptotypes.PubKey)
+		if !ok {
+			return tmMultisig{}, sdkerrors.Wrapf(sdkerrors.ErrInvalidType, "expected %T, got %T", (cryptotypes.PubKey)(nil), pk.GetCachedValue())
+		}
+	}
+
+	return tmMultisig{
+		K:       uint(protoPk.Threshold),
+		PubKeys: pks,
+	}, nil
+}
+
+// tmToProto converts a tmMultisig into a LegacyAminoPubKey.
+func tmToProto(tmPk tmMultisig) (*LegacyAminoPubKey, error) {
+	var err error
+	pks := make([]*types.Any, len(tmPk.PubKeys))
+	for i, pk := range tmPk.PubKeys {
+		pks[i], err = types.NewAnyWithValue(pk)
+		if err != nil {
+			return nil, err
+		}
+	}
+
+	return &LegacyAminoPubKey{
+		Threshold: uint32(tmPk.K),
+		PubKeys:   pks,
+	}, nil
+}
+
+// MarshalAminoJSON overrides amino JSON marshaling.
+func (m LegacyAminoPubKey) MarshalAminoJSON() (tmMultisig, error) { //nolint:golint
+	return protoToTm(&m)
+}
+
+// UnmarshalAminoJSON overrides amino JSON unmarshaling.
+func (m *LegacyAminoPubKey) UnmarshalAminoJSON(tmPk tmMultisig) error {
+	protoPk, err := tmToProto(tmPk)
+	if err != nil {
+		return err
+	}
+
+	// Instead of just doing `*m = *protoPk`, we prefer to modify in-place the
+	// existing Anys inside `m` (instead of allocating new Anys), so as not to
+	// break the `.compat` fields in the existing Anys.
+	for i := range m.PubKeys {
+		m.PubKeys[i].TypeUrl = protoPk.PubKeys[i].TypeUrl
+		m.PubKeys[i].Value = protoPk.PubKeys[i].Value
+	}
+	m.Threshold = protoPk.Threshold
+
+	return nil
+}
@@ -15,6 +15,9 @@ const (
 	PubKeyAminoRoute = "tendermint/PubKeyMultisigThreshold"
 )

+//nolint
+// Deprecated: Amino is being deprecated in the SDK. But even if you need to
+// use Amino for some reason, please use `codec/legacy.Cdc` instead.
 var AminoCdc = codec.NewLegacyAmino()

 func init() {

@@ -15,6 +15,8 @@ var _ multisigtypes.PubKey = &LegacyAminoPubKey{}
 var _ types.UnpackInterfacesMessage = &LegacyAminoPubKey{}

 // NewLegacyAminoPubKey returns a new LegacyAminoPubKey.
+// Multisig can be constructed with multiple same keys - it will increase the power of
+// the owner of that key (he will still need to add multiple signatures in the right order).
 // Panics if len(pubKeys) < k or 0 >= k.
 func NewLegacyAminoPubKey(k int, pubKeys []cryptotypes.PubKey) *LegacyAminoPubKey {
 	if k <= 0 {

@@ -40,7 +42,11 @@ func (m *LegacyAminoPubKey) Bytes() []byte {
 	return AminoCdc.MustMarshalBinaryBare(m)
 }

-// VerifyMultisignature implements the multisigtypes.PubKey VerifyMultisignature method
+// VerifyMultisignature implements the multisigtypes.PubKey VerifyMultisignature method.
+// The signatures must be added in an order corresponding to the public keys order in
+// LegacyAminoPubKey. It's OK to have multiple same keys in the multisig - it will increase
+// the power of the owner of that key - in that case the signer will still need to append
+// multiple same signatures in the right order.
 func (m *LegacyAminoPubKey) VerifyMultisignature(getSignBytes multisigtypes.GetSignBytesFunc, sig *signing.MultiSignatureData) error {
 	bitarray := sig.BitArray
 	sigs := sig.Signatures

@@ -48,7 +54,7 @@ func (m *LegacyAminoPubKey) VerifyMultisignature(getSignBytes multisigtypes.GetS
 	pubKeys := m.GetPubKeys()
 	// ensure bit array is the correct size
 	if len(pubKeys) != size {
-		return fmt.Errorf("bit array size is incorrect %d", len(pubKeys))
+		return fmt.Errorf("bit array size is incorrect, expecting: %d", len(pubKeys))
 	}
 	// ensure size of signature list
 	if len(sigs) < int(m.Threshold) || len(sigs) > size {

@@ -56,7 +62,7 @@ func (m *LegacyAminoPubKey) VerifyMultisignature(getSignBytes multisigtypes.GetS
 	}
 	// ensure at least k signatures are set
 	if bitarray.NumTrueBitsBefore(size) < int(m.Threshold) {
-		return fmt.Errorf("minimum number of signatures not set, have %d, expected %d", bitarray.NumTrueBitsBefore(size), int(m.Threshold))
+		return fmt.Errorf("not enough signatures set, have %d, expected %d", bitarray.NumTrueBitsBefore(size), int(m.Threshold))
 	}
 	// index in the list of signatures which we are concerned with.
 	sigIndex := 0

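The threshold check in `VerifyMultisignature` counts set bits in the signature bit array and requires at least `k` of them before verifying anything. A minimal sketch of that counting logic (`countTrueBitsBefore` is a hypothetical helper standing in for the SDK's compact `BitArray.NumTrueBitsBefore`):

```go
package main

import "fmt"

// countTrueBitsBefore counts how many of the first `size` slots are set,
// mirroring bitarray.NumTrueBitsBefore(size) in the diff above.
func countTrueBitsBefore(bits []bool, size int) int {
	n := 0
	for i := 0; i < size && i < len(bits); i++ {
		if bits[i] {
			n++
		}
	}
	return n
}

func main() {
	threshold := 2
	// signers 0 and 2 have signed in a 5-key multisig
	set := []bool{true, false, true, false, false}
	ok := countTrueBitsBefore(set, len(set)) >= threshold
	fmt.Println(ok) // true
}
```

The bit array also carries the ordering information: signatures are consumed in public-key order, which is why duplicate keys must be accompanied by the same signature appended at each corresponding index.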
@@ -6,7 +6,9 @@ import (
 	"github.com/stretchr/testify/require"

+	"github.com/cosmos/cosmos-sdk/codec"
+	"github.com/cosmos/cosmos-sdk/codec/legacy"
 	"github.com/cosmos/cosmos-sdk/codec/types"
 	cryptocodec "github.com/cosmos/cosmos-sdk/crypto/codec"
 	kmultisig "github.com/cosmos/cosmos-sdk/crypto/keys/multisig"
 	"github.com/cosmos/cosmos-sdk/crypto/keys/secp256k1"
 	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"

@@ -15,6 +17,15 @@ import (
 	"github.com/cosmos/cosmos-sdk/x/auth/legacy/legacytx"
 )

+func TestNewMultiSig(t *testing.T) {
+	require := require.New(t)
+	pk1 := secp256k1.GenPrivKey().PubKey()
+	pks := []cryptotypes.PubKey{pk1, pk1}
+
+	require.NotNil(kmultisig.NewLegacyAminoPubKey(1, pks),
+		"Should support not unique public keys")
+}
+
 func TestAddress(t *testing.T) {
 	msg := []byte{1, 2, 3, 4}
 	pubKeys, _ := generatePubKeysAndSignatures(5, msg)

@@ -85,21 +96,20 @@ func TestVerifyMultisignature(t *testing.T) {

 	testCases := []struct {
 		msg        string
-		malleate   func()
+		malleate   func(*require.Assertions)
 		expectPass bool
 	}{
 		{
 			"nested multisignature",
-			func() {
+			func(require *require.Assertions) {
 				genPk, genSig := generateNestedMultiSignature(3, msg)
 				sig = genSig
 				pk = genPk
 			},
 			true,
-		},
-		{
+		}, {
 			"wrong size for sig bit array",
-			func() {
+			func(require *require.Assertions) {
 				pubKeys, _ := generatePubKeysAndSignatures(3, msg)
 				pk = kmultisig.NewLegacyAminoPubKey(3, pubKeys)
 				sig = multisig.NewMultisig(1)

@@ -108,7 +118,7 @@ func TestVerifyMultisignature(t *testing.T) {
 		},
 		{
 			"single signature data, expects the first k signatures to be valid",
-			func() {
+			func(require *require.Assertions) {
 				k := 2
 				signingIndices := []int{0, 3, 1}
 				pubKeys, sigs := generatePubKeysAndSignatures(5, msg)

@@ -119,32 +129,26 @@ func TestVerifyMultisignature(t *testing.T) {
 				for i := 0; i < k-1; i++ {
 					signingIndex := signingIndices[i]
 					require.NoError(
-						t,
 						multisig.AddSignatureFromPubKey(sig, sigs[signingIndex], pubKeys[signingIndex], pubKeys),
 					)
 					require.Error(
-						t,
 						pk.VerifyMultisignature(signBytesFn, sig),
 						"multisig passed when i < k, i %d", i,
 					)
 					require.NoError(
-						t,
 						multisig.AddSignatureFromPubKey(sig, sigs[signingIndex], pubKeys[signingIndex], pubKeys),
 					)
 					require.Equal(
-						t,
 						i+1,
 						len(sig.Signatures),
 						"adding a signature for the same pubkey twice increased signature count by 2, index %d", i,
 					)
 				}
 				require.Error(
-					t,
 					pk.VerifyMultisignature(signBytesFn, sig),
 					"multisig passed with k - 1 sigs",
 				)
 				require.NoError(
-					t,
 					multisig.AddSignatureFromPubKey(
 						sig,
 						sigs[signingIndices[k]],

@@ -153,30 +157,50 @@ func TestVerifyMultisignature(t *testing.T) {
 					),
 				)
 				require.NoError(
-					t,
 					pk.VerifyMultisignature(signBytesFn, sig),
 					"multisig failed after k good signatures",
 				)
 			},
 			true,
-		},
-		{
+		}, {
 			"duplicate signatures",
-			func() {
+			func(require *require.Assertions) {
 				pubKeys, sigs := generatePubKeysAndSignatures(5, msg)
 				pk = kmultisig.NewLegacyAminoPubKey(2, pubKeys)
 				sig = multisig.NewMultisig(5)

-				require.Error(t, pk.VerifyMultisignature(signBytesFn, sig))
+				require.Error(pk.VerifyMultisignature(signBytesFn, sig))
 				multisig.AddSignatureFromPubKey(sig, sigs[0], pubKeys[0], pubKeys)
 				// Add second signature manually
 				sig.Signatures = append(sig.Signatures, sigs[0])
 			},
 			false,
-		},
-		{
+		}, {
+			"duplicated key",
+			func(require *require.Assertions) {
+				// here we test an edge case where we create a multi sig with two same
+				// keys. It should work.
+				pubkeys, sigs := generatePubKeysAndSignatures(3, msg)
+				pubkeys[1] = pubkeys[0]
+				pk = kmultisig.NewLegacyAminoPubKey(2, pubkeys)
+				sig = multisig.NewMultisig(len(pubkeys))
+				multisig.AddSignature(sig, sigs[0], 0)
+				multisig.AddSignature(sig, sigs[0], 1)
+			},
+			true,
+		}, {
+			"same key used twice",
+			func(require *require.Assertions) {
+				pubkeys, sigs := generatePubKeysAndSignatures(3, msg)
+				pk = kmultisig.NewLegacyAminoPubKey(2, pubkeys)
+				sig = multisig.NewMultisig(len(pubkeys))
+				multisig.AddSignature(sig, sigs[0], 0)
+				multisig.AddSignature(sig, sigs[0], 1)
+			},
+			false,
+		}, {
 			"unable to verify signature",
-			func() {
+			func(require *require.Assertions) {
 				pubKeys, _ := generatePubKeysAndSignatures(2, msg)
 				_, sigs := generatePubKeysAndSignatures(2, msg)
 				pk = kmultisig.NewLegacyAminoPubKey(2, pubKeys)

@@ -190,7 +214,7 @@ func TestVerifyMultisignature(t *testing.T) {

 	for _, tc := range testCases {
 		t.Run(tc.msg, func(t *testing.T) {
-			tc.malleate()
+			tc.malleate(require.New(t))
 			err := pk.VerifyMultisignature(signBytesFn, sig)
 			if tc.expectPass {
 				require.NoError(t, err)

@@ -250,12 +274,12 @@ func TestPubKeyMultisigThresholdAminoToIface(t *testing.T) {
 	pubkeys, _ := generatePubKeysAndSignatures(5, msg)
 	multisigKey := kmultisig.NewLegacyAminoPubKey(2, pubkeys)

-	ab, err := kmultisig.AminoCdc.MarshalBinaryLengthPrefixed(multisigKey)
+	ab, err := legacy.Cdc.MarshalBinaryLengthPrefixed(multisigKey)
 	require.NoError(t, err)
 	// like other cryptotypes.Pubkey implementations (e.g. ed25519.PubKey),
 	// LegacyAminoPubKey should be deserializable into a cryptotypes.LegacyAminoPubKey:
 	var pubKey kmultisig.LegacyAminoPubKey
-	err = kmultisig.AminoCdc.UnmarshalBinaryLengthPrefixed(ab, &pubKey)
+	err = legacy.Cdc.UnmarshalBinaryLengthPrefixed(ab, &pubKey)
 	require.NoError(t, err)

 	require.Equal(t, multisigKey.Equals(&pubKey), true)

@@ -307,3 +331,75 @@ func reorderPubKey(pk *kmultisig.LegacyAminoPubKey) (other *kmultisig.LegacyAmin
 	other = &kmultisig.LegacyAminoPubKey{Threshold: 2, PubKeys: pubkeysCpy}
 	return
 }
+
+func TestAminoBinary(t *testing.T) {
+	pubKey1 := secp256k1.GenPrivKey().PubKey()
+	pubKey2 := secp256k1.GenPrivKey().PubKey()
+	multisigKey := kmultisig.NewLegacyAminoPubKey(2, []cryptotypes.PubKey{pubKey1, pubKey2})
+
+	// Do a round-trip key->bytes->key.
+	bz, err := legacy.Cdc.MarshalBinaryBare(multisigKey)
+	require.NoError(t, err)
+	var newMultisigKey cryptotypes.PubKey
+	err = legacy.Cdc.UnmarshalBinaryBare(bz, &newMultisigKey)
+	require.NoError(t, err)
+	require.Equal(t, multisigKey.Threshold, newMultisigKey.(*kmultisig.LegacyAminoPubKey).Threshold)
+}
+
+func TestAminoMarshalJSON(t *testing.T) {
+	pubKey1 := secp256k1.GenPrivKey().PubKey()
+	pubKey2 := secp256k1.GenPrivKey().PubKey()
+	multisigKey := kmultisig.NewLegacyAminoPubKey(2, []cryptotypes.PubKey{pubKey1, pubKey2})
+
+	bz, err := legacy.Cdc.MarshalJSON(multisigKey)
+	require.NoError(t, err)
+
+	// Note the quotes around `"2"`. They are present because we are overriding
+	// the Amino JSON marshaling of LegacyAminoPubKey (using tmMultisig).
+	// Without the override, there would not be any quotes.
+	require.Contains(t, string(bz), "\"threshold\":\"2\"")
+}
+
+func TestAminoUnmarshalJSON(t *testing.T) {
+	// This is a real multisig from the Akash chain. It has been exported from
+	// v0.39, hence the `threshold` field as a string.
+	// We are testing that when unmarshaling this JSON into a LegacyAminoPubKey
+	// with amino, there's no error.
+	// ref: https://github.com/cosmos/cosmos-sdk/issues/8776
+	pkJSON := `{
+	"type": "tendermint/PubKeyMultisigThreshold",
+	"value": {
+		"pubkeys": [
+			{
+				"type": "tendermint/PubKeySecp256k1",
+				"value": "AzYxq2VNeD10TyABwOgV36OVWDIMn8AtI4OFA0uQX2MK"
+			},
+			{
+				"type": "tendermint/PubKeySecp256k1",
+				"value": "A39cdsrm00bTeQ3RVZVqjkH8MvIViO9o99c8iLiNO35h"
+			},
+			{
+				"type": "tendermint/PubKeySecp256k1",
+				"value": "A/uLLCZph8MkFg2tCxqSMGwFfPHdt1kkObmmrqy9aiYD"
+			},
+			{
+				"type": "tendermint/PubKeySecp256k1",
+				"value": "A4mOMhM5gPDtBAkAophjRs6uDGZm4tD4Dbok3ai4qJi8"
+			},
+			{
+				"type": "tendermint/PubKeySecp256k1",
+				"value": "A90icFucrjNNz2SAdJWMApfSQcARIqt+M2x++t6w5fFs"
+			}
+		],
+		"threshold": "3"
+	}
+}`
+
+	cdc := codec.NewLegacyAmino()
+	cryptocodec.RegisterCrypto(cdc)
+
+	var pk cryptotypes.PubKey
+	err := cdc.UnmarshalJSON([]byte(pkJSON), &pk)
+	require.NoError(t, err)
+	require.Equal(t, uint32(3), pk.(*kmultisig.LegacyAminoPubKey).Threshold)
+}

@ -151,12 +151,9 @@ func (pubKey *PubKey) Address() crypto.Address {
 		panic("length of pubkey is incorrect")
 	}
 
-	hasherSHA256 := sha256.New()
-	hasherSHA256.Write(pubKey.Key) // does not error
-	sha := hasherSHA256.Sum(nil)
-
+	sha := sha256.Sum256(pubKey.Key)
 	hasherRIPEMD160 := ripemd160.New()
-	hasherRIPEMD160.Write(sha) // does not error
+	hasherRIPEMD160.Write(sha[:]) // does not error
 	return crypto.Address(hasherRIPEMD160.Sum(nil))
 }
@ -0,0 +1,35 @@
// Package secp256r1 implements Cosmos-SDK compatible ECDSA public and private keys. The keys
// can be protobuf serialized and packed in Any.
package secp256r1

import (
	"crypto/elliptic"
	"fmt"

	codectypes "github.com/cosmos/cosmos-sdk/codec/types"
	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
)

const (
	// fieldSize is the curve domain size.
	fieldSize  = 32
	pubKeySize = fieldSize + 1

	name = "secp256r1"
)

var secp256r1 elliptic.Curve

func init() {
	secp256r1 = elliptic.P256()
	// fieldSize must be the ceiling of the curve bit size in bytes;
	// pubKeySize adds 1 for the sign byte of the compressed point.
	expected := (secp256r1.Params().BitSize + 7) / 8
	if expected != fieldSize {
		panic(fmt.Sprintf("Wrong secp256r1 curve fieldSize=%d, expecting=%d", fieldSize, expected))
	}
}

// RegisterInterfaces adds the secp256r1 PubKey to the pubkey registry.
func RegisterInterfaces(registry codectypes.InterfaceRegistry) {
	registry.RegisterImplementations((*cryptotypes.PubKey)(nil), &PubKey{})
}
@ -0,0 +1,503 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: cosmos/crypto/secp256r1/keys.proto

package secp256r1

import (
	fmt "fmt"
	_ "github.com/gogo/protobuf/gogoproto"
	proto "github.com/gogo/protobuf/proto"
	io "io"
	math "math"
	math_bits "math/bits"
)

// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf

// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package

// PubKey defines a secp256r1 ECDSA public key.
type PubKey struct {
	// Point on secp256r1 curve in a compressed representation as specified in section
	// 4.3.6 of ANSI X9.62: https://webstore.ansi.org/standards/ascx9/ansix9621998
	Key *ecdsaPK `protobuf:"bytes,1,opt,name=key,proto3,customtype=ecdsaPK" json:"key,omitempty"`
}

func (m *PubKey) Reset()      { *m = PubKey{} }
func (*PubKey) ProtoMessage() {}
func (*PubKey) Descriptor() ([]byte, []int) {
	return fileDescriptor_b90c18415095c0c3, []int{0}
}
func (m *PubKey) XXX_Unmarshal(b []byte) error {
	return m.Unmarshal(b)
}
func (m *PubKey) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	if deterministic {
		return xxx_messageInfo_PubKey.Marshal(b, m, deterministic)
	} else {
		b = b[:cap(b)]
		n, err := m.MarshalToSizedBuffer(b)
		if err != nil {
			return nil, err
		}
		return b[:n], nil
	}
}
func (m *PubKey) XXX_Merge(src proto.Message) {
	xxx_messageInfo_PubKey.Merge(m, src)
}
func (m *PubKey) XXX_Size() int {
	return m.Size()
}
func (m *PubKey) XXX_DiscardUnknown() {
	xxx_messageInfo_PubKey.DiscardUnknown(m)
}

var xxx_messageInfo_PubKey proto.InternalMessageInfo

func (*PubKey) XXX_MessageName() string {
	return "cosmos.crypto.secp256r1.PubKey"
}

// PrivKey defines a secp256r1 ECDSA private key.
type PrivKey struct {
	// secret number serialized using big-endian encoding
	Secret *ecdsaSK `protobuf:"bytes,1,opt,name=secret,proto3,customtype=ecdsaSK" json:"secret,omitempty"`
}

func (m *PrivKey) Reset()      { *m = PrivKey{} }
func (*PrivKey) ProtoMessage() {}
func (*PrivKey) Descriptor() ([]byte, []int) {
	return fileDescriptor_b90c18415095c0c3, []int{1}
}
func (m *PrivKey) XXX_Unmarshal(b []byte) error {
	return m.Unmarshal(b)
}
func (m *PrivKey) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	if deterministic {
		return xxx_messageInfo_PrivKey.Marshal(b, m, deterministic)
	} else {
		b = b[:cap(b)]
		n, err := m.MarshalToSizedBuffer(b)
		if err != nil {
			return nil, err
		}
		return b[:n], nil
	}
}
func (m *PrivKey) XXX_Merge(src proto.Message) {
	xxx_messageInfo_PrivKey.Merge(m, src)
}
func (m *PrivKey) XXX_Size() int {
	return m.Size()
}
func (m *PrivKey) XXX_DiscardUnknown() {
	xxx_messageInfo_PrivKey.DiscardUnknown(m)
}

var xxx_messageInfo_PrivKey proto.InternalMessageInfo

func (*PrivKey) XXX_MessageName() string {
	return "cosmos.crypto.secp256r1.PrivKey"
}
func init() {
	proto.RegisterType((*PubKey)(nil), "cosmos.crypto.secp256r1.PubKey")
	proto.RegisterType((*PrivKey)(nil), "cosmos.crypto.secp256r1.PrivKey")
}

func init() {
	proto.RegisterFile("cosmos/crypto/secp256r1/keys.proto", fileDescriptor_b90c18415095c0c3)
}

var fileDescriptor_b90c18415095c0c3 = []byte{
	// 221 bytes of a gzipped FileDescriptorProto
	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x52, 0x4a, 0xce, 0x2f, 0xce,
	0xcd, 0x2f, 0xd6, 0x4f, 0x2e, 0xaa, 0x2c, 0x28, 0xc9, 0xd7, 0x2f, 0x4e, 0x4d, 0x2e, 0x30, 0x32,
	0x35, 0x2b, 0x32, 0xd4, 0xcf, 0x4e, 0xad, 0x2c, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x12,
	0x87, 0xa8, 0xd1, 0x83, 0xa8, 0xd1, 0x83, 0xab, 0x91, 0x12, 0x49, 0xcf, 0x4f, 0xcf, 0x07, 0xab,
	0xd1, 0x07, 0xb1, 0x20, 0xca, 0x95, 0xd4, 0xb9, 0xd8, 0x02, 0x4a, 0x93, 0xbc, 0x53, 0x2b, 0x85,
	0x64, 0xb9, 0x98, 0xb3, 0x53, 0x2b, 0x25, 0x18, 0x15, 0x18, 0x35, 0x78, 0x9c, 0xb8, 0x6f, 0xdd,
	0x93, 0x67, 0x4f, 0x4d, 0x4e, 0x29, 0x4e, 0x0c, 0xf0, 0x0e, 0x02, 0x89, 0x2b, 0xe9, 0x71, 0xb1,
	0x07, 0x14, 0x65, 0x96, 0x81, 0x54, 0x2a, 0x73, 0xb1, 0x15, 0xa7, 0x26, 0x17, 0xa5, 0x96, 0x60,
	0x28, 0x0e, 0xf6, 0x0e, 0x82, 0x4a, 0x39, 0x45, 0x9c, 0x78, 0x28, 0xc7, 0x70, 0xe3, 0xa1, 0x1c,
	0xc3, 0x89, 0x47, 0x72, 0x8c, 0x17, 0x1e, 0xc9, 0x31, 0x3e, 0x78, 0x24, 0xc7, 0x38, 0xe1, 0xb1,
	0x1c, 0xc3, 0x89, 0xc7, 0x72, 0x8c, 0x17, 0x1e, 0xcb, 0x31, 0xdc, 0x78, 0x2c, 0xc7, 0x10, 0x65,
	0x94, 0x9e, 0x59, 0x92, 0x51, 0x9a, 0xa4, 0x97, 0x9c, 0x9f, 0xab, 0x0f, 0xf3, 0x1c, 0x98, 0xd2,
	0x2d, 0x4e, 0xc9, 0x86, 0xf9, 0x13, 0xe4, 0x3b, 0x84, 0x67, 0x93, 0xd8, 0xc0, 0x2e, 0x37, 0x06,
	0x04, 0x00, 0x00, 0xff, 0xff, 0xe0, 0x65, 0x08, 0x5c, 0x0e, 0x01, 0x00, 0x00,
}

func (m *PubKey) Marshal() (dAtA []byte, err error) {
	size := m.Size()
	dAtA = make([]byte, size)
	n, err := m.MarshalToSizedBuffer(dAtA[:size])
	if err != nil {
		return nil, err
	}
	return dAtA[:n], nil
}

func (m *PubKey) MarshalTo(dAtA []byte) (int, error) {
	size := m.Size()
	return m.MarshalToSizedBuffer(dAtA[:size])
}

func (m *PubKey) MarshalToSizedBuffer(dAtA []byte) (int, error) {
	i := len(dAtA)
	_ = i
	var l int
	_ = l
	if m.Key != nil {
		{
			size := m.Key.Size()
			i -= size
			if _, err := m.Key.MarshalTo(dAtA[i:]); err != nil {
				return 0, err
			}
			i = encodeVarintKeys(dAtA, i, uint64(size))
		}
		i--
		dAtA[i] = 0xa
	}
	return len(dAtA) - i, nil
}

func (m *PrivKey) Marshal() (dAtA []byte, err error) {
	size := m.Size()
	dAtA = make([]byte, size)
	n, err := m.MarshalToSizedBuffer(dAtA[:size])
	if err != nil {
		return nil, err
	}
	return dAtA[:n], nil
}

func (m *PrivKey) MarshalTo(dAtA []byte) (int, error) {
	size := m.Size()
	return m.MarshalToSizedBuffer(dAtA[:size])
}

func (m *PrivKey) MarshalToSizedBuffer(dAtA []byte) (int, error) {
	i := len(dAtA)
	_ = i
	var l int
	_ = l
	if m.Secret != nil {
		{
			size := m.Secret.Size()
			i -= size
			if _, err := m.Secret.MarshalTo(dAtA[i:]); err != nil {
				return 0, err
			}
			i = encodeVarintKeys(dAtA, i, uint64(size))
		}
		i--
		dAtA[i] = 0xa
	}
	return len(dAtA) - i, nil
}

func encodeVarintKeys(dAtA []byte, offset int, v uint64) int {
	offset -= sovKeys(v)
	base := offset
	for v >= 1<<7 {
		dAtA[offset] = uint8(v&0x7f | 0x80)
		v >>= 7
		offset++
	}
	dAtA[offset] = uint8(v)
	return base
}
func (m *PubKey) Size() (n int) {
	if m == nil {
		return 0
	}
	var l int
	_ = l
	if m.Key != nil {
		l = m.Key.Size()
		n += 1 + l + sovKeys(uint64(l))
	}
	return n
}

func (m *PrivKey) Size() (n int) {
	if m == nil {
		return 0
	}
	var l int
	_ = l
	if m.Secret != nil {
		l = m.Secret.Size()
		n += 1 + l + sovKeys(uint64(l))
	}
	return n
}

func sovKeys(x uint64) (n int) {
	return (math_bits.Len64(x|1) + 6) / 7
}
func sozKeys(x uint64) (n int) {
	return sovKeys(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *PubKey) Unmarshal(dAtA []byte) error {
	l := len(dAtA)
	iNdEx := 0
	for iNdEx < l {
		preIndex := iNdEx
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return ErrIntOverflowKeys
			}
			if iNdEx >= l {
				return io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= uint64(b&0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		fieldNum := int32(wire >> 3)
		wireType := int(wire & 0x7)
		if wireType == 4 {
			return fmt.Errorf("proto: PubKey: wiretype end group for non-group")
		}
		if fieldNum <= 0 {
			return fmt.Errorf("proto: PubKey: illegal tag %d (wire type %d)", fieldNum, wire)
		}
		switch fieldNum {
		case 1:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field Key", wireType)
			}
			var byteLen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowKeys
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				byteLen |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if byteLen < 0 {
				return ErrInvalidLengthKeys
			}
			postIndex := iNdEx + byteLen
			if postIndex < 0 {
				return ErrInvalidLengthKeys
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			var v ecdsaPK
			m.Key = &v
			if err := m.Key.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
				return err
			}
			iNdEx = postIndex
		default:
			iNdEx = preIndex
			skippy, err := skipKeys(dAtA[iNdEx:])
			if err != nil {
				return err
			}
			if (skippy < 0) || (iNdEx+skippy) < 0 {
				return ErrInvalidLengthKeys
			}
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			iNdEx += skippy
		}
	}

	if iNdEx > l {
		return io.ErrUnexpectedEOF
	}
	return nil
}
func (m *PrivKey) Unmarshal(dAtA []byte) error {
	l := len(dAtA)
	iNdEx := 0
	for iNdEx < l {
		preIndex := iNdEx
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return ErrIntOverflowKeys
			}
			if iNdEx >= l {
				return io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= uint64(b&0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		fieldNum := int32(wire >> 3)
		wireType := int(wire & 0x7)
		if wireType == 4 {
			return fmt.Errorf("proto: PrivKey: wiretype end group for non-group")
		}
		if fieldNum <= 0 {
			return fmt.Errorf("proto: PrivKey: illegal tag %d (wire type %d)", fieldNum, wire)
		}
		switch fieldNum {
		case 1:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field Secret", wireType)
			}
			var byteLen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowKeys
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				byteLen |= int(b&0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if byteLen < 0 {
				return ErrInvalidLengthKeys
			}
			postIndex := iNdEx + byteLen
			if postIndex < 0 {
				return ErrInvalidLengthKeys
			}
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			var v ecdsaSK
			m.Secret = &v
			if err := m.Secret.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
				return err
			}
			iNdEx = postIndex
		default:
			iNdEx = preIndex
			skippy, err := skipKeys(dAtA[iNdEx:])
			if err != nil {
				return err
			}
			if (skippy < 0) || (iNdEx+skippy) < 0 {
				return ErrInvalidLengthKeys
			}
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			iNdEx += skippy
		}
	}

	if iNdEx > l {
		return io.ErrUnexpectedEOF
	}
	return nil
}
func skipKeys(dAtA []byte) (n int, err error) {
	l := len(dAtA)
	iNdEx := 0
	depth := 0
	for iNdEx < l {
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return 0, ErrIntOverflowKeys
			}
			if iNdEx >= l {
				return 0, io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= (uint64(b) & 0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		wireType := int(wire & 0x7)
		switch wireType {
		case 0:
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return 0, ErrIntOverflowKeys
				}
				if iNdEx >= l {
					return 0, io.ErrUnexpectedEOF
				}
				iNdEx++
				if dAtA[iNdEx-1] < 0x80 {
					break
				}
			}
		case 1:
			iNdEx += 8
		case 2:
			var length int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return 0, ErrIntOverflowKeys
				}
				if iNdEx >= l {
					return 0, io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				length |= (int(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if length < 0 {
				return 0, ErrInvalidLengthKeys
			}
			iNdEx += length
		case 3:
			depth++
		case 4:
			if depth == 0 {
				return 0, ErrUnexpectedEndOfGroupKeys
			}
			depth--
		case 5:
			iNdEx += 4
		default:
			return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
		}
		if iNdEx < 0 {
			return 0, ErrInvalidLengthKeys
		}
		if depth == 0 {
			return iNdEx, nil
		}
	}
	return 0, io.ErrUnexpectedEOF
}

var (
	ErrInvalidLengthKeys        = fmt.Errorf("proto: negative length found during unmarshaling")
	ErrIntOverflowKeys          = fmt.Errorf("proto: integer overflow")
	ErrUnexpectedEndOfGroupKeys = fmt.Errorf("proto: unexpected end of group")
)
@ -0,0 +1,63 @@
package secp256r1

import (
	"github.com/cosmos/cosmos-sdk/crypto/keys/internal/ecdsa"
	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
)

// GenPrivKey generates a new secp256r1 private key. It uses operating system randomness.
func GenPrivKey() (*PrivKey, error) {
	key, err := ecdsa.GenPrivKey(secp256r1)
	return &PrivKey{&ecdsaSK{key}}, err
}

// PubKey implements SDK PrivKey interface.
func (m *PrivKey) PubKey() cryptotypes.PubKey {
	return &PubKey{&ecdsaPK{m.Secret.PubKey()}}
}

// String implements SDK proto.Message interface.
func (m *PrivKey) String() string {
	return m.Secret.String(name)
}

// Type returns key type name. Implements SDK PrivKey interface.
func (m *PrivKey) Type() string {
	return name
}

// Sign hashes and signs the message using ECDSA. Implements SDK PrivKey interface.
func (m *PrivKey) Sign(msg []byte) ([]byte, error) {
	return m.Secret.Sign(msg)
}

// Bytes serializes the private key.
func (m *PrivKey) Bytes() []byte {
	return m.Secret.Bytes()
}

// Equals implements SDK PrivKey interface.
func (m *PrivKey) Equals(other cryptotypes.LedgerPrivKey) bool {
	sk2, ok := other.(*PrivKey)
	if !ok {
		return false
	}
	return m.Secret.Equal(&sk2.Secret.PrivateKey)
}

type ecdsaSK struct {
	ecdsa.PrivKey
}

// Size implements proto.Marshaler interface
func (sk *ecdsaSK) Size() int {
	if sk == nil {
		return 0
	}
	return fieldSize
}

// Unmarshal implements proto.Marshaler interface
func (sk *ecdsaSK) Unmarshal(bz []byte) error {
	return sk.PrivKey.Unmarshal(bz, secp256r1, fieldSize)
}
@ -0,0 +1,115 @@
package secp256r1

import (
	"testing"

	"github.com/tendermint/tendermint/crypto"

	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/codec/types"
	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
	proto "github.com/gogo/protobuf/proto"
	"github.com/stretchr/testify/suite"
)

var _ cryptotypes.PrivKey = &PrivKey{}

func TestSKSuite(t *testing.T) {
	suite.Run(t, new(SKSuite))
}

type SKSuite struct{ CommonSuite }

func (suite *SKSuite) TestString() {
	suite.Require().Equal("secp256r1{-}", suite.sk.String())
}

func (suite *SKSuite) TestEquals() {
	require := suite.Require()

	skOther, err := GenPrivKey()
	require.NoError(err)
	require.False(suite.sk.Equals(skOther))

	skOther2 := &PrivKey{skOther.Secret}
	require.True(skOther.Equals(skOther2))
	require.True(skOther2.Equals(skOther), "Equals must be reflexive")
}

func (suite *SKSuite) TestPubKey() {
	pk := suite.sk.PubKey()
	suite.True(suite.sk.(*PrivKey).Secret.PublicKey.Equal(&pk.(*PubKey).Key.PublicKey))
}

func (suite *SKSuite) Bytes() {
	bz := suite.sk.Bytes()
	suite.Len(bz, fieldSize)
	var sk *PrivKey
	suite.Nil(sk.Bytes())
}

func (suite *SKSuite) TestMarshalProto() {
	require := suite.Require()

	/**** test structure marshalling ****/

	var sk PrivKey
	bz, err := proto.Marshal(suite.sk)
	require.NoError(err)
	require.NoError(proto.Unmarshal(bz, &sk))
	require.True(sk.Equals(suite.sk))

	/**** test structure marshalling with codec ****/

	sk = PrivKey{}
	registry := types.NewInterfaceRegistry()
	cdc := codec.NewProtoCodec(registry)
	bz, err = cdc.MarshalBinaryBare(suite.sk.(*PrivKey))
	require.NoError(err)
	require.NoError(cdc.UnmarshalBinaryBare(bz, &sk))
	require.True(sk.Equals(suite.sk))

	const bufSize = 100
	bz2 := make([]byte, bufSize)
	skCpy := suite.sk.(*PrivKey)
	_, err = skCpy.MarshalTo(bz2)
	require.NoError(err)
	require.Len(bz2, bufSize)
	require.Equal(bz, bz2[:sk.Size()])

	bz2 = make([]byte, bufSize)
	_, err = skCpy.MarshalToSizedBuffer(bz2)
	require.NoError(err)
	require.Len(bz2, bufSize)
	require.Equal(bz, bz2[(bufSize-sk.Size()):])
}

func (suite *SKSuite) TestSign() {
	require := suite.Require()

	msg := crypto.CRandBytes(1000)
	sig, err := suite.sk.Sign(msg)
	require.NoError(err)
	sigCpy := make([]byte, len(sig))
	copy(sigCpy, sig)
	require.True(suite.pk.VerifySignature(msg, sigCpy))

	// Mutate the signature
	for i := range sig {
		sigCpy[i] ^= byte(i + 1)
		require.False(suite.pk.VerifySignature(msg, sigCpy))
	}

	// Mutate the message
	msg[1] ^= byte(2)
	require.False(suite.pk.VerifySignature(msg, sig))
}

func (suite *SKSuite) TestSize() {
	require := suite.Require()
	var pk ecdsaSK
	require.Equal(pk.Size(), len(suite.sk.Bytes()))

	var nilPk *ecdsaSK
	require.Equal(0, nilPk.Size(), "nil value must have zero size")
}
@ -0,0 +1,60 @@
package secp256r1

import (
	"github.com/gogo/protobuf/proto"
	tmcrypto "github.com/tendermint/tendermint/crypto"

	ecdsa "github.com/cosmos/cosmos-sdk/crypto/keys/internal/ecdsa"
	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
)

// String implements proto.Message interface.
func (m *PubKey) String() string {
	return m.Key.String(name)
}

// Bytes implements SDK PubKey interface.
func (m *PubKey) Bytes() []byte {
	return m.Key.Bytes()
}

// Equals implements SDK PubKey interface.
func (m *PubKey) Equals(other cryptotypes.PubKey) bool {
	pk2, ok := other.(*PubKey)
	if !ok {
		return false
	}
	return m.Key.Equal(&pk2.Key.PublicKey)
}

// Address implements SDK PubKey interface.
func (m *PubKey) Address() tmcrypto.Address {
	return m.Key.Address(proto.MessageName(m))
}

// Type returns key type name. Implements SDK PubKey interface.
func (m *PubKey) Type() string {
	return name
}

// VerifySignature implements SDK PubKey interface.
func (m *PubKey) VerifySignature(msg []byte, sig []byte) bool {
	return m.Key.VerifySignature(msg, sig)
}

type ecdsaPK struct {
	ecdsa.PubKey
}

// Size implements proto.Marshaler interface
func (pk *ecdsaPK) Size() int {
	if pk == nil {
		return 0
	}
	return pubKeySize
}

// Unmarshal implements proto.Marshaler interface
func (pk *ecdsaPK) Unmarshal(bz []byte) error {
	return pk.PubKey.Unmarshal(bz, secp256r1, pubKeySize)
}
@ -0,0 +1,118 @@
package secp256r1

import (
	"testing"

	proto "github.com/gogo/protobuf/proto"
	"github.com/stretchr/testify/suite"

	"github.com/cosmos/cosmos-sdk/codec"
	"github.com/cosmos/cosmos-sdk/codec/types"
	cryptotypes "github.com/cosmos/cosmos-sdk/crypto/types"
)

var _ cryptotypes.PubKey = (*PubKey)(nil)

func TestPKSuite(t *testing.T) {
	suite.Run(t, new(PKSuite))
}

type CommonSuite struct {
	suite.Suite
	pk *PubKey // cryptotypes.PubKey
	sk cryptotypes.PrivKey
}

func (suite *CommonSuite) SetupSuite() {
	sk, err := GenPrivKey()
	suite.Require().NoError(err)
	suite.sk = sk
	suite.pk = sk.PubKey().(*PubKey)
}

type PKSuite struct{ CommonSuite }

func (suite *PKSuite) TestString() {
	require := suite.Require()

	pkStr := suite.pk.String()
	prefix := "secp256r1{"
	require.Equal(prefix, pkStr[:len(prefix)])
}

func (suite *PKSuite) TestType() {
	suite.Require().Equal(name, suite.pk.Type())
}

func (suite *PKSuite) TestEquals() {
	require := suite.Require()

	skOther, err := GenPrivKey()
	require.NoError(err)
	pkOther := skOther.PubKey()
	pkOther2 := &PubKey{&ecdsaPK{skOther.Secret.PubKey()}}

	require.False(suite.pk.Equals(pkOther))
	require.True(pkOther.Equals(pkOther2))
	require.True(pkOther2.Equals(pkOther))
	require.True(pkOther.Equals(pkOther), "Equals must be reflexive")
}

func (suite *PKSuite) TestMarshalProto() {
	require := suite.Require()

	/**** test structure marshalling ****/

	var pk PubKey
	bz, err := proto.Marshal(suite.pk)
	require.NoError(err)
	require.NoError(proto.Unmarshal(bz, &pk))
	require.True(pk.Equals(suite.pk))

	/**** test structure marshalling with codec ****/

	pk = PubKey{}
	registry := types.NewInterfaceRegistry()
	cdc := codec.NewProtoCodec(registry)
	bz, err = cdc.MarshalBinaryBare(suite.pk)
	require.NoError(err)
	require.NoError(cdc.UnmarshalBinaryBare(bz, &pk))
	require.True(pk.Equals(suite.pk))

	const bufSize = 100
	bz2 := make([]byte, bufSize)
	pkCpy := suite.pk
	_, err = pkCpy.MarshalTo(bz2)
	require.NoError(err)
	require.Len(bz2, bufSize)
	require.Equal(bz, bz2[:pk.Size()])

	bz2 = make([]byte, bufSize)
	_, err = pkCpy.MarshalToSizedBuffer(bz2)
	require.NoError(err)
	require.Len(bz2, bufSize)
	require.Equal(bz, bz2[(bufSize-pk.Size()):])

	/**** test interface marshalling ****/
	bz, err = cdc.MarshalInterface(suite.pk)
	require.NoError(err)
	var pkI cryptotypes.PubKey
	err = cdc.UnmarshalInterface(bz, &pkI)
	require.EqualError(err, "no registered implementations of type types.PubKey")

	RegisterInterfaces(registry)
	require.NoError(cdc.UnmarshalInterface(bz, &pkI))
	require.True(pkI.Equals(suite.pk))

	err = cdc.UnmarshalInterface(bz, nil)
	require.Error(err, "nil should fail")
}

func (suite *PKSuite) TestSize() {
	require := suite.Require()
	var pk ecdsaPK
	require.Equal(pk.Size(), len(suite.pk.Bytes()))

	var nilPk *ecdsaPK
	require.Equal(0, nilPk.Size(), "nil value must have zero size")
}
@ -30,14 +30,14 @@ func NewCompactBitArray(bits int) *CompactBitArray {
 func (bA *CompactBitArray) Count() int {
 	if bA == nil {
 		return 0
-	} else if bA.ExtraBitsStored == uint32(0) {
+	} else if bA.ExtraBitsStored == 0 {
 		return len(bA.Elems) * 8
 	}
 
 	return (len(bA.Elems)-1)*8 + int(bA.ExtraBitsStored)
 }
 
-// GetIndex returns the bit at index i within the bit array.
+// GetIndex returns true if the bit at index i is set; returns false otherwise.
 // The behavior is undefined if i >= bA.Count()
 func (bA *CompactBitArray) GetIndex(i int) bool {
 	if bA == nil {

@ -47,11 +47,11 @@ func (bA *CompactBitArray) GetIndex(i int) bool {
 		return false
 	}
 
-	return bA.Elems[i>>3]&(uint8(1)<<uint8(7-(i%8))) > 0
+	return bA.Elems[i>>3]&(1<<uint8(7-(i%8))) > 0
 }
 
-// SetIndex sets the bit at index i within the bit array.
-// The behavior is undefined if i >= bA.Count()
+// SetIndex sets the bit at index i within the bit array. Returns true if and only if the
+// operation succeeded. The behavior is undefined if i >= bA.Count()
 func (bA *CompactBitArray) SetIndex(i int, v bool) bool {
 	if bA == nil {
 		return false

@ -62,9 +62,9 @@ func (bA *CompactBitArray) SetIndex(i int, v bool) bool {
 	}
 
 	if v {
-		bA.Elems[i>>3] |= (uint8(1) << uint8(7-(i%8)))
+		bA.Elems[i>>3] |= (1 << uint8(7-(i%8)))
 	} else {
-		bA.Elems[i>>3] &= ^(uint8(1) << uint8(7-(i%8)))
+		bA.Elems[i>>3] &= ^(1 << uint8(7-(i%8)))
 	}
 
 	return true
@ -75,13 +75,23 @@ func (bA *CompactBitArray) SetIndex(i int, v bool) bool {
 // there are two bits set to true before index 4.
 func (bA *CompactBitArray) NumTrueBitsBefore(index int) int {
 	numTrueValues := 0
-	for i := 0; i < index; i++ {
-		if bA.GetIndex(i) {
-			numTrueValues++
+	max := bA.Count()
+	if index > max {
+		index = max
+	}
+	// below we iterate over the bytes then over bits (in low endian) and count bits set to 1
+	var i = 0
+	for elem := 0; ; elem++ {
+		for b := 7; b >= 0; b-- {
+			if i >= index {
+				return numTrueValues
+			}
+			i++
+			if (bA.Elems[elem]>>b)&1 == 1 {
+				numTrueValues++
+			}
 		}
 	}
-
-	return numTrueValues
 }
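The rewrite above drops the per-bit `GetIndex` calls (each of which recomputes the byte index and mask) in favor of a direct walk over the backing bytes, reading bits most-significant first. A self-contained sketch of that bit walk over a plain `[]byte` instead of the `CompactBitArray` type:

```go
package main

import "fmt"

// numTrueBitsBefore counts bits set to 1 among the first index bits of elems,
// treating bit 7 of each byte as the first bit, mirroring the byte/bit
// iteration order used by CompactBitArray.
func numTrueBitsBefore(elems []byte, index int) int {
	count := 0
	i := 0
	for elem := 0; elem < len(elems); elem++ {
		for b := 7; b >= 0; b-- {
			if i >= index {
				return count
			}
			i++
			if (elems[elem]>>b)&1 == 1 {
				count++
			}
		}
	}
	return count
}

func main() {
	// 0xB0 = 0b10110000: bits 0, 2, and 3 are set.
	fmt.Println(numTrueBitsBefore([]byte{0xB0}, 4)) // 3
}
```

The multisig code below relies on this function to map a pubkey index to a signature slot, so clamping `index` to `Count()` (as the real method does) also removes the out-of-range panic the old per-bit loop could hit.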
 // Copy returns a copy of the provided bit array.

@ -99,6 +109,18 @@ func (bA *CompactBitArray) Copy() *CompactBitArray {
 	}
 }
 
+// Equal checks if both bit arrays are equal. If both arrays are nil then it returns true.
+func (bA *CompactBitArray) Equal(other *CompactBitArray) bool {
+	if bA == other {
+		return true
+	}
+	if bA == nil || other == nil {
+		return false
+	}
+	return bA.ExtraBitsStored == other.ExtraBitsStored &&
+		bytes.Equal(bA.Elems, other.Elems)
+}
+
 // String returns a string representation of CompactBitArray: BA{<bit-string>},
 // where <bit-string> is a sequence of 'x' (1) and '_' (0).
 // The <bit-string> includes spaces and newlines to help people.
@ -36,6 +36,34 @@ func TestNewBitArrayNeverCrashesOnNegatives(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestBitArrayEqual(t *testing.T) {
|
||||
empty := new(CompactBitArray)
|
||||
big1, _ := randCompactBitArray(1000)
|
||||
big1Cpy := *big1
|
||||
big2, _ := randCompactBitArray(1000)
|
||||
big2.SetIndex(500, !big1.GetIndex(500)) // ensure they are different
|
||||
cases := []struct {
|
||||
name string
|
||||
b1 *CompactBitArray
|
||||
b2 *CompactBitArray
|
||||
eq bool
|
||||
}{
|
||||
{name: "both nil are equal", b1: nil, b2: nil, eq: true},
|
||||
{name: "if one is nil then not equal", b1: nil, b2: empty, eq: false},
|
||||
{name: "nil and empty not equal", b1: empty, b2: nil, eq: false},
|
||||
{name: "empty and empty equal", b1: empty, b2: new(CompactBitArray), eq: true},
|
||||
{name: "same bits should be equal", b1: big1, b2: &big1Cpy, eq: true},
|
||||
{name: "different should not be equal", b1: big1, b2: big2, eq: false},
|
||||
}
|
||||
for _, tc := range cases {
|
||||
tc := tc
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
eq := tc.b1.Equal(tc.b2)
|
||||
require.Equal(t, tc.eq, eq)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestJSONMarshalUnmarshal(t *testing.T) {
|
||||
|
||||
bA1 := NewCompactBitArray(0)
|
||||
|
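The `tc := tc` line in the table-driven test above rebinds the loop variable so each `t.Run` closure captures its own copy, which matters whenever subtests run deferred or in parallel (and was required before Go 1.22 changed loop-variable scoping). A standalone sketch of the same capture pattern, with invented names:

```go
package main

import "fmt"

// makeClosures returns one closure per element; the `v := v` rebinding
// gives each closure its own copy of the loop variable rather than a
// shared reference to the final value (pre-Go 1.22 semantics).
func makeClosures() []func() int {
	funcs := make([]func() int, 0, 3)
	for _, v := range []int{1, 2, 3} {
		v := v // rebind, same idea as `tc := tc` in the test
		funcs = append(funcs, func() int { return v })
	}
	return funcs
}

func main() {
	for _, f := range makeClosures() {
		fmt.Println(f()) // 1, 2, 3
	}
}
```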
@@ -200,3 +228,14 @@ func TestCompactBitArrayGetSetIndex(t *testing.T) {
 		}
 	}
 }
+
+func BenchmarkNumTrueBitsBefore(b *testing.B) {
+	ba, _ := randCompactBitArray(100)
+
+	b.Run("new", func(b *testing.B) {
+		b.ReportAllocs()
+		for i := 0; i < b.N; i++ {
+			ba.NumTrueBitsBefore(90)
+		}
+	})
+}
@@ -34,7 +34,8 @@ func getIndex(pk types.PubKey, keys []types.PubKey) int {
 	return -1
 }
 
-// AddSignature adds a signature to the multisig, at the corresponding index.
+// AddSignature adds a signature to the multisig, at the corresponding index. The index must
+// represent the pubkey index in the LegacyAminoPubKey structure, which verifies this signature.
 // If the signature already exists, replace it.
 func AddSignature(mSig *signing.MultiSignatureData, sig signing.SignatureData, index int) {
 	newSigIndex := mSig.BitArray.NumTrueBitsBefore(index)
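The `NumTrueBitsBefore` call is what maps a pubkey's position in the multisig key set to a slot in the compacted signature list: only signers who have actually signed occupy a slot, so the slot for pubkey `i` is the number of set bits before `i`. A small standalone sketch of that mapping (the `sigSlot` helper and bool-slice representation are invented for illustration; the SDK uses the compact bit array):

```go
package main

import "fmt"

// sigSlot returns where the signature for pubkey index i lives in the
// compacted signature list: the count of signers before i who signed.
func sigSlot(signed []bool, i int) int {
	n := 0
	for j := 0; j < i; j++ {
		if signed[j] {
			n++
		}
	}
	return n
}

func main() {
	// pubkeys 0, 2 and 3 have signed; pubkey 1 has not
	signed := []bool{true, false, true, true}
	// pubkey 3's signature sits at slot 2, because two signatures precede it
	fmt.Println(sigSlot(signed, 3)) // 2
}
```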
@@ -42,6 +42,10 @@ module.exports = {
         "label": "v0.39",
         "key": "v0.39"
       },
+      {
+        "label": "v0.42",
+        "key": "v0.42"
+      },
       {
         "label": "master",
         "key": "master"
@@ -2,5 +2,6 @@ export default ({ router }) => {
   router.addRoutes([
     { path: '/master/spec/*', redirect: '/master/modules/' },
     { path: '/master/spec/governance/', redirect: '/master/modules/gov/' },
+    { path: '/v0.41/', redirect: '/v0.42/' },
   ])
 }
@@ -72,5 +72,8 @@ Read about the [PROCESS](./PROCESS.md).
 - [ADR 027: Deterministic Protobuf Serialization](./adr-027-deterministic-protobuf-serialization.md)
 - [ADR 028: Public Key Addresses](./adr-028-public-key-addresses.md)
 - [ADR 032: Typed Events](./adr-032-typed-events.md)
 - [ADR 033: Inter-module RPC](./adr-033-protobuf-inter-module-comm.md)
+- [ADR 035: Rosetta API Support](./adr-035-rosetta-api-support.md)
 - [ADR 037: Governance Split Votes](./adr-037-gov-split-vote.md)
+- [ADR 038: State Listening](./adr-038-state-listening.md)
+- [ADR 039: Epoched Staking](./adr-039-epoched-staking.md)
@@ -7,6 +7,7 @@
 - 2020 Apr 27: Convert usages of `oneof` for interfaces to `Any`
 - 2020 May 15: Describe `cosmos_proto` extensions and amino compatibility
 - 2020 Dec 4: Move and rename `MarshalAny` and `UnmarshalAny` into the `codec.Marshaler` interface.
+- 2021 Feb 24: Remove mentions of `HybridCodec`, which has been abandoned in [#6843](https://github.com/cosmos/cosmos-sdk/pull/6843).
 
 ## Status
@@ -59,24 +60,26 @@ We will adopt [Protocol Buffers](https://developers.google.com/protocol-buffers)
 persisted structured data in the Cosmos SDK while providing a clean mechanism and developer UX for
 applications wishing to continue to use Amino. We will provide this mechanism by updating modules to
 accept a codec interface, `Marshaler`, instead of a concrete Amino codec. Furthermore, the Cosmos SDK
-will provide three concrete implementations of the `Marshaler` interface: `AminoCodec`, `ProtoCodec`,
-and `HybridCodec`.
+will provide two concrete implementations of the `Marshaler` interface: `AminoCodec` and `ProtoCodec`.
 
 - `AminoCodec`: Uses Amino for both binary and JSON encoding.
-- `ProtoCodec`: Uses Protobuf for or both binary and JSON encoding.
-- `HybridCodec`: Uses Amino for JSON encoding and Protobuf for binary encoding.
+- `ProtoCodec`: Uses Protobuf for both binary and JSON encoding.
 
-Until the client migration landscape is fully understood and designed, modules will use a `HybridCodec`
-as the concrete codec it accepts and/or extends. This means that all client JSON encoding, including
-genesis state, will still use Amino. The ultimate goal will be to replace Amino JSON encoding with
-Protbuf encoding and thus have modules accept and/or extend `ProtoCodec`.
+Modules will use whichever codec that is instantiated in the app. By default, the SDK's `simapp`
+instantiates a `ProtoCodec` as the concrete implementation of `Marshaler`, inside the `MakeTestEncodingConfig`
+function. This can be easily overwritten by app developers if they so desire.
+
+The ultimate goal will be to replace Amino JSON encoding with Protobuf encoding and thus have
+modules accept and/or extend `ProtoCodec`. Until then, Amino JSON is still provided for legacy use-cases.
+A handful of places in the SDK still have Amino JSON hardcoded, such as the Legacy API REST endpoints
+and the `x/params` store. They are planned to be converted to Protobuf in a gradual manner.
 
 ### Module Codecs
 
-Modules that do not require the ability to work with and serialize interfaces, the path to Protobuf
+For modules that do not require the ability to work with and serialize interfaces, the path to Protobuf
 migration is pretty straightforward. These modules are to simply migrate any existing types that
 are encoded and persisted via their concrete Amino codec to Protobuf and have their keeper accept a
-`Marshaler` that will be a `HybridCodec`. This migration is simple as things will just work as-is.
+`Marshaler` that will be a `ProtoCodec`. This migration is simple as things will just work as-is.
 
 Note, any business logic that needs to encode primitive types like `bool` or `int64` should use
 [gogoprotobuf](https://github.com/gogo/protobuf) Value types.
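The ADR's core move is depending on a codec interface rather than a concrete codec, so the app decides which implementation gets wired in. A heavily simplified, stdlib-only sketch of that shape (the `Marshaler` interface here is a stand-in; the real SDK interface exposes many more binary and JSON methods for proto messages, and `jsonCodec` merely plays the role of `ProtoCodec`'s JSON path):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Marshaler is a simplified stand-in for the SDK's codec interface.
type Marshaler interface {
	MarshalJSON(v interface{}) ([]byte, error)
}

// jsonCodec is one concrete implementation; an app could swap in another.
type jsonCodec struct{}

func (jsonCodec) MarshalJSON(v interface{}) ([]byte, error) {
	return json.Marshal(v)
}

// Keeper depends only on the interface, so it works with whichever
// concrete codec the app instantiates, as the ADR text describes.
type Keeper struct {
	cdc Marshaler
}

func main() {
	k := Keeper{cdc: jsonCodec{}}
	bz, _ := k.cdc.MarshalJSON(map[string]int{"power": 10})
	fmt.Println(string(bz)) // {"power":10}
}
```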
@@ -207,7 +210,7 @@ Note that `InterfaceRegistry` usage does not deviate from standard protobuf
 usage of `Any`, it just introduces a security and introspection layer for
 golang usage.
 
-`InterfaceRegistry` will be a member of `ProtoCodec` and `HybridCodec` as
+`InterfaceRegistry` will be a member of `ProtoCodec` as
 described above. In order for modules to register interface types, app modules
 can optionally implement the following interface:
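The "security and introspection layer" the text mentions boils down to a whitelist: only type URLs that were explicitly registered can be resolved when unpacking an `Any`. A minimal map-based sketch of that idea, with invented names and a deliberately tiny API (the real `InterfaceRegistry` in the SDK is richer and proto-aware):

```go
package main

import (
	"errors"
	"fmt"
)

// registry is a toy whitelist mapping type URLs to constructors,
// illustrating why unpacking fails for unregistered types.
type registry struct {
	impls map[string]func() interface{}
}

func newRegistry() *registry {
	return &registry{impls: map[string]func() interface{}{}}
}

func (r *registry) register(typeURL string, ctor func() interface{}) {
	r.impls[typeURL] = ctor
}

func (r *registry) resolve(typeURL string) (interface{}, error) {
	ctor, ok := r.impls[typeURL]
	if !ok {
		return nil, errors.New("unregistered type URL: " + typeURL)
	}
	return ctor(), nil
}

type msgSend struct{}

func main() {
	r := newRegistry()
	r.register("/cosmos.bank.v1beta1.MsgSend", func() interface{} { return &msgSend{} })

	v, err := r.resolve("/cosmos.bank.v1beta1.MsgSend")
	fmt.Println(v != nil, err == nil) // true true

	_, err = r.resolve("/unknown.Msg")
	fmt.Println(err) // unregistered type URL: /unknown.Msg
}
```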