Merge pull request #2467 from tendermint/release/v0.25.0
Release/v0.25.0
commit cd172acee8
.gitignore

```diff
@@ -14,6 +14,7 @@ test/p2p/data/
 test/logs
 coverage.txt
 docs/_build
+docs/dist
 *.log
 abci-cli
 docs/node_modules/
@@ -25,6 +26,8 @@ scripts/cutWALUntil/cutWALUntil
 .idea/
 *.iml
 
+.vscode/
 
 libs/pubsub/query/fuzz_test/output
 shunit2
 
@@ -38,4 +41,4 @@ terraform.tfstate
 terraform.tfstate.backup
 terraform.tfstate.d
 
 .vscode
```
CHANGELOG.md

```diff
@@ -1,5 +1,60 @@
 # Changelog
 
+## v0.25.0
+
+*September 22, 2018*
+
+Special thanks to external contributors on this release:
+@scriptionist, @bradyjoestar, @WALL-E
+
+This release is mostly about the ConsensusParams - removing fields and enforcing MaxGas.
+It also addresses some issues found via security audit, removes various unused
+functions from `libs/common`, and implements
+[ADR-012](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-012-peer-transport.md).
+
+Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
+
+BREAKING CHANGES:
+
+* CLI/RPC/Config
+  * [rpc] [\#2391](https://github.com/tendermint/tendermint/issues/2391) /status `result.node_info.other` became a map
+  * [types] [\#2364](https://github.com/tendermint/tendermint/issues/2364) Remove `TxSize` and `BlockGossip` from `ConsensusParams`
+    * Maximum tx size is now set implicitly via the `BlockSize.MaxBytes`
+    * The size of block parts in the consensus is now fixed to 64kB
+
+* Apps
+  * [mempool] [\#2360](https://github.com/tendermint/tendermint/issues/2360) Mempool tracks the `ResponseCheckTx.GasWanted` and
+    `ConsensusParams.BlockSize.MaxGas` and enforces:
+    - `GasWanted <= MaxGas` for every tx
+    - `(sum of GasWanted in block) <= MaxGas` for block proposal
+
+* Go API
+  * [libs/common] [\#2431](https://github.com/tendermint/tendermint/issues/2431) Remove Word256 due to lack of use
+  * [libs/common] [\#2452](https://github.com/tendermint/tendermint/issues/2452) Remove the following functions due to lack of use:
+    * byteslice.go: cmn.IsZeros, cmn.RightPadBytes, cmn.LeftPadBytes, cmn.PrefixEndBytes
+    * strings.go: cmn.IsHex, cmn.StripHex
+    * int.go: Uint64Slice, all put/get int64 methods
+
+FEATURES:
+- [rpc] [\#2415](https://github.com/tendermint/tendermint/issues/2415) New `/consensus_params?height=X` endpoint to query the consensus
+  params at any height (@scriptonist)
+- [types] [\#1714](https://github.com/tendermint/tendermint/issues/1714) Add Address to GenesisValidator
+- [metrics] [\#2337](https://github.com/tendermint/tendermint/issues/2337) `consensus.block_interval_metrics` is now gauge, not histogram (you will be able to see spikes, if any)
+- [libs] [\#2286](https://github.com/tendermint/tendermint/issues/2286) Panic if `autofile` or `db/fsdb` permissions change from 0600.
+
+IMPROVEMENTS:
+- [libs/db] [\#2371](https://github.com/tendermint/tendermint/issues/2371) Output error instead of panic when the given `db_backend` is not initialised (@bradyjoestar)
+- [mempool] [\#2399](https://github.com/tendermint/tendermint/issues/2399) Make mempool cache a proper LRU (@bradyjoestar)
+- [p2p] [\#2126](https://github.com/tendermint/tendermint/issues/2126) Introduce PeerTransport interface to improve isolation of concerns
+- [libs/common] [\#2326](https://github.com/tendermint/tendermint/issues/2326) Service returns ErrNotStarted
+
+BUG FIXES:
+- [node] [\#2294](https://github.com/tendermint/tendermint/issues/2294) Delay starting node until Genesis time
+- [consensus] [\#2048](https://github.com/tendermint/tendermint/issues/2048) Correct peer statistics for marking peer as good
+- [rpc] [\#2460](https://github.com/tendermint/tendermint/issues/2460) StartHTTPAndTLSServer() now passes StartTLS() errors back to the caller rather than hanging forever.
+- [p2p] [\#2047](https://github.com/tendermint/tendermint/issues/2047) Accept new connections asynchronously
+- [tm-bench] [\#2410](https://github.com/tendermint/tendermint/issues/2410) Enforce minimum transaction size (@WALL-E)
+
 ## 0.24.0
 
 *September 6th, 2018*
```
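The two mempool gas rules in the changelog above can be sketched in Go. `Tx`, `CheckTxGas`, and `ReapByGas` are hypothetical names for illustration, not Tendermint's actual API; a `MaxGas` of `-1` is treated as "no limit", matching the release's default of no max.

```go
package main

import "fmt"

// Tx models only the gas a transaction declares via ResponseCheckTx.GasWanted.
type Tx struct{ GasWanted int64 }

// CheckTxGas mirrors the per-tx rule: GasWanted <= MaxGas (-1 means no limit).
func CheckTxGas(tx Tx, maxGas int64) bool {
	return maxGas == -1 || tx.GasWanted <= maxGas
}

// ReapByGas mirrors the proposal rule: keep adding txs while the running
// sum of GasWanted stays within MaxGas.
func ReapByGas(txs []Tx, maxGas int64) []Tx {
	var total int64
	var out []Tx
	for _, tx := range txs {
		if maxGas != -1 && total+tx.GasWanted > maxGas {
			break
		}
		total += tx.GasWanted
		out = append(out, tx)
	}
	return out
}

func main() {
	txs := []Tx{{GasWanted: 4}, {GasWanted: 5}, {GasWanted: 3}}
	fmt.Println(CheckTxGas(txs[0], 10))  // true: 4 <= 10
	fmt.Println(len(ReapByGas(txs, 10))) // 2: 4+5 fits, adding 3 would exceed 10
}
```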
Pending changelog:

```diff
@@ -1,6 +1,6 @@
 # Pending
 
-Special thanks to external contributors with PRs included in this release:
+Special thanks to external contributors on this release:
 
 BREAKING CHANGES:
 
@@ -10,11 +10,6 @@ BREAKING CHANGES:
 
 * Go API
 
-* Blockchain Protocol
-
-* P2P Protocol
-
-
 FEATURES:
 
 IMPROVEMENTS:
```
Makefile

```diff
@@ -23,11 +23,14 @@ check: check_tools get_vendor_deps
 build:
 	CGO_ENABLED=0 go build $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o build/tendermint ./cmd/tendermint/
 
+build_c:
+	CGO_ENABLED=1 go build $(BUILD_FLAGS) -tags "$(BUILD_TAGS) gcc" -o build/tendermint ./cmd/tendermint/
+
 build_race:
 	CGO_ENABLED=0 go build -race $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o build/tendermint ./cmd/tendermint
 
 install:
 	CGO_ENABLED=0 go install $(BUILD_FLAGS) -tags $(BUILD_TAGS) ./cmd/tendermint
 
 ########################################
 ### Protobuf
```
README.md

```diff
@@ -8,7 +8,7 @@ Or [Blockchain](https://en.wikipedia.org/wiki/Blockchain_(database)) for short.
 [![API Reference](
 https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667
 )](https://godoc.org/github.com/tendermint/tendermint)
-[![Go version](https://img.shields.io/badge/go-1.9.2-blue.svg)](https://github.com/moovweb/gvm)
+[![Go version](https://img.shields.io/badge/go-1.10.4-blue.svg)](https://github.com/moovweb/gvm)
 [![riot.im](https://img.shields.io/badge/riot.im-JOIN%20CHAT-green.svg)](https://riot.im/app/#/room/#tendermint:matrix.org)
 [![license](https://img.shields.io/github/license/tendermint/tendermint.svg)](https://github.com/tendermint/tendermint/blob/master/LICENSE)
 [![](https://tokei.rs/b1/github/tendermint/tendermint?category=lines)](https://github.com/tendermint/tendermint)
@@ -22,7 +22,10 @@ develop | [![CircleCI](https://circleci.com/gh/tendermint/tendermint/tree/develop)]
 Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine - written in any programming language -
 and securely replicates it on many machines.
 
-For protocol details, see [the specification](/docs/spec). For a consensus proof and detailed protocol analysis checkout our recent paper, "[The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)".
+For protocol details, see [the specification](/docs/spec).
+
+For detailed analysis of the consensus protocol, including safety and liveness proofs,
+see our recent paper, "[The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)".
 
 ## A Note on Production Readiness
 
@@ -30,7 +33,7 @@ While Tendermint is being used in production in private, permissioned
 environments, we are still working actively to harden and audit it in preparation
 for use in public blockchains, such as the [Cosmos Network](https://cosmos.network/).
 We are also still making breaking changes to the protocol and the APIs.
-Thus we tag the releases as *alpha software*.
+Thus, we tag the releases as *alpha software*.
 
 In any case, if you intend to run Tendermint in production,
 please [contact us](https://riot.im/app/#/room/#tendermint:matrix.org) :)
@@ -46,7 +49,7 @@ For examples of the kinds of bugs we're looking for, see [SECURITY.md](SECURITY.md)
 
 Requirement|Notes
 ---|---
-Go version | Go1.9 or higher
+Go version | Go1.10 or higher
 
 ## Install
 
@@ -54,10 +57,10 @@ See the [install instructions](/docs/introduction/install.md)
 
 ## Quick Start
 
-- [Single node](/docs/using-tendermint.md)
+- [Single node](/docs/tendermint-core/using-tendermint.md)
 - [Local cluster using docker-compose](/networks/local)
 - [Remote cluster using terraform and ansible](/docs/networks/terraform-and-ansible.md)
-- [Join the public testnet](https://cosmos.network/testnet)
+- [Join the Cosmos testnet](https://cosmos.network/testnet)
 
 ## Resources
 
@@ -66,30 +69,31 @@ See the [install instructions](/docs/introduction/install.md)
 For details about the blockchain data structures and the p2p protocols, see the
 the [Tendermint specification](/docs/spec).
 
-For details on using the software, [Read The Docs](https://tendermint.readthedocs.io/en/master/).
-Additional information about some - and eventually all - of the sub-projects below, can be found at Read The Docs.
+For details on using the software, see the [documentation](/docs/) which is also
+hosted at: https://tendermint.com/docs/
+
+### Tools
+
+Benchmarking and monitoring is provided by `tm-bench` and `tm-monitor`, respectively.
+Their code is found [here](/tools) and these binaries need to be built separately.
+Additional documentation is found [here](/docs/tools).
 
 ### Sub-projects
 
 * [Amino](http://github.com/tendermint/go-amino), a reflection-based improvement on proto3
 * [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation
 
-### Tools
-
-* [Deployment, Benchmarking, and Monitoring](http://tendermint.readthedocs.io/projects/tools/en/develop/index.html#tendermint-tools)
-
 ### Applications
 
 * [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
-* [Ethermint](http://github.com/tendermint/ethermint); Ethereum on Tendermint
-* [Many more](https://tendermint.readthedocs.io/en/master/ecosystem.html#abci-applications)
+* [Ethermint](http://github.com/cosmos/ethermint); Ethereum on Tendermint
+* [Many more](https://tendermint.com/ecosystem)
 
-### More
+### Research
 
 * [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
 * [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf)
-* [Tendermint Blog](https://blog.cosmos.network/tendermint/home)
-* [Cosmos Blog](https://blog.cosmos.network)
+* [Blog](https://blog.cosmos.network/tendermint/home)
 
 ## Contributing
 
@@ -114,6 +118,11 @@ CHANGELOG even if they don't lead to MINOR version bumps:
 - rpc/client
 - config
 - node
+- libs/bech32
+- libs/common
+- libs/db
+- libs/errors
+- libs/log
 
 Exported objects in these packages that are not covered by the versioning scheme
 are explicitly marked by `// UNSTABLE` in their go doc comment and may change at any
@@ -130,6 +139,8 @@ data into the new chain.
 However, any bump in the PATCH version should be compatible with existing histories
 (if not please open an [issue](https://github.com/tendermint/tendermint/issues)).
 
+For more information on upgrading, see [here](./UPGRADING.md)
+
 ## Code of Conduct
 
 Please read, understand and adhere to our [code of conduct](CODE_OF_CONDUCT.md).
```
UPGRADING.md

```diff
@@ -3,6 +3,12 @@
 This guide provides steps to be followed when you upgrade your applications to
 a newer version of Tendermint Core.
 
+## v0.25.0
+
+This release has minimal impact.
+
+If you use GasWanted in ABCI and want to enforce it, set the MaxGas in the genesis file (default is no max).
+
 ## v0.24.0
 
 New 0.24.0 release contains a lot of changes to the state and types. It's not
```
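The upgrade note above means editing the consensus params in the genesis file. A sketch of the relevant fragment follows; the key names and the quoted-integer encoding are assumptions based on this release's parameter names, so check a genesis file generated by `tendermint init` for the authoritative shape. The value `100000` is an arbitrary example; `-1` (the default) means no gas limit.

```json
{
  "consensus_params": {
    "block_size": {
      "max_gas": "100000"
    }
  }
}
```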
Vagrantfile

```diff
@@ -29,10 +29,10 @@ Vagrant.configure("2") do |config|
     usermod -a -G docker vagrant
 
     # install go
-    wget -q https://dl.google.com/go/go1.10.1.linux-amd64.tar.gz
-    tar -xvf go1.10.1.linux-amd64.tar.gz
+    wget -q https://dl.google.com/go/go1.11.linux-amd64.tar.gz
+    tar -xvf go1.11.linux-amd64.tar.gz
     mv go /usr/local
-    rm -f go1.10.1.linux-amd64.tar.gz
+    rm -f go1.11.linux-amd64.tar.gz
 
     # cleanup
     apt-get autoremove -y
```
ABCI KVStore example application:

```diff
@@ -88,7 +88,7 @@ func (app *KVStoreApplication) DeliverTx(tx []byte) types.ResponseDeliverTx {
 }
 
 func (app *KVStoreApplication) CheckTx(tx []byte) types.ResponseCheckTx {
-	return types.ResponseCheckTx{Code: code.CodeTypeOK}
+	return types.ResponseCheckTx{Code: code.CodeTypeOK, GasWanted: 1}
 }
 
 func (app *KVStoreApplication) Commit() types.ResponseCommit {
```
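The kvstore change above makes every tx report `GasWanted: 1`, the minimal way to participate in the new MaxGas accounting. A real application would typically derive gas from the transaction itself; the schedule below (flat base cost plus per-byte cost) is a hypothetical illustration, not part of the example app.

```go
package main

import "fmt"

// gasWanted is a hypothetical gas schedule: a flat base cost plus a
// per-byte cost, so larger transactions declare more gas in CheckTx.
func gasWanted(tx []byte) int64 {
	const base, perByte = 1, 10
	return base + int64(len(tx))*perByte
}

func main() {
	fmt.Println(gasWanted([]byte("key=value"))) // 91 for a 9-byte tx
}
```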
(File diff suppressed because it is too large.)
ABCI protobuf types:

```diff
@@ -200,27 +200,21 @@ message ResponseCommit {
 // that can be adjusted by the abci app
 message ConsensusParams {
   BlockSize block_size = 1;
-  TxSize tx_size = 2;
-  BlockGossip block_gossip = 3;
+  EvidenceParams evidence_params = 2;
 }
 
 // BlockSize contains limits on the block size.
 message BlockSize {
-  int32 max_bytes = 1;
+  // Note: must be greater than 0
+  int64 max_bytes = 1;
+  // Note: must be greater or equal to -1
   int64 max_gas = 2;
 }
 
-// TxSize contains limits on the tx size.
-message TxSize {
-  int32 max_bytes = 1;
-  int64 max_gas = 2;
-}
-
-// BlockGossip determine consensus critical
-// elements of how blocks are gossiped
-message BlockGossip {
-  // Note: must not be 0
-  int32 block_part_size_bytes = 1;
-}
+// EvidenceParams contains limits on the evidence.
+message EvidenceParams {
+  // Note: must be greater than 0
+  int64 max_age = 1;
+}
 
 message LastCommitInfo {
```
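The comments added in the proto above state the valid ranges for the new fields. As a sketch, those constraints can be checked like this; `validate` is a hypothetical helper for illustration, not Tendermint's actual parameter validator.

```go
package main

import (
	"errors"
	"fmt"
)

// BlockSize mirrors the proto message and its commented constraints.
type BlockSize struct {
	MaxBytes int64 // must be greater than 0
	MaxGas   int64 // must be greater or equal to -1 (-1 means no limit)
}

// EvidenceParams mirrors the new proto message.
type EvidenceParams struct {
	MaxAge int64 // must be greater than 0
}

// validate enforces the ranges stated in the proto comments.
func validate(bs BlockSize, ep EvidenceParams) error {
	if bs.MaxBytes <= 0 {
		return errors.New("block_size.max_bytes must be greater than 0")
	}
	if bs.MaxGas < -1 {
		return errors.New("block_size.max_gas must be greater or equal to -1")
	}
	if ep.MaxAge <= 0 {
		return errors.New("evidence_params.max_age must be greater than 0")
	}
	return nil
}

func main() {
	err := validate(BlockSize{MaxBytes: 1048576, MaxGas: -1}, EvidenceParams{MaxAge: 100000})
	fmt.Println(err) // nil: all three constraints hold
}
```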
Generated protobuf tests:

```diff
@@ -1534,15 +1534,15 @@ func TestBlockSizeMarshalTo(t *testing.T) {
 	}
 }
 
-func TestTxSizeProto(t *testing.T) {
+func TestEvidenceParamsProto(t *testing.T) {
 	seed := time.Now().UnixNano()
 	popr := math_rand.New(math_rand.NewSource(seed))
-	p := NewPopulatedTxSize(popr, false)
+	p := NewPopulatedEvidenceParams(popr, false)
 	dAtA, err := github_com_gogo_protobuf_proto.Marshal(p)
 	if err != nil {
 		t.Fatalf("seed = %d, err = %v", seed, err)
 	}
-	msg := &TxSize{}
+	msg := &EvidenceParams{}
 	if err := github_com_gogo_protobuf_proto.Unmarshal(dAtA, msg); err != nil {
 		t.Fatalf("seed = %d, err = %v", seed, err)
 	}
@@ -1565,10 +1565,10 @@ func TestTxSizeProto(t *testing.T) {
 	}
 }
 
-func TestTxSizeMarshalTo(t *testing.T) {
+func TestEvidenceParamsMarshalTo(t *testing.T) {
 	seed := time.Now().UnixNano()
 	popr := math_rand.New(math_rand.NewSource(seed))
-	p := NewPopulatedTxSize(popr, false)
+	p := NewPopulatedEvidenceParams(popr, false)
 	size := p.Size()
 	dAtA := make([]byte, size)
 	for i := range dAtA {
@@ -1578,63 +1578,7 @@ func TestTxSizeMarshalTo(t *testing.T) {
 	if err != nil {
 		t.Fatalf("seed = %d, err = %v", seed, err)
 	}
-	msg := &TxSize{}
-	if err := github_com_gogo_protobuf_proto.Unmarshal(dAtA, msg); err != nil {
-		t.Fatalf("seed = %d, err = %v", seed, err)
-	}
-	for i := range dAtA {
-		dAtA[i] = byte(popr.Intn(256))
-	}
-	if !p.Equal(msg) {
-		t.Fatalf("seed = %d, %#v !Proto %#v", seed, msg, p)
-	}
-}
-
-func TestBlockGossipProto(t *testing.T) {
-	seed := time.Now().UnixNano()
-	popr := math_rand.New(math_rand.NewSource(seed))
-	p := NewPopulatedBlockGossip(popr, false)
-	dAtA, err := github_com_gogo_protobuf_proto.Marshal(p)
-	if err != nil {
-		t.Fatalf("seed = %d, err = %v", seed, err)
-	}
-	msg := &BlockGossip{}
-	if err := github_com_gogo_protobuf_proto.Unmarshal(dAtA, msg); err != nil {
-		t.Fatalf("seed = %d, err = %v", seed, err)
-	}
-	littlefuzz := make([]byte, len(dAtA))
-	copy(littlefuzz, dAtA)
-	for i := range dAtA {
-		dAtA[i] = byte(popr.Intn(256))
-	}
-	if !p.Equal(msg) {
-		t.Fatalf("seed = %d, %#v !Proto %#v", seed, msg, p)
-	}
-	if len(littlefuzz) > 0 {
-		fuzzamount := 100
-		for i := 0; i < fuzzamount; i++ {
-			littlefuzz[popr.Intn(len(littlefuzz))] = byte(popr.Intn(256))
-			littlefuzz = append(littlefuzz, byte(popr.Intn(256)))
-		}
-		// shouldn't panic
-		_ = github_com_gogo_protobuf_proto.Unmarshal(littlefuzz, msg)
-	}
-}
-
-func TestBlockGossipMarshalTo(t *testing.T) {
-	seed := time.Now().UnixNano()
-	popr := math_rand.New(math_rand.NewSource(seed))
-	p := NewPopulatedBlockGossip(popr, false)
-	size := p.Size()
-	dAtA := make([]byte, size)
-	for i := range dAtA {
-		dAtA[i] = byte(popr.Intn(256))
-	}
-	_, err := p.MarshalTo(dAtA)
-	if err != nil {
-		t.Fatalf("seed = %d, err = %v", seed, err)
-	}
-	msg := &BlockGossip{}
+	msg := &EvidenceParams{}
 	if err := github_com_gogo_protobuf_proto.Unmarshal(dAtA, msg); err != nil {
 		t.Fatalf("seed = %d, err = %v", seed, err)
 	}
@@ -2636,34 +2580,16 @@ func TestBlockSizeJSON(t *testing.T) {
 		t.Fatalf("seed = %d, %#v !Json Equal %#v", seed, msg, p)
 	}
 }
-func TestTxSizeJSON(t *testing.T) {
+func TestEvidenceParamsJSON(t *testing.T) {
 	seed := time.Now().UnixNano()
 	popr := math_rand.New(math_rand.NewSource(seed))
-	p := NewPopulatedTxSize(popr, true)
+	p := NewPopulatedEvidenceParams(popr, true)
 	marshaler := github_com_gogo_protobuf_jsonpb.Marshaler{}
 	jsondata, err := marshaler.MarshalToString(p)
 	if err != nil {
 		t.Fatalf("seed = %d, err = %v", seed, err)
 	}
-	msg := &TxSize{}
-	err = github_com_gogo_protobuf_jsonpb.UnmarshalString(jsondata, msg)
-	if err != nil {
-		t.Fatalf("seed = %d, err = %v", seed, err)
-	}
-	if !p.Equal(msg) {
-		t.Fatalf("seed = %d, %#v !Json Equal %#v", seed, msg, p)
-	}
-}
-func TestBlockGossipJSON(t *testing.T) {
-	seed := time.Now().UnixNano()
-	popr := math_rand.New(math_rand.NewSource(seed))
-	p := NewPopulatedBlockGossip(popr, true)
-	marshaler := github_com_gogo_protobuf_jsonpb.Marshaler{}
-	jsondata, err := marshaler.MarshalToString(p)
-	if err != nil {
-		t.Fatalf("seed = %d, err = %v", seed, err)
-	}
-	msg := &BlockGossip{}
+	msg := &EvidenceParams{}
 	err = github_com_gogo_protobuf_jsonpb.UnmarshalString(jsondata, msg)
 	if err != nil {
 		t.Fatalf("seed = %d, err = %v", seed, err)
@@ -3590,12 +3516,12 @@ func TestBlockSizeProtoCompactText(t *testing.T) {
 	}
 }
 
-func TestTxSizeProtoText(t *testing.T) {
+func TestEvidenceParamsProtoText(t *testing.T) {
 	seed := time.Now().UnixNano()
 	popr := math_rand.New(math_rand.NewSource(seed))
-	p := NewPopulatedTxSize(popr, true)
+	p := NewPopulatedEvidenceParams(popr, true)
 	dAtA := github_com_gogo_protobuf_proto.MarshalTextString(p)
-	msg := &TxSize{}
+	msg := &EvidenceParams{}
 	if err := github_com_gogo_protobuf_proto.UnmarshalText(dAtA, msg); err != nil {
 		t.Fatalf("seed = %d, err = %v", seed, err)
 	}
@@ -3604,40 +3530,12 @@ func TestTxSizeProtoText(t *testing.T) {
 	}
 }
 
-func TestTxSizeProtoCompactText(t *testing.T) {
+func TestEvidenceParamsProtoCompactText(t *testing.T) {
 	seed := time.Now().UnixNano()
 	popr := math_rand.New(math_rand.NewSource(seed))
-	p := NewPopulatedTxSize(popr, true)
+	p := NewPopulatedEvidenceParams(popr, true)
 	dAtA := github_com_gogo_protobuf_proto.CompactTextString(p)
-	msg := &TxSize{}
-	if err := github_com_gogo_protobuf_proto.UnmarshalText(dAtA, msg); err != nil {
-		t.Fatalf("seed = %d, err = %v", seed, err)
-	}
-	if !p.Equal(msg) {
-		t.Fatalf("seed = %d, %#v !Proto %#v", seed, msg, p)
-	}
-}
-
-func TestBlockGossipProtoText(t *testing.T) {
-	seed := time.Now().UnixNano()
-	popr := math_rand.New(math_rand.NewSource(seed))
-	p := NewPopulatedBlockGossip(popr, true)
-	dAtA := github_com_gogo_protobuf_proto.MarshalTextString(p)
-	msg := &BlockGossip{}
-	if err := github_com_gogo_protobuf_proto.UnmarshalText(dAtA, msg); err != nil {
-		t.Fatalf("seed = %d, err = %v", seed, err)
-	}
-	if !p.Equal(msg) {
-		t.Fatalf("seed = %d, %#v !Proto %#v", seed, msg, p)
-	}
-}
-
-func TestBlockGossipProtoCompactText(t *testing.T) {
-	seed := time.Now().UnixNano()
-	popr := math_rand.New(math_rand.NewSource(seed))
-	p := NewPopulatedBlockGossip(popr, true)
-	dAtA := github_com_gogo_protobuf_proto.CompactTextString(p)
-	msg := &BlockGossip{}
+	msg := &EvidenceParams{}
 	if err := github_com_gogo_protobuf_proto.UnmarshalText(dAtA, msg); err != nil {
 		t.Fatalf("seed = %d, err = %v", seed, err)
 	}
@@ -4492,32 +4390,10 @@ func TestBlockSizeSize(t *testing.T) {
 	}
 }
 
-func TestTxSizeSize(t *testing.T) {
+func TestEvidenceParamsSize(t *testing.T) {
 	seed := time.Now().UnixNano()
 	popr := math_rand.New(math_rand.NewSource(seed))
-	p := NewPopulatedTxSize(popr, true)
-	size2 := github_com_gogo_protobuf_proto.Size(p)
-	dAtA, err := github_com_gogo_protobuf_proto.Marshal(p)
-	if err != nil {
-		t.Fatalf("seed = %d, err = %v", seed, err)
-	}
-	size := p.Size()
-	if len(dAtA) != size {
-		t.Errorf("seed = %d, size %v != marshalled size %v", seed, size, len(dAtA))
-	}
-	if size2 != size {
-		t.Errorf("seed = %d, size %v != before marshal proto.Size %v", seed, size, size2)
-	}
-	size3 := github_com_gogo_protobuf_proto.Size(p)
-	if size3 != size {
-		t.Errorf("seed = %d, size %v != after marshal proto.Size %v", seed, size, size3)
-	}
-}
-
-func TestBlockGossipSize(t *testing.T) {
-	seed := time.Now().UnixNano()
-	popr := math_rand.New(math_rand.NewSource(seed))
-	p := NewPopulatedBlockGossip(popr, true)
+	p := NewPopulatedEvidenceParams(popr, true)
 	size2 := github_com_gogo_protobuf_proto.Size(p)
 	dAtA, err := github_com_gogo_protobuf_proto.Marshal(p)
 	if err != nil {
```
Encoding benchmarks:

```diff
@@ -24,7 +24,10 @@ func BenchmarkEncodeStatusWire(b *testing.B) {
 			Network:    "SOMENAME",
 			ListenAddr: "SOMEADDR",
 			Version:    "SOMEVER",
-			Other:      []string{"SOMESTRING", "OTHERSTRING"},
+			Other: p2p.NodeInfoOther{
+				AminoVersion: "SOMESTRING",
+				P2PVersion:   "OTHERSTRING",
+			},
 		},
 		SyncInfo: ctypes.SyncInfo{
 			LatestBlockHash: []byte("SOMEBYTES"),
@@ -59,7 +62,10 @@ func BenchmarkEncodeNodeInfoWire(b *testing.B) {
 		Network:    "SOMENAME",
 		ListenAddr: "SOMEADDR",
 		Version:    "SOMEVER",
-		Other:      []string{"SOMESTRING", "OTHERSTRING"},
+		Other: p2p.NodeInfoOther{
+			AminoVersion: "SOMESTRING",
+			P2PVersion:   "OTHERSTRING",
+		},
 	}
 	b.StartTimer()
@@ -84,7 +90,10 @@ func BenchmarkEncodeNodeInfoBinary(b *testing.B) {
 		Network:    "SOMENAME",
 		ListenAddr: "SOMEADDR",
 		Version:    "SOMEVER",
-		Other:      []string{"SOMESTRING", "OTHERSTRING"},
+		Other: p2p.NodeInfoOther{
+			AminoVersion: "SOMESTRING",
+			P2PVersion:   "OTHERSTRING",
+		},
 	}
 	b.StartTimer()
```
@@ -290,7 +290,7 @@ FOR_LOOP:
 				didProcessCh <- struct{}{}
 			}

-			firstParts := first.MakePartSet(state.ConsensusParams.BlockPartSizeBytes)
+			firstParts := first.MakePartSet(types.BlockPartSizeBytes)
 			firstPartsHeader := firstParts.Header()
 			firstID := types.BlockID{first.Hash(), firstPartsHeader}
 			// Finally, verify the first block using the second's commit
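The hunk above reflects a v0.25.0 change: the block part size is no longer a per-chain consensus parameter but a fixed `types.BlockPartSizeBytes` constant. As an illustration only (not the real `types.PartSet` implementation, and with a made-up constant value), splitting a block's bytes into fixed-size parts can be sketched like this:

```go
package main

import "fmt"

// blockPartSizeBytes stands in for the idea of a single global part-size
// constant; the value here is illustrative, not taken from the source.
const blockPartSizeBytes = 65536

// makeParts splits raw block bytes into fixed-size chunks, the way a part
// set breaks a block into gossipable pieces. The last part may be shorter.
func makeParts(blockBytes []byte, partSize int) [][]byte {
	var parts [][]byte
	for len(blockBytes) > 0 {
		n := partSize
		if len(blockBytes) < n {
			n = len(blockBytes)
		}
		parts = append(parts, blockBytes[:n])
		blockBytes = blockBytes[n:]
	}
	return parts
}

func main() {
	block := make([]byte, 150000)
	parts := makeParts(block, blockPartSizeBytes)
	fmt.Println(len(parts)) // two full parts plus one partial tail
}
```

Making the part size a constant removes one source of divergence between nodes: every peer slices blocks identically without consulting chain state.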
@@ -42,13 +42,13 @@ func newBlockchainReactor(logger log.Logger, maxBlockHeight int64) *BlockchainRe
 	bcReactor.SetLogger(logger.With("module", "blockchain"))

 	// Next: we need to set a switch in order for peers to be added in
-	bcReactor.Switch = p2p.NewSwitch(cfg.DefaultP2PConfig())
+	bcReactor.Switch = p2p.NewSwitch(cfg.DefaultP2PConfig(), nil)

 	// Lastly: let's add some blocks in
 	for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
 		firstBlock := makeBlock(blockHeight, state)
 		secondBlock := makeBlock(blockHeight+1, state)
-		firstParts := firstBlock.MakePartSet(state.ConsensusParams.BlockGossip.BlockPartSizeBytes)
+		firstParts := firstBlock.MakePartSet(types.BlockPartSizeBytes)
 		blockStore.SaveBlock(firstBlock, firstParts, secondBlock.LastCommit)
 	}
@@ -58,8 +58,9 @@ func initFilesWithConfig(config *cfg.Config) error {
 			ConsensusParams: types.DefaultConsensusParams(),
 		}
 		genDoc.Validators = []types.GenesisValidator{{
-			PubKey: pv.GetPubKey(),
-			Power:  10,
+			Address: pv.GetPubKey().Address(),
+			PubKey:  pv.GetPubKey(),
+			Power:   10,
 		}}

 		if err := genDoc.SaveAs(genFile); err != nil {
@@ -91,9 +91,10 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
 		pvFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidator)
 		pv := privval.LoadFilePV(pvFile)
 		genVals[i] = types.GenesisValidator{
-			PubKey: pv.GetPubKey(),
-			Power:  1,
-			Name:   nodeDirName,
+			Address: pv.GetPubKey().Address(),
+			PubKey:  pv.GetPubKey(),
+			Power:   1,
+			Name:    nodeDirName,
 		}
 	}

@@ -113,7 +113,7 @@ type BaseConfig struct {
 	// and verifying their commits
 	FastSync bool `mapstructure:"fast_sync"`

-	// Database backend: leveldb | memdb
+	// Database backend: leveldb | memdb | cleveldb
 	DBBackend string `mapstructure:"db_backend"`

 	// Database directory
@@ -587,15 +587,15 @@ type TxIndexConfig struct {
 	// Comma-separated list of tags to index (by default the only tag is "tx.hash")
 	//
 	// You can also index transactions by height by adding "tx.height" tag here.
 	//
 	// It's recommended to index only a subset of tags due to possible memory
 	// bloat. This is, of course, depends on the indexer's DB and the volume of
 	// transactions.
 	IndexTags string `mapstructure:"index_tags"`

 	// When set to true, tells indexer to index all tags (predefined tags:
 	// "tx.hash", "tx.height" and all tags from DeliverTx responses).
 	//
 	// Note this may be not desirable (see the comment above). IndexTags has a
 	// precedence over IndexAllTags (i.e. when given both, IndexTags will be
 	// indexed).
@@ -77,7 +77,7 @@ moniker = "{{ .BaseConfig.Moniker }}"
 # and verifying their commits
 fast_sync = {{ .BaseConfig.FastSync }}

-# Database backend: leveldb | memdb
+# Database backend: leveldb | memdb | cleveldb
 db_backend = "{{ .BaseConfig.DBBackend }}"

 # Database directory
@@ -39,7 +39,13 @@ func TestByzantine(t *testing.T) {
 	switches := make([]*p2p.Switch, N)
 	p2pLogger := logger.With("module", "p2p")
 	for i := 0; i < N; i++ {
-		switches[i] = p2p.NewSwitch(config.P2P)
+		switches[i] = p2p.MakeSwitch(
+			config.P2P,
+			i,
+			"foo", "1.0.0",
+			func(i int, sw *p2p.Switch) *p2p.Switch {
+				return sw
+			})
 		switches[i].SetLogger(p2pLogger.With("validator", i))
 	}

@@ -148,7 +148,7 @@ func TestMempoolRmBadTx(t *testing.T) {

 		// check for the tx
 		for {
-			txs := cs.mempool.ReapMaxBytes(len(txBytes))
+			txs := cs.mempool.ReapMaxBytesMaxGas(int64(len(txBytes)), -1)
 			if len(txs) == 0 {
 				emptyMempoolCh <- struct{}{}
 				return
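The hunk above shows the new mempool reap signature: callers now pass both a byte cap and a gas cap, with `-1` disabling a limit (this release enforces `MaxGas`, per the changelog). A minimal, dependency-free sketch of that two-limit selection loop — an illustration of the pattern, with hypothetical `tx` fields, not the real mempool code:

```go
package main

import "fmt"

// tx is a stand-in transaction with a size in bytes and a gas cost.
type tx struct {
	bytes int64
	gas   int64
}

// reapMaxBytesMaxGas collects txs in order until adding the next one would
// exceed maxBytes or maxGas; a limit of -1 means "unlimited" for that axis.
func reapMaxBytesMaxGas(pool []tx, maxBytes, maxGas int64) []tx {
	var picked []tx
	var totalBytes, totalGas int64
	for _, t := range pool {
		if maxBytes > -1 && totalBytes+t.bytes > maxBytes {
			break
		}
		if maxGas > -1 && totalGas+t.gas > maxGas {
			break
		}
		picked = append(picked, t)
		totalBytes += t.bytes
		totalGas += t.gas
	}
	return picked
}

func main() {
	pool := []tx{{100, 10}, {100, 10}, {100, 10}}
	fmt.Println(len(reapMaxBytesMaxGas(pool, 250, -1))) // byte cap stops after 2
	fmt.Println(len(reapMaxBytesMaxGas(pool, -1, 15)))  // gas cap stops after 1
}
```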
@@ -30,7 +30,7 @@ type Metrics struct {
 	ByzantineValidatorsPower metrics.Gauge

 	// Time between this and the last block.
-	BlockIntervalSeconds metrics.Histogram
+	BlockIntervalSeconds metrics.Gauge

 	// Number of transactions.
 	NumTxs metrics.Gauge
@@ -85,11 +85,10 @@ func PrometheusMetrics() *Metrics {
 			Help: "Total power of the byzantine validators.",
 		}, []string{}),

-		BlockIntervalSeconds: prometheus.NewHistogramFrom(stdprometheus.HistogramOpts{
+		BlockIntervalSeconds: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
 			Subsystem: "consensus",
 			Name:      "block_interval_seconds",
 			Help:      "Time between this and the last block.",
-			Buckets:   []float64{1, 2.5, 5, 10, 60},
 		}, []string{}),

 		NumTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
@@ -124,7 +123,7 @@ func NopMetrics() *Metrics {
 		ByzantineValidators:      discard.NewGauge(),
 		ByzantineValidatorsPower: discard.NewGauge(),

-		BlockIntervalSeconds: discard.NewHistogram(),
+		BlockIntervalSeconds: discard.NewGauge(),

 		NumTxs:         discard.NewGauge(),
 		BlockSizeBytes: discard.NewGauge(),
@@ -29,6 +29,7 @@ const (
 	maxMsgSize = 1048576 // 1MB; NOTE/TODO: keep in sync with types.PartSet sizes.

 	blocksToContributeToBecomeGoodPeer = 10000
+	votesToContributeToBecomeGoodPeer  = 10000
 )

 //-----------------------------------------------------------------------------
@@ -60,6 +61,9 @@ func NewConsensusReactor(consensusState *ConsensusState, fastSync bool) *Consens
 func (conR *ConsensusReactor) OnStart() error {
 	conR.Logger.Info("ConsensusReactor ", "fastSync", conR.FastSync())

+	// start routine that computes peer statistics for evaluating peer quality
+	go conR.peerStatsRoutine()
+
 	conR.subscribeToBroadcastEvents()

 	if !conR.FastSync() {
@@ -258,9 +262,7 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
 		ps.ApplyProposalPOLMessage(msg)
 	case *BlockPartMessage:
 		ps.SetHasProposalBlockPart(msg.Height, msg.Round, msg.Part.Index)
-		if numBlocks := ps.RecordBlockPart(msg); numBlocks%blocksToContributeToBecomeGoodPeer == 0 {
-			conR.Switch.MarkPeerAsGood(src)
-		}
 		conR.conS.peerMsgQueue <- msgInfo{msg, src.ID()}
 	default:
 		conR.Logger.Error(fmt.Sprintf("Unknown message type %v", reflect.TypeOf(msg)))
@@ -280,9 +282,6 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
 			ps.EnsureVoteBitArrays(height, valSize)
 			ps.EnsureVoteBitArrays(height-1, lastCommitSize)
 			ps.SetHasVote(msg.Vote)
-			if blocks := ps.RecordVote(msg.Vote); blocks%blocksToContributeToBecomeGoodPeer == 0 {
-				conR.Switch.MarkPeerAsGood(src)
-			}

 			cs.peerMsgQueue <- msgInfo{msg, src.ID()}

@@ -794,6 +793,43 @@ OUTER_LOOP:
 		}
 	}
 }

+func (conR *ConsensusReactor) peerStatsRoutine() {
+	for {
+		if !conR.IsRunning() {
+			conR.Logger.Info("Stopping peerStatsRoutine")
+			return
+		}
+
+		select {
+		case msg := <-conR.conS.statsMsgQueue:
+			// Get peer
+			peer := conR.Switch.Peers().Get(msg.PeerID)
+			if peer == nil {
+				conR.Logger.Debug("Attempt to update stats for non-existent peer",
+					"peer", msg.PeerID)
+				continue
+			}
+			// Get peer state
+			ps := peer.Get(types.PeerStateKey).(*PeerState)
+			switch msg.Msg.(type) {
+			case *VoteMessage:
+				if numVotes := ps.RecordVote(); numVotes%votesToContributeToBecomeGoodPeer == 0 {
+					conR.Switch.MarkPeerAsGood(peer)
+				}
+			case *BlockPartMessage:
+				if numParts := ps.RecordBlockPart(); numParts%blocksToContributeToBecomeGoodPeer == 0 {
+					conR.Switch.MarkPeerAsGood(peer)
+				}
+			}
+		case <-conR.conS.Quit():
+			return
+
+		case <-conR.Quit():
+			return
+		}
+	}
+}
+
 // String returns a string representation of the ConsensusReactor.
 // NOTE: For now, it is just a hard-coded string to avoid accessing unprotected shared variables.
 // TODO: improve!
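The new `peerStatsRoutine` above moves peer-quality accounting out of the hot `Receive` path: the consensus state publishes stats messages on a channel, and a single goroutine drains it, counting contributions per peer and marking a peer "good" every N contributions. A simplified, dependency-free sketch of that pattern (no block parts, no quit channels; `statsMsg` and the names here are illustrative, not the real types):

```go
package main

import "fmt"

// statsMsg mirrors the shape of what the stats routine consumes: which peer
// sent something, and whether it was a vote.
type statsMsg struct {
	peerID string
	isVote bool
}

// runStats drains the channel until it is closed, counting votes per peer
// and invoking markGood every `threshold` votes from the same peer.
func runStats(ch <-chan statsMsg, threshold int, markGood func(string)) {
	votes := map[string]int{}
	for msg := range ch {
		if !msg.isVote {
			continue
		}
		votes[msg.peerID]++
		if votes[msg.peerID]%threshold == 0 {
			markGood(msg.peerID)
		}
	}
}

func main() {
	ch := make(chan statsMsg, 10)
	for i := 0; i < 6; i++ {
		ch <- statsMsg{peerID: "peerA", isVote: true}
	}
	close(ch)
	goodCount := 0
	runStats(ch, 3, func(id string) { goodCount++ })
	fmt.Println(goodCount) // marked good at the 3rd and 6th vote
}
```

Decoupling the bookkeeping from message handling keeps `Receive` lock-light and lets the stats logic evolve independently.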
@@ -836,15 +872,13 @@ type PeerState struct {

 // peerStateStats holds internal statistics for a peer.
 type peerStateStats struct {
-	LastVoteHeight      int64 `json:"last_vote_height"`
-	Votes               int   `json:"votes"`
-	LastBlockPartHeight int64 `json:"last_block_part_height"`
-	BlockParts          int   `json:"block_parts"`
+	Votes      int `json:"votes"`
+	BlockParts int `json:"block_parts"`
 }

 func (pss peerStateStats) String() string {
-	return fmt.Sprintf("peerStateStats{lvh: %d, votes: %d, lbph: %d, blockParts: %d}",
-		pss.LastVoteHeight, pss.Votes, pss.LastBlockPartHeight, pss.BlockParts)
+	return fmt.Sprintf("peerStateStats{votes: %d, blockParts: %d}",
+		pss.Votes, pss.BlockParts)
 }

 // NewPeerState returns a new PeerState for the given Peer
@@ -1080,18 +1114,14 @@ func (ps *PeerState) ensureVoteBitArrays(height int64, numValidators int) {
 	}
 }

-// RecordVote updates internal statistics for this peer by recording the vote.
-// It returns the total number of votes (1 per block). This essentially means
-// the number of blocks for which peer has been sending us votes.
-func (ps *PeerState) RecordVote(vote *types.Vote) int {
+// RecordVote increments internal votes related statistics for this peer.
+// It returns the total number of added votes.
+func (ps *PeerState) RecordVote() int {
 	ps.mtx.Lock()
 	defer ps.mtx.Unlock()

-	if ps.Stats.LastVoteHeight >= vote.Height {
-		return ps.Stats.Votes
-	}
-	ps.Stats.LastVoteHeight = vote.Height
 	ps.Stats.Votes++

 	return ps.Stats.Votes
 }

@@ -1104,25 +1134,17 @@ func (ps *PeerState) VotesSent() int {
 	return ps.Stats.Votes
 }

-// RecordBlockPart updates internal statistics for this peer by recording the
-// block part. It returns the total number of block parts (1 per block). This
-// essentially means the number of blocks for which peer has been sending us
-// block parts.
-func (ps *PeerState) RecordBlockPart(bp *BlockPartMessage) int {
+// RecordBlockPart increments internal block part related statistics for this peer.
+// It returns the total number of added block parts.
+func (ps *PeerState) RecordBlockPart() int {
 	ps.mtx.Lock()
 	defer ps.mtx.Unlock()

-	if ps.Stats.LastBlockPartHeight >= bp.Height {
-		return ps.Stats.BlockParts
-	}
-
-	ps.Stats.LastBlockPartHeight = bp.Height
 	ps.Stats.BlockParts++
 	return ps.Stats.BlockParts
 }

-// BlockPartsSent returns the number of blocks for which peer has been sending
-// us block parts.
+// BlockPartsSent returns the number of useful block parts the peer has sent us.
 func (ps *PeerState) BlockPartsSent() int {
 	ps.mtx.Lock()
 	defer ps.mtx.Unlock()
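The two hunks above strip the per-height deduplication (`LastVoteHeight`, `LastBlockPartHeight`) out of the peer stats: `RecordVote` and `RecordBlockPart` become plain mutex-guarded counters, with height filtering handled elsewhere. The resulting pattern, sketched in isolation (hypothetical `peerStats` type, not the real `PeerState`):

```go
package main

import (
	"fmt"
	"sync"
)

// peerStats sketches the simplified stats the diff introduces: bare counters
// guarded by a mutex, with no per-height deduplication.
type peerStats struct {
	mtx        sync.Mutex
	votes      int
	blockParts int
}

// recordVote increments the vote counter and returns the new total.
func (ps *peerStats) recordVote() int {
	ps.mtx.Lock()
	defer ps.mtx.Unlock()
	ps.votes++
	return ps.votes
}

// recordBlockPart increments the block part counter and returns the new total.
func (ps *peerStats) recordBlockPart() int {
	ps.mtx.Lock()
	defer ps.mtx.Unlock()
	ps.blockParts++
	return ps.blockParts
}

func main() {
	ps := &peerStats{}
	ps.recordVote()
	ps.recordVote()
	ps.recordBlockPart()
	fmt.Println(ps.recordVote()) // third vote recorded
}
```

Returning the new total from each call lets the caller apply the "mark good every N" modulo check without taking the lock a second time.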
@@ -11,20 +11,16 @@ import (
 	"testing"
 	"time"

-	abcicli "github.com/tendermint/tendermint/abci/client"
+	"github.com/tendermint/tendermint/abci/client"
 	"github.com/tendermint/tendermint/abci/example/kvstore"
 	abci "github.com/tendermint/tendermint/abci/types"
 	bc "github.com/tendermint/tendermint/blockchain"
-	cmn "github.com/tendermint/tendermint/libs/common"
+	cfg "github.com/tendermint/tendermint/config"
 	dbm "github.com/tendermint/tendermint/libs/db"
 	"github.com/tendermint/tendermint/libs/log"
 	mempl "github.com/tendermint/tendermint/mempool"
-	sm "github.com/tendermint/tendermint/state"
-	tmtime "github.com/tendermint/tendermint/types/time"
-
-	cfg "github.com/tendermint/tendermint/config"
 	"github.com/tendermint/tendermint/p2p"
-	p2pdummy "github.com/tendermint/tendermint/p2p/dummy"
+	sm "github.com/tendermint/tendermint/state"
 	"github.com/tendermint/tendermint/types"

 	"github.com/stretchr/testify/assert"
@@ -196,7 +192,7 @@ func newMockEvidencePool(val []byte) *mockEvidencePool {
 }

 // NOTE: maxBytes is ignored
-func (m *mockEvidencePool) PendingEvidence(maxBytes int) []types.Evidence {
+func (m *mockEvidencePool) PendingEvidence(maxBytes int64) []types.Evidence {
 	if m.height > 0 {
 		return m.ev
 	}
@@ -246,110 +242,25 @@ func TestReactorProposalHeartbeats(t *testing.T) {
 	}, css)
 }

-// Test we record block parts from other peers
-func TestReactorRecordsBlockParts(t *testing.T) {
-	// create dummy peer
-	peer := p2pdummy.NewPeer()
-	ps := NewPeerState(peer).SetLogger(log.TestingLogger())
-	peer.Set(types.PeerStateKey, ps)
-
-	// create reactor
-	css := randConsensusNet(1, "consensus_reactor_records_block_parts_test", newMockTickerFunc(true), newPersistentKVStore)
-	reactor := NewConsensusReactor(css[0], false) // so we dont start the consensus states
-	reactor.SetEventBus(css[0].eventBus)
-	reactor.SetLogger(log.TestingLogger())
-	sw := p2p.MakeSwitch(cfg.DefaultP2PConfig(), 1, "testing", "123.123.123", func(i int, sw *p2p.Switch) *p2p.Switch { return sw })
-	reactor.SetSwitch(sw)
-	err := reactor.Start()
-	require.NoError(t, err)
-	defer reactor.Stop()
-
-	// 1) new block part
-	parts := types.NewPartSetFromData(cmn.RandBytes(100), 10)
-	msg := &BlockPartMessage{
-		Height: 2,
-		Round:  0,
-		Part:   parts.GetPart(0),
-	}
-	bz, err := cdc.MarshalBinaryBare(msg)
-	require.NoError(t, err)
-
-	reactor.Receive(DataChannel, peer, bz)
-	require.Equal(t, 1, ps.BlockPartsSent(), "number of block parts sent should have increased by 1")
-
-	// 2) block part with the same height, but different round
-	msg.Round = 1
-
-	bz, err = cdc.MarshalBinaryBare(msg)
-	require.NoError(t, err)
-
-	reactor.Receive(DataChannel, peer, bz)
-	require.Equal(t, 1, ps.BlockPartsSent(), "number of block parts sent should stay the same")
-
-	// 3) block part from earlier height
-	msg.Height = 1
-	msg.Round = 0
-
-	bz, err = cdc.MarshalBinaryBare(msg)
-	require.NoError(t, err)
-
-	reactor.Receive(DataChannel, peer, bz)
-	require.Equal(t, 1, ps.BlockPartsSent(), "number of block parts sent should stay the same")
-}
-
-// Test we record votes from other peers.
-func TestReactorRecordsVotes(t *testing.T) {
-	// Create dummy peer.
-	peer := p2pdummy.NewPeer()
-	ps := NewPeerState(peer).SetLogger(log.TestingLogger())
-	peer.Set(types.PeerStateKey, ps)
-
-	// Create reactor.
-	css := randConsensusNet(1, "consensus_reactor_records_votes_test", newMockTickerFunc(true), newPersistentKVStore)
-	reactor := NewConsensusReactor(css[0], false) // so we dont start the consensus states
-	reactor.SetEventBus(css[0].eventBus)
-	reactor.SetLogger(log.TestingLogger())
-	sw := p2p.MakeSwitch(cfg.DefaultP2PConfig(), 1, "testing", "123.123.123", func(i int, sw *p2p.Switch) *p2p.Switch { return sw })
-	reactor.SetSwitch(sw)
-	err := reactor.Start()
-	require.NoError(t, err)
-	defer reactor.Stop()
-	_, val := css[0].state.Validators.GetByIndex(0)
-
-	// 1) new vote
-	vote := &types.Vote{
-		ValidatorIndex:   0,
-		ValidatorAddress: val.Address,
-		Height:           2,
-		Round:            0,
-		Timestamp:        tmtime.Now(),
-		Type:             types.VoteTypePrevote,
-		BlockID:          types.BlockID{},
-	}
-	bz, err := cdc.MarshalBinaryBare(&VoteMessage{vote})
-	require.NoError(t, err)
-
-	reactor.Receive(VoteChannel, peer, bz)
-	assert.Equal(t, 1, ps.VotesSent(), "number of votes sent should have increased by 1")
-
-	// 2) vote with the same height, but different round
-	vote.Round = 1
-
-	bz, err = cdc.MarshalBinaryBare(&VoteMessage{vote})
-	require.NoError(t, err)
-
-	reactor.Receive(VoteChannel, peer, bz)
-	assert.Equal(t, 1, ps.VotesSent(), "number of votes sent should stay the same")
-
-	// 3) vote from earlier height
-	vote.Height = 1
-	vote.Round = 0
-
-	bz, err = cdc.MarshalBinaryBare(&VoteMessage{vote})
-	require.NoError(t, err)
-
-	reactor.Receive(VoteChannel, peer, bz)
-	assert.Equal(t, 1, ps.VotesSent(), "number of votes sent should stay the same")
-}
+// Test we record stats about votes and block parts from other peers.
+func TestReactorRecordsVotesAndBlockParts(t *testing.T) {
+	N := 4
+	css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
+	reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
+	defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
+
+	// wait till everyone makes the first new block
+	timeoutWaitGroup(t, N, func(j int) {
+		<-eventChans[j]
+	}, css)
+
+	// Get peer
+	peer := reactors[1].Switch.Peers().List()[0]
+	// Get peer state
+	ps := peer.Get(types.PeerStateKey).(*PeerState)
+
+	assert.Equal(t, true, ps.VotesSent() > 0, "number of votes sent should have increased")
+	assert.Equal(t, true, ps.BlockPartsSent() > 0, "number of votes sent should have increased")
+}

 //-------------------------------------------------------------
@@ -298,13 +298,18 @@ func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusCo

 	// Create proxyAppConn connection (consensus, mempool, query)
 	clientCreator := proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir())
-	proxyApp := proxy.NewAppConns(clientCreator,
-		NewHandshaker(stateDB, state, blockStore, gdoc))
+	proxyApp := proxy.NewAppConns(clientCreator)
 	err = proxyApp.Start()
 	if err != nil {
 		cmn.Exit(fmt.Sprintf("Error starting proxy app conns: %v", err))
 	}

+	handshaker := NewHandshaker(stateDB, state, blockStore, gdoc)
+	err = handshaker.Handshake(proxyApp)
+	if err != nil {
+		cmn.Exit(fmt.Sprintf("Error on handshake: %v", err))
+	}
+
 	eventBus := types.NewEventBus()
 	if err := eventBus.Start(); err != nil {
 		cmn.Exit(fmt.Sprintf("Failed to start event bus: %v", err))
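The hunk above (and the matching test hunks below) decouples the ABCI handshake from `proxy.NewAppConns`: the app connections are started on their own, then the handshake runs as an explicit second step. A toy sketch of that two-phase ordering, with hypothetical `appConn`/`handshaker` types standing in for the real proxy and handshaker:

```go
package main

import "fmt"

// appConn stands in for the proxy app connections, which must be started
// before anything can talk to the application.
type appConn struct{ started bool }

func (a *appConn) Start() error {
	a.started = true
	return nil
}

// handshaker stands in for the replay/handshake step that now runs
// explicitly after the connections are up, instead of inside NewAppConns.
type handshaker struct{ done bool }

func (h *handshaker) Handshake(a *appConn) error {
	if !a.started {
		return fmt.Errorf("app conns not started")
	}
	h.done = true
	return nil
}

func main() {
	app := &appConn{}
	hs := &handshaker{}
	if err := app.Start(); err != nil {
		panic(err)
	}
	if err := hs.Handshake(app); err != nil {
		panic(err)
	}
	fmt.Println("handshake complete:", hs.done)
}
```

Making the handshake explicit lets callers that do not need replay (benchmarks, some tests) start app connections without constructing a handshaker at all.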
@@ -102,14 +102,6 @@ func TestWALCrash(t *testing.T) {
 		{"empty block",
 			func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {},
 			1},
-		{"block with a smaller part size",
-			func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {
-				// XXX: is there a better way to change BlockPartSizeBytes?
-				cs.state.ConsensusParams.BlockPartSizeBytes = 512
-				sm.SaveState(stateDB, cs.state)
-				go sendTxs(cs, ctx)
-			},
-			1},
 		{"many non-empty blocks",
 			func(stateDB dbm.DB, cs *ConsensusState, ctx context.Context) {
 				go sendTxs(cs, ctx)
@@ -359,7 +351,7 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
 	if nBlocks > 0 {
 		// run nBlocks against a new client to build up the app state.
 		// use a throwaway tendermint state
-		proxyApp := proxy.NewAppConns(clientCreator2, nil)
+		proxyApp := proxy.NewAppConns(clientCreator2)
 		stateDB, state, _ := stateAndStore(config, privVal.GetPubKey())
 		buildAppStateFromChain(proxyApp, stateDB, state, chain, nBlocks, mode)
 	}
@@ -367,11 +359,14 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
 	// now start the app using the handshake - it should sync
 	genDoc, _ := sm.MakeGenesisDocFromFile(config.GenesisFile())
 	handshaker := NewHandshaker(stateDB, state, store, genDoc)
-	proxyApp := proxy.NewAppConns(clientCreator2, handshaker)
+	proxyApp := proxy.NewAppConns(clientCreator2)
 	if err := proxyApp.Start(); err != nil {
 		t.Fatalf("Error starting proxy app connections: %v", err)
 	}
 	defer proxyApp.Stop()
+	if err := handshaker.Handshake(proxyApp); err != nil {
+		t.Fatalf("Error on abci handshake: %v", err)
+	}

 	// get the latest app hash from the app
 	res, err := proxyApp.Query().InfoSync(abci.RequestInfo{Version: ""})
@@ -397,7 +392,7 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
 }

 func applyBlock(stateDB dbm.DB, st sm.State, blk *types.Block, proxyApp proxy.AppConns) sm.State {
-	testPartSize := st.ConsensusParams.BlockPartSizeBytes
+	testPartSize := types.BlockPartSizeBytes
 	blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)

 	blkID := types.BlockID{blk.Hash(), blk.MakePartSet(testPartSize).Header()}
|
||||||
func buildTMStateFromChain(config *cfg.Config, stateDB dbm.DB, state sm.State, chain []*types.Block, mode uint) sm.State {
|
func buildTMStateFromChain(config *cfg.Config, stateDB dbm.DB, state sm.State, chain []*types.Block, mode uint) sm.State {
|
||||||
// run the whole chain against this client to build up the tendermint state
|
// run the whole chain against this client to build up the tendermint state
|
||||||
clientCreator := proxy.NewLocalClientCreator(kvstore.NewPersistentKVStoreApplication(path.Join(config.DBDir(), "1")))
|
clientCreator := proxy.NewLocalClientCreator(kvstore.NewPersistentKVStoreApplication(path.Join(config.DBDir(), "1")))
|
||||||
proxyApp := proxy.NewAppConns(clientCreator, nil) // sm.NewHandshaker(config, state, store, ReplayLastBlock))
|
proxyApp := proxy.NewAppConns(clientCreator) // sm.NewHandshaker(config, state, store, ReplayLastBlock))
|
||||||
if err := proxyApp.Start(); err != nil {
|
if err := proxyApp.Start(); err != nil {
|
||||||
panic(err)
|
panic(err)
|
||||||
}
|
}
|
||||||
|
@@ -620,7 +615,7 @@ func (bs *mockBlockStore) LoadBlock(height int64) *types.Block { return bs.chain
 func (bs *mockBlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
 	block := bs.chain[height-1]
 	return &types.BlockMeta{
-		BlockID: types.BlockID{block.Hash(), block.MakePartSet(bs.params.BlockPartSizeBytes).Header()},
+		BlockID: types.BlockID{block.Hash(), block.MakePartSet(types.BlockPartSizeBytes).Header()},
 		Header:  block.Header,
 	}
 }
@@ -651,11 +646,14 @@ func TestInitChainUpdateValidators(t *testing.T) {
 	// now start the app using the handshake - it should sync
 	genDoc, _ := sm.MakeGenesisDocFromFile(config.GenesisFile())
 	handshaker := NewHandshaker(stateDB, state, store, genDoc)
-	proxyApp := proxy.NewAppConns(clientCreator, handshaker)
+	proxyApp := proxy.NewAppConns(clientCreator)
 	if err := proxyApp.Start(); err != nil {
 		t.Fatalf("Error starting proxy app connections: %v", err)
 	}
 	defer proxyApp.Stop()
+	if err := handshaker.Handshake(proxyApp); err != nil {
+		t.Fatalf("Error on abci handshake: %v", err)
+	}

 	// reload the state, check the validator set was updated
 	state = sm.LoadState(stateDB)
@@ -91,6 +91,10 @@ type ConsensusState struct {
 	internalMsgQueue chan msgInfo
 	timeoutTicker    TimeoutTicker
 
+	// information about added votes and block parts is written on this channel
+	// so statistics can be computed by the reactor
+	statsMsgQueue chan msgInfo
+
 	// we use eventBus to trigger msg broadcasts in the reactor,
 	// and to notify external subscribers, eg. through a websocket
 	eventBus *types.EventBus

@@ -141,6 +145,7 @@ func NewConsensusState(
 		peerMsgQueue:     make(chan msgInfo, msgQueueSize),
 		internalMsgQueue: make(chan msgInfo, msgQueueSize),
 		timeoutTicker:    NewTimeoutTicker(),
+		statsMsgQueue:    make(chan msgInfo, msgQueueSize),
 		done:             make(chan struct{}),
 		doWALCatchup:     true,
 		wal:              nilWAL{},
@@ -639,7 +644,11 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
 		err = cs.setProposal(msg.Proposal)
 	case *BlockPartMessage:
 		// if the proposal is complete, we'll enterPrevote or tryFinalizeCommit
-		_, err = cs.addProposalBlockPart(msg, peerID)
+		added, err := cs.addProposalBlockPart(msg, peerID)
+		if added {
+			cs.statsMsgQueue <- mi
+		}
+
 		if err != nil && msg.Round != cs.Round {
 			cs.Logger.Debug("Received block part from wrong round", "height", cs.Height, "csRound", cs.Round, "blockRound", msg.Round)
 			err = nil

@@ -647,7 +656,11 @@ func (cs *ConsensusState) handleMsg(mi msgInfo) {
 	case *VoteMessage:
 		// attempt to add the vote and dupeout the validator if it's a duplicate signature
 		// if the vote gives us a 2/3-any or 2/3-one, we transition
-		err := cs.tryAddVote(msg.Vote, peerID)
+		added, err := cs.tryAddVote(msg.Vote, peerID)
+		if added {
+			cs.statsMsgQueue <- mi
+		}
+
 		if err == ErrAddingVote {
 			// TODO: punish peer
 			// We probably don't want to stop the peer here. The vote does not
@@ -949,24 +962,21 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
 	}
 
 	maxBytes := cs.state.ConsensusParams.BlockSize.MaxBytes
+	maxGas := cs.state.ConsensusParams.BlockSize.MaxGas
 	// bound evidence to 1/10th of the block
-	evidence := cs.evpool.PendingEvidence(maxBytes / 10)
+	evidence := cs.evpool.PendingEvidence(types.MaxEvidenceBytesPerBlock(maxBytes))
 	// Mempool validated transactions
-	txs := cs.mempool.ReapMaxBytes(maxDataBytes(maxBytes, cs.state.Validators.Size(), len(evidence)))
+	txs := cs.mempool.ReapMaxBytesMaxGas(types.MaxDataBytes(
+		maxBytes,
+		cs.state.Validators.Size(),
+		len(evidence),
+	), maxGas)
 	proposerAddr := cs.privValidator.GetAddress()
 	block, parts := cs.state.MakeBlock(cs.Height, txs, commit, evidence, proposerAddr)
 
 	return block, parts
 }
 
-func maxDataBytes(maxBytes, valsCount, evidenceCount int) int {
-	return maxBytes -
-		types.MaxAminoOverheadForBlock -
-		types.MaxHeaderBytes -
-		(valsCount * types.MaxVoteBytes) -
-		(evidenceCount * types.MaxEvidenceBytes)
-}
-
 // Enter: `timeoutPropose` after entering Propose.
 // Enter: proposal block and POL is ready.
 // Enter: any +2/3 prevotes for future round.
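The hunk above folds the package-local `maxDataBytes` helper into `types.MaxDataBytes`. As a rough sketch of the bound being computed — the bytes left for transactions once the header, last-commit votes, and evidence are reserved — here is a standalone Go version; the constant values are illustrative stand-ins, not the real amino-overhead numbers from the `types` package:

```go
package main

import "fmt"

// Hypothetical stand-ins for the amino-overhead constants; the real values
// live in the types package and differ from these.
const (
	maxAminoOverheadForBlock = 4   // hypothetical
	maxHeaderBytes           = 478 // hypothetical
	maxVoteBytes             = 200 // hypothetical
	maxEvidenceBytes         = 440 // hypothetical
)

// maxDataBytes mirrors the removed helper: subtract the block's fixed
// overhead, one vote per validator, and one slot per piece of evidence.
func maxDataBytes(maxBytes, valsCount, evidenceCount int) int {
	return maxBytes -
		maxAminoOverheadForBlock -
		maxHeaderBytes -
		(valsCount * maxVoteBytes) -
		(evidenceCount * maxEvidenceBytes)
}

func main() {
	// A 1MB block with 4 validators and no evidence.
	fmt.Println(maxDataBytes(1024*1024, 4, 0)) // prints 1047294
}
```

Reaping the mempool with exactly this budget (plus `maxGas`, per the new `ReapMaxBytesMaxGas`) is what keeps a proposed block under `BlockSize.MaxBytes`.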
@@ -1379,7 +1389,7 @@ func (cs *ConsensusState) recordMetrics(height int64, block *types.Block) {
 
 	if height > 1 {
 		lastBlockMeta := cs.blockStore.LoadBlockMeta(height - 1)
-		cs.metrics.BlockIntervalSeconds.Observe(
+		cs.metrics.BlockIntervalSeconds.Set(
 			block.Time.Sub(lastBlockMeta.Header.Time).Seconds(),
 		)
 	}
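The hunk above switches `BlockIntervalSeconds` from a histogram-style `Observe` to a gauge-style `Set`. A minimal sketch of the behavioral difference, using hypothetical stand-in types rather than Tendermint's actual metrics package:

```go
package main

import (
	"fmt"
	"time"
)

// gauge keeps only the most recent value (what Set does).
type gauge struct{ v float64 }

func (g *gauge) Set(v float64) { g.v = v }

// histogram accumulates every observation (what Observe did).
type histogram struct {
	count int
	sum   float64
}

func (h *histogram) Observe(v float64) { h.count++; h.sum += v }

func main() {
	g := &gauge{}
	h := &histogram{}
	prev := time.Unix(100, 0)
	for _, sec := range []int64{101, 103, 106} { // hypothetical block times
		cur := time.Unix(sec, 0)
		interval := cur.Sub(prev).Seconds()
		g.Set(interval)     // gauge: only the latest interval survives
		h.Observe(interval) // histogram: the whole distribution survives
		prev = cur
	}
	fmt.Println(g.v, h.count, h.sum) // prints 3 3 6
}
```

With `Set`, scraping the metric reports the latest block interval rather than a distribution of past intervals.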
@@ -1457,7 +1467,7 @@ func (cs *ConsensusState) addProposalBlockPart(msg *BlockPartMessage, peerID p2p
 		int64(cs.state.ConsensusParams.BlockSize.MaxBytes),
 	)
 	if err != nil {
-		return true, err
+		return added, err
 	}
 	// NOTE: it's possible to receive complete proposal blocks for future rounds without having the proposal
 	cs.Logger.Info("Received complete proposal block", "height", cs.ProposalBlock.Height, "hash", cs.ProposalBlock.Hash())
@@ -1487,35 +1497,35 @@ func (cs *ConsensusState) addProposalBlockPart(msg *BlockPartMessage, peerID p2p
 			// If we're waiting on the proposal block...
 			cs.tryFinalizeCommit(height)
 		}
-		return true, nil
+		return added, nil
 	}
 	return added, nil
 }
 
 // Attempt to add the vote. if it's a duplicate signature, dupeout the validator
-func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerID p2p.ID) error {
-	_, err := cs.addVote(vote, peerID)
+func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerID p2p.ID) (bool, error) {
+	added, err := cs.addVote(vote, peerID)
 	if err != nil {
 		// If the vote height is off, we'll just ignore it,
 		// But if it's a conflicting sig, add it to the cs.evpool.
 		// If it's otherwise invalid, punish peer.
 		if err == ErrVoteHeightMismatch {
-			return err
+			return added, err
 		} else if voteErr, ok := err.(*types.ErrVoteConflictingVotes); ok {
 			if bytes.Equal(vote.ValidatorAddress, cs.privValidator.GetAddress()) {
 				cs.Logger.Error("Found conflicting vote from ourselves. Did you unsafe_reset a validator?", "height", vote.Height, "round", vote.Round, "type", vote.Type)
-				return err
+				return added, err
 			}
 			cs.evpool.AddEvidence(voteErr.DuplicateVoteEvidence)
-			return err
+			return added, err
 		} else {
 			// Probably an invalid signature / Bad peer.
 			// Seems this can also err sometimes with "Unexpected step" - perhaps not from a bad peer ?
 			cs.Logger.Error("Error attempting to add vote", "err", err)
-			return ErrAddingVote
+			return added, ErrAddingVote
 		}
 	}
 	return added, nil
 }
 
 //-----------------------------------------------------------------------------
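The signature changes above thread an `added` flag back to `handleMsg`, which forwards only genuinely new votes and block parts to `statsMsgQueue`. A toy sketch of that pattern — not Tendermint's actual types, and with a hypothetical deduplicating `tryAdd` helper standing in for `addVote`:

```go
package main

import "fmt"

// msgInfo mimics the (message, peer) pair pushed onto the stats channel.
type msgInfo struct {
	Msg    string
	PeerID string
}

// tryAdd mimics the new (added, err) shape: callers learn whether the
// message was genuinely new, independently of whether an error occurred.
func tryAdd(seen map[string]bool, vote string) (bool, error) {
	if seen[vote] {
		return false, nil // duplicate: no error, but not added
	}
	seen[vote] = true
	return true, nil
}

func main() {
	statsMsgQueue := make(chan msgInfo, 2) // buffered, like msgQueueSize
	seen := map[string]bool{}
	for _, v := range []string{"vote-a", "vote-a", "vote-b"} {
		if added, _ := tryAdd(seen, v); added {
			statsMsgQueue <- msgInfo{v, "peer1"} // only new votes feed the stats reactor
		}
	}
	close(statsMsgQueue)
	n := 0
	for range statsMsgQueue {
		n++
	}
	fmt.Println(n) // prints 2: the duplicate was filtered out
}
```

Separating "was it added" from "did it error" is what lets the reactor compute per-peer statistics without counting duplicates.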
@@ -7,9 +7,13 @@ import (
 	"testing"
 	"time"
 
+	"github.com/stretchr/testify/require"
+
 	cstypes "github.com/tendermint/tendermint/consensus/types"
+	cmn "github.com/tendermint/tendermint/libs/common"
 	"github.com/tendermint/tendermint/libs/log"
 	tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
+	p2pdummy "github.com/tendermint/tendermint/p2p/dummy"
 	"github.com/tendermint/tendermint/types"
 )
@@ -184,7 +188,7 @@ func TestStateBadProposal(t *testing.T) {
 	height, round := cs1.Height, cs1.Round
 	vs2 := vss[1]
 
-	partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
+	partSize := types.BlockPartSizeBytes
 
 	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
 	voteCh := subscribe(cs1.eventBus, types.EventQueryVote)

@@ -339,7 +343,7 @@ func TestStateLockNoPOL(t *testing.T) {
 	vs2 := vss[1]
 	height := cs1.Height
 
-	partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
+	partSize := types.BlockPartSizeBytes
 
 	timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
 	timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)

@@ -507,7 +511,7 @@ func TestStateLockPOLRelock(t *testing.T) {
 	cs1, vss := randConsensusState(4)
 	vs2, vs3, vs4 := vss[1], vss[2], vss[3]
 
-	partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
+	partSize := types.BlockPartSizeBytes
 
 	timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
 	timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)

@@ -622,7 +626,7 @@ func TestStateLockPOLUnlock(t *testing.T) {
 	cs1, vss := randConsensusState(4)
 	vs2, vs3, vs4 := vss[1], vss[2], vss[3]
 
-	partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
+	partSize := types.BlockPartSizeBytes
 
 	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
 	timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)

@@ -719,7 +723,7 @@ func TestStateLockPOLSafety1(t *testing.T) {
 	cs1, vss := randConsensusState(4)
 	vs2, vs3, vs4 := vss[1], vss[2], vss[3]
 
-	partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
+	partSize := types.BlockPartSizeBytes
 
 	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
 	timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)

@@ -842,7 +846,7 @@ func TestStateLockPOLSafety2(t *testing.T) {
 	cs1, vss := randConsensusState(4)
 	vs2, vs3, vs4 := vss[1], vss[2], vss[3]
 
-	partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
+	partSize := types.BlockPartSizeBytes
 
 	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
 	timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)

@@ -1021,7 +1025,7 @@ func TestStateHalt1(t *testing.T) {
 	cs1, vss := randConsensusState(4)
 	vs2, vs3, vs4 := vss[1], vss[2], vss[3]
 
-	partSize := cs1.state.ConsensusParams.BlockPartSizeBytes
+	partSize := types.BlockPartSizeBytes
 
 	proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
 	timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)

@@ -1081,6 +1085,80 @@ func TestStateHalt1(t *testing.T) {
 	}
 }
 
+func TestStateOutputsBlockPartsStats(t *testing.T) {
+	// create dummy peer
+	cs, _ := randConsensusState(1)
+	peer := p2pdummy.NewPeer()
+
+	// 1) new block part
+	parts := types.NewPartSetFromData(cmn.RandBytes(100), 10)
+	msg := &BlockPartMessage{
+		Height: 1,
+		Round:  0,
+		Part:   parts.GetPart(0),
+	}
+
+	cs.ProposalBlockParts = types.NewPartSetFromHeader(parts.Header())
+	cs.handleMsg(msgInfo{msg, peer.ID()})
+
+	statsMessage := <-cs.statsMsgQueue
+	require.Equal(t, msg, statsMessage.Msg, "")
+	require.Equal(t, peer.ID(), statsMessage.PeerID, "")
+
+	// sending the same part from a different peer
+	cs.handleMsg(msgInfo{msg, "peer2"})
+
+	// sending the part with the same height, but different round
+	msg.Round = 1
+	cs.handleMsg(msgInfo{msg, peer.ID()})
+
+	// sending the part from the smaller height
+	msg.Height = 0
+	cs.handleMsg(msgInfo{msg, peer.ID()})
+
+	// sending the part from the bigger height
+	msg.Height = 3
+	cs.handleMsg(msgInfo{msg, peer.ID()})
+
+	select {
+	case <-cs.statsMsgQueue:
+		t.Errorf("Should not output stats message after receiving the known block part!")
+	case <-time.After(50 * time.Millisecond):
+	}
+}
+
+func TestStateOutputVoteStats(t *testing.T) {
+	cs, vss := randConsensusState(2)
+	// create dummy peer
+	peer := p2pdummy.NewPeer()
+
+	vote := signVote(vss[1], types.VoteTypePrecommit, []byte("test"), types.PartSetHeader{})
+
+	voteMessage := &VoteMessage{vote}
+	cs.handleMsg(msgInfo{voteMessage, peer.ID()})
+
+	statsMessage := <-cs.statsMsgQueue
+	require.Equal(t, voteMessage, statsMessage.Msg, "")
+	require.Equal(t, peer.ID(), statsMessage.PeerID, "")
+
+	// sending the same vote from a different peer
+	cs.handleMsg(msgInfo{&VoteMessage{vote}, "peer2"})
+
+	// sending the vote for the bigger height
+	incrementHeight(vss[1])
+	vote = signVote(vss[1], types.VoteTypePrecommit, []byte("test"), types.PartSetHeader{})
+
+	cs.handleMsg(msgInfo{&VoteMessage{vote}, peer.ID()})
+
+	select {
+	case <-cs.statsMsgQueue:
+		t.Errorf("Should not output stats message after receiving the known vote or vote from bigger height")
+	case <-time.After(50 * time.Millisecond):
+	}
+}
+
 // subscribe subscribes test client to the given query and returns a channel with cap = 1.
 func subscribe(eventBus *types.EventBus, q tmpubsub.Query) <-chan interface{} {
 	out := make(chan interface{}, 1)
@@ -18,7 +18,7 @@ import (
 )
 
 const (
-	// must be greater than params.BlockGossip.BlockPartSizeBytes + a few bytes
+	// must be greater than types.BlockPartSizeBytes + a few bytes
 	maxMsgSizeBytes = 1024 * 1024 // 1MB
 )
@@ -52,13 +52,13 @@ func WALWithNBlocks(numBlocks int) (data []byte, err error) {
 		return nil, errors.Wrap(err, "failed to make genesis state")
 	}
 	blockStore := bc.NewBlockStore(blockStoreDB)
-	handshaker := NewHandshaker(stateDB, state, blockStore, genDoc)
-	proxyApp := proxy.NewAppConns(proxy.NewLocalClientCreator(app), handshaker)
+	proxyApp := proxy.NewAppConns(proxy.NewLocalClientCreator(app))
 	proxyApp.SetLogger(logger.With("module", "proxy"))
 	if err := proxyApp.Start(); err != nil {
 		return nil, errors.Wrap(err, "failed to start proxy app connections")
 	}
 	defer proxyApp.Stop()
 
 	eventBus := types.NewEventBus()
 	eventBus.SetLogger(logger.With("module", "events"))
 	if err := eventBus.Start(); err != nil {
@@ -27,6 +27,7 @@ module.exports = {
         "/tendermint-core/configuration",
         "/tendermint-core/rpc",
         "/tendermint-core/running-in-production",
+        "/tendermint-core/fast-sync",
         "/tendermint-core/how-to-read-logs",
         "/tendermint-core/block-structure",
         "/tendermint-core/light-client-protocol",

@@ -36,21 +37,23 @@ module.exports = {
       ]
     },
     {
-      title: "Tendermint Tools",
+      title: "Tools",
       collapsable: false,
-      children: ["tools/benchmarking", "tools/monitoring"]
+      children: [
+        "tools/benchmarking",
+        "tools/monitoring"
+      ]
     },
     {
-      title: "Tendermint Networks",
+      title: "Networks",
       collapsable: false,
       children: [
         "/networks/deploy-testnets",
         "/networks/terraform-and-ansible",
-        "/networks/fast-sync"
       ]
     },
     {
-      title: "Application Development",
+      title: "Apps",
       collapsable: false,
       children: [
         "/app-dev/getting-started",

@@ -63,6 +66,38 @@ module.exports = {
         "/app-dev/ecosystem"
       ]
     },
+    {
+      title: "Tendermint Spec",
+      collapsable: true,
+      children: [
+        "/spec/",
+        "/spec/blockchain/blockchain",
+        "/spec/blockchain/encoding",
+        "/spec/blockchain/state",
+        "/spec/software/abci",
+        "/spec/consensus/bft-time",
+        "/spec/consensus/consensus",
+        "/spec/consensus/light-client",
+        "/spec/software/wal",
+        "/spec/p2p/config",
+        "/spec/p2p/connection",
+        "/spec/p2p/node",
+        "/spec/p2p/peer",
+        "/spec/reactors/block_sync/reactor",
+        "/spec/reactors/block_sync/impl",
+        "/spec/reactors/consensus/consensus",
+        "/spec/reactors/consensus/consensus-reactor",
+        "/spec/reactors/consensus/proposer-selection",
+        "/spec/reactors/evidence/reactor",
+        "/spec/reactors/mempool/concurrency",
+        "/spec/reactors/mempool/config",
+        "/spec/reactors/mempool/functionality",
+        "/spec/reactors/mempool/messages",
+        "/spec/reactors/mempool/reactor",
+        "/spec/reactors/pex/pex",
+        "/spec/reactors/pex/reactor",
+      ]
+    },
     {
       title: "ABCI Specification",
       collapsable: false,

@@ -75,7 +110,10 @@ module.exports = {
     {
       title: "Research",
       collapsable: false,
-      children: ["/research/determinism", "/research/transactional-semantics"]
+      children: [
+        "/research/determinism",
+        "/research/transactional-semantics"
+      ]
     }
   ]
 }
@@ -0,0 +1,17 @@
+<!DOCTYPE html>
+<html lang="en-US">
+<head>
+<meta charset="utf-8">
+<meta name="viewport" content="width=device-width,initial-scale=1">
+<title>VuePress</title>
+<meta name="description" content="">
+
+
+<link rel="preload" href="/assets/css/1.styles.c01b7ee3.css" as="style"><link rel="preload" href="/assets/js/app.48f1ff5f.js" as="script"><link rel="prefetch" href="/assets/js/0.7c2695bf.js">
+<link rel="stylesheet" href="/assets/css/1.styles.c01b7ee3.css">
+</head>
+<body>
+<div id="app" data-server-rendered="true"><div class="theme-container"><div class="content"><h1>404</h1><blockquote>Looks like we've got some broken links.</blockquote><a href="/" class="router-link-active">Take me home.</a></div></div></div>
+<script src="/assets/js/app.48f1ff5f.js" defer></script>
+</body>
+</html>

File diff suppressed because one or more lines are too long

@@ -0,0 +1 @@
+<?xml version="1.0" encoding="UTF-8"?><svg xmlns="http://www.w3.org/2000/svg" width="12" height="13"><g stroke-width="2" stroke="#aaa" fill="none"><path d="M11.29 11.71l-4-4"/><circle cx="5" cy="5" r="4"/></g></svg>

After Width: 12 | Height: 13 | Size: 216 B

@@ -0,0 +1 @@
+(window.webpackJsonp=window.webpackJsonp||[]).push([[0],{136:function(e,t,s){"use strict";s.r(t);var n=s(0),r=Object(n.a)({},function(){this.$createElement;this._self._c;return this._m(0)},[function(){var e=this.$createElement,t=this._self._c||e;return t("div",{staticClass:"content"},[t("h1",{attrs:{id:"hello-vuepress"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#hello-vuepress","aria-hidden":"true"}},[this._v("#")]),this._v(" Hello VuePress")])])}],!1,null,null,null);t.default=r.exports}}]);

File diff suppressed because one or more lines are too long

@@ -0,0 +1,17 @@
+<!DOCTYPE html>
+<html lang="en-US">
+<head>
+<meta charset="utf-8">
+<meta name="viewport" content="width=device-width,initial-scale=1">
+<title>Hello VuePress</title>
+<meta name="description" content="">
+
+
+<link rel="preload" href="/assets/css/1.styles.c01b7ee3.css" as="style"><link rel="preload" href="/assets/js/app.48f1ff5f.js" as="script"><link rel="preload" href="/assets/js/0.7c2695bf.js" as="script">
+<link rel="stylesheet" href="/assets/css/1.styles.c01b7ee3.css">
+</head>
+<body>
+<div id="app" data-server-rendered="true"><div class="theme-container no-sidebar"><header class="navbar"><div class="sidebar-button"><svg xmlns="http://www.w3.org/2000/svg" aria-hidden="true" role="img" viewBox="0 0 448 512" class="icon"><path fill="currentColor" d="M436 124H12c-6.627 0-12-5.373-12-12V80c0-6.627 5.373-12 12-12h424c6.627 0 12 5.373 12 12v32c0 6.627-5.373 12-12 12zm0 160H12c-6.627 0-12-5.373-12-12v-32c0-6.627 5.373-12 12-12h424c6.627 0 12 5.373 12 12v32c0 6.627-5.373 12-12 12zm0 160H12c-6.627 0-12-5.373-12-12v-32c0-6.627 5.373-12 12-12h424c6.627 0 12 5.373 12 12v32c0 6.627-5.373 12-12 12z"></path></svg></div><a href="/" class="home-link router-link-exact-active router-link-active"></a><div class="links"><div class="search-box"><input aria-label="Search" autocomplete="off" spellcheck="false" value=""><!----></div><!----></div></header><div class="sidebar-mask"></div><div class="sidebar"><!----><!----></div><div class="page"><div class="content"><h1 id="hello-vuepress"><a href="#hello-vuepress" aria-hidden="true" class="header-anchor">#</a> Hello VuePress</h1></div><div class="page-edit"><!----><!----></div><!----></div></div></div>
+<script src="/assets/js/0.7c2695bf.js" defer></script><script src="/assets/js/app.48f1ff5f.js" defer></script>
+</body>
+</html>
@@ -1,17 +1,96 @@
-# Documentation Maintenance Overview
+# Docs Build Workflow
 
-The documentation found in this directory is hosted at:
+The documentation for Tendermint Core is hosted at:
 
-- https://tendermint.com/docs/
+- https://tendermint.com/docs/ and
+- https://tendermint-staging.interblock.io/docs/
 
-and built using [VuePress](https://vuepress.vuejs.org/) from the tendermint website repo:
+built from the files in this (`/docs`) directory for
+[master](https://github.com/tendermint/tendermint/tree/master/docs)
+and [develop](https://github.com/tendermint/tendermint/tree/develop/docs),
+respectively.
 
-- https://github.com/tendermint/tendermint.com
+## How It Works
 
-Under the hood, Jenkins listens for changes (on develop or master) in ./docs then rebuilds
-either the staging or production site depending on which branch the changes were made.
+There is a Jenkins job listening for changes in the `/docs` directory, on both
+the `master` and `develop` branches. Any updates to files in this directory
+on those branches will automatically trigger a website deployment. Under the hood,
+a private website repository has make targets consumed by a standard Jenkins task.
 
-To update the Table of Contents (layout of the documentation sidebar), edit the
-`config.js` in this directory, while the `README.md` is the landing page for the
-website documentation.
+## README
+
+The [README.md](./README.md) is also the landing page for the documentation
+on the website.
+
+## Config.js
+
+The [config.js](./.vuepress/config.js) generates the sidebar and Table of Contents
+on the website docs. Note the use of relative links and the omission of
+file extensions. Additional features are available to improve the look
+of the sidebar.
+
+## Links
+
+**NOTE:** Strongly consider the existing links - both within this directory
+and to the website docs - when moving or deleting files.
+
+Relative links should be used nearly everywhere, having discovered and weighed the following:
+
+### Relative
+
+Where is the other file, relative to the current one?
+
+- works both on GitHub and for the VuePress build
+- confusing / annoying to have things like: `../../../../myfile.md`
+- requires more updates when files are re-shuffled
+
+### Absolute
+
+Where is the other file, given the root of the repo?
+
+- works on GitHub, doesn't work for the VuePress build
+- this is much nicer: `/docs/hereitis/myfile.md`
+- if you move that file around, the links inside it are preserved (but not to it, of course)
+
+### Full
+
+The full GitHub URL to a file or directory. Used occasionally when it makes sense
+to send users to the GitHub.
+
+## Building Locally
+
+To build and serve the documentation locally, run:
+
+```
+# from this directory
+npm install
+npm install -g vuepress
+```
+
+then change the following line in the `config.js`:
+
+```
+base: "/docs/",
+```
+
+to:
+
+```
+base: "/",
+```
+
+Finally, go up one directory to the root of the repo and run:
+
+```
+# from root of repo
+vuepress build docs
+cd dist/docs
+python -m SimpleHTTPServer 8080
+```
+
+then navigate to localhost:8080 in your browser.
+
+## Consistency
+
+Because the build processes are identical (as is the information contained herein), this file should be kept in sync as
+much as possible with its [counterpart in the Cosmos SDK repo](https://github.com/cosmos/cosmos-sdk/blob/develop/docs/DOCS_README.md).
@@ -1,9 +1,7 @@
# Tendermint

-Welcome to the Tendermint Core documentation! The introduction below provides
-an overview to help you navigate to your area of interest.
+Welcome to the Tendermint Core documentation! Below you'll find an
+overview of the documentation.

-## Introduction

Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state
transition machine - written in any programming language - and securely
@@ -11,17 +9,33 @@ replicates it on many machines. In other words, a blockchain.

Tendermint requires an application running over the Application Blockchain
Interface (ABCI) - and comes packaged with an example application to do so.
-Follow the [installation instructions](./introduction/install.md) to get up and running
-quickly. For more details on [using tendermint](./tendermint-core/using-tendermint.md) see that
-and the following sections.
+
+## Getting Started
+
+Here you'll find quick start guides and links to more advanced "get up and running"
+documentation.
+
+## Core
+
+Details about the core functionality and configuration of Tendermint.
+
+## Tools
+
+Benchmarking and monitoring tools.

## Networks

-Testnets can be setup manually on one or more machines, or automatically on one
-or more machine, using a variety of methods described in the [deploy testnets
-section](./networks/deploy-testnets.md).
+Setting up testnets manually or automated, local or in the cloud.

-## Application Development
+## Apps

-The first step to building application on Tendermint is to [install
-ABCI-CLI](./app-dev/getting-started.md) and play with the example applications.
+Building applications with the ABCI.
+
+## Specification
+
+Dive deep into the spec. There's one each for Tendermint and the ABCI.
+
+## Edit the Documentation
+
+See [this file](./DOCS_README.md) for details of the build process and
+considerations when making changes.
@@ -7,7 +7,7 @@ application you want to run. So, to run a complete blockchain that does
something useful, you must start two programs: one is Tendermint Core,
the other is your application, which can be written in any programming
language. Recall from [the intro to
-ABCI](./introduction.md#ABCI-Overview) that Tendermint Core handles all
+ABCI](../introduction/introduction.md#ABCI-Overview) that Tendermint Core handles all
the p2p and consensus stuff, and just forwards transactions to the
application when they need to be validated, or when they're ready to be
committed to a block.
@@ -64,7 +64,7 @@ tendermint node
If you have used Tendermint, you may want to reset the data for a new
blockchain by running `tendermint unsafe_reset_all`. Then you can run
`tendermint node` to start Tendermint, and connect to the app. For more
-details, see [the guide on using Tendermint](./using-tendermint.md).
+details, see [the guide on using Tendermint](../tendermint-core/using-tendermint.md).

You should see Tendermint making blocks! We can get the status of our
Tendermint node as follows:
@@ -244,7 +244,7 @@ But if we send a `1`, it works again:
```

For more details on the `broadcast_tx` API, see [the guide on using
-Tendermint](./using-tendermint.md).
+Tendermint](../tendermint-core/using-tendermint.md).

## CounterJS - Example in Another Language
@@ -12,8 +12,8 @@ Let's take a look at the `[tx_index]` config section:
# What indexer to use for transactions
#
# Options:
-#  1) "null" (default)
-#  2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
+#  1) "null"
+#  2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "kv"

# Comma-separated list of tags to index (by default the only tag is "tx.hash")
@@ -4,9 +4,17 @@

- How to / should we version the authenticated encryption handshake itself (ie.
  upfront protocol negotiation for the P2PVersion)
+- How to / should we version ABCI itself? Should it just be absorbed by the
+  BlockVersion?

## Changelog

+- 18-09-2018: Updates after working a bit on implementation
+  - ABCI Handshake needs to happen independently of starting the app
+    conns so we can see the result
+  - Add question about ABCI protocol version
+- 16-08-2018: Updates after discussion with SDK team
+  - Remove signalling for next version from Header/ABCI
- 03-08-2018: Updates from discussion with Jae:
  - ProtocolVersion contains Block/AppVersion, not Current/Next
  - signal upgrades to Tendermint using EndBlock fields
@@ -19,18 +27,18 @@

## Context

+Here we focus on software-agnostic protocol versions.
+
The Software Version is covered by SemVer and described elsewhere.
It is not relevant to the protocol description, suffice it to say that if any protocol version
changes, the software version changes, but not necessarily vice versa.

-Software version shoudl be included in NodeInfo for convenience/diagnostics.
+Software version should be included in NodeInfo for convenience/diagnostics.

We are also interested in versioning across different blockchains in a
meaningful way, for instance to differentiate branches of a contentious
hard-fork. We leave that for a later ADR.

-Here we focus on protocol versions.

## Requirements

We need to version components of the blockchain that may be independently upgraded.
@@ -86,11 +94,9 @@ to connect to peers with older version.

Each component of the software is independently versioned in a modular way and it's easy to mix and match and upgrade.

-Good luck pal ;)

## Proposal

Each of BlockVersion, AppVersion, P2PVersion is a monotonically increasing int64.

To use these versions, we need to update the block Header, the p2p NodeInfo, and the ABCI.
@@ -100,19 +106,16 @@ Block Header should include a `Version` struct as its first field like:

```
type Version struct {
-    CurrentVersion ProtocolVersion
-    ChainID string
+    Block int64
+    App int64

-    NextVersion ProtocolVersion
-}
-
-type ProtocolVersion struct {
-    BlockVersion int64
-    AppVersion int64
}
```

-Note this effectively makes BlockVersion the first field in the block Header.
+Here, `Version.Block` defines the rules for the current block, while
+`Version.App` defines the app version that processed the last block and computed
+the `AppHash` in the current block. Together they provide a complete description
+of the consensus-critical protocol.

Since we have settled on a proto3 header, the ability to read the BlockVersion out of the serialized header is unambiguous.

Using a Version struct gives us more flexibility to add fields without breaking
@@ -120,8 +123,6 @@ the header.

The ProtocolVersion struct includes both the Block and App versions - it should
serve as a complete description of the consensus-critical protocol.
-Using the `NextVersion` field, proposers can signal their readiness to upgrade
-to a new Block and/or App version.

### NodeInfo

@@ -129,23 +130,21 @@ NodeInfo should include a Version struct as its first field like:

```
type Version struct {
-    P2PVersion int64
+    P2P int64
+    Block int64
+    App int64

-    ChainID string
-    BlockVersion int64
-    AppVersion int64
-    SoftwareVersion string
+    Other []string
}
```

-Note this effectively makes P2PVersion the first field in the NodeInfo, so it
+Note this effectively makes `Version.P2P` the first field in the NodeInfo, so it
should be easy to read this out of the serialized header if need be to facilitate an upgrade.

-The SoftwareVersion here should include the name of the software client and
+The `Version.Other` here should include additional information like the name of the software client and
its SemVer version - this is for convenience only. Eg.
-`tendermint-core/v0.22.8`.
-The other versions and ChainID will determine peer compatibility (described below).
+`tendermint-core/v0.22.8`. It's a `[]string` so it can include information about
+the version of Tendermint, of the app, of Tendermint libraries, etc.

### ABCI

@@ -158,6 +157,11 @@ version information.

We also need to be able to update versions in the life of a blockchain. The
natural place to do this is EndBlock.

+Note that currently the result of the Handshake isn't exposed anywhere, as the
+handshaking happens inside the `proxy.AppConns` abstraction. We will need to
+remove the handshaking from the `proxy` package so we can call it independently
+and get the result, which should contain the application version.

#### Info

RequestInfo should add support for protocol versions like:
@@ -199,28 +203,24 @@ message ResponseEndBlock {
    ConsensusParams consensus_param_updates
    repeated common.KVPair tags

-    VersionUpdates version_updates
+    VersionUpdate version_update
}

-message VersionUpdates {
-    ProtocolVersion current_version
-    ProtocolVersion next_version
-}
-
-message ProtocolVersion {
-    int64 block_version
+message VersionUpdate {
    int64 app_version
}
```

-Tendermint will use the information in VersionUpdates for the next block it
+Tendermint will use the information in VersionUpdate for the next block it
proposes.

### BlockVersion

BlockVersion is included in both the Header and the NodeInfo.

-Changing BlockVersion should happen quite infrequently and ideally only for extreme emergency.
+Changing BlockVersion should happen quite infrequently and ideally only for
+critical upgrades. For now, it is not encoded in ABCI, though it's always
+possible to use tags to signal an external process to co-ordinate an upgrade.

Note Ethereum has not had to make an upgrade like this (everything has been at the state machine level, AFAIK).
@@ -251,7 +251,7 @@ this is the first byte of a 32-byte ed25519 pubkey.

AppVersion is also included in the block Header and the NodeInfo.

-AppVersion essentially defines how the AppHash and Results are computed.
+AppVersion essentially defines how the AppHash and LastResults are computed.

### Peer Compatibility
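As a rough sketch of the kind of check a peer-compatibility rule could perform on the proposed NodeInfo `Version` struct, here is a small Go example. The `compatible` rule (peers must agree on the consensus-critical Block and App versions) is an assumption for illustration, not the check Tendermint actually implements.

```go
package main

import "fmt"

// Version mirrors the struct proposed for NodeInfo above.
type Version struct {
	P2P   int64
	Block int64
	App   int64
	Other []string
}

// compatible is a hypothetical peer-compatibility rule: reject peers that
// disagree on the consensus-critical Block and App versions. P2P is allowed
// to differ here, on the assumption that transports can interoperate.
func compatible(a, b Version) bool {
	return a.Block == b.Block && a.App == b.App
}

func main() {
	us := Version{P2P: 4, Block: 7, App: 1, Other: []string{"tendermint-core/v0.22.8"}}
	them := Version{P2P: 4, Block: 7, App: 2} // different app version
	fmt.Println(compatible(us, them))         // prints "false"
}
```

Because the version fields are plain monotonically increasing int64s, such a check needs no string parsing, unlike comparing SemVer software versions.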
@@ -0,0 +1,52 @@
# ADR 012: ABCI Events

## Changelog

- *2018-09-02* Remove ABCI errors component. Update description for events
- *2018-07-12* Initial version

## Context

ABCI tags were first described in [ADR 002](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-002-event-subscription.md).
They are key-value pairs that can be used to index transactions.

Currently, ABCI messages return a list of tags to describe an
"event" that took place during the Check/DeliverTx/Begin/EndBlock,
where each tag refers to a different property of the event, like the sending and receiving account addresses.

Since there is only one list of tags, recording data for multiple such events in
a single Check/DeliverTx/Begin/EndBlock must be done using prefixes in the key
space.

Alternatively, groups of tags that constitute an event can be separated by a
special tag that denotes a break between the events. This would allow
straightforward encoding of multiple events into a single list of tags without
prefixing, at the cost of these "special" tags to separate the different events.

TODO: brief description of how the indexing works

## Decision

Instead of returning a list of tags, return a list of events, where
each event is a list of tags. This way we naturally capture the concept of
multiple events happening during a single ABCI message.

TODO: describe impact on indexing and querying

## Status

Proposed

## Consequences

### Positive

- Ability to track distinct events separate from ABCI calls (DeliverTx/BeginBlock/EndBlock)
- More powerful query abilities

### Negative

- More complex query syntax
- More complex search implementation

### Neutral
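A minimal Go sketch of the shape this decision implies: each response carries a slice of events, and each event is its own list of tags. Type names here are illustrative, not the final ABCI API.

```go
package main

import "fmt"

// KVPair is a key-value tag, as introduced in ADR 002.
type KVPair struct{ Key, Value string }

// Event is simply a list of tags describing one occurrence; this is the
// "list of lists" shape the decision proposes, with hypothetical naming.
type Event []KVPair

func main() {
	// Two distinct transfer events from a single DeliverTx, kept apart
	// without any key-prefixing.
	events := []Event{
		{{"sender", "addr1"}, {"recipient", "addr2"}, {"amount", "100"}},
		{{"sender", "addr3"}, {"recipient", "addr4"}, {"amount", "5"}},
	}
	for i, e := range events {
		fmt.Printf("event %d has %d tags\n", i, len(e))
	}
}
```

Under the current flat-list encoding, the same data would need either prefixed keys (`transfer1.sender`, `transfer2.sender`, ...) or special separator tags.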
@@ -0,0 +1,64 @@
# ADR 023: ABCI Codespaces

## Changelog

- *2018-09-01* Initial version

## Context

ABCI errors should provide an abstraction between application details
and the client interface responsible for formatting & displaying errors to the user.

Currently, this abstraction consists of a single integer (the `code`), where any
`code > 0` is considered an error (ie. invalid transaction) and all type
information about the error is contained in the code. This integer is
expected to be decoded by the client into a known error string, where any
more specific data is contained in the `data`.

In a [previous conversation](https://github.com/tendermint/abci/issues/165#issuecomment-353704015),
it was suggested that not all non-zero codes need to be errors, hence it's called `code` and not `error code`.
It is unclear exactly how the semantics of the `code` field will evolve, though
better lite-client proofs (like discussed for tags
[here](https://github.com/tendermint/tendermint/issues/1007#issuecomment-413917763))
may play a role.

Note that having all type information in a single integer
precludes an easy coordination method between "module implementers" and "client
implementers", especially for apps with many "modules". With an unbounded error domain (such as a string), module
implementers can pick a globally unique prefix & error code set, so client
implementers could easily implement support for "module A" regardless of which
particular blockchain network it was running in and which other modules were running with it. With
only error codes, globally unique codes are difficult/impossible, as the space
is finite and collisions are likely without an easy way to coordinate.

For instance, while trying to build an ecosystem of modules that can be composed into a single
ABCI application, the Cosmos-SDK had to hack a higher level "codespace" into the
single integer so that each module could have its own space to express its
errors.

## Decision

Include a `string code_space` in all ABCI messages that have a `code`.
This allows applications to namespace the codes so they can experiment with
their own code schemes.

It is the responsibility of applications to limit the size of the `code_space`
string.

How the codespace is hashed into block headers (ie. so it can be queried
efficiently by lite clients) is left for a separate ADR.

## Consequences

### Positive

- No need for complex codespacing on a single integer
- More expressive type system for errors

### Negative

- Another field in the response needs to be accounted for
- Some redundancy with `code` field
- May encourage more error/code type info to move to the `codespace` string, which
  could impact lite clients.
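The namespacing benefit can be sketched in a few lines of Go. The struct and helper below are illustrative, not the actual ABCI protobuf types: two modules reuse the same numeric code, and the codespace keeps them distinct without any global coordination.

```go
package main

import "fmt"

// checkTxResult sketches a response carrying both a code and a codespace.
// Field names are illustrative, not the exact ABCI message fields.
type checkTxResult struct {
	Code      uint32
	Codespace string
	Log       string
}

// errString renders a namespaced error identifier. Two modules can reuse the
// same numeric code without colliding, because the codespace disambiguates.
func errString(r checkTxResult) string {
	if r.Code == 0 {
		return "ok"
	}
	return fmt.Sprintf("%s/%d: %s", r.Codespace, r.Code, r.Log)
}

func main() {
	bank := checkTxResult{Code: 4, Codespace: "bank", Log: "insufficient funds"}
	stake := checkTxResult{Code: 4, Codespace: "staking", Log: "unknown validator"}
	fmt.Println(errString(bank))  // prints "bank/4: insufficient funds"
	fmt.Println(errString(stake)) // prints "staking/4: unknown validator"
}
```

With only the single integer, the two code-4 errors above would be indistinguishable to a client that supports both modules.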
@@ -5,7 +5,7 @@ is to run [this script](https://github.com/tendermint/tendermint/blob/develop/sc
a fresh Ubuntu instance,
or [this script](https://github.com/tendermint/tendermint/blob/develop/scripts/install/install_tendermint_bsd.sh)
on a fresh FreeBSD instance. Read the comments / instructions carefully (i.e., reset your terminal after running the script,
-make sure your okay with the network connections being made).
+make sure you are okay with the network connections being made).

## From Binary

@@ -48,6 +48,15 @@ to put the binary in `./build`.

The latest `tendermint version` is now installed.

+## Run
+
+To start a one-node blockchain with a simple in-process application:
+
+```
+tendermint init
+tendermint node --proxy_app=kvstore
+```

## Reinstall

If you already have Tendermint installed, and you make updates, simply
@@ -66,11 +75,42 @@ make get_vendor_deps
make install
```

-## Run
+## Compile with CLevelDB support

-To start a one-node blockchain with a simple in-process application:
+Install [LevelDB](https://github.com/google/leveldb) (minimum version is 1.7).
+
+Build Tendermint with C libraries: `make build_c`.
+
+### Ubuntu
+
+Install LevelDB with snappy:

```
-tendermint init
-tendermint node --proxy_app=kvstore
+sudo apt-get update
+sudo apt install build-essential
+
+sudo apt-get install libsnappy-dev
+
+wget https://github.com/google/leveldb/archive/v1.20.tar.gz && \
+  tar -zxvf v1.20.tar.gz && \
+  cd leveldb-1.20/ && \
+  make && \
+  sudo scp -r out-static/lib* out-shared/lib* /usr/local/lib/ && \
+  cd include/ && \
+  sudo scp -r leveldb /usr/local/include/ && \
+  sudo ldconfig && \
+  rm -f v1.20.tar.gz
+```
+
+Set database backend to cleveldb:
+
+```
+# config/config.toml
+db_backend = "cleveldb"
+```
+
+To build Tendermint, run
+
+```
+CGO_LDFLAGS="-lsnappy" go build -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`" -tags "tendermint gcc" -o build/tendermint ./cmd/tendermint/
```
@@ -36,7 +36,7 @@ tendermint node --proxy_app=kvstore --p2p.persistent_peers=96663a3dd0d7b9d17d4c8

After a few seconds, all the nodes should connect to each other and
start making blocks! For more information, see the Tendermint Networks
-section of [the guide to using Tendermint](./using-tendermint.md).
+section of [the guide to using Tendermint](../tendermint-core/using-tendermint.md).

But wait! Steps 3, 4 and 5 are quite manual. Instead, use the `tendermint testnet` command. By default, running `tendermint testnet` will create all the
required files, but it won't populate the list of persistent peers. It will do
File diff suppressed because it is too large
@@ -1,6 +1,6 @@
# Transactional Semantics

-In [Using Tendermint](./using-tendermint.md#broadcast-api) we
+In [Using Tendermint](../tendermint-core/using-tendermint.md#broadcast-api) we
discussed different API endpoints for sending transactions and
differences between them.
@@ -1,4 +1,4 @@
-# Tendermint Specification
+# Overview

This is a markdown specification of the Tendermint blockchain.
It defines the base data structures, how they are validated,
@@ -21,6 +21,7 @@ please submit them to our [bug bounty](https://tendermint.com/security)!

### Consensus Protocol

- [Consensus Algorithm](/docs/spec/consensus/consensus.md)
+- [Creating a proposal](/docs/spec/consensus/creating-proposal.md)
- [Time](/docs/spec/consensus/bft-time.md)
- [Light-Client](/docs/spec/consensus/light-client.md)
@@ -48,16 +48,50 @@ Keys and values in tags must be UTF-8 encoded strings (e.g.

## Determinism

-Some methods (`SetOption, Query, CheckTx, DeliverTx`) return
-non-deterministic data in the form of `Info` and `Log`. The `Log` is
-intended for the literal output from the application's logger, while the
-`Info` is any additional info that should be returned.
-
-All other fields in the `Response*` of all methods must be strictly deterministic.
+ABCI applications must implement deterministic finite-state machines to be
+securely replicated by the Tendermint consensus. This means block execution
+over the Consensus Connection must be strictly deterministic: given the same
+ordered set of requests, all nodes will compute identical responses, for all
+BeginBlock, DeliverTx, EndBlock, and Commit. This is critical, because the
+responses are included in the header of the next block, either via a Merkle root
+or directly, so all nodes must agree on exactly what they are.

For this reason, it is recommended that applications not be exposed to any
external user or process except via the ABCI connections to a consensus engine
-like Tendermint Core.
+like Tendermint Core. The application must only change its state based on input
+from block execution (BeginBlock, DeliverTx, EndBlock, Commit), and not through
+any other kind of request. This is the only way to ensure all nodes see the same
+transactions and compute the same results.
+
+If there is some non-determinism in the state machine, consensus will eventually
+fail as nodes disagree over the correct values for the block header. The
+non-determinism must be fixed and the nodes restarted.
+
+Sources of non-determinism in applications may include:
+
+- Hardware failures
+    - Cosmic rays, overheating, etc.
+- Node-dependent state
+    - Random numbers
+    - Time
+- Underspecification
+    - Library version changes
+    - Race conditions
+    - Floating point numbers
+    - JSON serialization
+    - Iterating through hash-tables/maps/dictionaries
+- External Sources
+    - Filesystem
+    - Network calls (eg. some external REST API service)
+
+See [#56](https://github.com/tendermint/abci/issues/56) for original discussion.
+
+Note that some methods (`SetOption, Query, CheckTx, DeliverTx`) return
+explicitly non-deterministic data in the form of `Info` and `Log` fields. The `Log` is
+intended for the literal output from the application's logger, while the
+`Info` is any additional info that should be returned. These are the only fields
+that are not included in block header computations, so we don't need agreement
+on them. All other fields in the `Response*` must be strictly deterministic.

## Block Execution
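Of the sources listed above, map iteration is a particularly easy one to hit in Go, where map iteration order is deliberately randomized. A minimal sketch of the standard fix, hashing state in sorted-key order, using a hypothetical `stateHash` helper (illustrative, not Tendermint code):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// stateHash hashes a key-value state deterministically by visiting keys in
// sorted order. Ranging over the map directly would feed bytes to the hash
// in a random order, so different nodes could compute different hashes.
func stateHash(state map[string]string) [32]byte {
	keys := make([]string, 0, len(state))
	for k := range state {
		keys = append(keys, k)
	}
	sort.Strings(keys) // fixed order -> same bytes -> same hash on every node

	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(state[k]))
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	a := map[string]string{"alice": "10", "bob": "20"}
	b := map[string]string{"bob": "20", "alice": "10"} // same state, different construction order
	fmt.Println(stateHash(a) == stateHash(b))          // prints "true"
}
```

The same discipline applies to any response field that feeds into block header computations.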
@@ -175,7 +209,8 @@ Commit are included in the header of the next block.
- `Index (int64)`: The index of the key in the tree.
- `Key ([]byte)`: The key of the matching data.
- `Value ([]byte)`: The value of the matching data.
-- `Proof ([]byte)`: Proof for the data, if requested.
+- `Proof ([]byte)`: Serialized proof for the data, if requested, to be
+  verified against the `AppHash` for the given Height.
- `Height (int64)`: The block height from which data was derived.
  Note that this is the height of the block containing the
  application's Merkle root hash, which represents the state as it
@@ -216,7 +251,7 @@ Commit are included in the header of the next block.
  be non-deterministic.
- `Info (string)`: Additional information. May
  be non-deterministic.
-- `GasWanted (int64)`: Amount of gas request for transaction.
+- `GasWanted (int64)`: Amount of gas requested for transaction.
- `GasUsed (int64)`: Amount of gas consumed by transaction.
- `Tags ([]cmn.KVPair)`: Key-Value tags for filtering and indexing
  transactions (eg. by account).
@ -275,13 +310,18 @@ Commit are included in the header of the next block.

### Commit

- **Response**:
  - `Data ([]byte)`: The Merkle root hash of the application state
- **Usage**:
  - Persist the application state.
  - Return an (optional) Merkle root hash of the application state
  - `ResponseCommit.Data` is included as the `Header.AppHash` in the next block
    - it may be empty
  - Later calls to `Query` can return proofs about the application state anchored
    in this Merkle root hash
  - Note developers can return whatever they want here (could be nothing, or a
    constant string, etc.), so long as it is deterministic - it must not be a
    function of anything that did not come from the
    BeginBlock/DeliverTx/EndBlock methods.
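A minimal sketch of the determinism requirement on `Commit`, under assumptions: the `App` type and its plain-map state are hypothetical, and a real application would use a proper Merkle tree (e.g. IAVL+) rather than hashing a sorted serialization. The point is only that the returned bytes must be a pure function of the transactions processed.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// App is a hypothetical in-memory application with key-value state.
type App struct {
	state map[string]string
}

// Commit returns a deterministic hash of the application state.
// Tendermint would include this as Header.AppHash in the next block.
func (a *App) Commit() []byte {
	keys := make([]string, 0, len(a.state))
	for k := range a.state {
		keys = append(keys, k)
	}
	// Map iteration order is randomized in Go; sort for determinism.
	sort.Strings(keys)
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(a.state[k]))
	}
	return h.Sum(nil)
}

func main() {
	a := &App{state: map[string]string{"foo": "bar", "baz": "qux"}}
	fmt.Printf("%x\n", a.Commit())
}
```

Two instances holding the same state must produce identical output, otherwise the network cannot agree on the next block.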
## Data Types
@ -4,19 +4,20 @@ Please ensure you've first read the spec for [ABCI Methods and Types](abci.md)

Here we cover the following components of ABCI applications:

- [Connection State](#state) - the interplay between ABCI connections and application state
  and the differences between `CheckTx` and `DeliverTx`.
- [Transaction Results](#transaction-results) - rules around transaction
  results and validity
- [Validator Set Updates](#validator-updates) - how validator sets are
  changed during `InitChain` and `EndBlock`
- [Query](#query) - standards for using the `Query` method and proofs about the
  application state
- [Crash Recovery](#crash-recovery) - handshake protocol to synchronize
  Tendermint and the application on startup.

## State

Since Tendermint maintains three concurrent ABCI connections, it is typical
for an application to maintain a distinct state for each, and for the states to
be synchronized during `Commit`.
@ -85,18 +86,50 @@ Otherwise it should never be modified.

## Transaction Results

`ResponseCheckTx` and `ResponseDeliverTx` contain the same fields.

The `Info` and `Log` fields are non-deterministic values for debugging/convenience purposes
that are otherwise ignored.

The `Data` field must be strictly deterministic, but can be arbitrary data.

### Gas

Ethereum introduced the notion of `gas` as an abstract representation of the
cost of resources used by nodes when processing transactions. Every operation in the
Ethereum Virtual Machine uses some amount of gas, and gas can be accepted at a market-variable price.
Users propose a maximum amount of gas for their transaction; if the tx uses less, they get
the difference credited back. Tendermint adopts a similar abstraction,
though uses it only optionally and weakly, allowing applications to define
their own sense of the cost of execution.

In Tendermint, the `ConsensusParams.BlockSize.MaxGas` limits the amount of `gas` that can be used in a block.
The default value is `-1`, meaning no limit, or that the concept of gas is
meaningless.

Responses contain a `GasWanted` and `GasUsed` field. The former is the maximum
amount of gas the sender of a tx is willing to use, and the latter is how much it actually
used. Applications should enforce that `GasUsed <= GasWanted` - ie. tx execution
should halt before it can use more resources than it requested.

When `MaxGas > -1`, Tendermint enforces the following rules:

- `GasWanted <= MaxGas` for all txs in the mempool
- `(sum of GasWanted in a block) <= MaxGas` when proposing a block

If `MaxGas == -1`, no rules about gas are enforced.

Note that Tendermint does not currently enforce anything about Gas in the consensus, only the mempool.
This means it does not guarantee that committed blocks satisfy these rules!
It is the application's responsibility to return non-zero response codes when gas limits are exceeded.

The `GasUsed` field is ignored completely by Tendermint. That said, applications should enforce:

- `GasUsed <= GasWanted` for any given transaction
- `(sum of GasUsed in a block) <= MaxGas` for every block

In the future, we intend to add a `Priority` field to the responses that can be
used to explicitly prioritize txs in the mempool for inclusion in a block
proposal. See [#1861](https://github.com/tendermint/tendermint/issues/1861).
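The two `MaxGas` rules can be sketched as simple predicates. These are illustrative helpers, not Tendermint's actual internal API; the function names are assumptions.

```go
package main

import "fmt"

// txAllowedInMempool mirrors the mempool-side rule: when MaxGas > -1,
// a tx with GasWanted > MaxGas is never admitted.
func txAllowedInMempool(gasWanted, maxGas int64) bool {
	if maxGas == -1 {
		return true // no limit: gas rules are not enforced
	}
	return gasWanted <= maxGas
}

// blockGasOK mirrors the proposer-side rule: the sum of GasWanted
// across all txs in a proposed block must not exceed MaxGas.
func blockGasOK(gasWanted []int64, maxGas int64) bool {
	if maxGas == -1 {
		return true
	}
	var sum int64
	for _, g := range gasWanted {
		sum += g
	}
	return sum <= maxGas
}

func main() {
	fmt.Println(txAllowedInMempool(50, 100))      // true
	fmt.Println(blockGasOK([]int64{50, 60}, 100)) // false: 110 > 100
	fmt.Println(blockGasOK([]int64{50, 60}, -1))  // true: no limit
}
```

Since consensus does not re-check these predicates on committed blocks, an application that cares about them must also check `GasUsed` itself during `DeliverTx`.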
### CheckTx
@ -141,9 +174,6 @@ If the list is not empty, Tendermint will use it for the validator set.

This way the application can determine the initial validator set for the
blockchain.

### EndBlock

Updates to the Tendermint validator set can be made by returning
@ -178,13 +208,108 @@ following rules:

Note the updates returned in block `H` will only take effect at block `H+2`.

## Consensus Parameters

ConsensusParams enforce certain limits in the blockchain, like the maximum size
of blocks, the amount of gas used in a block, and the maximum acceptable age of
evidence. They can be set in InitChain and updated in EndBlock.

### BlockSize.MaxBytes

The maximum size of a complete Amino encoded block.
This is enforced by Tendermint consensus.

This implies a maximum tx size that is this MaxBytes, less the expected size of
the header, the validator set, and any included evidence in the block.

Must have `0 < MaxBytes < 100 MB`.

### BlockSize.MaxGas

The maximum of the sum of `GasWanted` in a proposed block.
This is *not* enforced by Tendermint consensus.
It is left to the app to enforce (ie. if txs are included past the
limit, they should return non-zero codes). It is used by Tendermint to limit the
txs included in a proposed block.

Must have `MaxGas >= -1`.
If `MaxGas == -1`, no limit is enforced.

### EvidenceParams.MaxAge

This is the maximum age of evidence.
This is enforced by Tendermint consensus.
If a block includes evidence older than this, the block will be rejected
(validators won't vote for it).

Must have `0 < MaxAge`.

### Updates

The application may set the consensus params during InitChain, and update them during
EndBlock.

#### InitChain

ResponseInitChain includes a ConsensusParams.
If it's nil, Tendermint will use the params loaded in the genesis
file. If it's not nil, Tendermint will use it.
This way the application can determine the initial consensus params for the
blockchain.

#### EndBlock

ResponseEndBlock includes a ConsensusParams.
If it's nil, Tendermint will do nothing.
If it's not nil, Tendermint will use it.
This way the application can update the consensus params over time.

Note the updates returned in block `H` will take effect right away for block
`H+1`.
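The validity rules above can be collected into a single check. The struct below is a simplified stand-in, not Tendermint's actual `types.ConsensusParams`; field names and the validator are assumptions for illustration.

```go
package main

import "fmt"

// ConsensusParams is a hypothetical mirror of the parameters described
// above, flattened for brevity.
type ConsensusParams struct {
	BlockMaxBytes  int64 // must have 0 < MaxBytes < 100 MB
	BlockMaxGas    int64 // must have MaxGas >= -1
	EvidenceMaxAge int64 // must have 0 < MaxAge
}

const maxBlockSizeBytes = 100 * 1024 * 1024 // 100 MB

// Validate enforces the stated bounds on each parameter.
func (p ConsensusParams) Validate() error {
	if p.BlockMaxBytes <= 0 || p.BlockMaxBytes >= maxBlockSizeBytes {
		return fmt.Errorf("BlockSize.MaxBytes must be in (0, 100 MB), got %d", p.BlockMaxBytes)
	}
	if p.BlockMaxGas < -1 {
		return fmt.Errorf("BlockSize.MaxGas must be >= -1, got %d", p.BlockMaxGas)
	}
	if p.EvidenceMaxAge <= 0 {
		return fmt.Errorf("EvidenceParams.MaxAge must be > 0, got %d", p.EvidenceMaxAge)
	}
	return nil
}

func main() {
	ok := ConsensusParams{BlockMaxBytes: 22020096, BlockMaxGas: -1, EvidenceMaxAge: 100000}
	fmt.Println(ok.Validate()) // <nil>
}
```

An application returning params from InitChain or EndBlock would run a check like this before responding, since invalid params would be rejected.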

## Query

Query is a generic method with lots of flexibility to enable diverse sets
of queries on application state. Tendermint makes use of Query to filter new peers
based on ID and IP, and exposes Query to the user over RPC.

Note that calls to Query are not replicated across nodes, but rather query the
local node's state - hence they may return stale reads. For reads that require
consensus, use a transaction.

The most important use of Query is to return Merkle proofs of the application state at some height
that can be used for efficient application-specific lite-clients.

Note Tendermint has technically no requirements from the Query
message for normal operation - that is, the ABCI app developer need not implement
Query functionality if they do not wish to.

### Query Proofs

The Tendermint block header includes a number of hashes, each providing an
anchor for some type of proof about the blockchain. The `ValidatorsHash` enables
quick verification of the validator set, the `DataHash` gives quick
verification of the transactions included in the block, etc.

The `AppHash` is unique in that it is application specific, and allows for
application-specific Merkle proofs about the state of the application.
While some applications keep all relevant state in the transactions themselves
(like Bitcoin and its UTXOs), others maintain a separated state that is
computed deterministically *from* transactions, but is not contained directly in
the transactions themselves (like Ethereum contracts and accounts).
For such applications, the `AppHash` provides a much more efficient way to verify lite-client proofs.

ABCI applications can take advantage of more efficient lite-client proofs for
their state as follows:

- return the Merkle root of the deterministic application state in
  `ResponseCommit.Data`.
  - it will be included as the `AppHash` in the next block.
- return efficient Merkle proofs about that application state in `ResponseQuery.Proof`
  that can be verified using the `AppHash` of the corresponding block.

For instance, this allows an application's lite-client to verify proofs of
absence in the application state, something which is much less efficient to do using the block hash.
### Peer Filtering

@ -199,6 +324,15 @@ using the following paths, with no additional data:

If either of these queries returns a non-zero ABCI code, Tendermint will refuse
to connect to the peer.

### Paths

Queries are directed at paths, and may optionally include additional data.

The expectation is for there to be some number of high level paths
differentiating concerns, like `/p2p`, `/store`, and `/app`. Currently,
Tendermint only uses `/p2p`, for filtering peers. For more advanced use, see the
implementation of
[Query in the Cosmos-SDK](https://github.com/cosmos/cosmos-sdk/blob/v0.23.1/baseapp/baseapp.go#L333).
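Path-based dispatch can be sketched as below. The request/response shapes are simplified stand-ins for `RequestQuery`/`ResponseQuery`, and the exact filter sub-paths shown are assumptions about how an application might lay them out.

```go
package main

import (
	"fmt"
	"strings"
)

// QueryResponse is a simplified stand-in for ResponseQuery.
type QueryResponse struct {
	Code uint32
	Log  string
}

// handleQuery dispatches on the high-level path prefix, as described above.
func handleQuery(path string) QueryResponse {
	switch {
	case strings.HasPrefix(path, "/p2p/filter/"):
		// e.g. /p2p/filter/addr/<ip:port> or /p2p/filter/id/<nodeID>;
		// a non-zero code tells Tendermint to refuse the peer.
		return QueryResponse{Code: 0, Log: "peer allowed"}
	case strings.HasPrefix(path, "/store/"):
		return QueryResponse{Code: 0, Log: "store query"}
	default:
		return QueryResponse{Code: 1, Log: "unknown path"}
	}
}

func main() {
	fmt.Println(handleQuery("/p2p/filter/id/deadbeef").Log) // peer allowed
	fmt.Println(handleQuery("/nope").Code)                  // 1
}
```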
||||||
|
|
||||||
## Crash Recovery
@ -1,4 +1,4 @@
# Blockchain

Here we describe the data structures in the Tendermint blockchain and the rules for validating them.
@ -1,4 +1,4 @@
# Encoding

## Amino

@ -269,7 +269,7 @@ similarly derived.

### IAVL+ Tree

Because Tendermint only uses a Simple Merkle Tree, application developers are expected to use their own Merkle tree in their applications. For example, the IAVL+ Tree - an immutable self-balancing binary tree for persisting application state - is used by the [Cosmos SDK](https://github.com/cosmos/cosmos-sdk/blob/develop/docs/sdk/core/multistore.md)
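The shape of a Simple Merkle Tree computation can be sketched as follows. This ignores Tendermint's exact amino encoding and leaf/inner-node prefix rules - it only illustrates pairing hashes up to a single root, and the split point used here is an assumption.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// simpleHash computes an illustrative Merkle root: leaves are hashed,
// then subtree roots are hashed together up to a single root.
func simpleHash(items [][]byte) []byte {
	switch len(items) {
	case 0:
		return nil
	case 1:
		h := sha256.Sum256(items[0])
		return h[:]
	default:
		// Split so the left subtree holds the largest power of two
		// smaller than the item count.
		k := 1
		for k*2 < len(items) {
			k *= 2
		}
		left := simpleHash(items[:k])
		right := simpleHash(items[k:])
		h := sha256.Sum256(append(left, right...))
		return h[:]
	}
}

func main() {
	root := simpleHash([][]byte{[]byte("tx1"), []byte("tx2"), []byte("tx3")})
	fmt.Printf("%x\n", root)
}
```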
## JSON
@ -1,4 +1,4 @@
# State

## State
@ -1,4 +1,4 @@
# BFT Time

Tendermint provides a deterministic, Byzantine fault-tolerant, source of time.
Time in Tendermint is defined with the Time field of the block header.
@ -282,7 +282,7 @@ may make JSet verification/gossip logic easier to implement.

### Censorship Attacks

Due to the definition of a block
[commit](../../tendermint-core/validators.md#commit-a-block), any 1/3+ coalition of
validators can halt the blockchain by not broadcasting their votes. Such
a coalition can also censor particular transactions by rejecting blocks
that include these transactions, though this would result in a
@ -0,0 +1,42 @@

# Creating a proposal

A block consists of a header, transactions, votes (the commit),
and a list of evidence of malfeasance (ie. signing conflicting votes).

We include no more than 1/10th of the maximum block size
(`ConsensusParams.BlockSize.MaxBytes`) of evidence with each block.

## Reaping transactions from the mempool

When we reap transactions from the mempool, we calculate maximum data
size by subtracting maximum header size (`MaxHeaderBytes`), the maximum
amino overhead for a block (`MaxAminoOverheadForBlock`), the size of
the last commit (if present) and evidence (if present). While reaping
we account for amino overhead for each transaction.

```go
func MaxDataBytes(maxBytes int64, valsCount, evidenceCount int) int64 {
	return maxBytes -
		MaxAminoOverheadForBlock -
		MaxHeaderBytes -
		int64(valsCount)*MaxVoteBytes -
		int64(evidenceCount)*MaxEvidenceBytes
}
```

## Validating transactions in the mempool

Before we accept a transaction in the mempool, we check that its size is no more
than {MaxDataSize}. {MaxDataSize} is calculated using the same formula as
above, except because the evidence size is unknown at the moment, we subtract
the maximum evidence size (1/10th of the maximum block size).

```go
func MaxDataBytesUnknownEvidence(maxBytes int64, valsCount int) int64 {
	return maxBytes -
		MaxAminoOverheadForBlock -
		MaxHeaderBytes -
		int64(valsCount)*MaxVoteBytes -
		MaxEvidenceBytesPerBlock(maxBytes)
}
```
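To make the two formulas above concrete, here they are as a runnable sketch. The constant values below are illustrative assumptions - the real values live in the tendermint/types package and may differ - but the arithmetic is the same.

```go
package main

import "fmt"

// Illustrative size constants (assumptions, not the real values).
const (
	MaxAminoOverheadForBlock int64 = 11
	MaxHeaderBytes           int64 = 653
	MaxVoteBytes             int64 = 200
	MaxEvidenceBytes         int64 = 440
)

// Evidence is capped at 1/10th of the maximum block size.
func MaxEvidenceBytesPerBlock(maxBytes int64) int64 {
	return maxBytes / 10
}

func MaxDataBytes(maxBytes int64, valsCount, evidenceCount int) int64 {
	return maxBytes -
		MaxAminoOverheadForBlock -
		MaxHeaderBytes -
		int64(valsCount)*MaxVoteBytes -
		int64(evidenceCount)*MaxEvidenceBytes
}

func MaxDataBytesUnknownEvidence(maxBytes int64, valsCount int) int64 {
	return maxBytes -
		MaxAminoOverheadForBlock -
		MaxHeaderBytes -
		int64(valsCount)*MaxVoteBytes -
		MaxEvidenceBytesPerBlock(maxBytes)
}

func main() {
	// With a 1 MB block, 4 validators, and no evidence:
	fmt.Println(MaxDataBytes(1048576, 4, 0)) // 1047112
	// Before evidence size is known, reserve the full 1/10th:
	fmt.Println(MaxDataBytesUnknownEvidence(1048576, 4)) // 942255
}
```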
@ -1,4 +1,4 @@
# Light Client

A light client is a process that connects to the Tendermint Full Node(s) and then tries to verify the Merkle proofs
about the blockchain application. In this document we describe mechanisms that ensure that the Tendermint light client
@ -1,4 +1,4 @@
# Peer Discovery

A Tendermint P2P network has different kinds of nodes with different requirements for connectivity to one another.
This document describes what kind of nodes Tendermint should enable and how they should work.
@ -1,4 +1,4 @@
# Peers

This document explains how Tendermint Peers are identified and how they connect to one another.
@ -83,7 +83,16 @@ type NodeInfo struct {

  Channels []int8

  Moniker string
  Other   NodeInfoOther
}

type NodeInfoOther struct {
  AminoVersion     string
  P2PVersion       string
  ConsensusVersion string
  RPCVersion       string
  TxIndex          string
  RPCAddress       string
}
```
@ -22,7 +22,8 @@ to potentially untrusted actors.

Internal functionality is exposed via method calls to other
code compiled into the tendermint binary.

- ReapMaxBytesMaxGas - get txs to propose in the next block. Guarantees that the
  size of the txs is less than MaxBytes, and gas is less than MaxGas
- Update - remove txs that were included in the last block
- ABCI.CheckTx - call ABCI app to validate the tx
@ -30,7 +30,7 @@ moniker = "anonymous"

# and verifying their commits
fast_sync = true

# Database backend: leveldb | memdb | cleveldb
db_backend = "leveldb"

# Database directory
@ -127,7 +127,7 @@ little overview what they do.

  found
  [here](https://github.com/tendermint/tendermint/blob/master/types/events.go).
  You can subscribe to them by calling the `subscribe` RPC method. Refer
  to the [RPC docs](./rpc.md) for additional information.
- `mempool` Mempool module handles all incoming transactions, whether
  they are coming from peers or the application.
- `p2p` Provides an abstraction around peer-to-peer communication. For
@ -10,11 +10,11 @@ package](https://godoc.org/github.com/tendermint/tendermint/lite).

## Overview

The objective of the light client protocol is to get a
commit for a recent block
hash where the commit includes a
majority of signatures from the last known validator set. From there,
all the application state is verifiable with [merkle
proofs](../spec/blockchain/encoding.md#iavl-tree).

## Properties
@ -1,5 +1,7 @@
# RPC

The RPC documentation is hosted here:

- https://tendermint.com/rpc/

To update the documentation, edit the relevant `godoc` comments in the [rpc/core directory](https://github.com/tendermint/tendermint/tree/develop/rpc/core).
@ -1,5 +1,41 @@
# Running in production

## Database

By default, Tendermint uses the `syndtr/goleveldb` package for its in-process
key-value database. Unfortunately, this implementation of LevelDB seems to suffer under heavy load (see
[#226](https://github.com/syndtr/goleveldb/issues/226)). It may be best to
install the real C-implementation of LevelDB and compile Tendermint to use
it via `make build_c`. See the [install instructions](../introduction/install) for details.

Tendermint keeps multiple distinct LevelDB databases in the `$TMROOT/data`:

- `blockstore.db`: Keeps the entire blockchain - stores blocks,
  block commits, and block meta data, each indexed by height. Used to sync new
  peers.
- `evidence.db`: Stores all verified evidence of misbehaviour.
- `state.db`: Stores the current blockchain state (ie. height, validators,
  consensus params). Only grows if consensus params or validators change. Also
  used to temporarily store intermediate results during block processing.
- `tx_index.db`: Indexes txs (and their results) by tx hash and by DeliverTx result tags.

By default, Tendermint will only index txs by their hash, not by their DeliverTx
result tags. See [indexing transactions](../app-dev/indexing-transactions) for
details.

There is no current strategy for pruning the databases. Consider reducing
block production by [controlling empty blocks](../tendermint-core/using-tendermint#No-Empty-Blocks)
or by increasing the `consensus.timeout_commit` param. Note both of these are
local settings and not enforced by the consensus.

We're working on [state
syncing](https://github.com/tendermint/tendermint/issues/828),
which will enable history to be thrown away
and recent application state to be directly synced. We'll need to develop solutions
for archival nodes that allow queries on historical transactions and states.
The Cosmos project has had much success just dumping the latest state of a
blockchain to disk and starting a new chain from that state.
## Logging

Default logging level (`main:info,state:info,*:`) should suffice for

@ -11,6 +47,29 @@ you're trying to debug Tendermint or asked to provide logs with debug

logging level, you can do so by running tendermint with
`--log_level="*:debug"`.
## Write Ahead Logs (WAL)

Tendermint uses write ahead logs for the consensus (`cs.wal`) and the mempool
(`mempool.wal`). Both WALs have a max size of 1GB and are automatically rotated.

The `consensus.wal` is used to ensure we can recover from a crash at any point
in the consensus state machine.
It writes all consensus messages (timeouts, proposals, block parts, and votes)
to a single file, flushing to disk before processing messages from its own
validator. Since Tendermint validators are expected to never sign a conflicting vote, the
WAL ensures we can always recover deterministically to the latest state of the consensus without
using the network or re-signing any consensus messages.

If your `consensus.wal` is corrupted, see [below](#WAL-Corruption).

The `mempool.wal` logs all incoming txs before running CheckTx, but is
otherwise not used in any programmatic way. It's just a kind of manual
safeguard. Note the mempool provides no durability guarantees - a tx sent to one or many nodes
may never make it into the blockchain if those nodes crash before being able to
propose it. Clients must monitor their txs by subscribing over websockets,
polling for them, or using `/broadcast_tx_commit`. In the worst case, txs can be
resent from the mempool WAL manually.
## DOS Exposure and Mitigation

Validators are supposed to set up [Sentry Node
@ -28,7 +87,8 @@ send & receive rate per connection (`SendRate`, `RecvRate`).

### RPC

Endpoints returning multiple entries are limited by default to return 30
elements (100 max). See the [RPC Documentation](https://tendermint.com/rpc/)
for more information.

Rate-limiting and authentication are other key aspects that help protect
against DOS attacks. While in the future we may implement these
@ -8,41 +8,43 @@ Each peer generates an ED25519 key-pair to use as a persistent

(long-term) id.

When two peers establish a TCP connection, they first each generate an
ephemeral X25519 key-pair to use for this session, and send each other
their respective ephemeral public keys. This happens in the clear.

They then each compute the shared secret, as done in a [diffie hellman
key exchange](https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange).
The shared secret is used as the symmetric key for the encryption algorithm.

We then run [hkdf-sha256](https://en.wikipedia.org/wiki/HKDF) to expand the
shared secret to generate a symmetric key for sending data,
a symmetric key for receiving data,
and a challenge to authenticate the other party.
One peer will send data with their sending key, and the other peer
will decode it using their own receiving key.
We must ensure that both parties don't try to use the same key as the sending
key, and the same key as the receiving key, as in that case nothing can be
decoded.
To ensure this, the peer with the canonically smaller ephemeral pubkey
uses the first key as their receiving key, and the second key as their sending key.
If the peer has the canonically larger ephemeral pubkey, they do the reverse.

Each peer also keeps a received message counter and a sent message counter, both
initialized to zero.
All future communication is encrypted using chacha20poly1305.
The key used to send a message is the sending key, and the key used to decode
a message is the receiving key.
The nonce for chacha20poly1305 is the relevant message counter.
It is critical that the message counter is incremented every time you send a
message and every time you receive a message that decodes correctly.

Each peer now signs the challenge with their persistent private key, and
sends the other peer an AuthSigMsg, containing their persistent public
key and the signature. On receiving an AuthSigMsg, the peer verifies the
signature.

The peers are now authenticated.
|
The peers are now authenticated.
|
||||||
|
|
||||||
All future communications can now be encrypted using the shared secret
|
The communication maintains Perfect Forward Secrecy, as
|
||||||
and the generated nonces, where each nonce is incremented by one each
|
|
||||||
time it is used. The communications maintain Perfect Forward Secrecy, as
|
|
||||||
the persistent key pair was not used for generating secrets - only for
|
the persistent key pair was not used for generating secrets - only for
|
||||||
authenticating.
|
authenticating.
|
||||||
|
|
||||||
|
|
|
@@ -156,6 +156,10 @@ Visit http://localhost:26657 in your browser to see the list of other
 endpoints. Some take no arguments (like `/status`), while others specify
 the argument name and use `_` as a placeholder.
 
+::: tip
+Find the RPC Documentation [here](https://tendermint.com/rpc/)
+:::
+
 ### Formatting
 
 The following nuances when sending/formatting transactions should be
@@ -209,23 +213,19 @@ Note that raw hex cannot be used in `POST` transactions.
 **WARNING: UNSAFE** Only do this in development and only if you can
 afford to lose all blockchain data!
 
-To reset a blockchain, stop the node, remove the `~/.tendermint/data`
-directory and run
+To reset a blockchain, stop the node and run:
 
 ```
-tendermint unsafe_reset_priv_validator
+tendermint unsafe_reset_all
 ```
 
-This final step is necessary to reset the `priv_validator.json`, which
-otherwise prevents you from making conflicting votes in the consensus
-(something that could get you in trouble if you do it on a real
-blockchain). If you don't reset the `priv_validator.json`, your fresh
-new blockchain will not make any blocks.
+This command will remove the data directory and reset private validator and
+address book files.
 
 ## Configuration
 
 Tendermint uses a `config.toml` for configuration. For details, see [the
-config specification](./tendermint-core/configuration.md).
+config specification](./configuration.md).
 
 Notable options include the socket address of the application
 (`proxy_app`), the listening address of the Tendermint peer
@@ -22,7 +22,7 @@ Validators have a cryptographic key-pair and an associated amount of
 
 There are two ways to become validator.
 
-1. They can be pre-established in the [genesis state](../../tendermint-core/using-tendermint.md#genesis)
+1. They can be pre-established in the [genesis state](./using-tendermint.md#genesis)
 2. The ABCI app responds to the EndBlock message with changes to the
    existing validator set.
 
@@ -36,4 +36,4 @@ The +2/3 set of precommit votes is called a
 [_commit_](../spec/blockchain/blockchain.md#commit). While any +2/3 set of
 precommits for the same block at the same height&round can serve as
 validation, the canonical commit is included in the next block (see
-[LastCommit](../spec/blockchain/blockchain.md#last-commit)).
+[LastCommit](../spec/blockchain/blockchain.md#lastcommit)).
@@ -59,7 +59,7 @@ func (evpool *EvidencePool) PriorityEvidence() []types.Evidence {
 
 // PendingEvidence returns uncommitted evidence up to maxBytes.
 // If maxBytes is -1, all evidence is returned.
-func (evpool *EvidencePool) PendingEvidence(maxBytes int) []types.Evidence {
+func (evpool *EvidencePool) PendingEvidence(maxBytes int64) []types.Evidence {
 	return evpool.evidenceStore.PendingEvidence(maxBytes)
 }
 
@@ -88,23 +88,23 @@ func (store *EvidenceStore) PriorityEvidence() (evidence []types.Evidence) {
 
 // PendingEvidence returns known uncommitted evidence up to maxBytes.
 // If maxBytes is -1, all evidence is returned.
-func (store *EvidenceStore) PendingEvidence(maxBytes int) (evidence []types.Evidence) {
+func (store *EvidenceStore) PendingEvidence(maxBytes int64) (evidence []types.Evidence) {
 	return store.listEvidence(baseKeyPending, maxBytes)
 }
 
 // listEvidence lists the evidence for the given prefix key up to maxBytes.
 // It is wrapped by PriorityEvidence and PendingEvidence for convenience.
 // If maxBytes is -1, there's no cap on the size of returned evidence.
-func (store *EvidenceStore) listEvidence(prefixKey string, maxBytes int) (evidence []types.Evidence) {
-	var bytes int
+func (store *EvidenceStore) listEvidence(prefixKey string, maxBytes int64) (evidence []types.Evidence) {
+	var bytes int64
 	iter := dbm.IteratePrefix(store.db, []byte(prefixKey))
 	for ; iter.Valid(); iter.Next() {
 		val := iter.Value()
 
-		if maxBytes > 0 && bytes+len(val) > maxBytes {
+		if maxBytes > 0 && bytes+int64(len(val)) > maxBytes {
 			return evidence
 		}
-		bytes += len(val)
+		bytes += int64(len(val))
 
 		var ei EvidenceInfo
 		err := cdc.UnmarshalBinaryBare(val, &ei)
@@ -6,6 +6,7 @@ import (
 	"time"
 
 	cmn "github.com/tendermint/tendermint/libs/common"
+	"github.com/tendermint/tendermint/libs/errors"
 )
 
 /* AutoFile usage
@@ -30,7 +31,10 @@ if err != nil {
 }
 */
 
-const autoFileOpenDuration = 1000 * time.Millisecond
+const (
+	autoFileOpenDuration = 1000 * time.Millisecond
+	autoFilePerms        = os.FileMode(0600)
+)
 
 // Automatically closes and re-opens file for writing.
 // This is useful for using a log file with the logrotate tool.
@@ -116,10 +120,17 @@ func (af *AutoFile) Sync() error {
 }
 
 func (af *AutoFile) openFile() error {
-	file, err := os.OpenFile(af.Path, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0600)
+	file, err := os.OpenFile(af.Path, os.O_RDWR|os.O_CREATE|os.O_APPEND, autoFilePerms)
 	if err != nil {
 		return err
 	}
+	fileInfo, err := file.Stat()
+	if err != nil {
+		return err
+	}
+	if fileInfo.Mode() != autoFilePerms {
+		return errors.NewErrPermissionsChanged(file.Name(), fileInfo.Mode(), autoFilePerms)
+	}
 	af.file = file
 	return nil
 }
 
@@ -8,41 +8,33 @@ import (
 	"testing"
 	"time"
 
+	"github.com/stretchr/testify/require"
+
 	cmn "github.com/tendermint/tendermint/libs/common"
+	"github.com/tendermint/tendermint/libs/errors"
 )
 
 func TestSIGHUP(t *testing.T) {
 
 	// First, create an AutoFile writing to a tempfile dir
 	file, err := ioutil.TempFile("", "sighup_test")
-	if err != nil {
-		t.Fatalf("Error creating tempfile: %v", err)
-	}
-	if err := file.Close(); err != nil {
-		t.Fatalf("Error closing tempfile: %v", err)
-	}
+	require.NoError(t, err)
+	err = file.Close()
+	require.NoError(t, err)
 	name := file.Name()
 
 	// Here is the actual AutoFile
 	af, err := OpenAutoFile(name)
-	if err != nil {
-		t.Fatalf("Error creating autofile: %v", err)
-	}
+	require.NoError(t, err)
 
 	// Write to the file.
 	_, err = af.Write([]byte("Line 1\n"))
-	if err != nil {
-		t.Fatalf("Error writing to autofile: %v", err)
-	}
+	require.NoError(t, err)
 	_, err = af.Write([]byte("Line 2\n"))
-	if err != nil {
-		t.Fatalf("Error writing to autofile: %v", err)
-	}
+	require.NoError(t, err)
 
 	// Move the file over
 	err = os.Rename(name, name+"_old")
-	if err != nil {
-		t.Fatalf("Error moving autofile: %v", err)
-	}
+	require.NoError(t, err)
 
 	// Send SIGHUP to self.
 	oldSighupCounter := atomic.LoadInt32(&sighupCounter)
@@ -55,16 +47,11 @@ func TestSIGHUP(t *testing.T) {
 
 	// Write more to the file.
 	_, err = af.Write([]byte("Line 3\n"))
-	if err != nil {
-		t.Fatalf("Error writing to autofile: %v", err)
-	}
+	require.NoError(t, err)
 	_, err = af.Write([]byte("Line 4\n"))
-	if err != nil {
-		t.Fatalf("Error writing to autofile: %v", err)
-	}
-	if err := af.Close(); err != nil {
-		t.Fatalf("Error closing autofile")
-	}
+	require.NoError(t, err)
+	err = af.Close()
+	require.NoError(t, err)
 
 	// Both files should exist
 	if body := cmn.MustReadFile(name + "_old"); string(body) != "Line 1\nLine 2\n" {
|
@ -74,3 +61,33 @@ func TestSIGHUP(t *testing.T) {
|
||||||
t.Errorf("Unexpected body %s", body)
|
t.Errorf("Unexpected body %s", body)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Manually modify file permissions, close, and reopen using autofile:
|
||||||
|
// We expect the file permissions to be changed back to the intended perms.
|
||||||
|
func TestOpenAutoFilePerms(t *testing.T) {
|
||||||
|
file, err := ioutil.TempFile("", "permission_test")
|
||||||
|
require.NoError(t, err)
|
||||||
|
err = file.Close()
|
||||||
|
require.NoError(t, err)
|
||||||
|
name := file.Name()
|
||||||
|
|
||||||
|
// open and change permissions
|
||||||
|
af, err := OpenAutoFile(name)
|
||||||
|
require.NoError(t, err)
|
||||||
|
err = af.file.Chmod(0755)
|
||||||
|
require.NoError(t, err)
|
||||||
|
err = af.Close()
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
// reopen and expect an ErrPermissionsChanged as Cause
|
||||||
|
af, err = OpenAutoFile(name)
|
||||||
|
require.Error(t, err)
|
||||||
|
if e, ok := err.(*errors.ErrPermissionsChanged); ok {
|
||||||
|
t.Logf("%v", e)
|
||||||
|
} else {
|
||||||
|
t.Errorf("unexpected error %v", e)
|
||||||
|
}
|
||||||
|
|
||||||
|
err = af.Close()
|
||||||
|
require.NoError(t, err)
|
||||||
|
}
|
||||||
|
|
|
@@ -1,9 +1,5 @@
 package common
 
-import (
-	"bytes"
-)
-
 // Fingerprint returns the first 6 bytes of a byte slice.
 // If the slice is less than 6 bytes, the fingerprint
 // contains trailing zeroes.
@@ -12,62 +8,3 @@ func Fingerprint(slice []byte) []byte {
 	copy(fingerprint, slice)
 	return fingerprint
 }
-
-func IsZeros(slice []byte) bool {
-	for _, byt := range slice {
-		if byt != byte(0) {
-			return false
-		}
-	}
-	return true
-}
-
-func RightPadBytes(slice []byte, l int) []byte {
-	if l < len(slice) {
-		return slice
-	}
-	padded := make([]byte, l)
-	copy(padded[0:len(slice)], slice)
-	return padded
-}
-
-func LeftPadBytes(slice []byte, l int) []byte {
-	if l < len(slice) {
-		return slice
-	}
-	padded := make([]byte, l)
-	copy(padded[l-len(slice):], slice)
-	return padded
-}
-
-func TrimmedString(b []byte) string {
-	trimSet := string([]byte{0})
-	return string(bytes.TrimLeft(b, trimSet))
-
-}
-
-// PrefixEndBytes returns the end byteslice for a noninclusive range
-// that would include all byte slices for which the input is the prefix
-func PrefixEndBytes(prefix []byte) []byte {
-	if prefix == nil {
-		return nil
-	}
-
-	end := make([]byte, len(prefix))
-	copy(end, prefix)
-	finished := false
-
-	for !finished {
-		if end[len(end)-1] != byte(255) {
-			end[len(end)-1]++
-			finished = true
-		} else {
-			end = end[:len(end)-1]
-			if len(end) == 0 {
-				end = nil
-				finished = true
-			}
-		}
-	}
-	return end
-}
@@ -1,28 +0,0 @@
-package common
-
-import (
-	"testing"
-
-	"github.com/stretchr/testify/assert"
-)
-
-func TestPrefixEndBytes(t *testing.T) {
-	assert := assert.New(t)
-
-	var testCases = []struct {
-		prefix   []byte
-		expected []byte
-	}{
-		{[]byte{byte(55), byte(255), byte(255), byte(0)}, []byte{byte(55), byte(255), byte(255), byte(1)}},
-		{[]byte{byte(55), byte(255), byte(255), byte(15)}, []byte{byte(55), byte(255), byte(255), byte(16)}},
-		{[]byte{byte(55), byte(200), byte(255)}, []byte{byte(55), byte(201)}},
-		{[]byte{byte(55), byte(255), byte(255)}, []byte{byte(56)}},
-		{[]byte{byte(255), byte(255), byte(255)}, nil},
-		{nil, nil},
-	}
-
-	for _, test := range testCases {
-		end := PrefixEndBytes(test.prefix)
-		assert.Equal(test.expected, end)
-	}
-}
@@ -1,59 +1,5 @@
 package common
 
-import (
-	"encoding/binary"
-	"sort"
-)
-
-// Sort for []uint64
-
-type Uint64Slice []uint64
-
-func (p Uint64Slice) Len() int           { return len(p) }
-func (p Uint64Slice) Less(i, j int) bool { return p[i] < p[j] }
-func (p Uint64Slice) Swap(i, j int)      { p[i], p[j] = p[j], p[i] }
-func (p Uint64Slice) Sort()              { sort.Sort(p) }
-
-func SearchUint64s(a []uint64, x uint64) int {
-	return sort.Search(len(a), func(i int) bool { return a[i] >= x })
-}
-
-func (p Uint64Slice) Search(x uint64) int { return SearchUint64s(p, x) }
-
-//--------------------------------------------------------------------------------
-
-func PutUint64LE(dest []byte, i uint64) {
-	binary.LittleEndian.PutUint64(dest, i)
-}
-
-func GetUint64LE(src []byte) uint64 {
-	return binary.LittleEndian.Uint64(src)
-}
-
-func PutUint64BE(dest []byte, i uint64) {
-	binary.BigEndian.PutUint64(dest, i)
-}
-
-func GetUint64BE(src []byte) uint64 {
-	return binary.BigEndian.Uint64(src)
-}
-
-func PutInt64LE(dest []byte, i int64) {
-	binary.LittleEndian.PutUint64(dest, uint64(i))
-}
-
-func GetInt64LE(src []byte) int64 {
-	return int64(binary.LittleEndian.Uint64(src))
-}
-
-func PutInt64BE(dest []byte, i int64) {
-	binary.BigEndian.PutUint64(dest, uint64(i))
-}
-
-func GetInt64BE(src []byte) int64 {
-	return int64(binary.BigEndian.Uint64(src))
-}
-
 // IntInSlice returns true if a is found in the list.
 func IntInSlice(a int, list []int) bool {
 	for _, b := range list {
@@ -9,8 +9,15 @@ import (
 )
 
 var (
+	// ErrAlreadyStarted is returned when somebody tries to start an already
+	// running service.
 	ErrAlreadyStarted = errors.New("already started")
+	// ErrAlreadyStopped is returned when somebody tries to stop an already
+	// stopped service (without resetting it).
 	ErrAlreadyStopped = errors.New("already stopped")
+	// ErrNotStarted is returned when somebody tries to stop a not running
+	// service.
+	ErrNotStarted = errors.New("not started")
 )
 
 // Service defines a service that can be started, stopped, and reset.
@@ -124,6 +131,8 @@ func (bs *BaseService) Start() error {
 	if atomic.CompareAndSwapUint32(&bs.started, 0, 1) {
 		if atomic.LoadUint32(&bs.stopped) == 1 {
 			bs.Logger.Error(fmt.Sprintf("Not starting %v -- already stopped", bs.name), "impl", bs.impl)
+			// revert flag
+			atomic.StoreUint32(&bs.started, 0)
 			return ErrAlreadyStopped
 		}
 		bs.Logger.Info(fmt.Sprintf("Starting %v", bs.name), "impl", bs.impl)
@@ -148,6 +157,12 @@ func (bs *BaseService) OnStart() error { return nil }
 // channel. An error will be returned if the service is already stopped.
 func (bs *BaseService) Stop() error {
 	if atomic.CompareAndSwapUint32(&bs.stopped, 0, 1) {
+		if atomic.LoadUint32(&bs.started) == 0 {
+			bs.Logger.Error(fmt.Sprintf("Not stopping %v -- have not been started yet", bs.name), "impl", bs.impl)
+			// revert flag
+			atomic.StoreUint32(&bs.stopped, 0)
+			return ErrNotStarted
+		}
 		bs.Logger.Info(fmt.Sprintf("Stopping %v", bs.name), "impl", bs.impl)
 		bs.impl.OnStop()
 		close(bs.quit)
@@ -1,28 +1,10 @@
 package common
 
 import (
-	"encoding/hex"
 	"fmt"
 	"strings"
 )
 
-// IsHex returns true for non-empty hex-string prefixed with "0x"
-func IsHex(s string) bool {
-	if len(s) > 2 && strings.EqualFold(s[:2], "0x") {
-		_, err := hex.DecodeString(s[2:])
-		return err == nil
-	}
-	return false
-}
-
-// StripHex returns hex string without leading "0x"
-func StripHex(s string) string {
-	if IsHex(s) {
-		return s[2:]
-	}
-	return s
-}
-
 // StringInSlice returns true if a is found the list.
 func StringInSlice(a string, list []string) bool {
 	for _, b := range list {
@@ -13,30 +13,12 @@ func TestStringInSlice(t *testing.T) {
 	assert.False(t, StringInSlice("", []string{}))
 }
 
-func TestIsHex(t *testing.T) {
-	notHex := []string{
-		"", " ", "a", "x", "0", "0x", "0X", "0x ", "0X ", "0X a",
-		"0xf ", "0x f", "0xp", "0x-",
-		"0xf", "0XBED", "0xF", "0xbed", // Odd lengths
-	}
-	for _, v := range notHex {
-		assert.False(t, IsHex(v), "%q is not hex", v)
-	}
-	hex := []string{
-		"0x00", "0x0a", "0x0F", "0xFFFFFF", "0Xdeadbeef", "0x0BED",
-		"0X12", "0X0A",
-	}
-	for _, v := range hex {
-		assert.True(t, IsHex(v), "%q is hex", v)
-	}
-}
-
 func TestIsASCIIText(t *testing.T) {
 	notASCIIText := []string{
 		"", "\xC2", "\xC2\xA2", "\xFF", "\x80", "\xF0", "\n", "\t",
 	}
 	for _, v := range notASCIIText {
-		assert.False(t, IsHex(v), "%q is not ascii-text", v)
+		assert.False(t, IsASCIIText(v), "%q is not ascii-text", v)
 	}
 	asciiText := []string{
 		" ", ".", "x", "$", "_", "abcdefg;", "-", "0x00", "0", "123",
@@ -1,90 +0,0 @@
-package common
-
-import (
-	"bytes"
-	"sort"
-)
-
-var (
-	Zero256 = Word256{0}
-	One256  = Word256{1}
-)
-
-type Word256 [32]byte
-
-func (w Word256) String() string        { return string(w[:]) }
-func (w Word256) TrimmedString() string { return TrimmedString(w.Bytes()) }
-func (w Word256) Copy() Word256         { return w }
-func (w Word256) Bytes() []byte         { return w[:] } // copied.
-func (w Word256) Prefix(n int) []byte   { return w[:n] }
-func (w Word256) Postfix(n int) []byte  { return w[32-n:] }
-func (w Word256) IsZero() bool {
-	accum := byte(0)
-	for _, byt := range w {
-		accum |= byt
-	}
-	return accum == 0
-}
-func (w Word256) Compare(other Word256) int {
-	return bytes.Compare(w[:], other[:])
-}
-
-func Uint64ToWord256(i uint64) Word256 {
-	buf := [8]byte{}
-	PutUint64BE(buf[:], i)
-	return LeftPadWord256(buf[:])
-}
-
-func Int64ToWord256(i int64) Word256 {
-	buf := [8]byte{}
-	PutInt64BE(buf[:], i)
-	return LeftPadWord256(buf[:])
-}
-
-func RightPadWord256(bz []byte) (word Word256) {
-	copy(word[:], bz)
-	return
-}
-
-func LeftPadWord256(bz []byte) (word Word256) {
-	copy(word[32-len(bz):], bz)
-	return
-}
-
-func Uint64FromWord256(word Word256) uint64 {
-	buf := word.Postfix(8)
-	return GetUint64BE(buf)
-}
-
-func Int64FromWord256(word Word256) int64 {
-	buf := word.Postfix(8)
-	return GetInt64BE(buf)
-}
-
-//-------------------------------------
-
-type Tuple256 struct {
-	First  Word256
-	Second Word256
-}
-
-func (tuple Tuple256) Compare(other Tuple256) int {
-	firstCompare := tuple.First.Compare(other.First)
-	if firstCompare == 0 {
-		return tuple.Second.Compare(other.Second)
-	}
-	return firstCompare
-}
-
-func Tuple256Split(t Tuple256) (Word256, Word256) {
-	return t.First, t.Second
-}
-
-type Tuple256Slice []Tuple256
-
-func (p Tuple256Slice) Len() int { return len(p) }
-func (p Tuple256Slice) Less(i, j int) bool {
-	return p[i].Compare(p[j]) < 0
-}
-func (p Tuple256Slice) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
-func (p Tuple256Slice) Sort()         { sort.Sort(p) }
@@ -1,3 +0,0 @@
-Tendermint Go-DB Copyright (C) 2015 All in Bits, Inc
-
-Released under the Apache2.0 license
@@ -1 +0,0 @@
-TODO: syndtr/goleveldb should be replaced with actual LevelDB instance
@@ -13,7 +13,10 @@ import (
 )
 
 func cleanupDBDir(dir, name string) {
-	os.RemoveAll(filepath.Join(dir, name) + ".db")
+	err := os.RemoveAll(filepath.Join(dir, name) + ".db")
+	if err != nil {
+		panic(err)
+	}
 }
 
 func testBackendGetSetDelete(t *testing.T, backend DBBackendType) {
|
||||||
dirname, err := ioutil.TempDir("", fmt.Sprintf("test_backend_%s_", backend))
|
dirname, err := ioutil.TempDir("", fmt.Sprintf("test_backend_%s_", backend))
|
||||||
require.Nil(t, err)
|
require.Nil(t, err)
|
||||||
db := NewDB("testdb", backend, dirname)
|
db := NewDB("testdb", backend, dirname)
|
||||||
|
defer cleanupDBDir(dirname, "testdb")
|
||||||
|
|
||||||
// A nonexistent key should return nil, even if the key is empty
|
// A nonexistent key should return nil, even if the key is empty
|
||||||
require.Nil(t, db.Get([]byte("")))
|
require.Nil(t, db.Get([]byte("")))
|
||||||
|
@ -55,9 +59,10 @@ func TestBackendsGetSetDelete(t *testing.T) {
|
||||||
|
|
||||||
func withDB(t *testing.T, creator dbCreator, fn func(DB)) {
|
func withDB(t *testing.T, creator dbCreator, fn func(DB)) {
|
||||||
name := fmt.Sprintf("test_%x", cmn.RandStr(12))
|
name := fmt.Sprintf("test_%x", cmn.RandStr(12))
|
||||||
db, err := creator(name, "")
|
dir := os.TempDir()
|
||||||
defer cleanupDBDir("", name)
|
db, err := creator(name, dir)
|
||||||
assert.Nil(t, err)
|
require.Nil(t, err)
|
||||||
|
defer cleanupDBDir(dir, name)
|
||||||
fn(db)
|
fn(db)
|
||||||
db.Close()
|
db.Close()
|
||||||
}
|
}
|
||||||
|
@ -161,8 +166,9 @@ func TestDBIterator(t *testing.T) {
|
||||||
|
|
||||||
func testDBIterator(t *testing.T, backend DBBackendType) {
|
func testDBIterator(t *testing.T, backend DBBackendType) {
|
||||||
name := fmt.Sprintf("test_%x", cmn.RandStr(12))
|
name := fmt.Sprintf("test_%x", cmn.RandStr(12))
|
||||||
db := NewDB(name, backend, "")
|
dir := os.TempDir()
|
||||||
defer cleanupDBDir("", name)
|
db := NewDB(name, backend, dir)
|
||||||
|
defer cleanupDBDir(dir, name)
|
||||||
|
|
||||||
for i := 0; i < 10; i++ {
|
for i := 0; i < 10; i++ {
|
||||||
if i != 6 { // but skip 6.
|
if i != 6 { // but skip 6.
|
||||||
|
|
|
@@ -5,9 +5,11 @@ package db
 import (
 	"bytes"
 	"fmt"
+	"os"
 	"testing"
 
 	"github.com/stretchr/testify/assert"
+
 	cmn "github.com/tendermint/tendermint/libs/common"
 )
 
|
||||||
// Write something
|
// Write something
|
||||||
{
|
{
|
||||||
idx := (int64(cmn.RandInt()) % numItems)
|
idx := (int64(cmn.RandInt()) % numItems)
|
||||||
internal[idx] += 1
|
internal[idx]++
|
||||||
val := internal[idx]
|
val := internal[idx]
|
||||||
idxBytes := int642Bytes(int64(idx))
|
idxBytes := int642Bytes(int64(idx))
|
||||||
valBytes := int642Bytes(int64(val))
|
valBytes := int642Bytes(int64(val))
|
||||||
|
@@ -88,8 +90,11 @@ func bytes2Int64(buf []byte) int64 {

 func TestCLevelDBBackend(t *testing.T) {
 	name := fmt.Sprintf("test_%x", cmn.RandStr(12))
-	db := NewDB(name, LevelDBBackend, "")
-	defer cleanupDBDir("", name)
+	// Can't use "" (current directory) or "./" here because levigo.Open returns:
+	// "Error initializing DB: IO error: test_XXX.db: Invalid argument"
+	dir := os.TempDir()
+	db := NewDB(name, LevelDBBackend, dir)
+	defer cleanupDBDir(dir, name)

 	_, ok := db.(*CLevelDB)
 	assert.True(t, ok)
@@ -60,11 +60,10 @@ func checkValuePanics(t *testing.T, itr Iterator) {
 	assert.Panics(t, func() { itr.Key() }, "checkValuePanics expected panic but didn't")
 }

-func newTempDB(t *testing.T, backend DBBackendType) (db DB) {
+func newTempDB(t *testing.T, backend DBBackendType) (db DB, dbDir string) {
 	dirname, err := ioutil.TempDir("", "db_common_test")
 	require.Nil(t, err)
-	db = NewDB("testdb", backend, dirname)
-	return db
+	return NewDB("testdb", backend, dirname), dirname
 }

 //----------------------------------------
@@ -1,6 +1,9 @@
 package db

-import "fmt"
+import (
+	"fmt"
+	"strings"
+)

 //----------------------------------------
 // Main entry

@@ -27,8 +30,23 @@ func registerDBCreator(backend DBBackendType, creator dbCreator, force bool) {
 	backends[backend] = creator
 }

+// NewDB creates a new database of type backend with the given name.
+// NOTE: function panics if:
+// - backend is unknown (not registered)
+// - creator function, provided during registration, returns error
 func NewDB(name string, backend DBBackendType, dir string) DB {
-	db, err := backends[backend](name, dir)
+	dbCreator, ok := backends[backend]
+	if !ok {
+		keys := make([]string, len(backends))
+		i := 0
+		for k := range backends {
+			keys[i] = string(k)
+			i++
+		}
+		panic(fmt.Sprintf("Unknown db_backend %s, expected either %s", backend, strings.Join(keys, " or ")))
+	}
+
+	db, err := dbCreator(name, dir)
 	if err != nil {
 		panic(fmt.Sprintf("Error initializing DB: %v", err))
 	}
@@ -2,6 +2,7 @@ package db

 import (
 	"fmt"
+	"os"
 	"testing"

 	"github.com/stretchr/testify/assert"

@@ -10,7 +11,9 @@ import (
 func TestDBIteratorSingleKey(t *testing.T) {
 	for backend := range backends {
 		t.Run(fmt.Sprintf("Backend %s", backend), func(t *testing.T) {
-			db := newTempDB(t, backend)
+			db, dir := newTempDB(t, backend)
+			defer os.RemoveAll(dir)

 			db.SetSync(bz("1"), bz("value_1"))
 			itr := db.Iterator(nil, nil)

@@ -28,7 +31,9 @@ func TestDBIteratorSingleKey(t *testing.T) {
 func TestDBIteratorTwoKeys(t *testing.T) {
 	for backend := range backends {
 		t.Run(fmt.Sprintf("Backend %s", backend), func(t *testing.T) {
-			db := newTempDB(t, backend)
+			db, dir := newTempDB(t, backend)
+			defer os.RemoveAll(dir)

 			db.SetSync(bz("1"), bz("value_1"))
 			db.SetSync(bz("2"), bz("value_1"))

@@ -54,7 +59,8 @@ func TestDBIteratorTwoKeys(t *testing.T) {
 func TestDBIteratorMany(t *testing.T) {
 	for backend := range backends {
 		t.Run(fmt.Sprintf("Backend %s", backend), func(t *testing.T) {
-			db := newTempDB(t, backend)
+			db, dir := newTempDB(t, backend)
+			defer os.RemoveAll(dir)

 			keys := make([][]byte, 100)
 			for i := 0; i < 100; i++ {

@@ -78,7 +84,9 @@ func TestDBIteratorMany(t *testing.T) {
 func TestDBIteratorEmpty(t *testing.T) {
 	for backend := range backends {
 		t.Run(fmt.Sprintf("Backend %s", backend), func(t *testing.T) {
-			db := newTempDB(t, backend)
+			db, dir := newTempDB(t, backend)
+			defer os.RemoveAll(dir)

 			itr := db.Iterator(nil, nil)

 			checkInvalid(t, itr)

@@ -89,7 +97,9 @@ func TestDBIteratorEmpty(t *testing.T) {
 func TestDBIteratorEmptyBeginAfter(t *testing.T) {
 	for backend := range backends {
 		t.Run(fmt.Sprintf("Backend %s", backend), func(t *testing.T) {
-			db := newTempDB(t, backend)
+			db, dir := newTempDB(t, backend)
+			defer os.RemoveAll(dir)

 			itr := db.Iterator(bz("1"), nil)

 			checkInvalid(t, itr)

@@ -100,7 +110,9 @@ func TestDBIteratorEmptyBeginAfter(t *testing.T) {
 func TestDBIteratorNonemptyBeginAfter(t *testing.T) {
 	for backend := range backends {
 		t.Run(fmt.Sprintf("Backend %s", backend), func(t *testing.T) {
-			db := newTempDB(t, backend)
+			db, dir := newTempDB(t, backend)
+			defer os.RemoveAll(dir)

 			db.SetSync(bz("1"), bz("value_1"))
 			itr := db.Iterator(bz("2"), nil)
@@ -10,7 +10,9 @@ import (
 	"sync"

 	"github.com/pkg/errors"

 	cmn "github.com/tendermint/tendermint/libs/common"
+	tmerrors "github.com/tendermint/tendermint/libs/errors"
 )

 const (

@@ -205,6 +207,13 @@ func write(path string, d []byte) error {
 		return err
 	}
 	defer f.Close()
+	fInfo, err := f.Stat()
+	if err != nil {
+		return err
+	}
+	if fInfo.Mode() != keyPerm {
+		return tmerrors.NewErrPermissionsChanged(f.Name(), keyPerm, fInfo.Mode())
+	}
 	_, err = f.Write(d)
 	if err != nil {
 		return err
@@ -4,6 +4,7 @@ import (
 	"bytes"
 	"encoding/binary"
 	"fmt"
+	"os"
 	"testing"

 	"github.com/syndtr/goleveldb/leveldb/opt"

@@ -17,6 +18,7 @@ func TestNewGoLevelDB(t *testing.T) {
 	// Test write locks
 	db, err := NewGoLevelDB(name, "")
 	require.Nil(t, err)
+	defer os.RemoveAll("./" + name + ".db")
 	_, err = NewGoLevelDB(name, "")
 	require.NotNil(t, err)
 	db.Close() // Close the db to release the lock
@@ -2,6 +2,7 @@ package db

 import (
 	"fmt"
+	"os"
 	"testing"
 )

@@ -9,7 +10,8 @@ import (
 func TestPrefixIteratorNoMatchNil(t *testing.T) {
 	for backend := range backends {
 		t.Run(fmt.Sprintf("Prefix w/ backend %s", backend), func(t *testing.T) {
-			db := newTempDB(t, backend)
+			db, dir := newTempDB(t, backend)
+			defer os.RemoveAll(dir)
 			itr := IteratePrefix(db, []byte("2"))

 			checkInvalid(t, itr)

@@ -21,7 +23,8 @@ func TestPrefixIteratorNoMatchNil(t *testing.T) {
 func TestPrefixIteratorNoMatch1(t *testing.T) {
 	for backend := range backends {
 		t.Run(fmt.Sprintf("Prefix w/ backend %s", backend), func(t *testing.T) {
-			db := newTempDB(t, backend)
+			db, dir := newTempDB(t, backend)
+			defer os.RemoveAll(dir)
 			itr := IteratePrefix(db, []byte("2"))
 			db.SetSync(bz("1"), bz("value_1"))

@@ -34,7 +37,8 @@ func TestPrefixIteratorNoMatch1(t *testing.T) {
 func TestPrefixIteratorNoMatch2(t *testing.T) {
 	for backend := range backends {
 		t.Run(fmt.Sprintf("Prefix w/ backend %s", backend), func(t *testing.T) {
-			db := newTempDB(t, backend)
+			db, dir := newTempDB(t, backend)
+			defer os.RemoveAll(dir)
 			db.SetSync(bz("3"), bz("value_3"))
 			itr := IteratePrefix(db, []byte("4"))

@@ -47,7 +51,8 @@ func TestPrefixIteratorNoMatch2(t *testing.T) {
 func TestPrefixIteratorMatch1(t *testing.T) {
 	for backend := range backends {
 		t.Run(fmt.Sprintf("Prefix w/ backend %s", backend), func(t *testing.T) {
-			db := newTempDB(t, backend)
+			db, dir := newTempDB(t, backend)
+			defer os.RemoveAll(dir)
 			db.SetSync(bz("2"), bz("value_2"))
 			itr := IteratePrefix(db, bz("2"))

@@ -65,7 +70,8 @@ func TestPrefixIteratorMatch1(t *testing.T) {
 func TestPrefixIteratorMatches1N(t *testing.T) {
 	for backend := range backends {
 		t.Run(fmt.Sprintf("Prefix w/ backend %s", backend), func(t *testing.T) {
-			db := newTempDB(t, backend)
+			db, dir := newTempDB(t, backend)
+			defer os.RemoveAll(dir)

 			// prefixed
 			db.SetSync(bz("a/1"), bz("value_1"))
@@ -0,0 +1,26 @@
+// Package errors contains errors that are thrown across packages.
+package errors
+
+import (
+	"fmt"
+	"os"
+)
+
+// ErrPermissionsChanged occurs if the file permission have changed since the file was created.
+type ErrPermissionsChanged struct {
+	name      string
+	got, want os.FileMode
+}
+
+func NewErrPermissionsChanged(name string, got, want os.FileMode) *ErrPermissionsChanged {
+	return &ErrPermissionsChanged{name: name, got: got, want: want}
+}
+
+func (e ErrPermissionsChanged) Error() string {
+	return fmt.Sprintf(
+		"file: [%v]\nexpected file permissions: %v, got: %v",
+		e.name,
+		e.want,
+		e.got,
+	)
+}
@@ -4,7 +4,6 @@ import (
 	"bytes"
 	"container/list"
 	"crypto/sha256"
-	"encoding/binary"
 	"fmt"
 	"sync"
 	"sync/atomic"

@@ -12,17 +11,27 @@ import (

 	"github.com/pkg/errors"

+	amino "github.com/tendermint/go-amino"
 	abci "github.com/tendermint/tendermint/abci/types"
+	cfg "github.com/tendermint/tendermint/config"
 	auto "github.com/tendermint/tendermint/libs/autofile"
 	"github.com/tendermint/tendermint/libs/clist"
 	cmn "github.com/tendermint/tendermint/libs/common"
 	"github.com/tendermint/tendermint/libs/log"

-	cfg "github.com/tendermint/tendermint/config"
 	"github.com/tendermint/tendermint/proxy"
 	"github.com/tendermint/tendermint/types"
 )

+// PreCheckFunc is an optional filter executed before CheckTx and rejects
+// transaction if false is returned. An example would be to ensure that a
+// transaction doesn't exceeded the block size.
+type PreCheckFunc func(types.Tx) bool
+
+// PostCheckFunc is an optional filter executed after CheckTx and rejects
+// transaction if false is returned. An example would be to ensure a
+// transaction doesn't require more gas than available for the block.
+type PostCheckFunc func(types.Tx, *abci.ResponseCheckTx) bool
+
 /*

 The mempool pushes new txs onto the proxyAppConn.

@@ -59,6 +68,27 @@ var (
 	ErrMempoolIsFull = errors.New("Mempool is full")
 )

+// PreCheckAminoMaxBytes checks that the size of the transaction plus the amino
+// overhead is smaller or equal to the expected maxBytes.
+func PreCheckAminoMaxBytes(maxBytes int64) PreCheckFunc {
+	return func(tx types.Tx) bool {
+		// We have to account for the amino overhead in the tx size as well
+		aminoOverhead := amino.UvarintSize(uint64(len(tx)))
+		return int64(len(tx)+aminoOverhead) <= maxBytes
+	}
+}
+
+// PostCheckMaxGas checks that the wanted gas is smaller or equal to the passed
+// maxGas. Returns true if maxGas is -1.
+func PostCheckMaxGas(maxGas int64) PostCheckFunc {
+	return func(tx types.Tx, res *abci.ResponseCheckTx) bool {
+		if maxGas == -1 {
+			return true
+		}
+		return res.GasWanted <= maxGas
+	}
+}
+
 // TxID is the hex encoded hash of the bytes as a types.Tx.
 func TxID(tx []byte) string {
 	return fmt.Sprintf("%X", types.Tx(tx).Hash())

@@ -81,8 +111,8 @@ type Mempool struct {
 	recheckEnd           *clist.CElement // re-checking stops here
 	notifiedTxsAvailable bool
 	txsAvailable         chan struct{} // fires once for each height, when the mempool is not empty
-	// Filter mempool to only accept txs for which filter(tx) returns true.
-	filter func(types.Tx) bool
+	preCheck  PreCheckFunc
+	postCheck PostCheckFunc

 	// Keep a cache of already-seen txs.
 	// This reduces the pressure on the proxyApp.

@@ -142,10 +172,16 @@ func (mem *Mempool) SetLogger(l log.Logger) {
 	mem.logger = l
 }

-// WithFilter sets a filter for mempool to only accept txs for which f(tx)
-// returns true.
-func WithFilter(f func(types.Tx) bool) MempoolOption {
-	return func(mem *Mempool) { mem.filter = f }
+// WithPreCheck sets a filter for the mempool to reject a tx if f(tx) returns
+// false. This is ran before CheckTx.
+func WithPreCheck(f PreCheckFunc) MempoolOption {
+	return func(mem *Mempool) { mem.preCheck = f }
+}
+
+// WithPostCheck sets a filter for the mempool to reject a tx if f(tx) returns
+// false. This is ran after CheckTx.
+func WithPostCheck(f PostCheckFunc) MempoolOption {
+	return func(mem *Mempool) { mem.postCheck = f }
 }

 // WithMetrics sets the metrics.
|
||||||
return ErrMempoolIsFull
|
return ErrMempoolIsFull
|
||||||
}
|
}
|
||||||
|
|
||||||
if mem.filter != nil && !mem.filter(tx) {
|
if mem.preCheck != nil && !mem.preCheck(tx) {
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@@ -299,12 +335,14 @@ func (mem *Mempool) resCbNormal(req *abci.Request, res *abci.Response) {
 	switch r := res.Value.(type) {
 	case *abci.Response_CheckTx:
 		tx := req.GetCheckTx().Tx
-		if r.CheckTx.Code == abci.CodeTypeOK {
+		if (r.CheckTx.Code == abci.CodeTypeOK) &&
+			mem.isPostCheckPass(tx, r.CheckTx) {
 			mem.counter++
 			memTx := &mempoolTx{
 				counter: mem.counter,
 				height:  mem.height,
-				tx:      tx,
+				gasWanted: r.CheckTx.GasWanted,
+				tx:        tx,
 			}
 			mem.txs.PushBack(memTx)
 			mem.logger.Info("Added good transaction", "tx", TxID(tx), "res", r, "total", mem.Size())
|
||||||
case *abci.Response_CheckTx:
|
case *abci.Response_CheckTx:
|
||||||
memTx := mem.recheckCursor.Value.(*mempoolTx)
|
memTx := mem.recheckCursor.Value.(*mempoolTx)
|
||||||
if !bytes.Equal(req.GetCheckTx().Tx, memTx.tx) {
|
if !bytes.Equal(req.GetCheckTx().Tx, memTx.tx) {
|
||||||
cmn.PanicSanity(fmt.Sprintf("Unexpected tx response from proxy during recheck\n"+
|
cmn.PanicSanity(
|
||||||
"Expected %X, got %X", r.CheckTx.Data, memTx.tx))
|
fmt.Sprintf(
|
||||||
|
"Unexpected tx response from proxy during recheck\nExpected %X, got %X",
|
||||||
|
r.CheckTx.Data,
|
||||||
|
memTx.tx,
|
||||||
|
),
|
||||||
|
)
|
||||||
}
|
}
|
||||||
if r.CheckTx.Code == abci.CodeTypeOK {
|
if (r.CheckTx.Code == abci.CodeTypeOK) && mem.isPostCheckPass(memTx.tx, r.CheckTx) {
|
||||||
// Good, nothing to do.
|
// Good, nothing to do.
|
||||||
} else {
|
} else {
|
||||||
// Tx became invalidated due to newly committed block.
|
// Tx became invalidated due to newly committed block.
|
||||||
|
@ -380,12 +423,11 @@ func (mem *Mempool) notifyTxsAvailable() {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// ReapMaxBytes reaps transactions from the mempool up to n bytes total.
|
// ReapMaxBytesMaxGas reaps transactions from the mempool up to maxBytes bytes total
|
||||||
// If max is negative, there is no cap on the size of all returned
|
// with the condition that the total gasWanted must be less than maxGas.
|
||||||
|
// If both maxes are negative, there is no cap on the size of all returned
|
||||||
// transactions (~ all available transactions).
|
// transactions (~ all available transactions).
|
||||||
func (mem *Mempool) ReapMaxBytes(max int) types.Txs {
|
func (mem *Mempool) ReapMaxBytesMaxGas(maxBytes, maxGas int64) types.Txs {
|
||||||
var buf [binary.MaxVarintLen64]byte
|
|
||||||
|
|
||||||
mem.proxyMtx.Lock()
|
mem.proxyMtx.Lock()
|
||||||
defer mem.proxyMtx.Unlock()
|
defer mem.proxyMtx.Unlock()
|
||||||
|
|
||||||
|
@ -394,19 +436,25 @@ func (mem *Mempool) ReapMaxBytes(max int) types.Txs {
|
||||||
time.Sleep(time.Millisecond * 10)
|
time.Sleep(time.Millisecond * 10)
|
||||||
}
|
}
|
||||||
|
|
||||||
var cur int
|
var totalBytes int64
|
||||||
|
var totalGas int64
|
||||||
// TODO: we will get a performance boost if we have a good estimate of avg
|
// TODO: we will get a performance boost if we have a good estimate of avg
|
||||||
// size per tx, and set the initial capacity based off of that.
|
// size per tx, and set the initial capacity based off of that.
|
||||||
// txs := make([]types.Tx, 0, cmn.MinInt(mem.txs.Len(), max/mem.avgTxSize))
|
// txs := make([]types.Tx, 0, cmn.MinInt(mem.txs.Len(), max/mem.avgTxSize))
|
||||||
txs := make([]types.Tx, 0, mem.txs.Len())
|
txs := make([]types.Tx, 0, mem.txs.Len())
|
||||||
for e := mem.txs.Front(); e != nil; e = e.Next() {
|
for e := mem.txs.Front(); e != nil; e = e.Next() {
|
||||||
memTx := e.Value.(*mempoolTx)
|
memTx := e.Value.(*mempoolTx)
|
||||||
// amino.UvarintSize is not used here because it won't be possible to reuse buf
|
// Check total size requirement
|
||||||
aminoOverhead := binary.PutUvarint(buf[:], uint64(len(memTx.tx)))
|
aminoOverhead := int64(amino.UvarintSize(uint64(len(memTx.tx))))
|
||||||
if max > 0 && cur+len(memTx.tx)+aminoOverhead > max {
|
if maxBytes > -1 && totalBytes+int64(len(memTx.tx))+aminoOverhead > maxBytes {
|
||||||
return txs
|
return txs
|
||||||
}
|
}
|
||||||
cur += len(memTx.tx) + aminoOverhead
|
totalBytes += int64(len(memTx.tx)) + aminoOverhead
|
||||||
|
// Check total gas requirement
|
||||||
|
if maxGas > -1 && totalGas+memTx.gasWanted > maxGas {
|
||||||
|
return txs
|
||||||
|
}
|
||||||
|
totalGas += memTx.gasWanted
|
||||||
txs = append(txs, memTx.tx)
|
txs = append(txs, memTx.tx)
|
||||||
}
|
}
|
||||||
return txs
|
return txs
|
||||||
|
@@ -439,7 +487,12 @@ func (mem *Mempool) ReapMaxTxs(max int) types.Txs {
 // Update informs the mempool that the given txs were committed and can be discarded.
 // NOTE: this should be called *after* block is committed by consensus.
 // NOTE: unsafe; Lock/Unlock must be managed by caller
-func (mem *Mempool) Update(height int64, txs types.Txs, filter func(types.Tx) bool) error {
+func (mem *Mempool) Update(
+	height int64,
+	txs types.Txs,
+	preCheck PreCheckFunc,
+	postCheck PostCheckFunc,
+) error {
 	// First, create a lookup map of txns in new txs.
 	txsMap := make(map[string]struct{}, len(txs))
 	for _, tx := range txs {

@@ -450,8 +503,11 @@ func (mem *Mempool) Update(height int64, txs types.Txs, preCheck PreCheckFunc, p
 	mem.height = height
 	mem.notifiedTxsAvailable = false

-	if filter != nil {
-		mem.filter = filter
+	if preCheck != nil {
+		mem.preCheck = preCheck
+	}
+	if postCheck != nil {
+		mem.postCheck = postCheck
 	}

 	// Remove transactions that are already in txs.
@@ -509,13 +565,18 @@ func (mem *Mempool) recheckTxs(goodTxs []types.Tx) {
 	mem.proxyAppConn.FlushAsync()
 }

+func (mem *Mempool) isPostCheckPass(tx types.Tx, r *abci.ResponseCheckTx) bool {
+	return mem.postCheck == nil || mem.postCheck(tx, r)
+}
+
 //--------------------------------------------------------------------------------

 // mempoolTx is a transaction that successfully ran
 type mempoolTx struct {
 	counter int64 // a simple incrementing counter
 	height  int64 // height that this tx had been validated in
-	tx      types.Tx //
+	gasWanted int64 // amount of gas this tx states it will require
+	tx        types.Tx //
 }

 // Height returns the height for this transaction
@@ -567,7 +628,8 @@ func (cache *mapTxCache) Push(tx types.Tx) bool {

 	// Use the tx hash in the cache
 	txHash := sha256.Sum256(tx)
-	if _, exists := cache.map_[txHash]; exists {
+	if moved, exists := cache.map_[txHash]; exists {
+		cache.list.MoveToFront(moved)
 		return false
 	}
@@ -11,16 +11,17 @@ import (
 	"testing"
 	"time"

+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+	amino "github.com/tendermint/go-amino"
 	"github.com/tendermint/tendermint/abci/example/counter"
 	"github.com/tendermint/tendermint/abci/example/kvstore"
 	abci "github.com/tendermint/tendermint/abci/types"
-	"github.com/tendermint/tendermint/libs/log"

-	cfg "github.com/tendermint/tendermint/config"
+	cfg "github.com/tendermint/tendermint/config"
+	"github.com/tendermint/tendermint/libs/log"
 	"github.com/tendermint/tendermint/proxy"
 	"github.com/tendermint/tendermint/types"
-
-	"github.com/stretchr/testify/require"
 )

 func newMempoolWithApp(cc proxy.ClientCreator) *Mempool {
@ -71,6 +72,110 @@ func checkTxs(t *testing.T, mempool *Mempool, count int) types.Txs {
|
||||||
return txs
|
return txs
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func TestReapMaxBytesMaxGas(t *testing.T) {
|
||||||
|
app := kvstore.NewKVStoreApplication()
|
||||||
|
cc := proxy.NewLocalClientCreator(app)
|
||||||
|
mempool := newMempoolWithApp(cc)
|
||||||
|
|
||||||
|
// Ensure gas calculation behaves as expected
|
||||||
|
checkTxs(t, mempool, 1)
|
||||||
|
tx0 := mempool.TxsFront().Value.(*mempoolTx)
|
||||||
|
// assert that kv store has gas wanted = 1.
|
||||||
|
require.Equal(t, app.CheckTx(tx0.tx).GasWanted, int64(1), "KVStore had a gas value neq to 1")
|
||||||
|
require.Equal(t, tx0.gasWanted, int64(1), "transactions gas was set incorrectly")
|
||||||
|
// ensure each tx is 20 bytes long
|
||||||
|
require.Equal(t, len(tx0.tx), 20, "Tx is longer than 20 bytes")
|
||||||
|
mempool.Flush()
|
||||||
|
|
||||||
|
// each table driven test creates numTxsToCreate txs with checkTx, and at the end clears all remaining txs.
|
||||||
|
// each tx has 20 bytes + amino overhead = 21 bytes, 1 gas
|
||||||
|
tests := []struct {
|
||||||
|
numTxsToCreate int
|
||||||
|
maxBytes int64
|
||||||
|
maxGas int64
|
||||||
|
+		expectedNumTxs   int
+	}{
+		{20, -1, -1, 20},
+		{20, -1, 0, 0},
+		{20, -1, 10, 10},
+		{20, -1, 30, 20},
+		{20, 0, -1, 0},
+		{20, 0, 10, 0},
+		{20, 10, 10, 0},
+		{20, 21, 10, 1},
+		{20, 210, -1, 10},
+		{20, 210, 5, 5},
+		{20, 210, 10, 10},
+		{20, 210, 15, 10},
+		{20, 20000, -1, 20},
+		{20, 20000, 5, 5},
+		{20, 20000, 30, 20},
+	}
+	for tcIndex, tt := range tests {
+		checkTxs(t, mempool, tt.numTxsToCreate)
+		got := mempool.ReapMaxBytesMaxGas(tt.maxBytes, tt.maxGas)
+		assert.Equal(t, tt.expectedNumTxs, len(got), "Got %d txs, expected %d, tc #%d",
+			len(got), tt.expectedNumTxs, tcIndex)
+		mempool.Flush()
+	}
+}
+
+func TestMempoolFilters(t *testing.T) {
+	app := kvstore.NewKVStoreApplication()
+	cc := proxy.NewLocalClientCreator(app)
+	mempool := newMempoolWithApp(cc)
+	emptyTxArr := []types.Tx{[]byte{}}
+
+	nopPreFilter := func(tx types.Tx) bool { return true }
+	nopPostFilter := func(tx types.Tx, res *abci.ResponseCheckTx) bool { return true }
+
+	// This is the same filter we expect to be used within node/node.go and state/execution.go
+	nBytePreFilter := func(n int) func(tx types.Tx) bool {
+		return func(tx types.Tx) bool {
+			// We have to account for the amino overhead in the tx size as well
+			aminoOverhead := amino.UvarintSize(uint64(len(tx)))
+			return (len(tx) + aminoOverhead) <= n
+		}
+	}
+
+	nGasPostFilter := func(n int64) func(tx types.Tx, res *abci.ResponseCheckTx) bool {
+		return func(tx types.Tx, res *abci.ResponseCheckTx) bool {
+			if n == -1 {
+				return true
+			}
+			return res.GasWanted <= n
+		}
+	}
+
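The `nBytePreFilter` above counts `amino.UvarintSize(uint64(len(tx)))` on top of the raw tx length because amino length-prefixes byte slices with a uvarint. A stdlib-only sketch (using `encoding/binary`, whose uvarint encoding matches that length prefix) shows the arithmetic:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// uvarintSize mirrors amino.UvarintSize: the number of bytes needed to
// encode n as a uvarint, i.e. the length-prefix overhead amino adds in
// front of a byte slice.
func uvarintSize(n uint64) int {
	buf := make([]byte, binary.MaxVarintLen64)
	return binary.PutUvarint(buf, n)
}

func main() {
	// A 20-byte tx carries a 1-byte length prefix, so it occupies 21
	// bytes on the wire, which is why nBytePreFilter(21) admits it
	// while nBytePreFilter(20) rejects it.
	txLen := 20
	fmt.Println(txLen + uvarintSize(uint64(txLen))) // 21
}
```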
+	// each table driven test creates numTxsToCreate txs with checkTx, and at the end clears all remaining txs.
+	// each tx has 20 bytes + amino overhead = 21 bytes, 1 gas
+	tests := []struct {
+		numTxsToCreate int
+		preFilter      func(tx types.Tx) bool
+		postFilter     func(tx types.Tx, res *abci.ResponseCheckTx) bool
+		expectedNumTxs int
+	}{
+		{10, nopPreFilter, nopPostFilter, 10},
+		{10, nBytePreFilter(10), nopPostFilter, 0},
+		{10, nBytePreFilter(20), nopPostFilter, 0},
+		{10, nBytePreFilter(21), nopPostFilter, 10},
+		{10, nopPreFilter, nGasPostFilter(-1), 10},
+		{10, nopPreFilter, nGasPostFilter(0), 0},
+		{10, nopPreFilter, nGasPostFilter(1), 10},
+		{10, nopPreFilter, nGasPostFilter(3000), 10},
+		{10, nBytePreFilter(10), nGasPostFilter(20), 0},
+		{10, nBytePreFilter(30), nGasPostFilter(20), 10},
+		{10, nBytePreFilter(21), nGasPostFilter(1), 10},
+		{10, nBytePreFilter(21), nGasPostFilter(0), 0},
+	}
+	for tcIndex, tt := range tests {
+		mempool.Update(1, emptyTxArr, tt.preFilter, tt.postFilter)
+		checkTxs(t, mempool, tt.numTxsToCreate)
+		require.Equal(t, tt.expectedNumTxs, mempool.Size(), "mempool had the incorrect size, on test case %d", tcIndex)
+		mempool.Flush()
+	}
+}
+
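The table above pairs every pre-filter with every post-filter. As a minimal sketch of that two-stage admission (`tx`, `checkResult`, and `admit` are illustrative names, not the real mempool API), a tx enters the pool only if the pre-filter accepts its raw bytes and the post-filter accepts the app's CheckTx response:

```go
package main

import "fmt"

// tx and checkResult stand in for types.Tx and abci.ResponseCheckTx.
type tx []byte

type checkResult struct{ GasWanted int64 }

// maxBytesPre rejects txs larger than n bytes (pre-CheckTx).
func maxBytesPre(n int) func(tx) bool {
	return func(t tx) bool { return len(t) <= n }
}

// maxGasPost rejects txs whose CheckTx response wants more than n gas;
// n < 0 disables the limit (post-CheckTx).
func maxGasPost(n int64) func(tx, checkResult) bool {
	return func(_ tx, r checkResult) bool { return n < 0 || r.GasWanted <= n }
}

// admit is the conjunction of both stages.
func admit(t tx, res checkResult, pre func(tx) bool, post func(tx, checkResult) bool) bool {
	return pre(t) && post(t, res)
}

func main() {
	t := make(tx, 20)
	fmt.Println(admit(t, checkResult{GasWanted: 1}, maxBytesPre(21), maxGasPost(10)))  // true
	fmt.Println(admit(t, checkResult{GasWanted: 11}, maxBytesPre(21), maxGasPost(10))) // false
}
```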
 func TestTxsAvailable(t *testing.T) {
 	app := kvstore.NewKVStoreApplication()
 	cc := proxy.NewLocalClientCreator(app)

@@ -91,7 +196,7 @@ func TestTxsAvailable(t *testing.T) {
 	// it should fire once now for the new height
 	// since there are still txs left
 	committedTxs, txs := txs[:50], txs[50:]
-	if err := mempool.Update(1, committedTxs, nil); err != nil {
+	if err := mempool.Update(1, committedTxs, nil, nil); err != nil {
 		t.Error(err)
 	}
 	ensureFire(t, mempool.TxsAvailable(), timeoutMS)

@@ -103,7 +208,7 @@ func TestTxsAvailable(t *testing.T) {
 	// now call update with all the txs. it should not fire as there are no txs left
 	committedTxs = append(txs, moreTxs...)
-	if err := mempool.Update(2, committedTxs, nil); err != nil {
+	if err := mempool.Update(2, committedTxs, nil, nil); err != nil {
 		t.Error(err)
 	}
 	ensureNoFire(t, mempool.TxsAvailable(), timeoutMS)

@@ -149,7 +254,7 @@ func TestSerialReap(t *testing.T) {
 	}

 	reapCheck := func(exp int) {
-		txs := mempool.ReapMaxBytes(-1)
+		txs := mempool.ReapMaxBytesMaxGas(-1, -1)
 		require.Equal(t, len(txs), exp, fmt.Sprintf("Expected to reap %v txs but got %v", exp, len(txs)))
 	}

@@ -160,7 +265,7 @@ func TestSerialReap(t *testing.T) {
 		binary.BigEndian.PutUint64(txBytes, uint64(i))
 		txs = append(txs, txBytes)
 	}
-	if err := mempool.Update(0, txs, nil); err != nil {
+	if err := mempool.Update(0, txs, nil, nil); err != nil {
 		t.Error(err)
 	}
 }
351	node/node.go
@@ -7,21 +7,23 @@ import (
 	"fmt"
 	"net"
 	"net/http"
+	_ "net/http/pprof"
+	"strings"
+	"time"

 	"github.com/prometheus/client_golang/prometheus"
 	"github.com/prometheus/client_golang/prometheus/promhttp"

 	amino "github.com/tendermint/go-amino"
-	abci "github.com/tendermint/tendermint/abci/types"
-	"github.com/tendermint/tendermint/crypto/ed25519"
-	cmn "github.com/tendermint/tendermint/libs/common"
-	dbm "github.com/tendermint/tendermint/libs/db"
-	"github.com/tendermint/tendermint/libs/log"
-
+	abci "github.com/tendermint/tendermint/abci/types"
 	bc "github.com/tendermint/tendermint/blockchain"
 	cfg "github.com/tendermint/tendermint/config"
 	cs "github.com/tendermint/tendermint/consensus"
+	"github.com/tendermint/tendermint/crypto/ed25519"
 	"github.com/tendermint/tendermint/evidence"
+	cmn "github.com/tendermint/tendermint/libs/common"
+	dbm "github.com/tendermint/tendermint/libs/db"
+	"github.com/tendermint/tendermint/libs/log"
 	mempl "github.com/tendermint/tendermint/mempool"
 	"github.com/tendermint/tendermint/p2p"
 	"github.com/tendermint/tendermint/p2p/pex"

@@ -37,10 +39,8 @@ import (
 	"github.com/tendermint/tendermint/state/txindex/kv"
 	"github.com/tendermint/tendermint/state/txindex/null"
 	"github.com/tendermint/tendermint/types"
+	tmtime "github.com/tendermint/tendermint/types/time"
 	"github.com/tendermint/tendermint/version"
-
-	_ "net/http/pprof"
-
-	"strings"
 )

 //------------------------------------------------------------------------------

@@ -124,9 +124,12 @@ type Node struct {
 	privValidator types.PrivValidator // local node's validator key

 	// network
-	sw       *p2p.Switch  // p2p connections
-	addrBook pex.AddrBook // known peers
-	nodeKey  *p2p.NodeKey // our node privkey
+	transport   *p2p.MultiplexTransport
+	sw          *p2p.Switch  // p2p connections
+	addrBook    pex.AddrBook // known peers
+	nodeInfo    p2p.NodeInfo
+	nodeKey     *p2p.NodeKey // our node privkey
+	isListening bool

 	// services
 	eventBus *types.EventBus // pub/sub for services

@@ -185,18 +188,22 @@ func NewNode(config *cfg.Config,
 		return nil, err
 	}

-	// Create the proxyApp, which manages connections (consensus, mempool, query)
-	// and sync tendermint and the app by performing a handshake
-	// and replaying any necessary blocks
-	consensusLogger := logger.With("module", "consensus")
-	handshaker := cs.NewHandshaker(stateDB, state, blockStore, genDoc)
-	handshaker.SetLogger(consensusLogger)
-	proxyApp := proxy.NewAppConns(clientCreator, handshaker)
+	// Create the proxyApp and establish connections to the ABCI app (consensus, mempool, query).
+	proxyApp := proxy.NewAppConns(clientCreator)
 	proxyApp.SetLogger(logger.With("module", "proxy"))
 	if err := proxyApp.Start(); err != nil {
 		return nil, fmt.Errorf("Error starting proxy app connections: %v", err)
 	}

+	// Create the handshaker, which calls RequestInfo and replays any blocks
+	// as necessary to sync tendermint with the app.
+	consensusLogger := logger.With("module", "consensus")
+	handshaker := cs.NewHandshaker(stateDB, state, blockStore, genDoc)
+	handshaker.SetLogger(consensusLogger)
+	if err := handshaker.Handshake(proxyApp); err != nil {
+		return nil, fmt.Errorf("Error during handshake: %v", err)
+	}
+
 	// reload the state (it may have been updated by the handshake)
 	state = sm.LoadState(stateDB)
@@ -241,13 +248,22 @@ func NewNode(config *cfg.Config,
 	csMetrics, p2pMetrics, memplMetrics := metricsProvider()

 	// Make MempoolReactor
-	maxBytes := state.ConsensusParams.TxSize.MaxBytes
 	mempool := mempl.NewMempool(
 		config.Mempool,
 		proxyApp.Mempool(),
 		state.LastBlockHeight,
 		mempl.WithMetrics(memplMetrics),
-		mempl.WithFilter(func(tx types.Tx) bool { return len(tx) <= maxBytes }),
+		mempl.WithPreCheck(
+			mempl.PreCheckAminoMaxBytes(
+				types.MaxDataBytesUnknownEvidence(
+					state.ConsensusParams.BlockSize.MaxBytes,
+					state.Validators.Size(),
+				),
+			),
+		),
+		mempl.WithPostCheck(
+			mempl.PostCheckMaxGas(state.ConsensusParams.BlockSize.MaxGas),
+		),
 	)
 	mempoolLogger := logger.With("module", "mempool")
 	mempool.SetLogger(mempoolLogger)
@@ -296,70 +312,6 @@ func NewNode(config *cfg.Config,
 	consensusReactor := cs.NewConsensusReactor(consensusState, fastSync)
 	consensusReactor.SetLogger(consensusLogger)

-	p2pLogger := logger.With("module", "p2p")
-
-	sw := p2p.NewSwitch(config.P2P, p2p.WithMetrics(p2pMetrics))
-	sw.SetLogger(p2pLogger)
-	sw.AddReactor("MEMPOOL", mempoolReactor)
-	sw.AddReactor("BLOCKCHAIN", bcReactor)
-	sw.AddReactor("CONSENSUS", consensusReactor)
-	sw.AddReactor("EVIDENCE", evidenceReactor)
-	p2pLogger.Info("P2P Node ID", "ID", nodeKey.ID(), "file", config.NodeKeyFile())
-
-	// Optionally, start the pex reactor
-	//
-	// TODO:
-	//
-	// We need to set Seeds and PersistentPeers on the switch,
-	// since it needs to be able to use these (and their DNS names)
-	// even if the PEX is off. We can include the DNS name in the NetAddress,
-	// but it would still be nice to have a clear list of the current "PersistentPeers"
-	// somewhere that we can return with net_info.
-	//
-	// If PEX is on, it should handle dialing the seeds. Otherwise the switch does it.
-	// Note we currently use the addrBook regardless at least for AddOurAddress
-	addrBook := pex.NewAddrBook(config.P2P.AddrBookFile(), config.P2P.AddrBookStrict)
-	addrBook.SetLogger(p2pLogger.With("book", config.P2P.AddrBookFile()))
-	if config.P2P.PexReactor {
-		// TODO persistent peers ? so we can have their DNS addrs saved
-		pexReactor := pex.NewPEXReactor(addrBook,
-			&pex.PEXReactorConfig{
-				Seeds:    splitAndTrimEmpty(config.P2P.Seeds, ",", " "),
-				SeedMode: config.P2P.SeedMode,
-			})
-		pexReactor.SetLogger(p2pLogger)
-		sw.AddReactor("PEX", pexReactor)
-	}
-
-	sw.SetAddrBook(addrBook)
-
-	// Filter peers by addr or pubkey with an ABCI query.
-	// If the query return code is OK, add peer.
-	// XXX: Query format subject to change
-	if config.FilterPeers {
-		// NOTE: addr is ip:port
-		sw.SetAddrFilter(func(addr net.Addr) error {
-			resQuery, err := proxyApp.Query().QuerySync(abci.RequestQuery{Path: fmt.Sprintf("/p2p/filter/addr/%s", addr.String())})
-			if err != nil {
-				return err
-			}
-			if resQuery.IsErr() {
-				return fmt.Errorf("Error querying abci app: %v", resQuery)
-			}
-			return nil
-		})
-		sw.SetIDFilter(func(id p2p.ID) error {
-			resQuery, err := proxyApp.Query().QuerySync(abci.RequestQuery{Path: fmt.Sprintf("/p2p/filter/id/%s", id)})
-			if err != nil {
-				return err
-			}
-			if resQuery.IsErr() {
-				return fmt.Errorf("Error querying abci app: %v", resQuery)
-			}
-			return nil
-		})
-	}
-
 	eventBus := types.NewEventBus()
 	eventBus.SetLogger(logger.With("module", "events"))
@@ -389,6 +341,113 @@ func NewNode(config *cfg.Config,
 	indexerService := txindex.NewIndexerService(txIndexer, eventBus)
 	indexerService.SetLogger(logger.With("module", "txindex"))

+	var (
+		p2pLogger = logger.With("module", "p2p")
+		nodeInfo  = makeNodeInfo(config, nodeKey.ID(), txIndexer, genDoc.ChainID)
+	)
+
+	// Setup Transport.
+	var (
+		transport   = p2p.NewMultiplexTransport(nodeInfo, *nodeKey)
+		connFilters = []p2p.ConnFilterFunc{}
+		peerFilters = []p2p.PeerFilterFunc{}
+	)
+
+	if !config.P2P.AllowDuplicateIP {
+		connFilters = append(connFilters, p2p.ConnDuplicateIPFilter())
+	}
+
+	// Filter peers by addr or pubkey with an ABCI query.
+	// If the query return code is OK, add peer.
+	// XXX: Query format subject to change
+	if config.FilterPeers {
+		connFilters = append(
+			connFilters,
+			// ABCI query for address filtering.
+			func(_ p2p.ConnSet, c net.Conn, _ []net.IP) error {
+				res, err := proxyApp.Query().QuerySync(abci.RequestQuery{
+					Path: fmt.Sprintf("/p2p/filter/addr/%s", c.RemoteAddr().String()),
+				})
+				if err != nil {
+					return err
+				}
+				if res.IsErr() {
+					return fmt.Errorf("Error querying abci app: %v", res)
+				}
+
+				return nil
+			},
+		)
+
+		peerFilters = append(
+			peerFilters,
+			// ABCI query for ID filtering.
+			func(_ p2p.IPeerSet, p p2p.Peer) error {
+				res, err := proxyApp.Query().QuerySync(abci.RequestQuery{
+					Path: fmt.Sprintf("/p2p/filter/id/%s", p.ID()),
+				})
+				if err != nil {
+					return err
+				}
+				if res.IsErr() {
+					return fmt.Errorf("Error querying abci app: %v", res)
+				}
+
+				return nil
+			},
+		)
+	}
+
+	p2p.MultiplexTransportConnFilters(connFilters...)(transport)
+
+	// Setup Switch.
+	sw := p2p.NewSwitch(
+		config.P2P,
+		transport,
+		p2p.WithMetrics(p2pMetrics),
+		p2p.SwitchPeerFilters(peerFilters...),
+	)
+	sw.SetLogger(p2pLogger)
+	sw.AddReactor("MEMPOOL", mempoolReactor)
+	sw.AddReactor("BLOCKCHAIN", bcReactor)
+	sw.AddReactor("CONSENSUS", consensusReactor)
+	sw.AddReactor("EVIDENCE", evidenceReactor)
+	sw.SetNodeInfo(nodeInfo)
+	sw.SetNodeKey(nodeKey)
+
+	p2pLogger.Info("P2P Node ID", "ID", nodeKey.ID(), "file", config.NodeKeyFile())
+
+	// Optionally, start the pex reactor
+	//
+	// TODO:
+	//
+	// We need to set Seeds and PersistentPeers on the switch,
+	// since it needs to be able to use these (and their DNS names)
+	// even if the PEX is off. We can include the DNS name in the NetAddress,
+	// but it would still be nice to have a clear list of the current "PersistentPeers"
+	// somewhere that we can return with net_info.
+	//
+	// If PEX is on, it should handle dialing the seeds. Otherwise the switch does it.
+	// Note we currently use the addrBook regardless at least for AddOurAddress
+	addrBook := pex.NewAddrBook(config.P2P.AddrBookFile(), config.P2P.AddrBookStrict)
+
+	// Add ourselves to addrbook to prevent dialing ourselves
+	addrBook.AddOurAddress(nodeInfo.NetAddress())
+
+	addrBook.SetLogger(p2pLogger.With("book", config.P2P.AddrBookFile()))
+	if config.P2P.PexReactor {
+		// TODO persistent peers ? so we can have their DNS addrs saved
+		pexReactor := pex.NewPEXReactor(addrBook,
+			&pex.PEXReactorConfig{
+				Seeds:    splitAndTrimEmpty(config.P2P.Seeds, ",", " "),
+				SeedMode: config.P2P.SeedMode,
+			})
+		pexReactor.SetLogger(p2pLogger)
+		sw.AddReactor("PEX", pexReactor)
+	}
+
+	sw.SetAddrBook(addrBook)
+
 	// run the profile server
 	profileHost := config.ProfListenAddress
 	if profileHost != "" {
@@ -402,9 +461,11 @@ func NewNode(config *cfg.Config,
 		genesisDoc:    genDoc,
 		privValidator: privValidator,

-		sw:       sw,
-		addrBook: addrBook,
-		nodeKey:  nodeKey,
+		transport: transport,
+		sw:        sw,
+		addrBook:  addrBook,
+		nodeInfo:  nodeInfo,
+		nodeKey:   nodeKey,

 		stateDB:    stateDB,
 		blockStore: blockStore,

@@ -424,26 +485,18 @@ func NewNode(config *cfg.Config,

 // OnStart starts the Node. It implements cmn.Service.
 func (n *Node) OnStart() error {
+	now := tmtime.Now()
+	genTime := n.genesisDoc.GenesisTime
+	if genTime.After(now) {
+		n.Logger.Info("Genesis time is in the future. Sleeping until then...", "genTime", genTime)
+		time.Sleep(genTime.Sub(now))
+	}
+
 	err := n.eventBus.Start()
 	if err != nil {
 		return err
 	}

-	// Create & add listener
-	l := p2p.NewDefaultListener(
-		n.config.P2P.ListenAddress,
-		n.config.P2P.ExternalAddress,
-		n.config.P2P.UPNP,
-		n.Logger.With("module", "p2p"))
-	n.sw.AddListener(l)
-
-	nodeInfo := n.makeNodeInfo(n.nodeKey.ID())
-	n.sw.SetNodeInfo(nodeInfo)
-	n.sw.SetNodeKey(n.nodeKey)
-
-	// Add ourselves to addrbook to prevent dialing ourselves
-	n.addrBook.AddOurAddress(nodeInfo.NetAddress())
-
 	// Add private IDs to addrbook to block those peers being added
 	n.addrBook.AddPrivateIDs(splitAndTrimEmpty(n.config.P2P.PrivatePeerIDs, ",", " "))

@@ -462,6 +515,17 @@ func (n *Node) OnStart() error {
 		n.prometheusSrv = n.startPrometheusServer(n.config.Instrumentation.PrometheusListenAddr)
 	}

+	// Start the transport.
+	addr, err := p2p.NewNetAddressStringWithOptionalID(n.config.P2P.ListenAddress)
+	if err != nil {
+		return err
+	}
+	if err := n.transport.Listen(*addr); err != nil {
+		return err
+	}
+
+	n.isListening = true
+
 	// Start the switch (the P2P server).
 	err = n.sw.Start()
 	if err != nil {

@@ -494,6 +558,12 @@ func (n *Node) OnStop() {
 	// TODO: gracefully disconnect from peers.
 	n.sw.Stop()

+	if err := n.transport.Close(); err != nil {
+		n.Logger.Error("Error closing transport", "err", err)
+	}
+
+	n.isListening = false
+
 	// finally stop the listeners / external services
 	for _, l := range n.rpcListeners {
 		n.Logger.Info("Closing rpc listener", "listener", l)

@@ -524,13 +594,6 @@ func (n *Node) RunForever() {
 	})
 }

-// AddListener adds a listener to accept inbound peer connections.
-// It should be called before starting the Node.
-// The first listener is the primary listener (in NodeInfo)
-func (n *Node) AddListener(l p2p.Listener) {
-	n.sw.AddListener(l)
-}
-
 // ConfigureRPC sets all variables in rpccore so they will serve
 // rpc calls from this node
 func (n *Node) ConfigureRPC() {
@@ -539,7 +602,8 @@ func (n *Node) ConfigureRPC() {
 	rpccore.SetConsensusState(n.consensusState)
 	rpccore.SetMempool(n.mempoolReactor.Mempool)
 	rpccore.SetEvidencePool(n.evidencePool)
-	rpccore.SetSwitch(n.sw)
+	rpccore.SetP2PPeers(n.sw)
+	rpccore.SetP2PTransport(n)
 	rpccore.SetPubKey(n.privValidator.GetPubKey())
 	rpccore.SetGenesisDoc(n.genesisDoc)
 	rpccore.SetAddrBook(n.addrBook)

@@ -671,14 +735,36 @@ func (n *Node) ProxyApp() proxy.AppConns {
 	return n.proxyApp
 }

-func (n *Node) makeNodeInfo(nodeID p2p.ID) p2p.NodeInfo {
+//------------------------------------------------------------------------------
+
+func (n *Node) Listeners() []string {
+	return []string{
+		fmt.Sprintf("Listener(@%v)", n.config.P2P.ExternalAddress),
+	}
+}
+
+func (n *Node) IsListening() bool {
+	return n.isListening
+}
+
+// NodeInfo returns the Node's Info from the Switch.
+func (n *Node) NodeInfo() p2p.NodeInfo {
+	return n.nodeInfo
+}
+
+func makeNodeInfo(
+	config *cfg.Config,
+	nodeID p2p.ID,
+	txIndexer txindex.TxIndexer,
+	chainID string,
+) p2p.NodeInfo {
 	txIndexerStatus := "on"
-	if _, ok := n.txIndexer.(*null.TxIndex); ok {
+	if _, ok := txIndexer.(*null.TxIndex); ok {
 		txIndexerStatus = "off"
 	}
 	nodeInfo := p2p.NodeInfo{
 		ID:      nodeID,
-		Network: n.genesisDoc.ChainID,
+		Network: chainID,
 		Version: version.Version,
 		Channels: []byte{
 			bc.BlockchainChannel,

@@ -686,44 +772,34 @@ func (n *Node) makeNodeInfo(nodeID p2p.ID) p2p.NodeInfo {
 			mempl.MempoolChannel,
 			evidence.EvidenceChannel,
 		},
-		Moniker: n.config.Moniker,
-		Other: []string{
-			fmt.Sprintf("amino_version=%v", amino.Version),
-			fmt.Sprintf("p2p_version=%v", p2p.Version),
-			fmt.Sprintf("consensus_version=%v", cs.Version),
-			fmt.Sprintf("rpc_version=%v/%v", rpc.Version, rpccore.Version),
-			fmt.Sprintf("tx_index=%v", txIndexerStatus),
+		Moniker: config.Moniker,
+		Other: p2p.NodeInfoOther{
+			AminoVersion:     amino.Version,
+			P2PVersion:       p2p.Version,
+			ConsensusVersion: cs.Version,
+			RPCVersion:       fmt.Sprintf("%v/%v", rpc.Version, rpccore.Version),
+			TxIndex:          txIndexerStatus,
+			RPCAddress:       config.RPC.ListenAddress,
 		},
 	}

-	if n.config.P2P.PexReactor {
+	if config.P2P.PexReactor {
 		nodeInfo.Channels = append(nodeInfo.Channels, pex.PexChannel)
 	}

-	rpcListenAddr := n.config.RPC.ListenAddress
-	nodeInfo.Other = append(nodeInfo.Other, fmt.Sprintf("rpc_addr=%v", rpcListenAddr))
-
-	if !n.sw.IsListening() {
-		return nodeInfo
-	}
-
-	p2pListener := n.sw.Listeners()[0]
-	p2pHost := p2pListener.ExternalAddressHost()
-	p2pPort := p2pListener.ExternalAddress().Port
-	nodeInfo.ListenAddr = fmt.Sprintf("%v:%v", p2pHost, p2pPort)
+	lAddr := config.P2P.ExternalAddress
+
+	if lAddr == "" {
+		lAddr = config.P2P.ListenAddress
+	}
+
+	nodeInfo.ListenAddr = lAddr

 	return nodeInfo
 }

 //------------------------------------------------------------------------------

-// NodeInfo returns the Node's Info from the Switch.
-func (n *Node) NodeInfo() p2p.NodeInfo {
-	return n.sw.NodeInfo()
-}
-
-//------------------------------------------------------------------------------
-
 var (
 	genesisDocKey = []byte("genesisDoc")
 )

@@ -751,7 +827,6 @@ func saveGenesisDoc(db dbm.DB, genDoc *types.GenesisDoc) {
 	db.SetSync(genesisDocKey, bytes)
 }

-
 // splitAndTrimEmpty slices s into all subslices separated by sep and returns a
 // slice of the string s with all leading and trailing Unicode code points
 // contained in cutset removed. If sep is empty, SplitAndTrim splits after each
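The new `makeNodeInfo` advertises `config.P2P.ExternalAddress` and falls back to the bind address when it is empty, instead of reading the address back from a running listener. That selection logic, extracted as a tiny standalone sketch (`listenAddr` is an illustrative name):

```go
package main

import "fmt"

// listenAddr picks the address to advertise in NodeInfo: the configured
// external address, or the bind address when none is set.
func listenAddr(externalAddr, bindAddr string) string {
	if externalAddr == "" {
		return bindAddr
	}
	return externalAddr
}

func main() {
	fmt.Println(listenAddr("1.2.3.4:26656", "0.0.0.0:26656")) // 1.2.3.4:26656
	fmt.Println(listenAddr("", "0.0.0.0:26656"))              // 0.0.0.0:26656
}
```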
@@ -14,6 +14,8 @@ import (

 	cfg "github.com/tendermint/tendermint/config"
 	"github.com/tendermint/tendermint/types"
+
+	tmtime "github.com/tendermint/tendermint/types/time"
 )

 func TestNodeStartStop(t *testing.T) {

@@ -75,3 +77,17 @@ func TestSplitAndTrimEmpty(t *testing.T) {
 		assert.Equal(t, tc.expected, splitAndTrimEmpty(tc.s, tc.sep, tc.cutset), "%s", tc.s)
 	}
 }
+
+func TestNodeDelayedStop(t *testing.T) {
+	config := cfg.ResetTestRoot("node_delayed_node_test")
+	now := tmtime.Now()
+
+	// create & start node
+	n, err := DefaultNewNode(config, log.TestingLogger())
+	n.GenesisDoc().GenesisTime = now.Add(5 * time.Second)
+	assert.NoError(t, err)
+
+	n.Start()
+	startTime := tmtime.Now()
+	assert.Equal(t, true, startTime.After(n.GenesisDoc().GenesisTime))
+}
@@ -0,0 +1,73 @@
+package p2p
+
+import (
+	"net"
+	"sync"
+)
+
+// ConnSet is a lookup table for connections and all their ips.
+type ConnSet interface {
+	Has(net.Conn) bool
+	HasIP(net.IP) bool
+	Set(net.Conn, []net.IP)
+	Remove(net.Conn)
+}
+
+type connSetItem struct {
+	conn net.Conn
+	ips  []net.IP
+}
+
+type connSet struct {
+	sync.RWMutex
+
+	conns map[string]connSetItem
+}
+
+// NewConnSet returns a ConnSet implementation.
+func NewConnSet() *connSet {
+	return &connSet{
+		conns: map[string]connSetItem{},
+	}
+}
+
+func (cs *connSet) Has(c net.Conn) bool {
+	cs.RLock()
+	defer cs.RUnlock()
+
+	_, ok := cs.conns[c.RemoteAddr().String()]
+
+	return ok
+}
+
+func (cs *connSet) HasIP(ip net.IP) bool {
+	cs.RLock()
+	defer cs.RUnlock()
+
+	for _, c := range cs.conns {
+		for _, known := range c.ips {
+			if known.Equal(ip) {
+				return true
+			}
+		}
+	}
+
+	return false
+}
+
+func (cs *connSet) Remove(c net.Conn) {
+	cs.Lock()
+	defer cs.Unlock()
+
+	delete(cs.conns, c.RemoteAddr().String())
+}
+
+func (cs *connSet) Set(c net.Conn, ips []net.IP) {
+	cs.Lock()
+	defer cs.Unlock()
+
+	cs.conns[c.RemoteAddr().String()] = connSetItem{
+		conn: c,
+		ips:  ips,
+	}
+}
@ -5,6 +5,98 @@ import (
	"net"
)

// ErrFilterTimeout indicates that a filter operation timed out.
type ErrFilterTimeout struct{}

func (e ErrFilterTimeout) Error() string {
	return "filter timed out"
}

// ErrRejected indicates that a Peer was rejected carrying additional
// information as to the reason.
type ErrRejected struct {
	addr              NetAddress
	conn              net.Conn
	err               error
	id                ID
	isAuthFailure     bool
	isDuplicate       bool
	isFiltered        bool
	isIncompatible    bool
	isNodeInfoInvalid bool
	isSelf            bool
}

// Addr returns the NetAddress for the rejected Peer.
func (e ErrRejected) Addr() NetAddress {
	return e.addr
}

func (e ErrRejected) Error() string {
	if e.isAuthFailure {
		return fmt.Sprintf("auth failure: %s", e.err)
	}

	if e.isDuplicate {
		if e.conn != nil {
			return fmt.Sprintf(
				"duplicate CONN<%s>: %s",
				e.conn.RemoteAddr().String(),
				e.err,
			)
		}
		if e.id != "" {
			return fmt.Sprintf("duplicate ID<%v>: %s", e.id, e.err)
		}
	}

	if e.isFiltered {
		if e.conn != nil {
			return fmt.Sprintf(
				"filtered CONN<%s>: %s",
				e.conn.RemoteAddr().String(),
				e.err,
			)
		}

		if e.id != "" {
			return fmt.Sprintf("filtered ID<%v>: %s", e.id, e.err)
		}
	}

	if e.isIncompatible {
		return fmt.Sprintf("incompatible: %s", e.err)
	}

	if e.isNodeInfoInvalid {
		return fmt.Sprintf("invalid NodeInfo: %s", e.err)
	}

	if e.isSelf {
		return fmt.Sprintf("self ID<%v>", e.id)
	}

	return fmt.Sprintf("%s", e.err)
}

// IsAuthFailure when Peer authentication was unsuccessful.
func (e ErrRejected) IsAuthFailure() bool { return e.isAuthFailure }

// IsDuplicate when Peer ID or IP are present already.
func (e ErrRejected) IsDuplicate() bool { return e.isDuplicate }

// IsFiltered when Peer ID or IP was filtered.
func (e ErrRejected) IsFiltered() bool { return e.isFiltered }

// IsIncompatible when Peer NodeInfo is not compatible with our own.
func (e ErrRejected) IsIncompatible() bool { return e.isIncompatible }

// IsNodeInfoInvalid when the sent NodeInfo is not valid.
func (e ErrRejected) IsNodeInfoInvalid() bool { return e.isNodeInfoInvalid }

// IsSelf when Peer is our own node.
func (e ErrRejected) IsSelf() bool { return e.isSelf }

// ErrSwitchDuplicatePeerID to be raised when a peer is connecting with a known
// ID.
type ErrSwitchDuplicatePeerID struct {

@ -47,6 +139,13 @@ func (e ErrSwitchAuthenticationFailure) Error() string {
	)
}

// ErrTransportClosed is raised when the Transport has been closed.
type ErrTransportClosed struct{}

func (e ErrTransportClosed) Error() string {
	return "transport has been closed"
}

//-------------------------------------------------------------------

type ErrNetAddressNoID struct {
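The `ErrRejected` type above bundles every rejection reason into one value with boolean predicates, so callers type-assert the error and branch instead of string-matching. A trimmed sketch of that caller-side pattern (only one predicate kept; the hypothetical `accept` function stands in for the transport's real accept path):

```go
package main

import "fmt"

// ErrRejected, reduced to a single predicate for illustration.
// The full type in p2p/errors.go carries addr, conn, id and more flags.
type ErrRejected struct {
	err    error
	isSelf bool
}

func (e ErrRejected) Error() string {
	if e.isSelf {
		return "self"
	}
	return e.err.Error()
}

// IsSelf when Peer is our own node.
func (e ErrRejected) IsSelf() bool { return e.isSelf }

// accept is a stand-in for the transport accept path: it rejects the
// peer because it turned out to be our own node.
func accept() error {
	return ErrRejected{isSelf: true}
}

func main() {
	err := accept()
	// Type-assert to recover the structured rejection reason.
	if r, ok := err.(ErrRejected); ok && r.IsSelf() {
		fmt.Println("dropping connection to self")
	}
}
```

Keeping the flags unexported and exposing them only through `Is*` methods lets the error format itself freely in `Error()` without callers depending on the message text.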
p2p/listener.go
@ -1,286 +0,0 @@
package p2p

import (
	"fmt"
	"net"
	"strconv"
	"strings"
	"time"

	cmn "github.com/tendermint/tendermint/libs/common"
	"github.com/tendermint/tendermint/libs/log"
	"github.com/tendermint/tendermint/p2p/upnp"
)

// Listener is a network listener for stream-oriented protocols, providing
// convenient methods to get listener's internal and external addresses.
// Clients are supposed to read incoming connections from a channel, returned
// by Connections() method.
type Listener interface {
	Connections() <-chan net.Conn
	InternalAddress() *NetAddress
	ExternalAddress() *NetAddress
	ExternalAddressHost() string
	String() string
	Stop() error
}

// DefaultListener is a cmn.Service, running net.Listener underneath.
// Optionally, UPnP is used upon calling NewDefaultListener to resolve external
// address.
type DefaultListener struct {
	cmn.BaseService

	listener    net.Listener
	intAddr     *NetAddress
	extAddr     *NetAddress
	connections chan net.Conn
}

var _ Listener = (*DefaultListener)(nil)

const (
	numBufferedConnections = 10
	defaultExternalPort    = 8770
	tryListenSeconds       = 5
)

func splitHostPort(addr string) (host string, port int) {
	host, portStr, err := net.SplitHostPort(addr)
	if err != nil {
		panic(err)
	}
	port, err = strconv.Atoi(portStr)
	if err != nil {
		panic(err)
	}
	return host, port
}

// NewDefaultListener creates a new DefaultListener on lAddr, optionally trying
// to determine external address using UPnP.
func NewDefaultListener(
	fullListenAddrString string,
	externalAddrString string,
	useUPnP bool,
	logger log.Logger) Listener {

	// Split protocol, address, and port.
	protocol, lAddr := cmn.ProtocolAndAddress(fullListenAddrString)
	lAddrIP, lAddrPort := splitHostPort(lAddr)

	// Create listener
	var listener net.Listener
	var err error
	for i := 0; i < tryListenSeconds; i++ {
		listener, err = net.Listen(protocol, lAddr)
		if err == nil {
			break
		} else if i < tryListenSeconds-1 {
			time.Sleep(time.Second * 1)
		}
	}
	if err != nil {
		panic(err)
	}
	// Actual listener local IP & port
	listenerIP, listenerPort := splitHostPort(listener.Addr().String())
	logger.Info("Local listener", "ip", listenerIP, "port", listenerPort)

	// Determine internal address...
	var intAddr *NetAddress
	intAddr, err = NewNetAddressStringWithOptionalID(lAddr)
	if err != nil {
		panic(err)
	}

	inAddrAny := lAddrIP == "" || lAddrIP == "0.0.0.0"

	// Determine external address.
	var extAddr *NetAddress

	if externalAddrString != "" {
		var err error
		extAddr, err = NewNetAddressStringWithOptionalID(externalAddrString)
		if err != nil {
			panic(fmt.Sprintf("Error in ExternalAddress: %v", err))
		}
	}

	// If the lAddrIP is INADDR_ANY, try UPnP.
	if extAddr == nil && useUPnP && inAddrAny {
		extAddr = getUPNPExternalAddress(lAddrPort, listenerPort, logger)
	}

	// Otherwise just use the local address.
	if extAddr == nil {
		defaultToIPv4 := inAddrAny
		extAddr = getNaiveExternalAddress(defaultToIPv4, listenerPort, false, logger)
	}
	if extAddr == nil {
		panic("Could not determine external address!")
	}

	dl := &DefaultListener{
		listener:    listener,
		intAddr:     intAddr,
		extAddr:     extAddr,
		connections: make(chan net.Conn, numBufferedConnections),
	}
	dl.BaseService = *cmn.NewBaseService(logger, "DefaultListener", dl)
	err = dl.Start() // Started upon construction
	if err != nil {
		logger.Error("Error starting base service", "err", err)
	}
	return dl
}

// OnStart implements cmn.Service by spinning a goroutine, listening for new
// connections.
func (l *DefaultListener) OnStart() error {
	if err := l.BaseService.OnStart(); err != nil {
		return err
	}
	go l.listenRoutine()
	return nil
}

// OnStop implements cmn.Service by closing the listener.
func (l *DefaultListener) OnStop() {
	l.BaseService.OnStop()
	l.listener.Close() // nolint: errcheck
}

// Accept connections and pass on the channel.
func (l *DefaultListener) listenRoutine() {
	for {
		conn, err := l.listener.Accept()

		if !l.IsRunning() {
			break // Go to cleanup
		}

		// listener wasn't stopped,
		// yet we encountered an error.
		if err != nil {
			panic(err)
		}

		l.connections <- conn
	}

	// Cleanup
	close(l.connections)
	for range l.connections {
		// Drain
	}
}

// Connections returns a channel of inbound connections.
// It gets closed when the listener closes.
// It is the callers responsibility to close any connections received
// over this channel.
func (l *DefaultListener) Connections() <-chan net.Conn {
	return l.connections
}

// InternalAddress returns the internal NetAddress (address used for
// listening).
func (l *DefaultListener) InternalAddress() *NetAddress {
	return l.intAddr
}

// ExternalAddress returns the external NetAddress (publicly available,
// determined using either UPnP or local resolver).
func (l *DefaultListener) ExternalAddress() *NetAddress {
	return l.extAddr
}

// ExternalAddressHost returns the external NetAddress IP string. If an IP is
// IPv6, it's wrapped in brackets ("[2001:db8:1f70::999:de8:7648:6e8]").
func (l *DefaultListener) ExternalAddressHost() string {
	ip := l.ExternalAddress().IP
	if isIpv6(ip) {
		// Means it's ipv6, so format it with brackets
		return "[" + ip.String() + "]"
	}
	return ip.String()
}

func (l *DefaultListener) String() string {
	return fmt.Sprintf("Listener(@%v)", l.extAddr)
}

/* external address helpers */

// UPNP external address discovery & port mapping
func getUPNPExternalAddress(externalPort, internalPort int, logger log.Logger) *NetAddress {
	logger.Info("Getting UPNP external address")
	nat, err := upnp.Discover()
	if err != nil {
		logger.Info("Could not perform UPNP discover", "err", err)
		return nil
	}

	ext, err := nat.GetExternalAddress()
	if err != nil {
		logger.Info("Could not get UPNP external address", "err", err)
		return nil
	}

	// UPnP can't seem to get the external port, so let's just be explicit.
	if externalPort == 0 {
		externalPort = defaultExternalPort
	}

	externalPort, err = nat.AddPortMapping("tcp", externalPort, internalPort, "tendermint", 0)
	if err != nil {
		logger.Info("Could not add UPNP port mapping", "err", err)
		return nil
	}

	logger.Info("Got UPNP external address", "address", ext)
	return NewNetAddressIPPort(ext, uint16(externalPort))
}

func isIpv6(ip net.IP) bool {
	v4 := ip.To4()
	if v4 != nil {
		return false
	}

	ipString := ip.String()

	// Extra check just to be sure it's IPv6
	return (strings.Contains(ipString, ":") && !strings.Contains(ipString, "."))
}

// TODO: use syscalls: see issue #712
func getNaiveExternalAddress(defaultToIPv4 bool, port int, settleForLocal bool, logger log.Logger) *NetAddress {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		panic(fmt.Sprintf("Could not fetch interface addresses: %v", err))
	}

	for _, a := range addrs {
		ipnet, ok := a.(*net.IPNet)
		if !ok {
			continue
		}
		if defaultToIPv4 || !isIpv6(ipnet.IP) {
			v4 := ipnet.IP.To4()
			if v4 == nil || (!settleForLocal && v4[0] == 127) {
				// loopback
				continue
			}
		} else if !settleForLocal && ipnet.IP.IsLoopback() {
			// IPv6, check for loopback
			continue
		}
		return NewNetAddressIPPort(ipnet.IP, uint16(port))
	}

	// try again, but settle for local
	logger.Info("Node may not be connected to internet. Settling for local address")
	return getNaiveExternalAddress(defaultToIPv4, port, true, logger)
}
@ -1,79 +0,0 @@
package p2p

import (
	"bytes"
	"net"
	"strings"
	"testing"

	"github.com/stretchr/testify/require"
	"github.com/tendermint/tendermint/libs/log"
)

func TestListener(t *testing.T) {
	// Create a listener
	l := NewDefaultListener("tcp://:8001", "", false, log.TestingLogger())

	// Dial the listener
	lAddr := l.ExternalAddress()
	connOut, err := lAddr.Dial()
	if err != nil {
		t.Fatalf("Could not connect to listener address %v", lAddr)
	} else {
		t.Logf("Created a connection to listener address %v", lAddr)
	}
	connIn, ok := <-l.Connections()
	if !ok {
		t.Fatalf("Could not get inbound connection from listener")
	}

	msg := []byte("hi!")
	go func() {
		_, err := connIn.Write(msg)
		if err != nil {
			t.Error(err)
		}
	}()
	b := make([]byte, 32)
	n, err := connOut.Read(b)
	if err != nil {
		t.Fatalf("Error reading off connection: %v", err)
	}

	b = b[:n]
	if !bytes.Equal(msg, b) {
		t.Fatalf("Got %s, expected %s", b, msg)
	}

	// Close the server, no longer needed.
	l.Stop()
}

func TestExternalAddress(t *testing.T) {
	{
		// Create a listener with no external addr. Should default
		// to local ipv4.
		l := NewDefaultListener("tcp://:8001", "", false, log.TestingLogger())
		lAddr := l.ExternalAddress().String()
		_, _, err := net.SplitHostPort(lAddr)
		require.Nil(t, err)
		spl := strings.Split(lAddr, ".")
		require.Equal(t, len(spl), 4)
		l.Stop()
	}

	{
		// Create a listener with set external ipv4 addr.
		setExAddr := "8.8.8.8:8080"
		l := NewDefaultListener("tcp://:8001", setExAddr, false, log.TestingLogger())
		lAddr := l.ExternalAddress().String()
		require.Equal(t, lAddr, setExAddr)
		l.Stop()
	}

	{
		// Invalid external addr causes panic
		setExAddr := "awrlsckjnal:8080"
		require.Panics(t, func() { NewDefaultListener("tcp://:8001", setExAddr, false, log.TestingLogger()) })
	}
}
|
Some files were not shown because too many files have changed in this diff Show More
Loading…
Reference in New Issue