Merge pull request #1599 from tendermint/release/v0.19.5

Release/v0.19.5
Commit 931fb385d7 by Ethan Buchman, 2018-05-20 16:45:00 -04:00, committed via GitHub.
53 changed files with 573 additions and 262 deletions


@ -153,7 +153,7 @@ jobs:
- checkout
- run: mkdir -p $GOPATH/src/github.com/tendermint
- run: ln -sf /home/circleci/project $GOPATH/src/github.com/tendermint/tendermint
- run: bash test/circleci/p2p.sh
- run: bash test/p2p/circleci.sh
upload_coverage:
<<: *defaults


@ -19,10 +19,6 @@ in a case of bug.
**ABCI app** (name for built-in, URL for self-written if it's publicly available):
**Merkleeyes version** (use `git rev-parse --verify HEAD`, skip if you don't use it):
**Environment**:
- **OS** (e.g. from /etc/os-release):
- **Install tools**:


@ -1,5 +1,31 @@
# Changelog
## 0.19.5
*May 20th, 2018*
BREAKING CHANGES
- [rpc/client] TxSearch and UnconfirmedTxs have new arguments (see below)
- [rpc/client] TxSearch returns ResultTxSearch
- [version] Breaking changes to Go APIs will not be reflected in a breaking
version change, but will be included in the changelog.
FEATURES
- [rpc] `/tx_search` takes `page` (starts at 1) and `per_page` (max 100, default 30) args to paginate results
- [rpc] `/unconfirmed_txs` takes `limit` (max 100, default 30) arg to limit the output
- [config] `mempool.size` and `mempool.cache_size` options
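For Go users, the new arguments surface directly in the `rpc/client` package (see the diff further below). A minimal usage sketch, assuming a local node on the default RPC port; the query string is just an example:

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/rpc/client"
)

func main() {
	// Assumes a node exposing its RPC server on the default port.
	c := client.NewHTTP("tcp://localhost:46657", "/websocket")

	// Page 1 with 30 results per page (the new defaults), without proofs.
	res, err := c.TxSearch("app.creator='jae'", false, 1, 30)
	if err != nil {
		panic(err)
	}
	fmt.Printf("got %d of %d matching txs\n", len(res.Txs), res.TotalCount)
}
```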
IMPROVEMENTS
- [docs] Lots of updates
- [consensus] Only Fsync() the WAL before executing msgs from ourselves
BUG FIXES
- [mempool] Enforce upper bound on number of transactions
## 0.19.4 (May 17th, 2018)
IMPROVEMENTS
@ -8,7 +34,6 @@ IMPROVEMENTS
- [consensus, state] Improve logging (more consensus logs, fewer tx logs)
- [spec] Moved to `docs/spec` (TODO cleanup the rest of the docs ...)
BUG FIXES
- [consensus] Fix issue #1575 where a late proposer can get stuck


@ -193,6 +193,10 @@ build-docker:
build-linux:
GOOS=linux GOARCH=amd64 $(MAKE) build
build-docker-localnode:
cd networks/local
make
# Run a 4-node testnet locally
localnet-start: localnet-stop
@if ! [ -f build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/tendermint:Z tendermint/localnode testnet --v 4 --o . --populate-persistent-peers --starting-ip-address 192.167.10.2 ; fi
@ -212,7 +216,7 @@ sentry-start:
cd networks/remote/terraform && terraform init && terraform apply -var DO_API_TOKEN="$(DO_API_TOKEN)" -var SSH_KEY_FILE="$(HOME)/.ssh/id_rsa.pub"
@if ! [ -f $(CURDIR)/build/node0/config/genesis.json ]; then docker run --rm -v $(CURDIR)/build:/tendermint:Z tendermint/localnode testnet --v 0 --n 4 --o . ; fi
cd networks/remote/ansible && ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i inventory/digital_ocean.py -l sentrynet install.yml
@echo "Next step: Add your validator setup in the genesis.json and config.tml files and run \"make server-config\". (Public key of validator, chain ID, peer IP and node ID.)"
@echo "Next step: Add your validator setup in the genesis.json and config.tml files and run \"make sentry-config\". (Public key of validator, chain ID, peer IP and node ID.)"
# Configuration management
sentry-config:
@ -225,5 +229,5 @@ sentry-stop:
# To avoid unintended conflicts with file names, always add to .PHONY
# unless there is a reason not to.
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
.PHONY: check build build_race dist install check_tools get_tools update_tools get_vendor_deps draw_deps test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt build-linux localnet-start localnet-stop build-docker sentry-start sentry-config sentry-stop
.PHONY: check build build_race dist install check_tools get_tools update_tools get_vendor_deps draw_deps test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop


@ -24,9 +24,14 @@ _NOTE: This is alpha software. Please contact us if you intend to run it in prod
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine - written in any programming language -
and securely replicates it on many machines.
For more information, from introduction to installation and application development, [Read The Docs](https://tendermint.readthedocs.io/en/master/).
For protocol details, see [the specification](/docs/spec).
For protocol details, see [the specification](./docs/specification/new-spec).
## Security
To report a security vulnerability, see our [bug bounty
program](https://tendermint.com/security).
For examples of the kinds of bugs we're looking for, see [SECURITY.md](SECURITY.md)
## Minimum requirements
@ -36,19 +41,19 @@ Go version | Go1.9 or higher
## Install
To download pre-built binaries, see our [downloads page](https://tendermint.com/downloads).
See the [install instructions](/docs/install.rst)
To install from source, you should be able to:
## Quick Start
`go get -u github.com/tendermint/tendermint/cmd/tendermint`
For more details (or if it fails), [read the docs](https://tendermint.readthedocs.io/en/master/install.html).
- [Single node](/docs/using-tendermint.rst)
- [Local cluster using docker-compose](/networks/local)
- [Remote cluster using terraform and ansible](/networks/remote)
## Resources
### Tendermint Core
To use Tendermint, build apps on it, or develop it, [Read The Docs](https://tendermint.readthedocs.io/en/master/).
For more, [Read The Docs](https://tendermint.readthedocs.io/en/master/).
Additional information about some - and eventually all - of the sub-projects below can be found at Read The Docs.
### Sub-projects
@ -88,7 +93,11 @@ According to SemVer, anything in the public API can change at any time before ve
To provide some stability to Tendermint users in these 0.X.X days, the MINOR version is used
to signal breaking changes across a subset of the total public API. This subset includes all
interfaces exposed to other processes (cli, rpc, p2p, etc.), as well as parts of the following packages:
interfaces exposed to other processes (cli, rpc, p2p, etc.), but does not
include the in-process Go APIs.
That said, breaking changes in the following packages will be documented in the
CHANGELOG even if they don't lead to MINOR version bumps:
- types
- rpc/client

SECURITY.md (new file, 71 lines)

@ -0,0 +1,71 @@
# Security
As part of our [Coordinated Vulnerability Disclosure
Policy](https://tendermint.com/security), we operate a bug bounty.
See the policy for more details on submissions and rewards.
Here is a list of examples of the kinds of bugs we're most interested in:
## Specification
- Conceptual flaws
- Ambiguities, inconsistencies, or incorrect statements
- Mis-match between specification and implementation of any component
## Consensus
Assuming less than 1/3 of the voting power is Byzantine (malicious):
- Validation of blockchain data structures, including blocks, block parts,
votes, and so on
- Execution of blocks
- Validator set changes
- Proposer round robin
- Two nodes committing conflicting blocks for the same height (safety failure)
- A correct node signing conflicting votes
- A node halting (liveness failure)
- Syncing new and old nodes
## Networking
- Authenticated encryption (MITM, information leakage)
- Eclipse attacks
- Sybil attacks
- Long-range attacks
- Denial-of-Service
## RPC
- Write-access to anything besides sending transactions
- Denial-of-Service
- Leakage of secrets
## Denial-of-Service
Attacks may come through the P2P network or the RPC:
- Amplification attacks
- Resource abuse
- Deadlocks and race conditions
- Panics and unhandled errors
## Libraries
- Serialization (Amino)
- Reading/Writing files and databases
- Logging and monitoring
## Cryptography
- Elliptic curves for validator signatures
- Hash algorithms and Merkle trees for block validation
- Authenticated encryption for P2P connections
## Light Client
- Validation of blockchain data structures
- Correctly validating an incorrect proof
- Incorrectly validating a correct proof
- Syncing validator set changes


@ -335,6 +335,7 @@ type MempoolConfig struct {
RecheckEmpty bool `mapstructure:"recheck_empty"`
Broadcast bool `mapstructure:"broadcast"`
WalPath string `mapstructure:"wal_dir"`
Size int `mapstructure:"size"`
CacheSize int `mapstructure:"cache_size"`
}
@ -345,6 +346,7 @@ func DefaultMempoolConfig() *MempoolConfig {
RecheckEmpty: true,
Broadcast: true,
WalPath: filepath.Join(defaultDataDir, "mempool.wal"),
Size: 100000,
CacheSize: 100000,
}
}
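For applications that build the configuration in Go rather than through `config.toml`, the new fields can be set on the struct directly. A small sketch using only the config shown above; wiring it into a full node config is left out:

```go
package main

import (
	"fmt"

	cfg "github.com/tendermint/tendermint/config"
)

func main() {
	// Start from the defaults above (Size and CacheSize are both 100000)...
	mem := cfg.DefaultMempoolConfig()

	// ...and tighten the new bounds, e.g. for a memory-constrained node.
	mem.Size = 50000      // maximum number of txs kept in the mempool
	mem.CacheSize = 50000 // txs remembered to filter out duplicates

	fmt.Printf("mempool size=%d cache_size=%d\n", mem.Size, mem.CacheSize)
}
```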


@ -179,6 +179,12 @@ recheck_empty = {{ .Mempool.RecheckEmpty }}
broadcast = {{ .Mempool.Broadcast }}
wal_dir = "{{ .Mempool.WalPath }}"
# size of the mempool
size = {{ .Mempool.Size }}
# size of the cache (used to filter transactions we saw earlier)
cache_size = {{ .Mempool.CacheSize }}
##### consensus configuration options #####
[consensus]


@ -218,15 +218,15 @@ func (e ReachedHeightToStopError) Error() string {
return fmt.Sprintf("reached height to stop %d", e.height)
}
// Save simulate WAL's crashing by sending an error to the panicCh and then
// Write simulates a WAL crash by sending an error to the panicCh and then
// exiting the cs.receiveRoutine.
func (w *crashingWAL) Save(m WALMessage) {
func (w *crashingWAL) Write(m WALMessage) {
if endMsg, ok := m.(EndHeightMessage); ok {
if endMsg.Height == w.heightToStop {
w.panicCh <- ReachedHeightToStopError{endMsg.Height}
runtime.Goexit()
} else {
w.next.Save(m)
w.next.Write(m)
}
return
}
@ -238,10 +238,14 @@ func (w *crashingWAL) Save(m WALMessage) {
runtime.Goexit()
} else {
w.msgIndex++
w.next.Save(m)
w.next.Write(m)
}
}
func (w *crashingWAL) WriteSync(m WALMessage) {
w.Write(m)
}
func (w *crashingWAL) Group() *auto.Group { return w.next.Group() }
func (w *crashingWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
return w.next.SearchForEndHeight(height, options)


@ -504,7 +504,7 @@ func (cs *ConsensusState) updateToState(state sm.State) {
func (cs *ConsensusState) newStep() {
rs := cs.RoundStateEvent()
cs.wal.Save(rs)
cs.wal.Write(rs)
cs.nSteps++
// newStep is called by updateToStep in NewConsensusState before the eventBus is set!
if cs.eventBus != nil {
@ -542,16 +542,16 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
case height := <-cs.mempool.TxsAvailable():
cs.handleTxsAvailable(height)
case mi = <-cs.peerMsgQueue:
cs.wal.Save(mi)
cs.wal.Write(mi)
// handles proposals, block parts, votes
// may generate internal events (votes, complete proposals, 2/3 majorities)
cs.handleMsg(mi)
case mi = <-cs.internalMsgQueue:
cs.wal.Save(mi)
cs.wal.WriteSync(mi) // NOTE: fsync
// handles proposals, block parts, votes
cs.handleMsg(mi)
case ti := <-cs.timeoutTicker.Chan(): // tockChan:
cs.wal.Save(ti)
cs.wal.Write(ti)
// if the timeout is relevant to the rs
// go to the next step
cs.handleTimeout(ti, rs)
@ -1241,7 +1241,7 @@ func (cs *ConsensusState) finalizeCommit(height int64) {
// Either way, the ConsensusState should not be resumed until we
// successfully call ApplyBlock (ie. later here, or in Handshake after
// restart).
cs.wal.Save(EndHeightMessage{height})
cs.wal.WriteSync(EndHeightMessage{height}) // NOTE: fsync
fail.Fail() // XXX


@ -50,7 +50,8 @@ func RegisterWALMessages(cdc *amino.Codec) {
// WAL is an interface for any write-ahead logger.
type WAL interface {
Save(WALMessage)
Write(WALMessage)
WriteSync(WALMessage)
Group() *auto.Group
SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error)
@ -98,7 +99,7 @@ func (wal *baseWAL) OnStart() error {
if err != nil {
return err
} else if size == 0 {
wal.Save(EndHeightMessage{0})
wal.WriteSync(EndHeightMessage{0})
}
err = wal.group.Start()
return err
@ -109,20 +110,31 @@ func (wal *baseWAL) OnStop() {
wal.group.Stop()
}
// called in newStep and for each pass in receiveRoutine
func (wal *baseWAL) Save(msg WALMessage) {
// Write is called in newStep and for each receive on the
// peerMsgQueue and the timeoutTicker.
// NOTE: does not call fsync()
func (wal *baseWAL) Write(msg WALMessage) {
if wal == nil {
return
}
// Write the wal message
if err := wal.enc.Encode(&TimedWALMessage{time.Now(), msg}); err != nil {
cmn.PanicQ(cmn.Fmt("Error writing msg to consensus wal: %v \n\nMessage: %v", err, msg))
panic(cmn.Fmt("Error writing msg to consensus wal: %v \n\nMessage: %v", err, msg))
}
}
// WriteSync is called when we receive a msg from ourselves
// so that we write to disk before sending signed messages.
// NOTE: calls fsync()
func (wal *baseWAL) WriteSync(msg WALMessage) {
if wal == nil {
return
}
// TODO: only flush when necessary
wal.Write(msg)
if err := wal.group.Flush(); err != nil {
cmn.PanicQ(cmn.Fmt("Error flushing consensus wal buf to file. Error: %v \n", err))
panic(cmn.Fmt("Error flushing consensus wal buf to file. Error: %v \n", err))
}
}
@ -297,8 +309,9 @@ func (dec *WALDecoder) Decode() (*TimedWALMessage, error) {
type nilWAL struct{}
func (nilWAL) Save(m WALMessage) {}
func (nilWAL) Group() *auto.Group { return nil }
func (nilWAL) Write(m WALMessage) {}
func (nilWAL) WriteSync(m WALMessage) {}
func (nilWAL) Group() *auto.Group { return nil }
func (nilWAL) SearchForEndHeight(height int64, options *WALSearchOptions) (gr *auto.GroupReader, found bool, err error) {
return nil, false, nil
}


@ -83,7 +83,7 @@ func WALWithNBlocks(numBlocks int) (data []byte, err error) {
numBlocksWritten := make(chan struct{})
wal := newByteBufferWAL(logger, NewWALEncoder(wr), int64(numBlocks), numBlocksWritten)
// see wal.go#103
wal.Save(EndHeightMessage{0})
wal.Write(EndHeightMessage{0})
consensusState.wal = wal
if err := consensusState.Start(); err != nil {
@ -166,7 +166,7 @@ func newByteBufferWAL(logger log.Logger, enc *WALEncoder, nBlocks int64, signalS
// Save writes message to the internal buffer except when heightToStop is
// reached, in which case it will signal the caller via signalWhenStopsTo and
// skip writing.
func (w *byteBufferWAL) Save(m WALMessage) {
func (w *byteBufferWAL) Write(m WALMessage) {
if w.stopped {
w.logger.Debug("WAL already stopped. Not writing message", "msg", m)
return
@ -189,6 +189,10 @@ func (w *byteBufferWAL) Save(m WALMessage) {
}
}
func (w *byteBufferWAL) WriteSync(m WALMessage) {
w.Write(m)
}
func (w *byteBufferWAL) Group() *auto.Group {
panic("not implemented")
}


@ -8,9 +8,9 @@ services:
- "46656-46657:46656-46657"
environment:
- ID=0
- LOG=${LOG:-tendermint.log}
- LOG=$${LOG:-tendermint.log}
volumes:
- ${FOLDER:-./build}:/tendermint:Z
- ./build:/tendermint:Z
networks:
localnet:
ipv4_address: 192.167.10.2
@ -22,9 +22,9 @@ services:
- "46659-46660:46656-46657"
environment:
- ID=1
- LOG=${LOG:-tendermint.log}
- LOG=$${LOG:-tendermint.log}
volumes:
- ${FOLDER:-./build}:/tendermint:Z
- ./build:/tendermint:Z
networks:
localnet:
ipv4_address: 192.167.10.3
@ -34,11 +34,11 @@ services:
image: "tendermint/localnode"
environment:
- ID=2
- LOG=${LOG:-tendermint.log}
- LOG=$${LOG:-tendermint.log}
ports:
- "46661-46662:46656-46657"
volumes:
- ${FOLDER:-./build}:/tendermint:Z
- ./build:/tendermint:Z
networks:
localnet:
ipv4_address: 192.167.10.4
@ -48,11 +48,11 @@ services:
image: "tendermint/localnode"
environment:
- ID=3
- LOG=${LOG:-tendermint.log}
- LOG=$${LOG:-tendermint.log}
ports:
- "46663-46664:46656-46657"
volumes:
- ${FOLDER:-./build}:/tendermint:Z
- ./build:/tendermint:Z
networks:
localnet:
ipv4_address: 192.167.10.5


@ -178,21 +178,22 @@ connection, to query the local state of the app.
Mempool Connection
~~~~~~~~~~~~~~~~~~
The mempool connection is used *only* for CheckTx requests. Transactions
are run using CheckTx in the same order they were received by the
validator. If the CheckTx returns ``OK``, the transaction is kept in
memory and relayed to other peers in the same order it was received.
Otherwise, it is discarded.
The mempool connection is used *only* for CheckTx requests.
Transactions are run using CheckTx in the same order they were
received by the validator. If the CheckTx returns ``OK``, the
transaction is kept in memory and relayed to other peers in the same
order it was received. Otherwise, it is discarded.
CheckTx requests run concurrently with block processing; so they should
run against a copy of the main application state which is reset after
every block. This copy is necessary to track transitions made by a
sequence of CheckTx requests before they are included in a block. When a
block is committed, the application must ensure to reset the mempool
state to the latest committed state. Tendermint Core will then filter
through all transactions in the mempool, removing any that were included
in the block, and re-run the rest using CheckTx against the post-Commit
mempool state.
CheckTx requests run concurrently with block processing; so they
should run against a copy of the main application state which is reset
after every block. This copy is necessary to track transitions made by
a sequence of CheckTx requests before they are included in a block.
When a block is committed, the application must ensure that it resets the
mempool state to the latest committed state. Tendermint Core will then
filter through all transactions in the mempool, removing any that were
included in the block, and re-run the rest using CheckTx against the
post-Commit mempool state (this behaviour can be turned off with
``[mempool] recheck = false``).
.. container:: toggle
@ -226,6 +227,23 @@ mempool state.
}
}
Replay Protection
^^^^^^^^^^^^^^^^^
To prevent old transactions from being replayed, CheckTx must
implement replay protection.
Tendermint provides a first line of defence by keeping a lightweight
in-memory cache of the last 100k (``[mempool] cache_size``) transactions
in the mempool. If Tendermint has just started, or if clients have sent
more than 100k transactions, old transactions may still be sent to the
application, so it is important that CheckTx implements some logic to
handle them.
There are cases where a transaction will (or may) become valid in some
future state, in which case you probably want to disable Tendermint's
cache. You can do that by setting ``[mempool] cache_size = 0`` in the
config.
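The exact shape of this logic is application-specific. Below is a minimal, hypothetical sketch of a seen-transaction check that an application could run inside its CheckTx handler; the ``app`` type, ``seenTxs`` map, and result codes are invented for illustration and are not part of the ABCI API:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const (
	codeOK        uint32 = 0
	codeDuplicate uint32 = 1
)

// app is a stand-in for your ABCI application state.
type app struct {
	seenTxs map[[sha256.Size]byte]struct{} // hashes of txs already accepted
}

// checkTx sketches the replay check you would run inside CheckTx.
func (a *app) checkTx(tx []byte) uint32 {
	key := sha256.Sum256(tx)
	if _, ok := a.seenTxs[key]; ok {
		// An old tx replayed past Tendermint's own cache: reject it here.
		return codeDuplicate
	}
	a.seenTxs[key] = struct{}{}
	return codeOK
}

func main() {
	a := &app{seenTxs: make(map[[sha256.Size]byte]struct{})}
	tx := []byte("send 10 coins from alice to bob, nonce=1")
	fmt.Println(a.checkTx(tx)) // 0: first time through, accepted
	fmt.Println(a.checkTx(tx)) // 1: replay, rejected by the application
}
```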
Consensus Connection
~~~~~~~~~~~~~~~~~~~~


@ -186,14 +186,8 @@ if os.path.isdir(assets_dir) != True:
urllib.urlretrieve(tools_repo+tools_branch+'/docker/README.rst', filename=tools_dir+'/docker.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/README.rst', filename=tools_dir+'/mintnet-kubernetes.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/gce1.png', filename=assets_dir+'/gce1.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/gce2.png', filename=assets_dir+'/gce2.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/statefulset.png', filename=assets_dir+'/statefulset.png')
urllib.urlretrieve(tools_repo+tools_branch+'/mintnet-kubernetes/assets/t_plus_k.png', filename=assets_dir+'/t_plus_k.png')
urllib.urlretrieve(tools_repo+tools_branch+'/tm-bench/README.rst', filename=tools_dir+'/benchmarking.rst')
urllib.urlretrieve('https://raw.githubusercontent.com/tendermint/tools/master/tm-monitor/README.rst', filename='tools/monitoring.rst')
urllib.urlretrieve(tools_repo+tools_branch+'/tm-monitor/README.rst', filename='tools/monitoring.rst')
#### abci spec #################################


@ -29,23 +29,28 @@ Here are the steps to setting up a testnet manually:
``46656``. Thus, if the IP addresses of your nodes were
``192.168.0.1, 192.168.0.2, 192.168.0.3, 192.168.0.4``, the command
would look like:
``tendermint node --proxy_app=kvstore --p2p.persistent_peers=96663a3dd0d7b9d17d4c8211b191af259621c693@192.168.0.1:46656, 429fcf25974313b95673f58d77eacdd434402665@192.168.0.2:46656, 0491d373a8e0fcf1023aaf18c51d6a1d0d4f31bd@192.168.0.3:46656, f9baeaa15fedf5e1ef7448dd60f46c01f1a9e9c4@192.168.0.4:46656``.
::
tendermint node --proxy_app=kvstore --p2p.persistent_peers=96663a3dd0d7b9d17d4c8211b191af259621c693@192.168.0.1:46656, 429fcf25974313b95673f58d77eacdd434402665@192.168.0.2:46656, 0491d373a8e0fcf1023aaf18c51d6a1d0d4f31bd@192.168.0.3:46656, f9baeaa15fedf5e1ef7448dd60f46c01f1a9e9c4@192.168.0.4:46656
After a few seconds, all the nodes should connect to each other and start
making blocks! For more information, see the Tendermint Networks section
of `the guide to using Tendermint <using-tendermint.html>`__.
While the manual deployment is easy enough, an automated deployment is
usually quicker. The below examples show different tools that can be used
for automated deployments.
But wait! Steps 3 and 4 are quite manual. Instead, use `this script <https://github.com/tendermint/tendermint/blob/develop/docs/examples/init_testnet.sh>`__, which does the heavy lifting for you. And it gets better.
Instead of the previously linked script to initialize the files required for a testnet, we have the ``tendermint testnet`` command. By default, running ``tendermint testnet`` will create all the required files, just like the script. Of course, you'll still need to manually edit some fields in the ``config.toml``. Alternatively, see the available flags to auto-populate the ``config.toml`` with the fields that would otherwise be passed in via flags when running ``tendermint node``. As you might imagine, this command is useful for manual or automated deployments.
Automated Deployments
---------------------
These are the easiest and fastest ways to get a testnet up, in less than 5 minutes.
Local
^^^^^
With ``docker`` installed, run the command:
With ``docker`` and ``docker-compose`` installed, run the command:
::
@ -53,16 +58,7 @@ With ``docker`` installed, run the command:
from the root of the tendermint repository. This will spin up a 4-node local testnet.
Cloud Deployment using Kubernetes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The `mintnet-kubernetes tool <https://github.com/tendermint/tools/tree/master/mintnet-kubernetes>`__
allows automating the deployment of a Tendermint network on an already
provisioned Kubernetes cluster. For simple provisioning of a Kubernetes
cluster, check out the `Google Cloud Platform <https://cloud.google.com/>`__.
Cloud Deployment using Terraform and Ansible
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Cloud
^^^^^
See the `next section <./terraform-and-ansible.html>`__ for details.


@ -42,7 +42,6 @@ Tendermint Tools
deploy-testnets.rst
terraform-and-ansible.rst
tools/docker.rst
tools/mintnet-kubernetes.rst
tools/benchmarking.rst
tools/monitoring.rst
@ -66,6 +65,7 @@ Tendermint 201
specification.rst
determinism.rst
transactional-semantics.rst
* For a deeper dive, see `this thesis <https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769>`__.
* There is also the `original whitepaper <https://tendermint.com/static/docs/tendermint.pdf>`__, though it is now quite outdated.


@ -4,53 +4,48 @@ Install Tendermint
From Binary
-----------
To download pre-built binaries, see the `Download page <https://tendermint.com/downloads>`__.
To download pre-built binaries, see the `releases page <https://github.com/tendermint/tendermint/releases>`__.
From Source
-----------
You'll need ``go``, maybe `dep <https://github.com/golang/dep>`__, and the Tendermint source code.
Install Go
^^^^^^^^^^
Make sure you have `installed Go <https://golang.org/doc/install>`__ and
set the ``GOPATH``. You should also put ``GOPATH/bin`` on your ``PATH``.
You'll need ``go`` `installed <https://golang.org/doc/install>`__ and the required
`environment variables set <https://github.com/tendermint/tendermint/wiki/Setting-GOPATH>`__
Get Source Code
^^^^^^^^^^^^^^^
You should be able to install the latest with a simple
::
mkdir -p $GOPATH/src/github.com/tendermint
cd $GOPATH/src/github.com/tendermint
git clone https://github.com/tendermint/tendermint.git
cd tendermint
Get Tools & Dependencies
^^^^^^^^^^^^^^^^^^^^^^^^
::
go get github.com/tendermint/tendermint/cmd/tendermint
Run ``tendermint --help`` and ``tendermint version`` to ensure your
installation worked.
If the installation failed, a dependency may have been updated and become
incompatible with the latest Tendermint master branch. We solve this
using the ``dep`` tool for dependency management.
First, install ``dep``:
::
cd $GOPATH/src/github.com/tendermint/tendermint
make get_tools
make get_vendor_deps
Now we can fetch the correct versions of each dependency by running:
Compile
^^^^^^^
::
make get_vendor_deps
make install
Note that even though ``go get`` originally failed, the repository was
still cloned to the correct location in the ``$GOPATH``.
to put the binary in ``$GOPATH/bin`` or use:
The latest Tendermint Core version is now installed.
::
make build
to put the binary in ``./build``.
The latest ``tendermint version`` is now installed.
Reinstall
---------
@ -86,20 +81,6 @@ do, use ``dep``, as above:
Since the third option just uses ``dep`` right away, it should always
work.
Troubleshooting
---------------
If ``go get`` failing bothers you, fetch the code using ``git``:
::
mkdir -p $GOPATH/src/github.com/tendermint
git clone https://github.com/tendermint/tendermint $GOPATH/src/github.com/tendermint/tendermint
cd $GOPATH/src/github.com/tendermint/tendermint
make get_tools
make get_vendor_deps
make install
Run
^^^


@ -10,12 +10,17 @@ please submit them to our [bug bounty](https://tendermint.com/security)!
## Contents
- [Overview](#overview)
### Data Structures
- [Overview](#overview)
- [Encoding and Digests](encoding.md)
- [Blockchain](blockchain.md)
- [State](state.md)
- [Encoding and Digests](./blockchain/encoding.md)
- [Blockchain](./blockchain/blockchain.md)
- [State](./blockchain/state.md)
### Consensus Protocol
- TODO
### P2P and Network Protocols


@ -136,6 +136,12 @@ like the file below, however, double check by inspecting the
broadcast = true
wal_dir = "data/mempool.wal"
# size of the mempool
size = 100000
# size of the cache (used to filter transactions we saw earlier)
cache_size = 100000
##### consensus configuration options #####
[consensus]


@ -1 +1 @@
Spec moved to [docs/spec](./docs/spec).
Spec moved to [docs/spec](/docs/spec).


@ -7,7 +7,7 @@ Automated deployments are done using `Terraform <https://www.terraform.io/>`__ t
Install
-------
NOTE: see the `integration bash script <https://github.com/tendermint/tendermint/blob/zach/ansible/networks/remote/integration.sh>`__ that can be run on a fresh DO droplet and will automatically spin up a 4 node testnet. The script more or less does everything described below.
NOTE: see the `integration bash script <https://github.com/tendermint/tendermint/blob/develop/networks/remote/integration.sh>`__ that can be run on a fresh DO droplet and will automatically spin up a 4 node testnet. The script more or less does everything described below.
- Install `Terraform <https://www.terraform.io/downloads.html>`__ and `Ansible <http://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html>`__ on a Linux machine.
- Create a `DigitalOcean API token <https://cloud.digitalocean.com/settings/api/tokens>`__ with read and write capability.


@ -0,0 +1,27 @@
Transactional Semantics
=======================
In `Using
Tendermint <./using-tendermint.html#broadcast-api>`__ we
discussed different API endpoints for sending transactions and
differences between them.
What we have not yet covered is transactional semantics.
When you send a transaction using one of the available methods, it
first goes to the mempool. Currently, this does not provide strong
guarantees like "if the transaction is accepted, it will eventually be
included in a block (given that CheckTx passes)."
For instance, a tx could enter the mempool, but the node could crash
before the tx is sent to peers.
We are planning to provide such guarantees by using a WAL and
replaying transactions (See
`GH#248 <https://github.com/tendermint/tendermint/issues/248>`__), but
it's non-trivial to do this all efficiently.
The temporary solution is for clients to monitor the node and resubmit
the transaction(s) and/or send them to more nodes at once, so the
probability of all of them crashing at the same time and losing the
message decreases substantially.
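A minimal sketch of that client-side mitigation, assuming the Go RPC client's ``BroadcastTxSync`` method and a couple of locally reachable nodes (the addresses are examples only):

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/rpc/client"
	"github.com/tendermint/tendermint/types"
)

// broadcastToAll sends the same tx to several nodes, so that a single node
// crashing before it gossips the tx does not lose it.
func broadcastToAll(tx types.Tx, addrs []string) {
	for _, addr := range addrs {
		c := client.NewHTTP(addr, "/websocket")
		if _, err := c.BroadcastTxSync(tx); err != nil {
			// A real client would also monitor the nodes and resubmit later.
			fmt.Printf("broadcast to %s failed: %v\n", addr, err)
		}
	}
}

func main() {
	broadcastToAll(types.Tx("name=satoshi"), []string{
		"tcp://localhost:46657",
		"tcp://localhost:46660",
	})
}
```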


@ -28,8 +28,11 @@ genesis file (``genesis.json``) containing the associated public key,
in ``$TMHOME/config``.
This is all that's necessary to run a local testnet with one validator.
For more elaborate initialization, see our `testnet deployment
tool <https://github.com/tendermint/tools/tree/master/mintnet-kubernetes>`__.
For more elaborate initialization, see the ``testnet`` command:
::
tendermint testnet --help
Run
---
@ -214,7 +217,7 @@ Broadcast API
Earlier, we used the ``broadcast_tx_commit`` endpoint to send a
transaction. When a transaction is sent to a Tendermint node, it will
run via ``CheckTx`` against the application. If it passes ``CheckTx``,
it will be included in the mempool, broadcast to other peers, and
it will be included in the mempool, broadcasted to other peers, and
eventually included in a block.
Since there are multiple phases to processing a transaction, we offer


@ -49,7 +49,13 @@ TODO: Better handle abci client errors. (make it automatically handle connection
*/
var ErrTxInCache = errors.New("Tx already exists in cache")
var (
// ErrTxInCache is returned to the client if we saw tx earlier
ErrTxInCache = errors.New("Tx already exists in cache")
// ErrMempoolIsFull means Tendermint & an application can't handle that much load
ErrMempoolIsFull = errors.New("Mempool is full")
)
// Mempool is an ordered in-memory pool for transactions before they are proposed in a consensus
// round. Transaction validity is checked using the CheckTx abci message before the transaction is
@ -80,7 +86,6 @@ type Mempool struct {
}
// NewMempool returns a new Mempool with the given configuration and connection to an application.
// TODO: Extract logger into arguments.
func NewMempool(config *cfg.MempoolConfig, proxyAppConn proxy.AppConnMempool, height int64) *Mempool {
mempool := &Mempool{
config: config,
@ -202,11 +207,14 @@ func (mem *Mempool) CheckTx(tx types.Tx, cb func(*abci.Response)) (err error) {
mem.proxyMtx.Lock()
defer mem.proxyMtx.Unlock()
if mem.Size() >= mem.config.Size {
return ErrMempoolIsFull
}
// CACHE
if mem.cache.Exists(tx) {
if !mem.cache.Push(tx) {
return ErrTxInCache
}
mem.cache.Push(tx)
// END CACHE
// WAL
@ -264,8 +272,6 @@ func (mem *Mempool) resCbNormal(req *abci.Request, res *abci.Response) {
// remove from cache (it might be good later)
mem.cache.Remove(tx)
// TODO: handle other retcodes
}
default:
// ignore other messages
@ -463,14 +469,6 @@ func (cache *txCache) Reset() {
cache.mtx.Unlock()
}
// Exists returns true if the given tx is cached.
func (cache *txCache) Exists(tx types.Tx) bool {
cache.mtx.Lock()
_, exists := cache.map_[string(tx)]
cache.mtx.Unlock()
return exists
}
// Push adds the given tx to the txCache. It returns false if tx is already in the cache.
func (cache *txCache) Push(tx types.Tx) bool {
cache.mtx.Lock()

networks/local/README.md (new file, 79 lines)

@ -0,0 +1,79 @@
# Local Cluster with Docker Compose
## Requirements
- [Install tendermint](/docs/install.rst)
- [Install docker](https://docs.docker.com/engine/installation/)
- [Install docker-compose](https://docs.docker.com/compose/install/)
## Build
Build the `tendermint` binary and the `tendermint/localnode` docker image.
Note the binary will be mounted into the container so it can be updated without
rebuilding the image.
```
cd $GOPATH/src/github.com/tendermint/tendermint
# Build the linux binary in ./build
make build-linux
# Build tendermint/localnode image
make build-docker-localnode
```
## Run a testnet
To start a 4 node testnet run:
```
make localnet-start
```
This creates a 4-node network using the `tendermint/localnode` image.
The nodes expose their P2P and RPC endpoints to the host machine on ports 46656-46657, 46659-46660, 46661-46662, and 46663-46664 respectively, so the RPC servers are reachable on host ports 46657, 46660, 46662, and 46664.
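To check that the nodes are actually up, you can query any of the exposed RPC ports. A quick sketch using only the Go standard library (the port is node0's, per the mapping above):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// node0's RPC endpoint on the host, per the port mapping above.
	resp, err := http.Get("http://localhost:46657/status")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // JSON with node info and the latest block height
}
```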
To update the binary, just rebuild it and restart the nodes:
```
make build-linux
make localnet-stop
make localnet-start
```
## Configuration
Running `make localnet-start` creates the files for a 4-node testnet in `./build` by calling the `tendermint testnet` command.
The `./build` directory is mounted to the `/tendermint` mount point to attach the binary and config files to the container.
For instance, to create a single node testnet:
```
cd $GOPATH/src/github.com/tendermint/tendermint
# Clear the build folder
rm -rf ./build
# Build binary
make build-linux
# Create configuration
docker run -e LOG="stdout" -v `pwd`/build:/tendermint tendermint/localnode testnet --o . --v 1
#Run the node
docker run -v `pwd`/build:/tendermint tendermint/localnode
```
## Logging
The log is saved under the attached volume, in the `tendermint.log` file. If the `LOG` environment variable is set to `stdout` at start, the log is not saved but is printed to the screen instead.
## Special binaries
If you have multiple binaries with different names, you can specify which one to run with the BINARY environment variable. The path of the binary is relative to the attached volume.


@ -1,40 +0,0 @@
localnode
=========
It is assumed that you have already `setup docker <https://docs.docker.com/engine/installation/>`__.
Description
-----------
Image for local testnets.
Add the tendermint binary to the image by attaching it in a folder to the `/tendermint` mount point.
It assumes that the configuration was created by the `tendermint testnet` command and it is also attached to the `/tendermint` mount point.
Example:
This example builds a linux tendermint binary under the `build/` folder, creates tendermint configuration for a single-node validator and runs the node:
```
cd $GOPATH/src/github.com/tendermint/tendermint
#Build binary
make build-linux
#Create configuration
docker run -e LOG="stdout" -v `pwd`/build:/tendermint tendermint/localnode testnet --o . --v 1
#Run the node
docker run -v `pwd`/build:/tendermint tendermint/localnode
```
Logging
-------
Log is saved under the attached volume, in the `tendermint.log` file. If the `LOG` environment variable is set to `stdout` at start, the log is not saved, but printed on the screen.
Special binaries
----------------
If you have multiple binaries with different names, you can specify which one to run with the BINARY environment variable. The path of the binary is relative to the attached volume.
docker-compose.yml
==================
This file creates a 4-node network using the localnode image. The nodes of the network are exposed to the host machine on ports 46656-46657, 46659-46660, 46661-46662, 46663-46664 respectively.


@ -9,7 +9,7 @@ VOLUME [ /tendermint ]
WORKDIR /tendermint
EXPOSE 46656 46657
ENTRYPOINT ["/usr/bin/wrapper.sh"]
CMD ["node", "--proxy_app", "dummy"]
CMD ["node", "--proxy_app", "kvstore"]
STOPSIGNAL SIGTERM
COPY wrapper.sh /usr/bin/wrapper.sh


@ -0,0 +1 @@
# Remote Cluster with Terraform and Ansible


@ -31,7 +31,7 @@ func TestNodeStartStop(t *testing.T) {
assert.NoError(t, err)
select {
case <-blockCh:
case <-time.After(5 * time.Second):
case <-time.After(10 * time.Second):
t.Fatal("timed out waiting for the node to produce a block")
}


@ -523,7 +523,7 @@ FOR_LOOP:
break FOR_LOOP
}
if msgBytes != nil {
c.Logger.Debug("Received bytes", "chID", pkt.ChannelID, "msgBytes", msgBytes)
c.Logger.Debug("Received bytes", "chID", pkt.ChannelID, "msgBytes", fmt.Sprintf("%X", msgBytes))
// NOTE: This means the reactor.Receive runs in the same thread as the p2p recv routine
c.onReceive(pkt.ChannelID, msgBytes)
}


@ -204,17 +204,19 @@ func (c *HTTP) Tx(hash []byte, prove bool) (*ctypes.ResultTx, error) {
return result, nil
}
func (c *HTTP) TxSearch(query string, prove bool) ([]*ctypes.ResultTx, error) {
results := new([]*ctypes.ResultTx)
func (c *HTTP) TxSearch(query string, prove bool, page, perPage int) (*ctypes.ResultTxSearch, error) {
result := new(ctypes.ResultTxSearch)
params := map[string]interface{}{
"query": query,
"prove": prove,
"query": query,
"prove": prove,
"page": page,
"per_page": perPage,
}
_, err := c.rpc.Call("tx_search", params, results)
_, err := c.rpc.Call("tx_search", params, result)
if err != nil {
return nil, errors.Wrap(err, "TxSearch")
}
return *results, nil
return result, nil
}
func (c *HTTP) Validators(height *int64) (*ctypes.ResultValidators, error) {


@ -50,7 +50,7 @@ type SignClient interface {
Commit(height *int64) (*ctypes.ResultCommit, error)
Validators(height *int64) (*ctypes.ResultValidators, error)
Tx(hash []byte, prove bool) (*ctypes.ResultTx, error)
TxSearch(query string, prove bool) ([]*ctypes.ResultTx, error)
TxSearch(query string, prove bool, page, perPage int) (*ctypes.ResultTxSearch, error)
}
// HistoryClient shows us data from genesis to now in large chunks.


@ -128,8 +128,8 @@ func (Local) Tx(hash []byte, prove bool) (*ctypes.ResultTx, error) {
return core.Tx(hash, prove)
}
func (Local) TxSearch(query string, prove bool) ([]*ctypes.ResultTx, error) {
return core.TxSearch(query, prove)
func (Local) TxSearch(query string, prove bool, page, perPage int) (*ctypes.ResultTxSearch, error) {
return core.TxSearch(query, prove, page, perPage)
}
func (c *Local) Subscribe(ctx context.Context, subscriber string, query tmpubsub.Query, out chan<- interface{}) error {


@ -334,11 +334,11 @@ func TestTxSearch(t *testing.T) {
// now we query for the tx.
// since there's only one tx, we know index=0.
results, err := c.TxSearch(fmt.Sprintf("tx.hash='%v'", txHash), true)
result, err := c.TxSearch(fmt.Sprintf("tx.hash='%v'", txHash), true, 1, 30)
require.Nil(t, err, "%+v", err)
require.Len(t, results, 1)
require.Len(t, result.Txs, 1)
ptx := results[0]
ptx := result.Txs[0]
assert.EqualValues(t, txHeight, ptx.Height)
assert.EqualValues(t, tx, ptx.Tx)
assert.Zero(t, ptx.Index)
@ -352,14 +352,14 @@ func TestTxSearch(t *testing.T) {
}
// we query for non existing tx
results, err = c.TxSearch(fmt.Sprintf("tx.hash='%X'", anotherTxHash), false)
result, err = c.TxSearch(fmt.Sprintf("tx.hash='%X'", anotherTxHash), false, 1, 30)
require.Nil(t, err, "%+v", err)
require.Len(t, results, 0)
require.Len(t, result.Txs, 0)
// we query using a tag (see kvstore application)
results, err = c.TxSearch("app.creator='jae'", false)
result, err = c.TxSearch("app.creator='jae'", false, 1, 30)
require.Nil(t, err, "%+v", err)
if len(results) == 0 {
if len(result.Txs) == 0 {
t.Fatal("expected a lot of transactions")
}
}


@ -13,3 +13,9 @@ go get github.com/melekes/godoc2md
godoc2md -template rpc/core/doc_template.txt github.com/tendermint/tendermint/rpc/core | grep -v -e "pipe.go" -e "routes.go" -e "dev.go" | sed 's$/src/target$https://github.com/tendermint/tendermint/tree/master/rpc/core$'
```
## Pagination
Requests that return multiple items will be paginated to 30 items by default.
You can specify further pages with the `?page` parameter. You can also set a
custom page size up to 100 with the `?per_page` parameter.
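For example, fetching the second page of 50 results from `/tx_search` over plain HTTP, using only the Go standard library (the node address and query are placeholders):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

func main() {
	// Build /tx_search?query=...&page=2&per_page=50 against a local node.
	q := url.Values{}
	q.Set("query", `"app.creator='jae'"`)
	q.Set("page", "2")
	q.Set("per_page", "50")

	resp, err := http.Get("http://localhost:46657/tx_search?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON with "txs" and "total_count"
}
```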


@ -209,7 +209,7 @@ func BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) {
}
}
// Get unconfirmed transactions including their number.
// Get unconfirmed transactions (maximum ?limit entries) including their number.
//
// ```shell
// curl 'localhost:46657/unconfirmed_txs'
@ -232,9 +232,18 @@ func BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) {
// "id": "",
// "jsonrpc": "2.0"
// }
//
// ### Query Parameters
//
// | Parameter | Type | Default | Required | Description |
// |-----------+------+---------+----------+--------------------------------------|
// | limit | int | 30 | false | Maximum number of entries (max: 100) |
// ```
func UnconfirmedTxs() (*ctypes.ResultUnconfirmedTxs, error) {
txs := mempool.Reap(-1)
func UnconfirmedTxs(limit int) (*ctypes.ResultUnconfirmedTxs, error) {
// reuse per_page validator
limit = validatePerPage(limit)
txs := mempool.Reap(limit)
return &ctypes.ResultUnconfirmedTxs{len(txs), txs}, nil
}


@ -14,6 +14,12 @@ import (
"github.com/tendermint/tmlibs/log"
)
const (
// see README
defaultPerPage = 30
maxPerPage = 100
)
var subscribeTimeout = 5 * time.Second
//----------------------------------------------
@ -117,3 +123,21 @@ func SetLogger(l log.Logger) {
func SetEventBus(b *types.EventBus) {
eventBus = b
}
func validatePage(page, perPage, totalCount int) int {
pages := ((totalCount - 1) / perPage) + 1
if page < 1 {
page = 1
} else if page > pages {
page = pages
}
return page
}
func validatePerPage(perPage int) int {
if perPage < 1 || perPage > maxPerPage {
return defaultPerPage
}
return perPage
}

rpc/core/pipe_test.go (new file, 67 lines)

@ -0,0 +1,67 @@
package core
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
)
func TestPaginationPage(t *testing.T) {
cases := []struct {
totalCount int
perPage int
page int
newPage int
}{
{0, 10, 0, 1},
{0, 10, 1, 1},
{0, 10, 2, 1},
{5, 10, -1, 1},
{5, 10, 0, 1},
{5, 10, 1, 1},
{5, 10, 2, 1},
{5, 10, 2, 1},
{5, 5, 1, 1},
{5, 5, 2, 1},
{5, 5, 3, 1},
{5, 3, 2, 2},
{5, 3, 3, 2},
{5, 2, 2, 2},
{5, 2, 3, 3},
{5, 2, 4, 3},
}
for _, c := range cases {
p := validatePage(c.page, c.perPage, c.totalCount)
assert.Equal(t, c.newPage, p, fmt.Sprintf("%v", c))
}
}
func TestPaginationPerPage(t *testing.T) {
cases := []struct {
totalCount int
perPage int
newPerPage int
}{
{5, 0, defaultPerPage},
{5, 1, 1},
{5, 2, 2},
{5, defaultPerPage, defaultPerPage},
{5, maxPerPage - 1, maxPerPage - 1},
{5, maxPerPage, maxPerPage},
{5, maxPerPage + 1, defaultPerPage},
}
for _, c := range cases {
p := validatePerPage(c.perPage)
assert.Equal(t, c.newPerPage, p, fmt.Sprintf("%v", c))
}
}


@ -22,11 +22,11 @@ var Routes = map[string]*rpc.RPCFunc{
"block_results": rpc.NewRPCFunc(BlockResults, "height"),
"commit": rpc.NewRPCFunc(Commit, "height"),
"tx": rpc.NewRPCFunc(Tx, "hash,prove"),
"tx_search": rpc.NewRPCFunc(TxSearch, "query,prove"),
"tx_search": rpc.NewRPCFunc(TxSearch, "query,prove,page,per_page"),
"validators": rpc.NewRPCFunc(Validators, "height"),
"dump_consensus_state": rpc.NewRPCFunc(DumpConsensusState, ""),
"consensus_state": rpc.NewRPCFunc(ConsensusState, ""),
"unconfirmed_txs": rpc.NewRPCFunc(UnconfirmedTxs, ""),
"unconfirmed_txs": rpc.NewRPCFunc(UnconfirmedTxs, "limit"),
"num_unconfirmed_txs": rpc.NewRPCFunc(NumUnconfirmedTxs, ""),
// broadcast API


@ -6,6 +6,7 @@ import (
ctypes "github.com/tendermint/tendermint/rpc/core/types"
"github.com/tendermint/tendermint/state/txindex/null"
"github.com/tendermint/tendermint/types"
cmn "github.com/tendermint/tmlibs/common"
tmquery "github.com/tendermint/tmlibs/pubsub/query"
)
@ -104,7 +105,8 @@ func Tx(hash []byte, prove bool) (*ctypes.ResultTx, error) {
}, nil
}
// TxSearch allows you to query for multiple transactions results.
// TxSearch allows you to query for multiple transaction results. It returns a
// list of transactions (maximum ?per_page entries) and the total count.
//
// ```shell
// curl "localhost:46657/tx_search?query=\"account.owner='Ivan'\"&prove=true"
@ -120,43 +122,46 @@ func Tx(hash []byte, prove bool) (*ctypes.ResultTx, error) {
//
// ```json
// {
// "result": [
// {
// "proof": {
// "Proof": {
// "aunts": [
// "J3LHbizt806uKnABNLwG4l7gXCA=",
// "iblMO/M1TnNtlAefJyNCeVhjAb0=",
// "iVk3ryurVaEEhdeS0ohAJZ3wtB8=",
// "5hqMkTeGqpct51ohX0lZLIdsn7Q=",
// "afhsNxFnLlZgFDoyPpdQSe0bR8g="
// ]
// },
// "Data": "mvZHHa7HhZ4aRT0xMDA=",
// "RootHash": "F6541223AA46E428CB1070E9840D2C3DF3B6D776",
// "Total": 32,
// "Index": 31
// },
// "tx": "mvZHHa7HhZ4aRT0xMDA=",
// "tx_result": {},
// "index": 31,
// "height": 12,
// "hash": "2B8EC32BA2579B3B8606E42C06DE2F7AFA2556EF"
// }
// ],
// "jsonrpc": "2.0",
// "id": "",
// "jsonrpc": "2.0"
// "result": {
// "txs": [
// {
// "proof": {
// "Proof": {
// "aunts": [
// "J3LHbizt806uKnABNLwG4l7gXCA=",
// "iblMO/M1TnNtlAefJyNCeVhjAb0=",
// "iVk3ryurVaEEhdeS0ohAJZ3wtB8=",
// "5hqMkTeGqpct51ohX0lZLIdsn7Q=",
// "afhsNxFnLlZgFDoyPpdQSe0bR8g="
// ]
// },
// "Data": "mvZHHa7HhZ4aRT0xMDA=",
// "RootHash": "F6541223AA46E428CB1070E9840D2C3DF3B6D776",
// "Total": 32,
// "Index": 31
// },
// "tx": "mvZHHa7HhZ4aRT0xMDA=",
// "tx_result": {},
// "index": 31,
// "height": 12,
// "hash": "2B8EC32BA2579B3B8606E42C06DE2F7AFA2556EF"
// }
// ],
// "total_count": 1
// }
// }
// ```
//
// Returns transactions matching the given query.
//
// ### Query Parameters
//
// | Parameter | Type | Default | Required | Description |
// |-----------+--------+---------+----------+-----------------------------------------------------------|
// | query | string | "" | true | Query |
// | prove | bool | false | false | Include proofs of the transactions inclusion in the block |
// | page | int | 1 | false | Page number (1-based) |
// | per_page | int | 30 | false | Number of entries per page (max: 100) |
//
// ### Returns
//
@ -166,7 +171,7 @@ func Tx(hash []byte, prove bool) (*ctypes.ResultTx, error) {
// - `index`: `int` - index of the transaction
// - `height`: `int` - height of the block where this transaction was in
// - `hash`: `[]byte` - hash of the transaction
func TxSearch(query string, prove bool) ([]*ctypes.ResultTx, error) {
func TxSearch(query string, prove bool, page, perPage int) (*ctypes.ResultTxSearch, error) {
// if index is disabled, return error
if _, ok := txIndexer.(*null.TxIndex); ok {
return nil, fmt.Errorf("Transaction indexing is disabled")
@ -182,11 +187,15 @@ func TxSearch(query string, prove bool) ([]*ctypes.ResultTx, error) {
return nil, err
}
// TODO: we may want to consider putting a maximum on this length and somehow
// informing the user that things were truncated.
apiResults := make([]*ctypes.ResultTx, len(results))
totalCount := len(results)
page = validatePage(page, perPage, totalCount)
perPage = validatePerPage(perPage)
skipCount := (page - 1) * perPage
apiResults := make([]*ctypes.ResultTx, cmn.MinInt(perPage, totalCount-skipCount))
var proof types.TxProof
for i, r := range results {
for i := 0; i < len(apiResults); i++ {
r := results[skipCount+i]
height := r.Height
index := r.Index
@ -205,5 +214,5 @@ func TxSearch(query string, prove bool) ([]*ctypes.ResultTx, error) {
}
}
return apiResults, nil
return &ctypes.ResultTxSearch{Txs: apiResults, TotalCount: totalCount}, nil
}


@ -172,6 +172,12 @@ type ResultTx struct {
Proof types.TxProof `json:"proof,omitempty"`
}
// Result of searching for txs
type ResultTxSearch struct {
Txs []*ResultTx `json:"txs"`
TotalCount int `json:"total_count"`
}
// List of mempool txs
type ResultUnconfirmedTxs struct {
N int `json:"n_txs"`


@ -1,9 +0,0 @@
#! /bin/bash
# update the `tester` image by copying in the latest tendermint binary
docker run --name builder tester true
docker cp $GOPATH/bin/tendermint builder:/go/bin/tendermint
docker commit builder tester
docker rm -vf builder


@ -1,5 +0,0 @@
#! /bin/bash
# clean everything
docker rm -vf $(docker ps -aq)
docker network rm local_testnet


@ -4,13 +4,13 @@ package version
const (
Maj = "0"
Min = "19"
Fix = "4"
Fix = "5"
)
var (
// Version is the current version of Tendermint
// Must be a string because scripts like dist.sh read this file.
Version = "0.19.4"
Version = "0.19.5"
// GitCommit is the current HEAD set using ldflags.
GitCommit string