The `CoinbaseData` type parses the block height separately from the rest of the
free-form coinbase data. However, the height parsing had two bugs:
1. It did not require that the height was canonically encoded;
2. Its canonical encoding was incorrect relative to the BIP34-inherited encoding.
This meant that we computed some transaction hashes incorrectly: when we
re-serialized the coinbase transaction, we serialized its height canonically,
but using the incorrect definition of canonical (bug 2). And we didn't notice
that the wrong definition was in use, because bug 1 meant we also accepted
heights that we thought were non-canonically encoded.
The relevant rules are here: 877212414a/src/script/script.h (L307-L346)
This commit changes the encoding to reject non-canonically encoded heights, and
to match the correct encoding rules. We check that at least one
non-canonically encoded height is correctly rejected using a new test vector.
The database format version increments because we saved many wrongly encoded blocks.
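For illustration, here is a minimal sketch of the canonical (minimal) script-number encoding that rules like these require, ignoring the special cases for very small heights that use dedicated opcodes; this is not Zebra's actual implementation, and the function names are hypothetical.

```rust
/// Canonical (minimal) script-number encoding of a block height:
/// little-endian bytes with trailing zero bytes stripped, plus a padding
/// zero byte if the high bit would otherwise make the value negative.
fn canonical_height_bytes(height: u32) -> Vec<u8> {
    let mut bytes: Vec<u8> = height.to_le_bytes().to_vec();
    while bytes.last() == Some(&0) {
        bytes.pop();
    }
    if bytes.last().map_or(false, |&b| b & 0x80 != 0) {
        bytes.push(0);
    }
    bytes
}

/// Rejecting non-canonical encodings then reduces to re-encoding the parsed
/// height and comparing it with the bytes that appeared on the wire.
fn is_canonical(height: u32, encoded: &[u8]) -> bool {
    canonical_height_bytes(height) == encoded
}
```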
This discrepancy was originally noticed by @teor2345, who pointed out that a
previous version of the block 202 test vector (now preserved as "bad block
202") did not match the block from zcashd.
As a side effect of computing Merkle roots, we build a list of
transaction hashes. Instead of discarding these, add them to
PreparedBlock and FinalizedBlock so that they can be reused rather than
recomputed.
This commit adds Merkle root validation to:
1. the block verifier;
2. the checkpoint verifier.
In the first case, Bitcoin Merkle tree malleability has no effect, because
only a single Merkle tree in each malleability set is valid (the others have
duplicate transactions).
In the second case, we need to check that the Merkle tree does not contain any
duplicate transactions.
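As an illustration of the checkpoint-side check, here is a hedged sketch; the function and parameter names are hypothetical, and the Merkle root computation is passed in as a closure because the real hashing lives elsewhere.

```rust
use std::collections::HashSet;

/// Reject blocks whose transaction list contains duplicate hashes, then
/// compare the computed Merkle root with the root committed to in the header.
fn check_merkle_root(
    transaction_hashes: &[[u8; 32]],
    root_in_header: [u8; 32],
    compute_root: impl Fn(&[[u8; 32]]) -> [u8; 32],
) -> Result<(), &'static str> {
    // Duplicate transaction hashes are the signature of a malleated tree.
    let unique: HashSet<_> = transaction_hashes.iter().collect();
    if unique.len() != transaction_hashes.len() {
        return Err("duplicate transaction hashes in block");
    }
    if compute_root(transaction_hashes) != root_in_header {
        return Err("merkle root does not match block header");
    }
    Ok(())
}
```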
Closes #1385. Closes #906.
* implement inbound `FindBlocks`
* Handle inbound peer FindHeaders requests
* handle request before having any chain tip
* Split `find_chain_hashes` into smaller functions
Add a `max_len` argument to support `FindHeaders` requests.
Rewrite the hash collection code to use heights, so we can handle the
`stop` hash and "no intersection" cases correctly (see the sketch after this list).
* Split state height functions into "any chain" and "best chain"
* Rename the best chain block method to `best_block`
* Move fmt utilities to zebra_chain::fmt
* Summarise Debug for some Message variants
Co-authored-by: teor <teor@riseup.net>
Co-authored-by: Jane Lusby <jlusby42@gmail.com>
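The following is an illustrative sketch of the height-based hash collection described in the `find_chain_hashes` item above; the names and types are hypothetical, not Zebra's API.

```rust
// Collect up to `max_len` hashes from the best chain, by height, starting
// just after the intersection with the peer's known blocks (or at genesis
// if there is no intersection), and stopping early at an optional `stop` hash.
fn collect_chain_hashes(
    best_chain: &[[u8; 32]],            // best-chain block hashes indexed by height
    intersection_height: Option<usize>, // None: no intersection with the peer
    stop: Option<[u8; 32]>,
    max_len: usize,
) -> Vec<[u8; 32]> {
    let start = intersection_height.map_or(0, |height| height + 1);
    let mut hashes = Vec::new();
    for &hash in best_chain.iter().skip(start).take(max_len) {
        hashes.push(hash);
        if Some(hash) == stop {
            break;
        }
    }
    hashes
}
```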
Some checks use the same blocks, so we take a copy of the block borrows
before using them. That way, we don't have to manage the position of the
iterator between checks.
Temporary fix so that Zebra's default logs support a typical workflow:
1. Developer or user runs Zebra with the default config
2. They send the logs to a terminal
3. When they see a bug, they copy-paste the last few log lines into a
bug report
This is the same change that was merged in #1373 and reverted in #1375.
We'll create a consistent logging design for Zebra in ticket #1381.
* Make debug_stop_at_height and ephemeral work together
* if `debug_stop_at_height` and `ephemeral` are set, delete the database
files after reaching the stop height
* drop or flush the database before `debug_stop_at_height` exits Zebra
This commit changes the state system and database format to track the
provenance of UTXOs, in addition to the outputs themselves.
Specifically, it tracks the following additional metadata:
- the height at which the UTXO was created;
- whether the UTXO was created by a coinbase transaction.
This metadata will allow us to:
- check the coinbase maturity consensus rule;
- check the coinbase inputs => no transparent outputs rule;
- implement lookup of transactions by utxo (using the height to find the
block and then scanning the block) for a future RPC mechanism.
Closes #1342.
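For illustration, the metadata could be attached to each stored UTXO roughly as below; the field names are hypothetical, Zebra's actual types differ, and the usual 100-block coinbase maturity interval is assumed rather than taken from this change.

```rust
struct Utxo {
    /// The transparent output itself (value and lock script), elided here.
    output: Vec<u8>,
    /// The height of the block that created this output.
    height: u32,
    /// Whether the creating transaction was a coinbase transaction.
    from_coinbase: bool,
}

/// Sketch of the coinbase maturity rule this metadata enables.
fn is_mature(utxo: &Utxo, spend_height: u32) -> bool {
    !utxo.from_coinbase || spend_height >= utxo.height + 100
}
```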
This provides useful and not too noisy output at INFO level. We do an
info-level message on every block commit instead of trying to do one
message every N blocks, because this is useful both for initial block
sync as well as continuous state updates on new blocks.
This change introduces two new types:
- `PreparedBlock`, representing a block which has undergone semantic
validation and has been prepared for contextual validation;
- `FinalizedBlock`, representing a block which is ready to be finalized
immediately;
and changes the `Request::CommitBlock`,`Request::CommitFinalizedBlock`
variants to use these types instead of their previous fields.
This change solves the problem of passing data between semantic
validation and contextual validation, and cleans up the state code by
allowing it to pass around a bundle of data. Previously, the state code
just passed around an `Arc<Block>`, which forced it to needlessly
recompute block hashes and other data, and was incompatible with the
already-known but not-yet-implemented data transfer requirements, namely
passing in the Sprout and Sapling anchors computed during contextual
validation.
This commit propagates the `PreparedBlock` and `FinalizedBlock` types
through the state code but only uses their data opportunistically, e.g.,
changing `.hash()` computations to use the precomputed hash. In the
future, these structures can be extended to pass data through the
verification pipeline for reuse as appropriate. For instance, these
changes allow the Sprout and Sapling anchors to be propagated through
the state.
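A simplified sketch of the shape of these types and the changed request variants follows; Zebra's actual definitions carry more fields and use their own hash and height newtypes.

```rust
use std::sync::Arc;

struct Block; // stand-in for zebra_chain::block::Block

struct PreparedBlock {
    block: Arc<Block>,
    hash: [u8; 32],
    height: u32,
    // Precomputed during semantic validation, reused instead of rehashing.
    transaction_hashes: Vec<[u8; 32]>,
}

struct FinalizedBlock {
    block: Arc<Block>,
    hash: [u8; 32],
    height: u32,
    transaction_hashes: Vec<[u8; 32]>,
}

enum Request {
    CommitBlock(PreparedBlock),
    CommitFinalizedBlock(FinalizedBlock),
}
```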
The behavior of a request for a UTXO from a previous block depends on
whether that block has already been submitted to the state:
* if it has, the state should be able to find it and answer immediately.
* if it has not, the state should see it in a later request.
However, the previous code only checked committed blocks, not queued
blocks, so if the block containing the UTXO had already arrived but had
not been committed, it would never be scanned.
This patch fixes the problem but is a bad solution, duplicating
computation between the block verifier and the state. A better fix
follows in the next commit.
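A hedged sketch of the lookup order this patch uses, with hypothetical types standing in for the state's real UTXO indexes:

```rust
use std::collections::HashMap;

type OutPoint = ([u8; 32], u32); // (transaction hash, output index)
type Output = Vec<u8>;           // serialized transparent output

// Check committed blocks first, then blocks that have arrived but are still
// queued for commit; if neither has the output, the caller keeps the request
// pending until a later block supplies it.
fn lookup_utxo(
    outpoint: &OutPoint,
    committed: &HashMap<OutPoint, Output>,
    queued: &HashMap<OutPoint, Output>,
) -> Option<Output> {
    committed
        .get(outpoint)
        .or_else(|| queued.get(outpoint))
        .cloned()
}
```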
Make tracing messages more concise by omitting information already
contained in a parent span and by shortening messages. This makes them
easier to read.
Previously, this function was instrumented with a span containing the
parent hash that was passed into the function. But it doesn't make sense
to treat the work done by the function as happening in the context of the
supplied parent hash (as distinct from the context of the hash of the
newly arrived block, which is already contained in an outer span), so the
span added noise without conveying extra context.
Instead, use events that occur within the context of the existing spans.
Here the span is added to the body of the `Service::call`
implementation, not to the futures it returns, because the state service
does all of the work synchronously in `call` rather than in the futures
it returns.
The service is skipped as a span field. We could either include or
exclude the request itself; including it would be useful, but the request
body can be very large. Instead, we create two spans, one at info level
and one at trace level, and filter that way.
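A minimal sketch of this pattern, assuming the work happens synchronously inside `call`; the span names and fields are illustrative, not Zebra's.

```rust
use tracing::Level;

// One coarse INFO-level span plus one detailed TRACE-level span wrap the
// synchronous body; the request body is only recorded at TRACE so INFO logs
// stay small, and the service itself is never recorded as a field.
fn handle_request(request_kind: &str, request_debug: String) {
    let info_span = tracing::span!(Level::INFO, "state_request", kind = request_kind);
    let _info = info_span.enter();
    let trace_span = tracing::span!(Level::TRACE, "state_request_detail", request = %request_debug);
    let _trace = trace_span.enter();
    // ... perform the state update synchronously here, then return a future
    // that is already resolved with the result ...
}
```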
This change is mostly mechanical, with the exception of the changes to the
`tower-batch` middleware. This middleware was adapted from `tower::buffer`,
and the `tower::buffer` code was changed to implement its own bounded queue,
because Tokio 0.3 removed the `mpsc::Sender::poll_send` method. See
ddc64e8d4d
for more context on the Tower changes. To match Tower as closely as possible
in order to be able to upstream `tower-batch`, those changes are copied from
`tower::Buffer` to `tower-batch`.
Some systems have a very small /dev/shm, for example, see:
https://github.com/docker-library/postgres/issues/416
So we should just use the temporary directory on all operating systems.
Also:
* use TempDir to generate the temporary path
* delete the code that we copied from sled
* prefix the temporary path with the state version and network
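A sketch of that approach, using the `tempfile` crate here for illustration; the real code lives in zebra-state and may name things differently.

```rust
use tempfile::Builder;

// Create the ephemeral database directory under the system temp dir, with
// the state version and network in the directory name.
fn ephemeral_state_dir(state_version: u32, network: &str) -> std::io::Result<tempfile::TempDir> {
    Builder::new()
        .prefix(&format!("zebra-state-v{}-{}-", state_version, network))
        .tempdir()
}
```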
## Motivation
Prior to this PR we've been using `sled` as our database for storing persistent chain data on disk between boots. We picked sled over rocksdb to minimize our C++ dependencies, despite it being a less mature codebase. The theory was that if it worked well enough we'd prefer to have a pure Rust codebase, but if we ever ran into problems we knew we could easily swap it out for rocksdb.
Well, we ran into problems. Sled's memory usage was particularly high, and it seemed to be leaking memory. On top of that, write performance was poor enough that we became bottlenecked on sled instead of the network.
## Solution
This PR replaces `sled` with `rocksdb`. We've seen a 10x improvement in memory usage out of the box, no more leaking, and much better write performance. With this change, writing chain data to disk is no longer a limiting factor in how quickly we can sync the chain.
The code in this pull request has:
- [x] Documentation Comments
- [x] Unit Tests and Property Tests
## Review
@hdevalence