* Bump versions where appropriate
Tested with `cargo install --locked --path`, etc.
* Remove fixed panics from 'Known Issues'
* Change to alpha release series in the README
Co-authored-by: teor <teor@riseup.net>
The clippy unknown lints attribute was deprecated on nightly in
rust-lang/rust#80524. The old lint name now produces a warning.
Since we're using `allow(unknown_lints)` to suppress warnings, we need to
add the canonical name, so we can continue to build without warnings on
nightly.
But we also need to keep the old name, so we can continue to build
without warnings on stable.
And therefore, we also need to disable the "removed lints" warning,
otherwise we'll get warnings about the old name on nightly.
We'll need to keep this transitional clippy config until rustc 1.51 is
stable.
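A sketch of the transitional attributes, assuming they live at the crate root and that the old name involved in the rename was `clippy::unknown_clippy_lints`:

```rust
// Nightly warns when allowing the removed old name; disable that warning.
#![allow(renamed_and_removed_lints)]
// The old name, so stable builds stay warning-free.
#![allow(clippy::unknown_clippy_lints)]
// The canonical name, so nightly builds stay warning-free.
#![allow(unknown_lints)]
```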
## Motivation
This PR is motivated by the regression identified in https://github.com/ZcashFoundation/zebra/issues/1349. That issue notes that metrics stopped working for most crates other than `zebrad`.
## Solution
This PR resolves the regression by deduplicating the `metrics` crate dependency. During a recent change we upgraded the `metrics` version in `zebrad` and a couple of our other crates, but we never updated the dependencies in `zebra-state`, `zebra-consensus`, or `zebra-network`. This caused the metrics macros to attempt to retrieve the current metrics exporter through the wrong function: we would install the metrics exporter via `0.13`, but then look it up through the `0.12` crate, which contains a different instance of the metrics exporter static variable, one that is never set. As a result, the metrics macros see no current exporter and silently give up.
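A minimal sketch (module and function names hypothetical) of why duplicate crate versions break this: each compiled copy of a crate gets its own static, so an exporter installed through one copy is invisible through the other.

```rust
mod metrics_v0_12 {
    use std::sync::atomic::{AtomicBool, Ordering};

    // This copy of the crate has its own static...
    pub static INSTALLED: AtomicBool = AtomicBool::new(false);

    pub fn exporter_installed() -> bool {
        INSTALLED.load(Ordering::Relaxed)
    }
}

mod metrics_v0_13 {
    use std::sync::atomic::{AtomicBool, Ordering};

    // ...and this copy has a completely separate one.
    pub static INSTALLED: AtomicBool = AtomicBool::new(false);

    pub fn install_exporter() {
        INSTALLED.store(true, Ordering::Relaxed);
    }
}

fn main() {
    // `zebrad` installs the exporter through the 0.13 copy...
    metrics_v0_13::install_exporter();
    // ...but the crates still on 0.12 look it up through their own copy,
    // so their macros see no exporter and silently do nothing.
    assert!(!metrics_v0_12::exporter_installed());
}
```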
## Related Issues
closes https://github.com/ZcashFoundation/zebra/issues/1349
## Follow Up Work
I noticed we have quite a few duplicate dependencies in our tree. We might be able to save some compilation time by auditing those and deduplicating them as much as possible.
- https://github.com/ZcashFoundation/zebra/issues/1582
Co-authored-by: teor <teor@riseup.net>
Previously we set the crate versions to 3.x, so that the major version was
aligned with the NU (network upgrade) version. But we want to be able to make
API changes independently of the NU schedule.
`zcashd` will blindly request more block headers as long as it received 160
block headers in response to a previous query, even if those headers are
already known. To dodge this behavior, we return slightly fewer than the
maximum, so that `zcashd` stops asking.
0ccc885371/src/main.cpp (L6274-L6280)
Without this change, communication between a partially-synced `zebrad`
and fully-synced `zcashd` looked like this:
1. `zebrad` connects to `zcashd`, which sends an initial `getheaders`
request;
2. `zebrad` correctly computes the intersection of the provided block
locator with the node's current chain and returns 160 following
headers;
3. `zcashd` does not check whether it already has those headers and
assumes that any provided headers are new and re-validates them;
4. `zcashd` assumes that because `zebrad` responded with 160 headers,
the `zebrad` node is ahead of it, and requests the next 160 headers;
5. Because block locators are sparse, the intersection between the
`zcashd` and `zebrad` chains is likely well behind the `zebrad` tip,
so this process continues for thousands of blocks.
To avoid this problem, we return slightly fewer than the protocol
maximum (158 rather than 160, to guard against off-by-one errors in
`zcashd`). This does not interfere with use of the returned headers by
peers that check the headers, but does prevent `zcashd` from trying to
download thousands of block headers it already has.
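A sketch of the workaround, with hypothetical constant and function names:

```rust
/// The protocol maximum number of headers in a `headers` message.
const MAX_FIND_BLOCK_HEADERS_RESULTS: usize = 160;

/// Respond with slightly fewer headers than the maximum, so a `zcashd`
/// peer doesn't treat a full response as a signal to keep paginating.
/// Subtracting 2 (rather than 1) guards against off-by-one errors in
/// `zcashd`'s check.
const FIND_BLOCK_HEADERS_RESPONSE_LIMIT: usize = MAX_FIND_BLOCK_HEADERS_RESULTS - 2;

/// Truncate a headers response before sending it to a peer.
fn truncate_headers<T>(headers: &mut Vec<T>) {
    headers.truncate(FIND_BLOCK_HEADERS_RESPONSE_LIMIT);
}
```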
This problem does not occur in the `zcashd<->zcashd` case only because
`zcashd` does not respond to `getheaders` messages while it is syncing.
However, implementing this behavior in Zebra would be more complicated,
because we don't have a distinct "initial block sync" state (we do
poll-based syncing continuously) and we don't have shared global
variables to modify to set that state.
Relevant links (thanks @str4d):
- The PR that introduced this behavior: https://github.com/bitcoin/bitcoin/pull/4468/files#r17026905
- https://github.com/bitcoin/bitcoin/issues/6861
- https://github.com/bitcoin/bitcoin/issues/6755
- https://github.com/bitcoin/bitcoin/pull/8306#issuecomment-614916454
We modeled a Bitcoin `headers` message as a list of block headers.
However, the actual data structure is slightly different: it's a list of
(block header, transaction count) pairs. This caused `zcashd` to reject our
`headers` messages.
To fix this, introduce a new `CountedHeader` struct with a `block::Header` and
transaction count `usize`, then thread it through the inbound service and the
state.
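A sketch of the new type, with field names assumed from the description above:

```rust
use zebra_chain::block;

/// A block header paired with the transaction count that the Bitcoin
/// `headers` message format requires after each header.
pub struct CountedHeader {
    /// The block header itself.
    pub header: block::Header,
    /// The number of transactions in this block.
    pub transaction_count: usize,
}
```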
I tested this locally by running Zebra with these changes and inspecting a
trace-level log of the span of a peer connection that requested a nontrivial
`headers` packet from us, verifying that the peer did not reject our message.
If the limit is less than the ideal, try to increase it to the ideal.
If that doesn't work, try to increase the limit as high as possible.
If the limit is still less than the minimum, panic.
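A minimal sketch of this strategy, with hypothetical names and a stubbed platform call (real code would use something like `setrlimit` on Unix):

```rust
const IDEAL_OPEN_FILE_LIMIT: u64 = 1024;
const MIN_OPEN_FILE_LIMIT: u64 = 512;

/// Stub: ask the OS to raise the open file limit to `limit`, returning
/// the limit that was actually set, if any. Real code would make the
/// platform call and read the resulting limit back.
fn try_set_limit(limit: u64) -> Option<u64> {
    Some(limit)
}

fn raise_open_file_limit(current: u64) -> u64 {
    let raised = if current >= IDEAL_OPEN_FILE_LIMIT {
        current
    } else if let Some(limit) = try_set_limit(IDEAL_OPEN_FILE_LIMIT) {
        // First, try to increase the limit to the ideal.
        limit
    } else {
        // If that fails, increase the limit as high as possible.
        (current..IDEAL_OPEN_FILE_LIMIT)
            .rev()
            .find_map(try_set_limit)
            .unwrap_or(current)
    };
    // If the limit is still less than the minimum, panic.
    assert!(
        raised >= MIN_OPEN_FILE_LIMIT,
        "open file limit {} is below the required minimum {}",
        raised,
        MIN_OPEN_FILE_LIMIT,
    );
    raised
}
```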
The `CoinbaseData` type parses the block height separately from the rest of
the free-form coinbase data. However, it had two bugs:
1. It did not require that the height was canonically encoded;
2. Its canonical encoding was incorrect relative to the BIP34-inherited encoding.
This meant that we computed some transaction hashes incorrectly: when we
re-serialized the coinbase transaction, we used the incorrect definition of
the canonical encoding (bug 2), and we never noticed that the wrong
definition was in use, because we accepted what we thought were
non-canonically encoded heights (bug 1).
The relevant rules are here: 877212414a/src/script/script.h (L307-L346)
This commit changes the encoding to reject non-canonically encoded heights, and
to match the correct encoding rules. We check that at least one
non-canonically encoded height is correctly rejected using a new test vector.
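A simplified sketch of the canonical height encoding, following the script rules linked above (the genesis height is special-cased in real implementations and omitted here):

```rust
/// Canonically encode `height` as BIP34-style coinbase script data.
/// Heights 1..=16 use the single-byte OP_1..OP_16 opcodes; larger
/// heights are a minimal little-endian number push.
fn encode_coinbase_height(height: u32) -> Vec<u8> {
    if (1..=16).contains(&height) {
        // OP_1 is 0x51, OP_2 is 0x52, and so on.
        vec![0x50 + height as u8]
    } else {
        let mut bytes = height.to_le_bytes().to_vec();
        // Minimal encoding: strip trailing zero bytes, but keep one zero
        // if the preceding byte has its sign bit set (so the number
        // isn't read back as negative).
        while bytes.len() > 1
            && bytes[bytes.len() - 1] == 0
            && bytes[bytes.len() - 2] & 0x80 == 0
        {
            bytes.pop();
        }
        // Prefix the data with its push length.
        let mut script = vec![bytes.len() as u8];
        script.extend(bytes);
        script
    }
}
```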
The database format version increments because we previously saved wrongly encoded blocks.
This discrepancy was originally noticed by @teor2345, who pointed out that a
previous version of the block 202 test vector (now preserved as "bad block
202") did not match the block from zcashd.
As a side effect of computing Merkle roots, we build a list of
transaction hashes. Instead of discarding these, add them to
`PreparedBlock` and `FinalizedBlock` so that they can be reused rather than
recomputed.
This commit adds Merkle root validation to:
1. the block verifier;
2. the checkpoint verifier.
In the first case, Bitcoin Merkle tree malleability has no effect,
because only a single Merkle tree in each malleability set is valid (the
others have duplicate transactions).
In the second case, we need to check that the Merkle tree does not contain any
duplicate transactions.
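A minimal sketch of that duplicate check (a generic helper, not the exact implementation): a malleated Merkle tree repeats transactions, so rejecting duplicate transaction hashes rejects the malleated variants.

```rust
use std::collections::HashSet;

/// Returns true if `hashes` contains any duplicate entries.
fn contains_duplicates<T: Eq + std::hash::Hash>(hashes: &[T]) -> bool {
    let mut seen = HashSet::new();
    // `insert` returns false on the first repeated hash.
    !hashes.iter().all(|hash| seen.insert(hash))
}
```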
Closes #1385, closes #906
* implement inbound `FindBlocks`
* Handle inbound peer FindHeaders requests
* handle request before having any chain tip
* Split `find_chain_hashes` into smaller functions
Add a `max_len` argument to support `FindHeaders` requests.
Rewrite the hash collection code to use heights, so we can handle the
`stop` hash and "no intersection" cases correctly (see the sketch below).
* Split state height functions into "any chain" and "best chain"
* Rename the best chain block method to `best_block`
* Move fmt utilities to zebra_chain::fmt
* Summarise Debug for some Message variants
Co-authored-by: teor <teor@riseup.net>
Co-authored-by: Jane Lusby <jlusby42@gmail.com>
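A sketch of the height-based hash collection, over stand-in types (the real code reads hashes from the state, and the "no intersection" case, which starts from genesis, is elided here): take up to `max_len` hashes after the locator intersection, ending early if the `stop` hash is reached.

```rust
fn collect_chain_hashes(
    chain: &[[u8; 32]],         // block hashes, indexed by height
    intersection_height: usize, // height where the block locator intersects
    stop: Option<[u8; 32]>,
    max_len: usize,
) -> Vec<[u8; 32]> {
    chain
        .get(intersection_height + 1..)
        .unwrap_or(&[])
        .iter()
        .copied()
        .take(max_len)
        .scan(false, |stopped, hash| {
            if *stopped {
                return None;
            }
            // Include the `stop` hash itself, then end the response.
            *stopped = Some(hash) == stop;
            Some(hash)
        })
        .collect()
}
```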
Some checks use the same blocks, so we take a copy of the block borrows
before using them. That way, we don't have to manage the position of the
iterator between checks.
Temporary fix so that Zebra's default logs support a typical workflow:
1. Developer or user runs Zebra with the default config
2. They send the logs to a terminal
3. When they see a bug, they copy-paste the last few log lines into a
bug report
This is the same change that was merged in #1373 and reverted in #1375.
We'll create a consistent logging design for Zebra in ticket #1381.
* Make debug_stop_at_height and ephemeral work together
* if `debug_stop_at_height` and `ephemeral` are set, delete the database
files after reaching the stop height
* drop or flush the database before `debug_stop_at_height` exits Zebra
This commit changes the state system and database format to track the
provenance of UTXOs, in addition to the outputs themselves.
Specifically, it tracks the following additional metadata, sketched below:
- the height at which the UTXO was created;
- whether the UTXO was created by a coinbase transaction.
This metadata will allow us to:
- check the coinbase maturity consensus rule;
- check the coinbase inputs => no transparent outputs rule;
- implement lookup of transactions by utxo (using the height to find the
block and then scanning the block) for a future RPC mechanism.
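A sketch of the tracked data, with hypothetical field names:

```rust
use zebra_chain::{block, transparent};

/// An unspent transparent output, plus the provenance metadata above.
pub struct Utxo {
    /// The unspent output itself.
    pub output: transparent::Output,
    /// The height at which the output was created.
    pub height: block::Height,
    /// Whether the output was created by a coinbase transaction.
    pub from_coinbase: bool,
}
```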
Closes #1342
This provides useful but not too noisy output at INFO level. We log an
info-level message on every block commit, rather than one message every N
blocks, because per-commit messages are useful both for initial block sync
and for continuous state updates on new blocks.
This change introduces two new types:
- `PreparedBlock`, representing a block which has undergone semantic
validation and has been prepared for contextual validation;
- `FinalizedBlock`, representing a block which is ready to be finalized
immediately;
and changes the `Request::CommitBlock` and `Request::CommitFinalizedBlock`
variants to use these types instead of their previous fields.
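A sketch of the two types, with fields assumed from the description (the precomputed data can grow over time):

```rust
use std::sync::Arc;
use zebra_chain::{block, block::Block, transaction};

/// A block that passed semantic validation, with data precomputed for
/// contextual validation.
pub struct PreparedBlock {
    pub block: Arc<Block>,
    pub hash: block::Hash,
    pub height: block::Height,
    /// Transaction hashes computed as a side effect of earlier checks.
    pub transaction_hashes: Vec<transaction::Hash>,
}

/// A block that is ready to be committed directly to the finalized state.
pub struct FinalizedBlock {
    pub block: Arc<Block>,
    pub hash: block::Hash,
    pub height: block::Height,
}
```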
This change solves the problem of passing data between semantic
validation and contextual validation, and cleans up the state code by
allowing it to pass around a bundle of data. Previously, the state code
just passed around an `Arc<Block>`, which forced it to needlessly
recompute block hashes and other data, and was incompatible with the
already-known but not-yet-implemented data transfer requirements, namely
passing in the Sprout and Sapling anchors computed during contextual
validation.
This commit propagates the `PreparedBlock` and `FinalizedBlock` types
through the state code but only uses their data opportunistically, e.g.,
changing `.hash()` computations to use the precomputed hash. In the
future, these structures can be extended to pass data through the
verification pipeline for reuse as appropriate. For instance, these
changes allow the sprout and sapling anchors to be propagated through
the state.
The behavior of a request for a UTXO from a previous block depends on
whether that block has already been submitted to the state, or not:
* if it has, the state should be able to find it and answer immediately.
* if it has not, the state should see it in a later request.
However, the previous code only checked committed blocks, not queued
blocks, so if the block containing the UTXO had already arrived but had
not been committed, it would never be scanned.
This patch fixes the problem but is a bad solution, duplicating
computation between the block verifier and the state. A better fix
follows in the next commit.
Make tracing messages more concise by omitting information already
contained in a parent span and by shortening messages. This makes them
easier to read.
Previously, this function was instrumented with a span containing the
parent hash that was the entry to the function. But it doesn't make
sense to consider the work done by the function as happening in the
context of the supplied parent hash (as distinct from the context of the
hash of the newly arrived block, which is already contained in an outer
span), so this adds noise without conveying extra context.
Instead, use events that occur within the context of the existing spans.
Here the span is added to the body of the `Service::call` implementation,
not to the futures it returns, because the state service does all of its
work synchronously in `call`.
The service is skipped as a span field. We could either include or exclude
the request itself: it would be useful to see, but the request body can be
very large. Instead, we make two spans, one at info level and one at trace
level, and filter that way.
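A sketch of that two-span pattern, with hypothetical span names:

```rust
use tracing::{info_span, trace_span};

/// A terse info-level span that is always cheap to log, plus a
/// trace-level span carrying the potentially very large request body;
/// the log filter level selects between them.
fn call_with_spans(req: &str) {
    let info = info_span!("state_request");
    let detail = trace_span!("state_request_detail", ?req);
    let _info_guard = info.enter();
    let _detail_guard = detail.enter();
    // ... handle the request synchronously inside both spans ...
}
```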
This change is mostly mechanical, with the exception of the changes to the
`tower-batch` middleware. This middleware was adapted from `tower::buffer`,
and the `tower::buffer` code was changed to implement its own bounded queue,
because Tokio 0.3 removed the `mpsc::Sender::poll_send` method. See
ddc64e8d4d
for more context on the Tower changes. To match Tower as closely as possible
in order to be able to upstream `tower-batch`, those changes are copied from
`tower::Buffer` to `tower-batch`.
Some systems have a very small /dev/shm, for example, see:
https://github.com/docker-library/postgres/issues/416
So we should just use the temporary directory on all operating systems.
Also:
* use TempDir to generate the temporary path
* delete the code that we copied from sled
* prefix the temporary path with the state version and network (see the sketch below)
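A sketch using the `tempfile` crate (assuming that is the `TempDir` in question), with a hypothetical function name:

```rust
use tempfile::Builder;

/// Create a temporary state directory, prefixed with the state version
/// and network so stale directories are easy to identify.
fn ephemeral_state_dir(
    version: u32,
    network: &str,
) -> std::io::Result<tempfile::TempDir> {
    Builder::new()
        .prefix(&format!("zebra-state-v{}-{}", version, network))
        .tempdir()
}
```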
## Motivation
Prior to this PR, we used `sled` as our database for storing persistent chain data on disk between boots. We picked sled over rocksdb to minimize our C++ dependencies, despite sled being a less mature codebase. The theory was that if it worked well enough, we'd prefer a pure Rust codebase, but that if we ever ran into problems, we could easily swap it out for rocksdb.
Well, we ran into problems. Sled's memory usage was particularly high, and it seemed to be leaking memory. On top of that, its write performance was poor enough that we became bottlenecked on sled instead of the network.
## Solution
This PR replaces `sled` with `rocksdb`. We've seen a 10x improvement in memory usage out of the box, no more leaking, and much better write performance. With this change writing chain data to disk is no longer a limiting factor in how quickly we can sync the chain.
The code in this pull request has:
- [x] Documentation Comments
- [x] Unit Tests and Property Tests
## Review
@hdevalence
This change explicitly documents cancellation contracts for our Tower services,
and tries to correct a bug in the implementation of the `CheckpointVerifier`,
which duplicated information from the state service but did not ensure that it
was kept in sync.
This change has two benefits:
* reduces conflicts with the sled refactor and any replacement
* allows the function to be called independently for testing
`check_contextual_validity` mistakenly used the new block's hash to try
to get the parent block from the state. This caused a panic, because the
new block isn't in the state yet.
Use `StateService::chain` to get the parent block, because we'll be
using `chain` for difficulty adjustment contextual verification anyway.
* Add internal iterator API for accessing relevant chain blocks
* get blocks from all chains in non_finalized state
* Impl FusedIterator for service::Iter
* impl `ExactSizeIterator` for service::Iter (see the sketch below)
* let size_hint find heights in side chains
Co-authored-by: teor <teor@riseup.net>
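A minimal sketch of that iterator trait stack, on a stand-in type (the real `service::Iter` yields blocks from the state, across finalized and side chains):

```rust
use std::iter::FusedIterator;

/// A stand-in iterator over a range of block heights.
struct Iter {
    heights: std::ops::Range<u32>,
}

impl Iterator for Iter {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        self.heights.next()
    }

    // An exact size_hint, even when some heights live in side chains,
    // is what lets us implement ExactSizeIterator below.
    fn size_hint(&self) -> (usize, Option<usize>) {
        self.heights.size_hint()
    }
}

impl ExactSizeIterator for Iter {}
impl FusedIterator for Iter {}
```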
* Add transcript test for requests while state is empty
* Add happy path test for each query once the state is populated
* let populate logic handle out of order blocks
* Add a maximum queued height metric to the finalized state
And rename all the finalized state metrics to contain "finalized".
* Use `i32` and `-1` instead of `Option<Height>`
Co-authored-by: Jane Lusby <jlusby42@gmail.com>