* wip
Co-authored-by: Jane Lusby <jlusby42@gmail.com>
* wip2: add nullifiers
Co-authored-by: Jane Lusby <jlusby42@gmail.com>
* Update book/src/dev/rfcs/0003-state-updates.md
Co-authored-by: teor <teor@riseup.net>
* Move to RFC number 5
* rfc: add PR link to state update RFC
* rfc: change state RFC to store blocks by height.
The rationale for this change is described in the document: it means
that we write blocks only to one end of the Sled tree, and hopefully
improves the spatial locality of our access patterns.
This should help alleviate a major cause of memory use in Zebra's
current WIP Sled structure, which is that:
- blocks are stored in random, sparse order (by hash) in the B-tree;
- the `Request::GetDepth` method opens the entire block store and
queries a random part of its block data to determine whether a hash is
present;
- if present, it deserializes the complete block data of both the given
block and the current tip block, to compute the difference in block
heights.
This access pattern forces a large amount of B-tree data to remain
resident; keying blocks by height avoids it, because depth queries only
need to touch small height keys instead of full block data.
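As a rough sketch of the new keying scheme (tree and function names here are illustrative, not Zebra's actual Sled layout), storing blocks under big-endian height keys means inserts always land at one end of the B-tree, and the tip height can be read without touching any block data:

```rust
// Minimal sketch, not Zebra's actual schema: key blocks by big-endian
// height so inserts always go to one end of the sled B-tree.
fn store_block(
    by_height: &sled::Tree,
    height: u32,
    block_bytes: &[u8],
) -> sled::Result<()> {
    by_height.insert(height.to_be_bytes(), block_bytes)?;
    Ok(())
}

// The tip height is just the last key; no block data is deserialized.
fn tip_height(by_height: &sled::Tree) -> sled::Result<Option<u32>> {
    Ok(by_height.last()?.map(|(key, _block_bytes)| {
        // Keys written by `store_block` are always 4 big-endian bytes.
        let mut height = [0u8; 4];
        height.copy_from_slice(&key);
        u32::from_be_bytes(height)
    }))
}
```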
* rfc: add sprout and sapling anchors to sled trees.
Co-authored-by: Deirdre Connolly <deirdre@zfnd.org>
* rfc: fill in details of state service requests.
* rfc: extract commit process from API description
* rfc: add anchor parameters to CommitBlock.
These have to be computed by a verifier, so passing them as parameters
means we don't recompute them.
* WIP for in memory state structs
* tweaks from end of session with henry
* more updates from pairing
* rewrite non-finalized state sections
* update query instructions for each request
* more updates
* updates from pairing with henry
* updates from proofreading solo
* add guide level explanation to state rfc
* add drawbacks section
* Update book/src/dev/rfcs/0005-state-updates.md
Co-authored-by: Henry de Valence <hdevalence@hdevalence.ca>
* Apply suggestions from code review
Co-authored-by: Henry de Valence <hdevalence@hdevalence.ca>
* Update book/src/dev/rfcs/0005-state-updates.md
Co-authored-by: Henry de Valence <hdevalence@hdevalence.ca>
* apply changes from code review
* clarify iteration
* Apply suggestions from code review
Co-authored-by: teor <teor@riseup.net>
* apply changes from code review
* Update book/src/dev/rfcs/0005-state-updates.md
Co-authored-by: teor <teor@riseup.net>
* Apply suggestions from code review
Co-authored-by: teor <teor@riseup.net>
* Apply suggestions from code review
Co-authored-by: teor <teor@riseup.net>
* Apply suggestions from code review
Co-authored-by: Deirdre Connolly <deirdre@zfnd.org>
* Apply suggestions from code review
Co-authored-by: teor <teor@riseup.net>
* add info about default constructing chains when forking from finalized state
* Update book/src/dev/rfcs/0005-state-updates.md
Co-authored-by: teor <teor@riseup.net>
* move contextual verification out of Chain
Co-authored-by: Jane Lusby <jlusby42@gmail.com>
Co-authored-by: teor <teor@riseup.net>
Co-authored-by: Deirdre Connolly <deirdre@zfnd.org>
Co-authored-by: Jane Lusby <jane@zfnd.org>
* refactor block and tx validation errors
* rename errors module to error
* move NoTransactions to BlockError
* clarify some errors, use dbg format for hash in error
* make is_coinbase_first return BlockError
* add new error types for each consensus Service
Co-authored-by: Jane Lusby <jane@zfnd.org>
Using a Buffer with size 1 is a footgun because it allows only one
sender to call poll_ready at a time. This is usually undesirable
because it means that a task or service that calls poll_ready but only
makes a service call later (potentially much later) will block all other
callers.
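As a minimal sketch of the change (plain tower with a dummy inner service, not the actual Zebra setup, and assuming a recent tower release), the fix amounts to sizing the Buffer for the expected number of concurrent callers:

```rust
use tower::{buffer::Buffer, service_fn, Service, ServiceExt};

#[tokio::main]
async fn main() -> Result<(), tower::BoxError> {
    // A trivial inner service, standing in for a real Zebra service.
    let inner = service_fn(|n: u32| async move { Ok::<u32, tower::BoxError>(n + 1) });

    // `Buffer::new(inner, 1)` would let a single clone's `poll_ready`
    // reservation starve every other clone until that caller finally
    // makes its call. Sizing the bound for the expected number of
    // concurrent callers avoids that:
    let buffered = Buffer::new(inner, 32);

    // Each caller works with its own clone of the buffer.
    let mut client = buffered.clone();
    let response = client.ready().await?.call(1).await?;
    assert_eq!(response, 2);
    Ok(())
}
```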
The GetPeers requests sent while crawling the network are randomly
load-balanced over available peers. But at the very beginning, they may
both be routed to the same peer, causing network initialization to be
delayed while the second one times out (since zcashd only ever responds
to the first addr message).
Only sending one GetPeers request per candidate set update means we
crawl the network a little more slowly, but avoids hanging on start.
The Inbound service only needs the network setup for some requests, but
it can service other requests without it. Making it return
Poll::Pending until the network setup finishes means that initial
network connections may view the Inbound service as overloaded and
attempt to load-shed.
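Roughly, the alternative looks like this (request, response, and setup types here are simplified, hypothetical stand-ins for the real Inbound service): report readiness unconditionally, and let `call` degrade gracefully for setup-dependent requests until the network setup arrives:

```rust
use std::{
    future::Future,
    pin::Pin,
    task::{Context, Poll},
};
use tokio::sync::oneshot;
use tower::Service;

type BoxError = Box<dyn std::error::Error + Send + Sync + 'static>;

// Hypothetical request/response types, for illustration only.
enum Request { Peers, Ping }
enum Response { Peers(Vec<String>), Pong }

// Stand-in for the state that only exists after network setup finishes.
struct NetworkSetup { peers: Vec<String> }

struct Inbound {
    setup: oneshot::Receiver<NetworkSetup>,
    network: Option<NetworkSetup>,
}

impl Service<Request> for Inbound {
    type Response = Response;
    type Error = BoxError;
    type Future = Pin<Box<dyn Future<Output = Result<Response, BoxError>> + Send>>;

    // Always report readiness: returning Poll::Pending here would make
    // callers see the whole service as overloaded during network setup.
    fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), BoxError>> {
        if self.network.is_none() {
            if let Ok(setup) = self.setup.try_recv() {
                self.network = Some(setup);
            }
        }
        Poll::Ready(Ok(()))
    }

    fn call(&mut self, req: Request) -> Self::Future {
        let peers = self.network.as_ref().map(|n| n.peers.clone());
        Box::pin(async move {
            match req {
                // Setup-independent requests are always serviced.
                Request::Ping => Ok(Response::Pong),
                // Setup-dependent requests degrade gracefully until the
                // network setup arrives.
                Request::Peers => Ok(Response::Peers(peers.unwrap_or_default())),
            }
        })
    }
}
```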
This cleans up the response processing logic a little bit along the way,
but the overall division of responsibility should be better documented
in a future commit.
This lets us distinguish between cases where the message was unsupported
(e.g., BIP11 messages), and cases where the message was uninterpretable
in context (e.g., unsolicited messages).
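A hypothetical sketch of that distinction (variant names are illustrative, not Zebra's actual error type):

```rust
use thiserror::Error;

// Illustrative only: the point is separating "we never handle this kind
// of message" from "this message doesn't make sense right now".
#[derive(Debug, Error)]
enum InboundMessageError {
    /// The message kind is not supported at all (e.g., BIP11 messages).
    #[error("unsupported message kind: {0}")]
    Unsupported(&'static str),
    /// The message kind is supported, but was unexpected in this context
    /// (e.g., an unsolicited response).
    #[error("message could not be interpreted in context: {0}")]
    Unexpected(&'static str),
}
```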
The original version of this commit ran into
https://github.com/rust-lang/rust/issues/64552
again. Thanks to @yaahc for suggesting a workaround (using futures combinators
to avoid writing an async block).
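The shape of that workaround, sketched on a made-up function rather than the actual code: build the boxed future out of combinators, so the compiler never has to name an async block's type:

```rust
use futures::future::{BoxFuture, FutureExt, TryFutureExt};

type BoxError = Box<dyn std::error::Error + Send + Sync + 'static>;

// Hypothetical async fn standing in for the real call.
async fn fetch_value() -> Result<u32, std::io::Error> {
    Ok(42)
}

// The problematic shape under rust-lang/rust#64552 is boxing an async
// block, e.g. `async move { fetch_value().await.map_err(Into::into) }`.
// The workaround builds the same future out of combinators instead:
fn fetch_boxed() -> BoxFuture<'static, Result<u32, BoxError>> {
    fetch_value()
        .map_err(|e| Box::new(e) as BoxError)
        .boxed()
}
```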
Remove the seed command entirely, and make the behavior it provided
(responding to `Request::Peers`) part of the ordinary functioning of the
start command.
The new `Inbound` service should be expanded to handle all request
types.
> Added a test that the handshake's version message matches the specified fields, but the test does not compile, because rustc doesn't believe that the Box<dyn std::error::Error + Send + Sync + 'static> is 'static, and therefore concludes that it isn't a Box<dyn std::error::Error + Send + Sync + 'static>. This manifests as being unable to spawn the connect_isolated task. From digging through Tokio issues, I believe this is an instance of rust-lang/rust#64552.
Co-authored-by: Jane Lusby <jlusby42@gmail.com>
The peer set provides an automatically managed connection pool, abstracting
away all the details of handling individual peer connections. However, it's
also useful to be able to create completely isolated and
minimally-distinguishable connections to individual peers, in order to be able
to send specific messages over Tor, or to implement some custom network crawler
logic.
* add test for first checkpoint sync
Prior to this change we've not had any tests that verify our sync /
network logic is well behaved. This PR cleans up the test helper code to
make error reports more consistent and uses this cleaned up API to
implement a checkpoint sync test which runs zebrad until it reads the
first checkpoint event from stdout.
Co-authored-by: teor <teor@riseup.net>
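The rough shape of such a test, sketched with plain `std::process` rather than Zebra's actual test helpers (the log text matched below is a placeholder, not zebrad's real output):

```rust
use std::{
    io::{BufRead, BufReader},
    process::{Command, Stdio},
};

// Rough sketch only: spawn zebrad, then scan stdout until the first
// checkpoint verification message appears.
fn main() -> std::io::Result<()> {
    let mut child = Command::new("zebrad")
        .arg("start")
        .stdout(Stdio::piped())
        .spawn()?;

    let stdout = BufReader::new(child.stdout.take().expect("stdout is piped"));
    for line in stdout.lines() {
        // Placeholder pattern for "verified the first checkpoint".
        if line?.contains("verified checkpoint") {
            break;
        }
    }

    child.kill()?;
    Ok(())
}
```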
* move include out of unix cfg
Co-authored-by: teor <teor@riseup.net>
The previous code filled in block height 0 for a missing coinbase height
in `SledState::commit_finalized`, since the genesis block is the only
block without a coinbase height (because of a mistake when it was
created).
However, @teor2345 noticed that this is incorrect, because we already
parse the genesis block specially and fill in its coinbase height
correctly. So instead, we can .expect it to be present, because we can
assume that all finalized blocks are valid.
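Sketched with illustrative types (not zebra_chain's actual API):

```rust
// Illustrative types only, standing in for zebra_chain's.
struct Height(u32);
struct Block {
    coinbase_height: Option<Height>,
}

// Before: a missing coinbase height was silently replaced with height 0.
// After: finalized blocks are assumed valid, so a missing height is a bug.
fn finalized_height(block: &Block) -> &Height {
    block
        .coinbase_height
        .as_ref()
        .expect("valid finalized blocks always have a coinbase height")
}
```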
This makes both component verifiers always report readiness from
`poll_ready`, because they do not exert backpressure and cannot fail.
The checkpoint verifier now immediately rejects any blocks that arrive
after it finishes checkpointing, instead of marking the service itself
as failed.
The chain verifier is agnostic to the readiness behavior of its
components, and reports readiness when they are both ready.
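For illustration, an always-ready verifier-style service looks roughly like this (types simplified, not Zebra's actual verifiers):

```rust
use std::{
    future::Future,
    pin::Pin,
    task::{Context, Poll},
};
use tower::Service;

type BoxError = Box<dyn std::error::Error + Send + Sync + 'static>;

// Illustrative stand-ins for Zebra's block and hash types.
struct Block;
struct BlockHash([u8; 32]);

struct CheckpointVerifier {
    // True once the final checkpoint has been verified.
    finished: bool,
}

impl Service<Block> for CheckpointVerifier {
    type Response = BlockHash;
    type Error = BoxError;
    type Future = Pin<Box<dyn Future<Output = Result<BlockHash, BoxError>> + Send>>;

    // No backpressure and no failure: readiness is unconditional.
    fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), BoxError>> {
        Poll::Ready(Ok(()))
    }

    fn call(&mut self, _block: Block) -> Self::Future {
        // Reject individual blocks after checkpointing ends, instead of
        // marking the whole service as failed.
        let finished = self.finished;
        Box::pin(async move {
            if finished {
                Err("checkpoint verifier has already finished".into())
            } else {
                Ok(BlockHash([0; 32]))
            }
        })
    }
}
```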
This is a really nice function but there might be a bug in its future
implementation: https://github.com/tower-rs/tower/issues/469
This bug may have already been fixed for the 0.4.0 release, so we could
switch back once we upgrade.