Implement RFC5: State updates `Chain` type (#1069)
* Begin work on RFC5 implementation
* I think this is necessary
* holy shit supertrait implemented via subtrait
* implement most of the chain functions
* change to slightly better name
* implement fork
* fix outpoint handling in Chain struct
* update expect for work
* resolve review comment
* split utxo into two sets
* update the Chain definition
* just a little more
* update comment
* Apply suggestions from code review

  Co-authored-by: teor <teor@riseup.net>
* apply changes from code review
* remove allow attribute in zebra-state/lib.rs
* Update zebra-state/src/memory_state.rs

  Co-authored-by: teor <teor@riseup.net>
* merge ChainSet type into MemoryState
* rename state impl types
* Add error messages to asserts
* add module doc comment
* update RFC for utxos
* add missing header

Co-authored-by: teor <teor@riseup.net>
This commit is contained in:
parent
d5ce5eeee2
commit
352721bd88
@@ -14,20 +14,21 @@ on the ordering of operations in the state layer.
 As in the rest of Zebra, we want to express our work as a collection of
 work-items with explicit dependencies, then execute these items concurrently
 and in parallel on a thread pool.
 
 # Definitions
 [definitions]: #definitions
 
-- *UTXO*: unspent transaction output. Transaction outputs are modeled in `zebra-chain` by the [`TransparentOutput`][transout] structure.
-- Transaction input: an output of a previous transaction consumed by a later transaction (the one it is an input to). Modeled in `zebra-chain` by the [`TransparentInput`][transin] structure.
-- lock script: the script that defines the conditions under which some UTXO can be spent. Stored in the [`TransparentOutput::lock_script`][lock_script] field.
-- unlock script: a script satisfying the conditions of the lock script, allowing a UTXO to be spent. Stored in the [`TransparentInput::PrevOut::lock_script`][lock_script] field.
+- *UTXO*: unspent transaction output. Transaction outputs are modeled in `zebra-chain` by the [`transparent::Output`][transout] structure.
+- Transaction input: an output of a previous transaction consumed by a later transaction (the one it is an input to). Modeled in `zebra-chain` by the [`transparent::Input`][transin] structure.
+- lock script: the script that defines the conditions under which some UTXO can be spent. Stored in the [`transparent::Output::lock_script`][lock_script] field.
+- unlock script: a script satisfying the conditions of the lock script, allowing a UTXO to be spent. Stored in the [`transparent::Input::PrevOut::unlock_script`][unlock_script] field.
 
-[transout]: https://doc.zebra.zfnd.org/zebra_chain/transaction/struct.TransparentOutput.html
-[lock_script]: https://doc.zebra.zfnd.org/zebra_chain/transaction/struct.TransparentOutput.html#structfield.lock_script
-[transin]: https://doc.zebra.zfnd.org/zebra_chain/transaction/enum.TransparentInput.html
-[unlock_script]: https://doc.zebra.zfnd.org/zebra_chain/transaction/enum.TransparentInput.html#variant.PrevOut.field.unlock_script
+[transout]: https://doc.zebra.zfnd.org/zebra_chain/transparent/struct.Output.html
+[lock_script]: https://doc.zebra.zfnd.org/zebra_chain/transparent/struct.Output.html#structfield.lock_script
+[transin]: https://doc.zebra.zfnd.org/zebra_chain/transparent/enum.Input.html
+[unlock_script]: https://doc.zebra.zfnd.org/zebra_chain/transparent/enum.Input.html#variant.PrevOut.field.unlock_script
 
 # Guide-level explanation
 [guide-level-explanation]: #guide-level-explanation
@@ -55,8 +56,8 @@ done later, at the point that its containing block is committed to the chain.
 At a high level, this adds a new request/response pair to the state service:
 
-- `Request::AwaitUtxo(OutPoint)` requests a `TransparentOutput` specified by `OutPoint` from the state layer;
-- `Response::Utxo(TransparentOutput)` supplies the requested `TransparentOutput`.
+- `Request::AwaitUtxo(OutPoint)` requests a `transparent::Output` specified by `OutPoint` from the state layer;
+- `Response::Utxo(transparent::Output)` supplies the requested `transparent::Output`.
 
 Note that this request is named differently from the other requests,
 `AwaitUtxo` rather than `GetUtxo` or similar. This is because the request has
@@ -72,7 +73,7 @@ is available. For instance, if we begin parallel download and verification of
 500 blocks, we should be able to begin script verification of all scripts
 referencing outputs from existing blocks in parallel, and begin verification
 of scripts referencing outputs from new blocks as soon as they are committed
 to the chain.
 
 Because spending outputs from older blocks is more common than spending
 outputs from recent blocks, this should allow a significant amount of
@@ -82,7 +83,7 @@ parallelism.
 [reference-level-explanation]: #reference-level-explanation
 
 We add a `Request::AwaitUtxo(OutPoint)` and
-`Response::Utxo(TransparentOutput)` to the state protocol. As described
+`Response::Utxo(transparent::Output)` to the state protocol. As described
 above, the request name is intended to indicate the request's behavior: the
 request does not resolve until the state layer learns of a UTXO described by
 the request.
@@ -163,7 +164,7 @@ structure described below.
 ```rust
 // sketch
 #[derive(Default, Debug)]
-struct PendingUtxos(HashMap<OutPoint, oneshot::Sender<TransparentOutput>>);
+struct PendingUtxos(HashMap<OutPoint, oneshot::Sender<transparent::Output>>);
 
 impl PendingUtxos {
     // adds the outpoint and returns (wrapped) rx end of oneshot
@@ -171,7 +172,7 @@ impl PendingUtxos {
     pub fn queue(&mut self, outpoint: OutPoint) -> impl Future<Output=Result<Response, ...>>;
 
     // if outpoint is a hashmap key, remove the entry and send output on the channel
-    pub fn respond(&mut self, outpoint: OutPoint, output: TransparentOutput);
+    pub fn respond(&mut self, outpoint: OutPoint, output: transparent::Output);
 
     // scans the hashmap and removes any entries with closed senders
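The `PendingUtxos` sketch above only gives signatures. As a rough, std-only illustration of the same map-of-senders pattern (with `std::sync::mpsc` standing in for tokio's `oneshot`, and simplified `OutPoint`/`Output` stand-ins that are assumptions for the sake of a self-contained example):

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

// Hypothetical stand-ins for zebra-chain's OutPoint and transparent::Output,
// so this sketch is self-contained (not the real types).
type OutPoint = (u64, u32); // (transaction hash stand-in, output index)
type Output = String;

#[derive(Default)]
struct PendingUtxos(HashMap<OutPoint, Sender<Output>>);

impl PendingUtxos {
    // Register interest in an outpoint; the caller awaits the returned end.
    fn queue(&mut self, outpoint: OutPoint) -> Receiver<Output> {
        let (tx, rx) = channel();
        self.0.insert(outpoint, tx);
        rx
    }

    // If the outpoint is pending, remove the entry and send the output.
    fn respond(&mut self, outpoint: OutPoint, output: Output) {
        if let Some(tx) = self.0.remove(&outpoint) {
            // ignore send errors: the requester may have gone away
            let _ = tx.send(output);
        }
    }
}

fn main() {
    let mut pending = PendingUtxos::default();
    let rx = pending.queue((42, 0));
    pending.respond((42, 0), "output script".to_string());
    assert_eq!(rx.recv().unwrap(), "output script");
    assert!(pending.0.is_empty());
    println!("pending utxo resolved");
}
```

The key property carried over from the design: a request registers interest before the data exists, and the response side fulfills it whenever the UTXO arrives, which is why the real request is named `AwaitUtxo` rather than `GetUtxo`.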
@@ -264,16 +264,18 @@ is completely empty.
 The `Chain` type is defined by the following struct and API:
 
 ```rust
 #[derive(Debug, Default, Clone)]
 struct Chain {
     blocks: BTreeMap<block::Height, Arc<Block>>,
     height_by_hash: HashMap<block::Hash, block::Height>,
-    tx_by_hash: HashMap<transaction::Hash, (block::Height, tx_index)>,
+    tx_by_hash: HashMap<transaction::Hash, (block::Height, usize)>,
 
-    utxos: HashSet<transparent::Output>,
-    sapling_anchors: HashSet<sapling::tree::Root>,
+    created_utxos: HashSet<transparent::OutPoint>,
+    spent_utxos: HashSet<transparent::OutPoint>,
     sprout_anchors: HashSet<sprout::tree::Root>,
-    sapling_nullifiers: HashSet<sapling::Nullifier>,
+    sapling_anchors: HashSet<sapling::tree::Root>,
     sprout_nullifiers: HashSet<sprout::Nullifier>,
+    sapling_nullifiers: HashSet<sapling::Nullifier>,
     partial_cumulative_work: PartialCumulativeWork,
 }
 ```
@@ -283,14 +285,16 @@ struct Chain {
 Push a block into a chain as the new tip
 
 1. Update cumulative data members
-   - Add block to end of `self.blocks`
-   - Add hash to `height_by_hash`
-   - for each `transaction` in `block`
-     - add key: `transaction.hash` and value: `(height, tx_index)` to `tx_by_hash`
-   - Add new utxos and remove consumed utxos from `self.utxos`
-   - Add anchors to the appropriate `self.<version>_anchors`
-   - Add nullifiers to the appropriate `self.<version>_nullifiers`
+   - Add the block's hash to `height_by_hash`
+   - Add work to `self.partial_cumulative_work`
+   - For each `transaction` in `block`
+     - Add key: `transaction.hash` and value: `(height, tx_index)` to `tx_by_hash`
+     - Add created utxos to `self.created_utxos`
+     - Add spent utxos to `self.spent_utxos`
+     - Add anchors to the appropriate `self.<version>_anchors`
+     - Add nullifiers to the appropriate `self.<version>_nullifiers`
 
+2. Add block to `self.blocks`
 
 #### `pub fn pop_root(&mut self) -> Arc<Block>`
@@ -300,11 +304,13 @@ Remove the lowest height block of the non-finalized portion of a chain.
 
 2. Update cumulative data members
    - Remove the block's hash from `self.height_by_hash`
-   - for each `transaction` in `block`
-     - remove `transaction.hash` from `tx_by_hash`
-   - Remove new utxos from `self.utxos`
-   - Remove the anchors from the appropriate `self.<version>_anchors`
-   - Remove the nullifiers from the appropriate `self.<version>_nullifiers`
+   - Subtract work from `self.partial_cumulative_work`
+   - For each `transaction` in `block`
+     - Remove `transaction.hash` from `tx_by_hash`
+     - Remove created utxos from `self.created_utxos`
+     - Remove spent utxos from `self.spent_utxos`
+     - Remove the anchors from the appropriate `self.<version>_anchors`
+     - Remove the nullifiers from the appropriate `self.<version>_nullifiers`
 
 3. Return the block
@@ -321,7 +327,7 @@ Fork a chain at the block with the given hash, if it is part of this chain.
 
 4. Return `forked`
 
-#### `fn pop_tip(&mut self) -> Arc<Block>`
+#### `fn pop_tip(&mut self)`
 
 Remove the highest height block of the non-finalized portion of a chain.
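The fork-then-pop behavior described above (clone the chain, then pop tips until the fork point is the tip) can be sketched with a deliberately simplified chain. Heights and string "hashes" below are assumptions standing in for `block::Height`, `block::Hash`, and `Arc<Block>`:

```rust
use std::collections::BTreeMap;

// A deliberately simplified Chain: heights map to block "hashes"
// (strings here; the real code uses block::Hash and Arc<Block>).
#[derive(Clone, Default, Debug)]
struct Chain {
    blocks: BTreeMap<u32, String>,
}

impl Chain {
    fn push(&mut self, height: u32, hash: &str) {
        self.blocks.insert(height, hash.to_string());
    }

    // The tip is the value at the highest height (BTreeMap is ordered).
    fn tip(&self) -> Option<&String> {
        self.blocks.values().next_back()
    }

    // Remove the highest height block, as in `pop_tip`.
    fn pop_tip(&mut self) {
        if let Some(h) = self.blocks.keys().next_back().copied() {
            self.blocks.remove(&h);
        }
    }

    // Clone the chain, then pop blocks until `fork_tip` is the tip,
    // as in `fork`; None if the hash is not in this chain.
    fn fork(&self, fork_tip: &str) -> Option<Chain> {
        if !self.blocks.values().any(|h| h == fork_tip) {
            return None;
        }
        let mut forked = self.clone();
        while forked.tip().map(|s| s.as_str()) != Some(fork_tip) {
            forked.pop_tip();
        }
        Some(forked)
    }
}

fn main() {
    let mut chain = Chain::default();
    chain.push(1, "a");
    chain.push(2, "b");
    chain.push(3, "c");

    let forked = chain.fork("b").expect("b is in the chain");
    assert_eq!(forked.tip().unwrap(), "b");
    assert_eq!(forked.blocks.len(), 2);
    assert!(chain.fork("zzz").is_none());
    println!("fork ok");
}
```

Because `fork` works on a clone, the original chain is untouched; the real implementation additionally reverts the cumulative data members for each popped block.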
@@ -329,14 +335,13 @@ Remove the highest height block of the non-finalized portion of a chain.
 
 2. Update cumulative data members
    - Remove the corresponding hash from `self.height_by_hash`
-   - for each `transaction` in `block`
-     - remove `transaction.hash` from `tx_by_hash`
-   - Add consumed utxos and remove new utxos from `self.utxos`
-   - Remove anchors from the appropriate `self.<version>_anchors`
-   - Remove the nullifiers from the appropriate `self.<version>_nullifiers`
-   - Subtract work from `self.partial_cumulative_work`
-
-3. Return the block
+   - Subtract work from `self.partial_cumulative_work`
+   - For each `transaction` in `block`
+     - Remove `transaction.hash` from `tx_by_hash`
+     - Remove created utxos from `self.created_utxos`
+     - Remove spent utxos from `self.spent_utxos`
+     - Remove anchors from the appropriate `self.<version>_anchors`
+     - Remove the nullifiers from the appropriate `self.<version>_nullifiers`
 
 #### `Ord`
@@ -358,7 +363,8 @@ handled by `#[derive(Default)]`.
 
 1. initialise cumulative data members
    - Construct an empty `self.blocks`, `height_by_hash`, `tx_by_hash`,
-     `self.utxos`, `self.<version>_anchors`, `self.<version>_nullifiers`
+     `self.created_utxos`, `self.spent_utxos`, `self.<version>_anchors`,
+     `self.<version>_nullifiers`
    - Zero `self.partial_cumulative_work`
 
 **Note:** The chain can be empty if:
@@ -367,23 +373,35 @@ handled by `#[derive(Default)]`.
 all its blocks have been `pop`ped
 
-### `ChainSet` Type
-[chainset-type]: #chainset-type
+### `NonFinalizedState` Type
+[nonfinalizedstate-type]: #nonfinalizedstate-type
 
-The `ChainSet` type represents the set of all non-finalized state. It
-consists of a set of non-finalized but verified chains and a set of
+The `NonFinalizedState` type represents the set of all non-finalized state.
+It consists of a set of non-finalized but verified chains and a set of
 unverified blocks which are waiting for the full context needed to verify
 them to become available.
 
-`ChainSet` is defined by the following structure and API:
+`NonFinalizedState` is defined by the following structure and API:
 
 ```rust
-struct ChainSet {
-    chains: BTreeSet<Chain>,
+/// The state of the chains in memory, including queued blocks.
+#[derive(Debug, Default)]
+pub struct NonFinalizedState {
+    /// Verified, non-finalized chains.
+    chain_set: BTreeSet<Chain>,
+    /// Blocks awaiting their parent blocks for contextual verification.
+    contextual_queue: QueuedBlocks,
+}
 
-    queued_blocks: BTreeMap<block::Hash, QueuedBlock>,
-    queued_by_parent: BTreeMap<block::Hash, Vec<block::Hash>>,
-    queued_by_height: BTreeMap<block::Height, Vec<block::Hash>>,
+/// A queue of blocks, awaiting the arrival of parent blocks.
+#[derive(Debug, Default)]
+struct QueuedBlocks {
+    /// Blocks awaiting their parent blocks for contextual verification.
+    blocks: HashMap<block::Hash, QueuedBlock>,
+    /// Hashes from `queued_blocks`, indexed by parent hash.
+    by_parent: HashMap<block::Hash, Vec<block::Hash>>,
+    /// Hashes from `queued_blocks`, indexed by block height.
+    by_height: BTreeMap<block::Height, Vec<block::Hash>>,
 }
 ```
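The `QueuedBlocks` structure above is a primary map plus two secondary indexes. A std-only sketch of that indexing scheme, with string hashes and a hypothetical simplified `QueuedBlock` standing in for the real types:

```rust
use std::collections::{BTreeMap, HashMap};

// Hypothetical simplified queued block: (hash, parent hash, height).
#[derive(Clone, Debug)]
struct QueuedBlock {
    hash: String,
    parent: String,
    height: u32,
}

// The primary map owns the blocks; the secondary indexes let us find
// them by parent hash (when a parent arrives) and by height.
#[derive(Default, Debug)]
struct QueuedBlocks {
    blocks: HashMap<String, QueuedBlock>,
    by_parent: HashMap<String, Vec<String>>,
    by_height: BTreeMap<u32, Vec<String>>,
}

impl QueuedBlocks {
    fn queue(&mut self, block: QueuedBlock) {
        self.by_parent
            .entry(block.parent.clone())
            .or_default()
            .push(block.hash.clone());
        self.by_height
            .entry(block.height)
            .or_default()
            .push(block.hash.clone());
        self.blocks.insert(block.hash.clone(), block);
    }

    // Drain the blocks waiting on `parent`, e.g. once it is committed.
    // (Pruning the by_height index is elided in this sketch.)
    fn dequeue_children(&mut self, parent: &str) -> Vec<QueuedBlock> {
        let hashes = self.by_parent.remove(parent).unwrap_or_default();
        hashes
            .iter()
            .filter_map(|h| self.blocks.remove(h))
            .collect()
    }
}

fn main() {
    let mut q = QueuedBlocks::default();
    q.queue(QueuedBlock { hash: "b".into(), parent: "a".into(), height: 2 });
    q.queue(QueuedBlock { hash: "c".into(), parent: "a".into(), height: 2 });

    let ready = q.dequeue_children("a");
    assert_eq!(ready.len(), 2);
    assert!(q.blocks.is_empty());
    println!("queue ok");
}
```

The `by_height` index uses a `BTreeMap` so that, once a height is finalized, all queued blocks at or below that height can be found and discarded in order.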
@@ -470,10 +488,10 @@ cannot be committed due to missing context.
 
 - `Chain` represents the non-finalized portion of a single chain
 
-- `ChainSet` represents the non-finalized portion of all chains and all
+- `NonFinalizedState` represents the non-finalized portion of all chains and all
   unverified blocks that are waiting for context to be available.
 
-- `ChainSet::queue` handles queueing and or commiting blocks and
+- `NonFinalizedState::queue` handles queueing and/or committing blocks and
   reorganizing chains (via `commit_block`) but not finalizing them
 
 - Finalized blocks are returned from `finalize` and must still be committed
@@ -759,6 +777,20 @@ if the block is not in any non-finalized chain:
 the `block_by_height` tree (to get the block data).
 
+### `Request::AwaitUtxo(OutPoint)`
+
+Returns
+
+- `Response::Utxo(transparent::Output)`
+
+Implemented by querying:
+
+- (non-finalized) if any `Chain` contains the `OutPoint` in its `created_utxos` and not its `spent_utxos`, get the `transparent::Output` from the `OutPoint`'s transaction
+- (finalized) else if the `OutPoint` is in `utxos_by_outpoint`, return the associated `transparent::Output`.
+- else wait for the `OutPoint` to be created as described in [RFC0004]
+
+[RFC0004]: https://zebra.zfnd.org/dev/rfcs/0004-asynchronous-script-verification.html
+
 # Drawbacks
 [drawbacks]: #drawbacks
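The non-finalized-then-finalized lookup order above can be sketched as a plain function. The types below are assumptions for illustration (tuple outpoints, string outputs, and a flat `outputs_by_outpoint` map standing in for looking up the creating transaction):

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical stand-ins: outpoints are (tx, index) pairs, outputs are
// strings; the real types live in zebra-chain.
type OutPoint = (u64, u32);
type Output = String;

struct Chain {
    created_utxos: HashSet<OutPoint>,
    spent_utxos: HashSet<OutPoint>,
}

// Check each non-finalized chain's created-minus-spent set first, then
// the finalized map; `None` means the caller must wait, per RFC0004.
fn lookup_utxo(
    chains: &[Chain],
    outputs_by_outpoint: &HashMap<OutPoint, Output>, // stand-in for tx lookup
    finalized: &HashMap<OutPoint, Output>,
    outpoint: OutPoint,
) -> Option<Output> {
    for chain in chains {
        if chain.created_utxos.contains(&outpoint) && !chain.spent_utxos.contains(&outpoint) {
            return outputs_by_outpoint.get(&outpoint).cloned();
        }
    }
    finalized.get(&outpoint).cloned()
}

fn main() {
    let mut chain = Chain {
        created_utxos: HashSet::new(),
        spent_utxos: HashSet::new(),
    };
    chain.created_utxos.insert((1, 0));

    let mut outputs = HashMap::new();
    outputs.insert((1, 0), "fresh".to_string());
    let mut finalized = HashMap::new();
    finalized.insert((2, 0), "old".to_string());

    let chains = [chain];
    assert_eq!(lookup_utxo(&chains, &outputs, &finalized, (1, 0)), Some("fresh".to_string()));
    assert_eq!(lookup_utxo(&chains, &outputs, &finalized, (2, 0)), Some("old".to_string()));
    assert_eq!(lookup_utxo(&chains, &outputs, &finalized, (9, 9)), None);
    println!("lookup ok");
}
```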
@@ -17,7 +17,7 @@ use byteorder::{ByteOrder, LittleEndian, ReadBytesExt, WriteBytesExt};
 type Result<T, E = Error> = std::result::Result<T, E>;
 
 /// A runtime validated type for representing amounts of zatoshis
-#[derive(Debug, Eq, PartialEq, Clone, Copy, Serialize, Deserialize)]
+#[derive(Debug, Eq, PartialEq, Clone, Copy, Serialize, Deserialize, Hash)]
 #[serde(try_from = "i64")]
 #[serde(bound = "C: Constraint")]
 pub struct Amount<C = NegativeAllowed>(i64, PhantomData<C>);
@@ -234,7 +234,7 @@ impl Constraint for NegativeAllowed {
 ///     0..=MAX_MONEY,
 /// );
 /// ```
-#[derive(Clone, Copy, Debug, Eq, PartialEq)]
+#[derive(Clone, Copy, Debug, Eq, PartialEq, Hash)]
 pub struct NonNegative {}
 
 impl Constraint for NonNegative {
@@ -26,7 +26,7 @@ fn prf_nf(nk: [u8; 32], rho: [u8; 32]) -> [u8; 32] {
 }
 
 /// A Nullifier for Sapling transactions
-#[derive(Clone, Copy, Debug, Eq, PartialEq, Serialize, Deserialize)]
+#[derive(Clone, Copy, Debug, Eq, PartialEq, Serialize, Deserialize, Hash)]
 #[cfg_attr(
     any(test, feature = "proptest-impl"),
     derive(proptest_derive::Arbitrary)
@@ -62,7 +62,7 @@ struct SaplingNoteCommitmentTree;
 /// commitment tree corresponding to the final Sapling treestate of
 /// this block. A root of a note commitment tree is associated with
 /// each treestate.
-#[derive(Clone, Copy, Default, Eq, PartialEq, Serialize, Deserialize)]
+#[derive(Clone, Copy, Default, Eq, PartialEq, Serialize, Deserialize, Hash)]
 #[cfg_attr(any(test, feature = "proptest-impl"), derive(Arbitrary))]
 pub struct Root(pub [u8; 32]);
@@ -62,7 +62,7 @@ impl From<NullifierSeed> for [u8; 32] {
 }
 
 /// A Nullifier for Sprout transactions
-#[derive(Clone, Copy, Debug, PartialEq, Eq, Serialize, Deserialize)]
+#[derive(Clone, Copy, Debug, PartialEq, Eq, Serialize, Deserialize, Hash)]
 #[cfg_attr(
     any(test, feature = "proptest-impl"),
     derive(proptest_derive::Arbitrary)
@@ -22,7 +22,7 @@ use proptest_derive::Arbitrary;
 /// commitment tree corresponding to the final Sprout treestate of
 /// this block. A root of a note commitment tree is associated with
 /// each treestate.
-#[derive(Clone, Copy, Default, Eq, PartialEq, Serialize, Deserialize)]
+#[derive(Clone, Copy, Default, Eq, PartialEq, Serialize, Deserialize, Hash)]
 #[cfg_attr(any(test, feature = "proptest-impl"), derive(Arbitrary))]
 pub struct Root([u8; 32]);
@@ -93,10 +93,10 @@ pub enum Transaction {
         expiry_height: block::Height,
         /// The net value of Sapling spend transfers minus output transfers.
         value_balance: Amount,
-        /// The shielded data for this transaction, if any.
-        shielded_data: Option<ShieldedData>,
         /// The JoinSplit data for this transaction, if any.
         joinsplit_data: Option<JoinSplitData<Groth16Proof>>,
+        /// The shielded data for this transaction, if any.
+        shielded_data: Option<ShieldedData>,
     },
 }
@@ -40,7 +40,7 @@ impl AsRef<[u8]> for CoinbaseData {
 /// OutPoint
 ///
 /// A particular transaction output reference.
-#[derive(Copy, Clone, Debug, Eq, PartialEq, Serialize, Deserialize)]
+#[derive(Copy, Clone, Debug, Eq, PartialEq, Serialize, Deserialize, Hash)]
 #[cfg_attr(any(test, feature = "proptest-impl"), derive(Arbitrary))]
 pub struct OutPoint {
     /// References the transaction that contains the UTXO being spent.
@@ -86,7 +86,7 @@ pub enum Input {
 /// I only own one UTXO worth 2 ZEC, I would construct a transaction
 /// that spends my UTXO and sends 1 ZEC to you and 1 ZEC back to me
 /// (just like receiving change).
-#[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize)]
+#[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, Hash)]
 #[cfg_attr(any(test, feature = "proptest-impl"), derive(Arbitrary))]
 pub struct Output {
     /// Transaction value.
@@ -8,7 +8,7 @@ use std::{
 };
 
 /// An encoding of a Bitcoin script.
-#[derive(Clone, Eq, PartialEq, Serialize, Deserialize)]
+#[derive(Clone, Eq, PartialEq, Serialize, Deserialize, Hash)]
 #[cfg_attr(
     any(test, feature = "proptest-impl"),
     derive(proptest_derive::Arbitrary)
@@ -320,3 +320,44 @@ impl AddAssign for Work {
         *self = *self + rhs;
     }
 }
+
+#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, PartialOrd, Ord)]
+/// Partial work used to track relative work in non-finalized chains
+pub struct PartialCumulativeWork(u128);
+
+impl std::ops::Add<Work> for PartialCumulativeWork {
+    type Output = PartialCumulativeWork;
+
+    fn add(self, rhs: Work) -> Self::Output {
+        let result = self
+            .0
+            .checked_add(rhs.0)
+            .expect("Work values do not overflow");
+
+        PartialCumulativeWork(result)
+    }
+}
+
+impl std::ops::AddAssign<Work> for PartialCumulativeWork {
+    fn add_assign(&mut self, rhs: Work) {
+        *self = *self + rhs;
+    }
+}
+
+impl std::ops::Sub<Work> for PartialCumulativeWork {
+    type Output = PartialCumulativeWork;
+
+    fn sub(self, rhs: Work) -> Self::Output {
+        let result = self.0
+            .checked_sub(rhs.0)
+            .expect("PartialCumulativeWork values do not underflow: all subtracted Work values must have been previously added to the PartialCumulativeWork");
+
+        PartialCumulativeWork(result)
+    }
+}
+
+impl std::ops::SubAssign<Work> for PartialCumulativeWork {
+    fn sub_assign(&mut self, rhs: Work) {
+        *self = *self - rhs;
+    }
+}
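The checked-arithmetic pattern added above (panic on overflow or on subtracting work that was never added) can be demonstrated std-only, with a plain `u128` newtype standing in for zebra-chain's `Work`:

```rust
// Std-only sketch of the PartialCumulativeWork arithmetic; `Work` here
// is a hypothetical stand-in for zebra-chain's expanded-difficulty type.
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, PartialOrd, Ord)]
struct Work(u128);

#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, PartialOrd, Ord)]
struct PartialCumulativeWork(u128);

impl std::ops::AddAssign<Work> for PartialCumulativeWork {
    fn add_assign(&mut self, rhs: Work) {
        // checked_add: panic on overflow rather than silently wrapping
        self.0 = self.0.checked_add(rhs.0).expect("work does not overflow");
    }
}

impl std::ops::SubAssign<Work> for PartialCumulativeWork {
    fn sub_assign(&mut self, rhs: Work) {
        // only work that was previously added may be subtracted
        self.0 = self.0.checked_sub(rhs.0).expect("work does not underflow");
    }
}

fn main() {
    let mut total = PartialCumulativeWork::default();
    total += Work(10);
    total += Work(5);
    total -= Work(5); // popping a block reverts its work
    assert_eq!(total, PartialCumulativeWork(10));
    assert!(PartialCumulativeWork(15) > total);
    println!("work ok");
}
```

The derived `Ord` is what lets `NonFinalizedState` keep its chains in a `BTreeSet` ordered by cumulative work, so the best chain is simply the last element.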
@@ -19,9 +19,9 @@ mod util;
 #[cfg(test)]
 mod tests;
 
-use memory_state::MemoryState;
+use memory_state::NonFinalizedState;
 use service::QueuedBlock;
-use sled_state::SledState;
+use sled_state::FinalizedState;
 
 pub use config::Config;
 pub use request::{HashOrHeight, Request};
@@ -1,3 +1,461 @@
-pub struct MemoryState {
-    // TODO
-}
+//! Non-finalized chain state management as defined by [RFC0005]
+//!
+//! [RFC0005]: https://zebra.zfnd.org/dev/rfcs/0005-state-updates.html
+#![allow(dead_code)]
+use std::{
+    cmp::Ordering,
+    collections::BTreeSet,
+    collections::{BTreeMap, HashMap, HashSet},
+    ops::Deref,
+    sync::Arc,
+};
+
+use zebra_chain::{
+    block::{self, Block},
+    primitives::Groth16Proof,
+    sapling, sprout, transaction, transparent,
+    work::difficulty::PartialCumulativeWork,
+};
+
+use crate::service::QueuedBlock;
+
+/// The state of the chains in memory, including queued blocks.
+#[derive(Debug, Default)]
+pub struct NonFinalizedState {
+    /// Verified, non-finalized chains.
+    chain_set: BTreeSet<Chain>,
+    /// Blocks awaiting their parent blocks for contextual verification.
+    contextual_queue: QueuedBlocks,
+}
+
+/// A queue of blocks, awaiting the arrival of parent blocks.
+#[derive(Debug, Default)]
+struct QueuedBlocks {
+    /// Blocks awaiting their parent blocks for contextual verification.
+    blocks: HashMap<block::Hash, QueuedBlock>,
+    /// Hashes from `queued_blocks`, indexed by parent hash.
+    by_parent: HashMap<block::Hash, Vec<block::Hash>>,
+    /// Hashes from `queued_blocks`, indexed by block height.
+    by_height: BTreeMap<block::Height, Vec<block::Hash>>,
+}
+
+impl NonFinalizedState {
+    pub fn finalize(&mut self) -> Arc<Block> {
+        todo!()
+    }
+
+    pub fn queue(&mut self, _block: QueuedBlock) {
+        todo!()
+    }
+
+    fn process_queued(&mut self, _new_parent: block::Hash) {
+        todo!()
+    }
+
+    fn commit_block(&mut self, _block: QueuedBlock) -> Option<block::Hash> {
+        todo!()
+    }
+}
+
+#[derive(Debug, Default, Clone)]
+struct Chain {
+    blocks: BTreeMap<block::Height, Arc<Block>>,
+    height_by_hash: HashMap<block::Hash, block::Height>,
+    tx_by_hash: HashMap<transaction::Hash, (block::Height, usize)>,
+
+    created_utxos: HashSet<transparent::OutPoint>,
+    spent_utxos: HashSet<transparent::OutPoint>,
+    sprout_anchors: HashSet<sprout::tree::Root>,
+    sapling_anchors: HashSet<sapling::tree::Root>,
+    sprout_nullifiers: HashSet<sprout::Nullifier>,
+    sapling_nullifiers: HashSet<sapling::Nullifier>,
+    partial_cumulative_work: PartialCumulativeWork,
+}
+
+impl Chain {
+    /// Push a contextually valid non-finalized block into a chain as the new tip.
+    pub fn push(&mut self, block: Arc<Block>) {
+        let block_height = block
+            .coinbase_height()
+            .expect("valid non-finalized blocks have a coinbase height");
+
+        // update cumulative data members
+        self.update_chain_state_with(&block);
+        self.blocks.insert(block_height, block);
+    }
+
+    /// Remove the lowest height block of the non-finalized portion of a chain.
+    pub fn pop_root(&mut self) -> Arc<Block> {
+        let block_height = self.lowest_height();
+
+        // remove the lowest height block from self.blocks
+        let block = self
+            .blocks
+            .remove(&block_height)
+            .expect("only called while blocks is populated");
+
+        // update cumulative data members
+        self.revert_chain_state_with(&block);
+
+        // return the block
+        block
+    }
+
+    fn lowest_height(&self) -> block::Height {
+        self.blocks
+            .keys()
+            .next()
+            .cloned()
+            .expect("only called while blocks is populated")
+    }
+
+    /// Fork a chain at the block with the given hash, if it is part of this
+    /// chain.
+    pub fn fork(&self, fork_tip: block::Hash) -> Option<Self> {
+        if !self.height_by_hash.contains_key(&fork_tip) {
+            return None;
+        }
+
+        let mut forked = self.clone();
+
+        while forked.non_finalized_tip_hash() != fork_tip {
+            forked.pop_tip();
+        }
+
+        Some(forked)
+    }
+
+    fn non_finalized_tip_hash(&self) -> block::Hash {
+        self.blocks
+            .values()
+            .next_back()
+            .expect("only called while blocks is populated")
+            .hash()
+    }
+
+    /// Remove the highest height block of the non-finalized portion of a chain.
+    fn pop_tip(&mut self) {
+        let block_height = self.non_finalized_tip_height();
+
+        let block = self
+            .blocks
+            .remove(&block_height)
+            .expect("only called while blocks is populated");
+
+        assert!(
+            !self.blocks.is_empty(),
+            "Non-finalized chains must have at least one block to be valid"
+        );
+
+        self.revert_chain_state_with(&block);
+    }
+
+    fn non_finalized_tip_height(&self) -> block::Height {
+        *self
+            .blocks
+            .keys()
+            .next_back()
+            .expect("only called while blocks is populated")
+    }
+}
+
+/// Helper trait to organize inverse operations done on the `Chain` type. Used to
+/// overload the `update_chain_state_with` and `revert_chain_state_with` methods
+/// based on the type of the argument.
+///
+/// This trait was motivated by the length of the `push` and `pop_root` functions
+/// and fear that it would be easy to introduce bugs when updating them unless
+/// the code was reorganized to keep related operations adjacent to each other.
+trait UpdateWith<T> {
+    /// Update `Chain` cumulative data members to add data that are derived from
+    /// `T`
+    fn update_chain_state_with(&mut self, _: &T);
+
+    /// Update `Chain` cumulative data members to remove data that are derived
+    /// from `T`
+    fn revert_chain_state_with(&mut self, _: &T);
+}
+
+impl UpdateWith<Arc<Block>> for Chain {
+    fn update_chain_state_with(&mut self, block: &Arc<Block>) {
+        let block_height = block
+            .coinbase_height()
+            .expect("valid non-finalized blocks have a coinbase height");
+        let block_hash = block.hash();
+
+        // add hash to height_by_hash
+        let prior_height = self.height_by_hash.insert(block_hash, block_height);
+        assert!(
+            prior_height.is_none(),
+            "block hashes must be unique within a single chain"
+        );
+
+        // add work to partial cumulative work
+        let block_work = block
+            .header
+            .difficulty_threshold
+            .to_work()
+            .expect("work has already been validated");
+        self.partial_cumulative_work += block_work;
+
+        // for each transaction in block
+        for (transaction_index, transaction) in block.transactions.iter().enumerate() {
+            let (inputs, outputs, shielded_data, joinsplit_data) = match transaction.deref() {
+                transaction::Transaction::V4 {
+                    inputs,
+                    outputs,
+                    shielded_data,
+                    joinsplit_data,
+                    ..
+                } => (inputs, outputs, shielded_data, joinsplit_data),
+                _ => unreachable!(
+                    "older transaction versions only exist in finalized blocks pre sapling",
+                ),
+            };
+
+            // add key `transaction.hash` and value `(height, tx_index)` to `tx_by_hash`
+            let transaction_hash = transaction.hash();
+            let prior_pair = self
+                .tx_by_hash
+                .insert(transaction_hash, (block_height, transaction_index));
+            assert!(
+                prior_pair.is_none(),
+                "transactions must be unique within a single chain"
+            );
+
+            // add the utxos this produced
+            self.update_chain_state_with(&(transaction_hash, outputs));
+            // add the utxos this consumed
+            self.update_chain_state_with(inputs);
+            // add sprout anchor and nullifiers
+            self.update_chain_state_with(joinsplit_data);
+            // add sapling anchor and nullifier
+            self.update_chain_state_with(shielded_data);
+        }
+    }
+
+    fn revert_chain_state_with(&mut self, block: &Arc<Block>) {
+        let block_hash = block.hash();
+
+        // remove the block's hash from `height_by_hash`
+        assert!(
+            self.height_by_hash.remove(&block_hash).is_some(),
+            "hash must be present if block was"
+        );
+
+        // remove work from partial_cumulative_work
+        let block_work = block
+            .header
+            .difficulty_threshold
+            .to_work()
+            .expect("work has already been validated");
+        self.partial_cumulative_work -= block_work;
+
+        // for each transaction in block
+        for transaction in &block.transactions {
+            let (inputs, outputs, shielded_data, joinsplit_data) = match transaction.deref() {
+                transaction::Transaction::V4 {
+                    inputs,
+                    outputs,
+                    shielded_data,
+                    joinsplit_data,
+                    ..
+                } => (inputs, outputs, shielded_data, joinsplit_data),
+                _ => unreachable!(
+                    "older transaction versions only exist in finalized blocks pre sapling",
+                ),
+            };
+
+            // remove `transaction.hash` from `tx_by_hash`
+            let transaction_hash = transaction.hash();
+            assert!(
+                self.tx_by_hash.remove(&transaction_hash).is_some(),
+                "transactions must be present if block was"
+            );
+
+            // remove the utxos this produced
+            self.revert_chain_state_with(&(transaction_hash, outputs));
+            // remove the utxos this consumed
+            self.revert_chain_state_with(inputs);
+            // remove sprout anchor and nullifiers
+            self.revert_chain_state_with(joinsplit_data);
+            // remove sapling anchor and nullifier
+            self.revert_chain_state_with(shielded_data);
+        }
+    }
+}
+
+impl UpdateWith<(transaction::Hash, &Vec<transparent::Output>)> for Chain {
+    fn update_chain_state_with(
+        &mut self,
+        (transaction_hash, outputs): &(transaction::Hash, &Vec<transparent::Output>),
+    ) {
+        for (utxo_index, _) in outputs.iter().enumerate() {
+            self.created_utxos.insert(transparent::OutPoint {
+                hash: *transaction_hash,
+                index: utxo_index as u32,
+            });
+        }
+    }
+
+    fn revert_chain_state_with(
+        &mut self,
+        (transaction_hash, outputs): &(transaction::Hash, &Vec<transparent::Output>),
+    ) {
+        for (utxo_index, _) in outputs.iter().enumerate() {
+            assert!(
+                self.created_utxos.remove(&transparent::OutPoint {
+                    hash: *transaction_hash,
+                    index: utxo_index as u32,
+                }),
+                "created_utxos must be present if block was"
+            );
+        }
+    }
+}
+
+impl UpdateWith<Vec<transparent::Input>> for Chain {
+    fn update_chain_state_with(&mut self, inputs: &Vec<transparent::Input>) {
+        for consumed_utxo in inputs {
+            match consumed_utxo {
+                transparent::Input::PrevOut { outpoint, .. } => {
+                    self.spent_utxos.insert(*outpoint);
+                }
+                transparent::Input::Coinbase { .. } => {}
+            }
+        }
+    }
+
+    fn revert_chain_state_with(&mut self, inputs: &Vec<transparent::Input>) {
+        for consumed_utxo in inputs {
+            match consumed_utxo {
+                transparent::Input::PrevOut { outpoint, .. } => {
+                    assert!(
+                        self.spent_utxos.remove(outpoint),
+                        "spent_utxos must be present if block was"
+                    );
+                }
+                transparent::Input::Coinbase { .. } => {}
+            }
+        }
+    }
+}
+
+impl UpdateWith<Option<transaction::JoinSplitData<Groth16Proof>>> for Chain {
+    fn update_chain_state_with(
+        &mut self,
+        joinsplit_data: &Option<transaction::JoinSplitData<Groth16Proof>>,
+    ) {
+        if let Some(joinsplit_data) = joinsplit_data {
+            for sprout::JoinSplit {
+                anchor, nullifiers, ..
+            } in joinsplit_data.joinsplits()
+            {
+                self.sprout_anchors.insert(*anchor);
+                self.sprout_nullifiers.insert(nullifiers[0]);
+                self.sprout_nullifiers.insert(nullifiers[1]);
+            }
+        }
+    }
+
+    fn revert_chain_state_with(
+        &mut self,
+        joinsplit_data: &Option<transaction::JoinSplitData<Groth16Proof>>,
+    ) {
+        if let Some(joinsplit_data) = joinsplit_data {
+            for sprout::JoinSplit {
+                anchor, nullifiers, ..
+            } in joinsplit_data.joinsplits()
+            {
+                assert!(
+                    self.sprout_anchors.remove(anchor),
+                    "anchor must be present if block was"
+                );
+                assert!(
+                    self.sprout_nullifiers.remove(&nullifiers[0]),
+                    "nullifiers must be present if block was"
+                );
+                assert!(
+                    self.sprout_nullifiers.remove(&nullifiers[1]),
+                    "nullifiers must be present if block was"
+                );
);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl UpdateWith<Option<transaction::ShieldedData>> for Chain {
|
||||
fn update_chain_state_with(&mut self, shielded_data: &Option<transaction::ShieldedData>) {
|
||||
if let Some(shielded_data) = shielded_data {
|
||||
for sapling::Spend {
|
||||
anchor, nullifier, ..
|
||||
} in shielded_data.spends()
|
||||
{
|
||||
self.sapling_anchors.insert(*anchor);
|
||||
self.sapling_nullifiers.insert(*nullifier);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fn revert_chain_state_with(&mut self, shielded_data: &Option<transaction::ShieldedData>) {
|
||||
if let Some(shielded_data) = shielded_data {
|
||||
for sapling::Spend {
|
||||
anchor, nullifier, ..
|
||||
} in shielded_data.spends()
|
||||
{
|
||||
assert!(
|
||||
self.sapling_anchors.remove(anchor),
|
||||
"anchor must be present if block was"
|
||||
);
|
||||
assert!(
|
||||
self.sapling_nullifiers.remove(nullifier),
|
||||
"nullifier must be present if block was"
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl PartialEq for Chain {
|
||||
fn eq(&self, other: &Self) -> bool {
|
||||
self.partial_cmp(other) == Some(Ordering::Equal)
|
||||
}
|
||||
}
|
||||
|
||||
impl Eq for Chain {}
|
||||
|
||||
impl PartialOrd for Chain {
|
||||
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
|
||||
Some(self.cmp(other))
|
||||
}
|
||||
}
|
||||
|
||||
impl Ord for Chain {
|
||||
fn cmp(&self, other: &Self) -> Ordering {
|
||||
if self.partial_cumulative_work != other.partial_cumulative_work {
|
||||
self.partial_cumulative_work
|
||||
.cmp(&other.partial_cumulative_work)
|
||||
} else {
|
||||
let self_hash = self
|
||||
.blocks
|
||||
.values()
|
||||
.last()
|
||||
.expect("always at least 1 element")
|
||||
.hash();
|
||||
|
||||
let other_hash = other
|
||||
.blocks
|
||||
.values()
|
||||
.last()
|
||||
.expect("always at least 1 element")
|
||||
.hash();
|
||||
|
||||
// This comparison is a tie-breaker within the local node, so it does not need to
|
||||
// be consistent with the ordering on `ExpandedDifficulty` and `block::Hash`.
|
||||
match self_hash.0.cmp(&other_hash.0) {
|
||||
Ordering::Equal => unreachable!("Chain tip block hashes are always unique"),
|
||||
ordering => ordering,
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@@ -13,9 +13,10 @@ use zebra_chain::{
     parameters::Network,
 };
 
-use crate::{BoxError, Config, MemoryState, Request, Response, SledState};
+use crate::{BoxError, Config, FinalizedState, NonFinalizedState, Request, Response};
 
 // todo: put this somewhere
 #[derive(Debug)]
 pub struct QueuedBlock {
     pub block: Arc<Block>,
     // TODO: add these parameters when we can compute anchors.

@@ -26,15 +27,15 @@ pub struct QueuedBlock {
 
 struct StateService {
     /// Holds data relating to finalized chain state.
-    sled: SledState,
+    sled: FinalizedState,
     /// Holds data relating to non-finalized chain state.
-    _mem: MemoryState,
+    _mem: NonFinalizedState,
 }
 
 impl StateService {
     pub fn new(config: Config, network: Network) -> Self {
-        let sled = SledState::new(&config, network);
-        let _mem = MemoryState {};
+        let sled = FinalizedState::new(&config, network);
+        let _mem = NonFinalizedState::default();
         Self { sled, _mem }
     }
 }
@@ -18,16 +18,16 @@ use crate::{BoxError, Config, HashOrHeight, QueuedBlock};
 /// - *asynchronous* methods that perform reads.
 ///
 /// For more on this distinction, see RFC5. The synchronous methods are
-/// implemented as ordinary methods on the [`SledState`]. The asynchronous
+/// implemented as ordinary methods on the [`FinalizedState`]. The asynchronous
 /// methods are not implemented using `async fn`, but using normal methods that
 /// return `impl Future<Output = ...>`. This allows them to move data (e.g.,
 /// clones of handles for [`sled::Tree`]s) into the futures they return.
 ///
 /// This means that the returned futures have a `'static` lifetime and don't
-/// borrow any resources from the [`SledState`], and the actual database work is
+/// borrow any resources from the [`FinalizedState`], and the actual database work is
 /// performed asynchronously when the returned future is polled, not while it is
 /// created. This is analogous to the way [`tower::Service::call`] works.
-pub struct SledState {
+pub struct FinalizedState {
     /// Queued blocks that arrived out of order, indexed by their parent block hash.
     queued_by_prev_hash: HashMap<block::Hash, QueuedBlock>,

@@ -42,7 +42,7 @@ pub struct SledState {
     // sapling_anchors: sled::Tree,
 }
 
-impl SledState {
+impl FinalizedState {
     pub fn new(config: &Config, network: Network) -> Self {
         let db = config.sled_config(network).open().unwrap();

@@ -86,7 +86,7 @@ impl SledState {
     /// It's the caller's responsibility to ensure that blocks are committed in
     /// order. This function is called by [`process_queue`], which ensures order.
     /// It is intentionally not exposed as part of the public API of the
-    /// [`SledState`].
+    /// [`FinalizedState`].
     fn commit_finalized(&mut self, queued_block: QueuedBlock) {
         let QueuedBlock { block, rsp_tx } = queued_block;