The primary purpose of this commit is an exercise in using `cargo vet`
to track audits of our Rust dependency updates. `cargo update` was
run, and then a simple-to-audit subset of the dependency updates was
audited and committed.
In practice we are using 14.0.0 in most cases, as the LLVM Project has
not published Ubuntu binaries for any point release after 14.0.0 (which
we are using here).
Previously, when a transaction was queried for batch trial decryption,
we identified it by its txid. This was sufficient to uniquely identify
the transaction within the wallet, but _not_ sufficient to uniquely
identify it within a `ThreadNotifyWallets` loop. In particular, when a
reorg occurred and the same transaction was present in blocks on both
sides of the reorg (or was reorged into the mempool and then conflicted
out):
- The first occurrence would batch the transaction's outputs and store a
result receiver.
- The second occurrence would overwrite the first occurrence's result
receiver with its own.
- The first occurrence would read the second's result receiver (which
has identical results to the first batch), removing it from the
`pending_results` map.
- The second occurrence would not find any receiver in the map, and
would mark the transaction as having no decrypted results.
We fix this by annotating each batched transaction with the hash of the
block that triggered its trial decryption: either the block being
disconnected, the block being connected, or the null hash to indicate
a new transaction in the mempool. This is sufficient to domain-separate
all possible sources of duplicate txids:
- If a transaction is moved to the mempool via a block disconnection, or
from the mempool (either mined or conflicted) via a block connection,
its txid will appear twice: once with the block in question's hash,
and once with the null hash.
- If a transaction is present in both a disconnected and a connected
  block (mined on both sides of the fork), its txid will appear twice:
  once each with the two blocks' hashes.
Both of the above rely on the assumption that block hashes are collision
resistant, which in turn relies on SHA-256 being collision resistant.
`CValidationInterface` listeners can either listen directly to
`CValidationInterface::SyncTransaction` as they currently do, or they
can listen to `CValidationInterface::InitBatchScanner` and then process
transactions via `BatchScanner::SyncTransaction`. The latter approach
allows listeners to perform trial decryption via whatever strategy suits
them best.
From:
```
1) for-each tx in wallet: call CopyPreviousWitnesses.
2) for-each tx in block:
   a) for-each note in tx:
      a1) for-each tx in wallet: call AppendNoteCommitment.
      a2) if note is mine: call WitnessNoteIfMine.
3) for-each tx in wallet: call UpdateWitnessHeights.
```
To:
```
1) for-each tx in block:
   a) gather note commitments in vecComm.
   b) witness note if ours.
2) for-each shielded tx in wallet:
   a) copy the previous witness.
   b) append the vecComm note commitments.
   c) update the witness's last processed height.
```
`zcbenchmark` internally loops within the same process to run the same
benchmark multiple times. This meant it was being caught up in the
global validity cache, giving faster results for every iteration except
the first. This was not noticeable for the historic slow transparent
block, but became noticeable once we started caching Sapling and Orchard
bundle validity in zcash/zcash#6073.
As the intention of the benchmarks is to measure the worst case where
the block in question has not had any of its transactions observed
before (as is the case for IBD), we now disable cache storage if calling
`ConnectBlock` from a slow block benchmark.