This integrates `cxxbridge` into the build system, adding its generated
source files to `libzcash`. We currently need to manually specify each
Rust file containing a bridge description.
Zcash: Moved the conditional into GetNextWorkRequired(), as we had rewritten
CalculateNextWorkRequired() to no longer have the necessary information. This
means that in unit tests, CalculateNextWorkRequired() will calculate what
regtest would use were the new field not set; this is irrelevant, as only
GetNextWorkRequired() is used directly in consensus rules.
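A simplified sketch of the resulting shape (stand-in types; the real code
operates on CBlockIndex and Consensus::Params, and the field name follows
zcashd):

    #include <cstdint>
    #include <boost/optional.hpp>

    // Stand-in for the relevant consensus parameters.
    struct Params {
        boost::optional<int> nPowAllowMinDifficultyBlocksAfterHeight;
        int64_t nPowTargetSpacing = 150;     // 2.5-minute target spacing
        uint32_t powLimitBits = 0x207fffff;  // illustrative compact powLimit
    };

    // Pure calculation over timestamps/targets; needs no chain context.
    uint32_t CalculateNextWorkRequired(const Params& params);

    uint32_t GetNextWorkRequired(const Params& params, int height,
                                 int64_t parentTime, int64_t blockTime) {
        // The conditional lives here, where the header and chain tip are
        // both visible; CalculateNextWorkRequired() no longer needs them.
        if (params.nPowAllowMinDifficultyBlocksAfterHeight != boost::none &&
            height >= params.nPowAllowMinDifficultyBlocksAfterHeight.get() &&
            blockTime > parentTime + 6 * params.nPowTargetSpacing) {
            return params.powLimitBits;  // min-difficulty block allowed
        }
        return CalculateNextWorkRequired(params);
    }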
A few "a->an" and "an->a".
"Shows, if the supplied default SOCKS5 proxy" -> "Shows if the supplied default SOCKS5 proxy". Change made on 3 occurrences.
"without fully understanding the ramification of a command" -> "without fully understanding the ramifications of a command".
Removed duplicate words such as "the the".
Zcash: Only includes the changes to files and code that we have.
This reverts commit 49f9584613.
Now that we are depending unconditionally on the Rust Equihash
validator, CheckEquihashSolution() can revert to being a non-contextual
check.
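A sketch of the effect (shapes only, with stand-in types; not the actual
zcashd code):

    struct CBlockHeader {};

    // Rust-backed validator: depends only on the header itself.
    bool CheckEquihashSolution(const CBlockHeader& block);

    // With chain context no longer needed, the call can return to the
    // non-contextual header checks.
    bool CheckBlockHeader(const CBlockHeader& block) {
        if (!CheckEquihashSolution(block))
            return false;  // rejected before any contextual validation
        return true;
    }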
This also fixes a segfault that would occur during reindexing if the
consensus rules were altered such that a previously-valid block would
become invalid, and the node's block files contained blocks in a
specific order. It was encountered while testing the Canopy NU on
testnet (due to a bug in the implementation of ZIP 212 that was
separately fixed in zcash/zcash#4604).
The C++ and Rust Equihash validators are intended to have an identical
set of valid Equihash solutions, so this should merely be an
implementation detail. However, deploying the Rust validator at the same
time as a network upgrade reduces the risk of an unintentional consensus
divergence due to undocumented behaviour in either implementation.
Once Heartwood has activated on mainnet, we can verify that all
pre-Heartwood blocks satisfy the Rust validator, and then remove the C++
validator and make Equihash-checking non-contextual again.
This requires moving CheckEquihashSolution() to
ContextualCheckBlockHeader() for all but the genesis block, which has no
effect on consensus; it just means that an invalid Equihash solution is
rejected slightly later in the block validation process.
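A sketch of that placement (stand-in types; illustrative only):

    struct CBlockHeader {};
    struct CBlockIndex { const CBlockIndex* pprev; };

    bool CheckEquihashSolution(const CBlockHeader& block);

    // pindexPrev is null only for the genesis block, which is exempted;
    // every other header is checked here, slightly later than before.
    bool ContextualCheckBlockHeader(const CBlockHeader& block,
                                    const CBlockIndex* pindexPrev) {
        if (pindexPrev != nullptr && !CheckEquihashSolution(block))
            return false;
        return true;
    }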
It isn't clear how a boost::optional that holds 0 (which is the case for
regtest) is coerced to a boolean, unless you pore over the Boost
documentation. An explicit check is clearer.
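For example (the field name follows zcashd; the behaviour is Boost's
documented engaged-state conversion):

    #include <boost/optional.hpp>
    #include <iostream>

    int main() {
        // Regtest sets the field to 0, i.e. engaged with value zero.
        boost::optional<int> nPowAllowMinDifficultyBlocksAfterHeight = 0;

        // The bool conversion tests engagement, not the held value, so
        // this prints "set" even though the value is 0.
        if (nPowAllowMinDifficultyBlocksAfterHeight)
            std::cout << "set\n";

        // The explicit comparison says what is actually meant.
        if (nPowAllowMinDifficultyBlocksAfterHeight != boost::none)
            std::cout << "explicitly set\n";
    }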
The min-difficulty change is a bilateral consensus rule change, and so
must be conditionally enabled in order for the earlier section of the
chain to synchronise.
Technically this could be implemented as a network upgrade, but as this will
never be deployed to mainnet, a targeted fork will suffice.
A block may be mined with nBits set to the minimum difficulty if its
nTime is set more than six block intervals (15 minutes) after its parent
block.
This is a consensus rule change on testnet that will result in a chain
split (as desired).
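Stated as code, the rule is a single timestamp comparison (a minimal sketch;
six 150-second intervals = 900 seconds = 15 minutes):

    #include <cassert>
    #include <cstdint>

    // True if a block may use minimum-difficulty nBits: its nTime must be
    // more than six block intervals after its parent's nTime.
    bool AllowMinDifficulty(int64_t parentTime, int64_t blockTime) {
        const int64_t spacing = 150;  // seconds per block interval
        return blockTime > parentTime + 6 * spacing;
    }

    int main() {
        assert(!AllowMinDifficulty(1000, 1000 + 900));  // exactly 15 min: no
        assert( AllowMinDifficulty(1000, 1000 + 901));  // more than 15 min: yes
        return 0;
    }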
The min-difficulty blocks are incompatible with difficulty averaging.
Network difficulty is also now defined as the difficulty the network is
currently working to solve, rather than the difficulty of the last
non-min-difficulty block.
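A sketch of the new definition (stubbed signatures; the real code works on
the active chain tip):

    // Next target the network must solve, and compact-to-difficulty
    // conversion; both stubbed for illustration.
    unsigned int GetNextWorkRequired();
    double BitsToDifficulty(unsigned int nBits);

    // Report the difficulty the network is currently working to solve,
    // rather than the nBits of the last non-min-difficulty block.
    double GetNetworkDifficulty() {
        return BitsToDifficulty(GetNextWorkRequired());
    }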
This was unintentionally committed, and caused Equihash verification of blocks
without parents to be skipped. This only affects the genesis block on the test
network, but also causes the "time verifyequihash" benchmark to incorrectly
appear instantaneous.
The genesis blocks and miner tests have been regenerated, because changing the
block header serialisation format changes the block hash, and thus validity.
The Equihash solutions have been removed from the bloom test inputs for
simplicity (block validity is not checked there; only a valid serialisation is
necessary).
When the difficulty adjustment algorithm was altered, the special testnet
min-difficulty case was maintained, but the difficulty adjustment for the
following block then adjusted from min-difficulty instead of from the last
non-min-difficulty block. This caused the difficulty on the testnet to sawtooth
instead of stabilising. The intended behaviour is restored here.
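One way to express the restored behaviour (a sketch with stand-in types):

    #include <cstdint>

    struct CBlockIndex {
        uint32_t nBits;
        const CBlockIndex* pprev;
    };

    // Walk back over min-difficulty (powLimit) blocks so the next
    // adjustment starts from the last real difficulty, rather than
    // sawtoothing off a min-difficulty block.
    const CBlockIndex* LastNonMinDifficulty(const CBlockIndex* pindex,
                                            uint32_t powLimitBits) {
        while (pindex != nullptr && pindex->nBits == powLimitBits)
            pindex = pindex->pprev;
        return pindex;
    }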
Changing the order of difficulty calculation operations to divide first doesn't
affect the result significantly, but ensures we never overflow the arith_uint256
during multiplication and get an artificial jump in difficulty.
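A 64-bit miniature of the problem (the real code uses arith_uint256; 2550 s
is the 17-block averaging window at 150 s spacing):

    #include <cstdint>
    #include <iostream>

    int main() {
        const uint64_t bnAvg = UINT64_MAX / 3;  // large averaged target
        const uint64_t nActualTimespan = 900;
        const uint64_t nAveragingWindowTimespan = 2550;

        // Multiply first: the product wraps, yielding an artificially
        // small target, i.e. a sudden jump in difficulty.
        uint64_t wrapped = bnAvg * nActualTimespan / nAveragingWindowTimespan;

        // Divide first: stays in range; differs only by rounding.
        uint64_t safe = bnAvg / nAveragingWindowTimespan * nActualTimespan;

        std::cout << wrapped << "\n" << safe << "\n";
        return 0;
    }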
The main and test networks are configured to use parameters that are currently
low-memory but usable with the basic solver; they will be increased once the
solver is optimised. The regtest network is configured to have extremely low
memory usage for speed.
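As a rough illustration of the memory/parameter relationship (a hedged
approximation of the Equihash paper's bound; 48/5 are zcashd's regtest
parameters, and 200/9 the values later adopted for main and test):

    #include <cmath>
    #include <cstdio>

    // Wagner-style Equihash solving needs on the order of 2^(n/(k+1))
    // hash-table entries, so memory falls sharply as n shrinks.
    double roughEntries(int n, int k) {
        return std::pow(2.0, n / (k + 1.0));
    }

    int main() {
        std::printf("n=48,  k=5 -> ~%.0f entries\n", roughEntries(48, 5));
        std::printf("n=200, k=9 -> ~%.0f entries\n", roughEntries(200, 9));
        return 0;
    }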
Note that Bitcoin's double-hasher is used for the difficulty check. This does
not match the paper, but is simpler than changing the block header
serialisation. Single hashing is kept for the Equihash solver because there is
no requirement on execution time there, only on memory usage.
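A sketch of the distinction (stand-in hash function; the real block hashing
lives behind the header's GetHash()):

    #include <cstddef>
    #include <cstdint>

    struct uint256 { uint8_t data[32]; };
    uint256 SHA256(const uint8_t* data, size_t len);  // stubbed single hash

    // Difficulty check: Bitcoin-style double hash of the serialised
    // header, so the existing header format is untouched. The solver
    // keeps single hashing internally, where only memory usage matters.
    uint256 BlockHashForDifficulty(const uint8_t* header, size_t len) {
        uint256 once = SHA256(header, len);
        return SHA256(once.data, sizeof(once.data));
    }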
Split GetNextWorkRequired() into two functions to allow the difficulty calculations to
be tested without requiring a full blockchain.
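Sketched as declarations (modelled on zcashd's pow.h; hedged):

    #include <cstdint>

    class CBlockHeader;
    class CBlockIndex;
    class arith_uint256;
    namespace Consensus { struct Params; }

    // Chain-aware entry point used by the consensus rules.
    unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast,
                                     const CBlockHeader* pblock,
                                     const Consensus::Params& params);

    // Pure calculation over an averaged target and two timestamps; no
    // blockchain required, so unit tests can drive it directly.
    unsigned int CalculateNextWorkRequired(arith_uint256 bnAvg,
                                           int64_t nLastBlockTime,
                                           int64_t nFirstBlockTime,
                                           const Consensus::Params& params);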
Add unit tests to cover the basic difficulty calculation, plus each of the
min/max actual time and maximum difficulty target conditions.
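A sketch of one such test (Boost.Test, in the style of zcashd's
pow_tests.cpp; timestamps and the starting target are illustrative, and
AveragingWindowTimespan() is assumed to be the window-length helper on
Consensus::Params):

    #include <boost/test/unit_test.hpp>
    #include "arith_uint256.h"
    #include "chainparams.h"
    #include "pow.h"

    BOOST_AUTO_TEST_CASE(calculate_next_work_basic)
    {
        SelectParams(CBaseChainParams::MAIN);
        const Consensus::Params& params = Params().GetConsensus();

        arith_uint256 bnAvg;
        bnAvg.SetCompact(0x1d00ffff);  // illustrative averaged target

        // Actual timespan equals the averaging window, so the target
        // should come back unchanged (modulo rounding in the division).
        int64_t nLastBlockTime = 1262152739;  // illustrative
        int64_t nFirstBlockTime = nLastBlockTime - params.AveragingWindowTimespan();

        BOOST_CHECK_EQUAL(CalculateNextWorkRequired(bnAvg, nLastBlockTime,
                                                    nFirstBlockTime, params),
                          bnAvg.GetCompact());
    }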