For duplicate block prevention, we want to verify that the last erasure
batch was sufficiently propagated through turbine. This requires
additional bookkeeping because, depending on the erasure coding schema,
the entire batch might be recovered from only a few coding shreds.
To simplify the above, this commit instead ensures that the last
erasure batch has >= 32 data shreds so that the batch cannot be
recovered unless 32+ shreds are received from turbine or repair.
* add PacketFlags::FROM_STAKED_NODE
* Only forward packets from staked node
* fix local-cluster test forwarding
* review comment
* tpu_votes get marked as from_staked_node
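The staked-node gating above can be sketched as follows. PacketFlags here is a minimal stand-in for illustration, not agave's actual bitflags-based definition, and should_forward is a hypothetical helper:

```rust
// Minimal sketch of flag-gated forwarding. PacketFlags is a stand-in
// type, not agave's real definition.
#[derive(Clone, Copy, PartialEq, Eq)]
struct PacketFlags(u8);

impl PacketFlags {
    const FROM_STAKED_NODE: PacketFlags = PacketFlags(0b0000_0001);

    fn contains(self, other: PacketFlags) -> bool {
        self.0 & other.0 == other.0
    }

    fn insert(&mut self, other: PacketFlags) {
        self.0 |= other.0;
    }
}

/// Forward a packet only if it originated from a staked node.
fn should_forward(flags: PacketFlags) -> bool {
    flags.contains(PacketFlags::FROM_STAKED_NODE)
}

fn main() {
    let mut flags = PacketFlags(0);
    assert!(!should_forward(flags)); // unstaked: excluded from forwarding
    flags.insert(PacketFlags::FROM_STAKED_NODE); // e.g. TPU votes get marked
    assert!(should_forward(flags));
    println!("ok");
}
```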
The IP echo server currently spins up a worker thread for every thread
on the machine. Observing some data for nodes:
- MNB validators and RPC nodes look to get several hundred of these
requests per day
- MNB entrypoint nodes look to get 2-3 requests per second on average
In both instances, the current threadpool is severely overprovisioned,
which is a waste of resources. This PR plumbs a flag to control the
number of worker threads for this pool as well as setting a default of
two threads for this server. Two threads allow one thread to always
listen on the TCP port while the other thread processes requests.
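A minimal sketch of the small fixed-size worker pool described above, with a shared job queue and a configurable thread count. The names and structure are illustrative, not the actual IP echo server code; actual request handling is replaced by a trivial computation:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Sketch: `num_threads` workers pull jobs off one shared queue, so a small
// pool (e.g. two threads) can serve a low request rate without waste.
fn run_pool(num_threads: usize, jobs: Vec<u32>) -> u32 {
    let (job_tx, job_rx) = mpsc::channel::<u32>();
    let job_rx = Arc::new(Mutex::new(job_rx));
    let (done_tx, done_rx) = mpsc::channel::<u32>();
    let mut handles = Vec::new();
    for _ in 0..num_threads {
        let job_rx = Arc::clone(&job_rx);
        let done_tx = done_tx.clone();
        handles.push(thread::spawn(move || loop {
            // Lock is released at the end of this statement, before handling.
            let job = job_rx.lock().unwrap().recv();
            match job {
                Ok(n) => done_tx.send(n * 2).unwrap(), // stand-in for work
                Err(_) => break, // queue closed: shut down
            }
        }));
    }
    drop(done_tx);
    let num_jobs = jobs.len();
    for job in jobs {
        job_tx.send(job).unwrap();
    }
    drop(job_tx); // close the queue so workers exit when drained
    let total: u32 = done_rx.iter().take(num_jobs).sum();
    for handle in handles {
        handle.join().unwrap();
    }
    total
}

fn main() {
    // Two workers, as in the new default for the IP echo server.
    assert_eq!(run_pool(2, vec![1, 2, 3, 4]), 20);
    println!("ok");
}
```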
* add metric for duplicate push messages
* add in num_total_push
* address comments. don't lock stats each time
* address comments. remove num_total_push
* change dup push message name in code to reflect metric name
This is a port of firedancer's implementation of weighted shuffle:
https://github.com/firedancer-io/firedancer/blob/3401bfc26/src/ballet/wsample/fd_wsample.c
https://github.com/anza-xyz/agave/pull/185 implemented weighted shuffle
using a binary tree. Though asymptotically a binary tree has better
performance than a Fenwick tree, it has less cache locality, resulting
in smaller improvements and, in particular, a slower WeightedShuffle::new.
In order to improve cache locality and reduce the overheads of
traversing the tree, this commit instead uses a generalized N-ary tree
with fanout of 16, showing significant improvements in both
WeightedShuffle::new and WeightedShuffle::shuffle.
With 4000 weights:
N-ary tree (fanout 16):
test bench_weighted_shuffle_new ... bench: 36,244 ns/iter (+/- 243)
test bench_weighted_shuffle_shuffle ... bench: 149,082 ns/iter (+/- 1,474)
Binary tree:
test bench_weighted_shuffle_new ... bench: 58,514 ns/iter (+/- 229)
test bench_weighted_shuffle_shuffle ... bench: 269,961 ns/iter (+/- 16,446)
Fenwick tree:
test bench_weighted_shuffle_new ... bench: 39,413 ns/iter (+/- 179)
test bench_weighted_shuffle_shuffle ... bench: 364,771 ns/iter (+/- 2,078)
The improvements become even more significant as there are more items to
shuffle. With 20_000 weights:
N-ary tree (fanout 16):
test bench_weighted_shuffle_new ... bench: 200,659 ns/iter (+/- 4,395)
test bench_weighted_shuffle_shuffle ... bench: 941,928 ns/iter (+/- 26,492)
Binary tree:
test bench_weighted_shuffle_new ... bench: 881,114 ns/iter (+/- 12,343)
test bench_weighted_shuffle_shuffle ... bench: 1,822,257 ns/iter (+/- 12,772)
Fenwick tree:
test bench_weighted_shuffle_new ... bench: 276,936 ns/iter (+/- 14,692)
test bench_weighted_shuffle_shuffle ... bench: 2,644,713 ns/iter (+/- 49,252)
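The cache-friendly descend step of the fanout-16 tree can be sketched as follows: each node keeps the weight sums of its (up to) 16 child subtrees contiguously, so picking the next child is a linear scan over one small array. Names here are illustrative, not agave's actual code:

```rust
// Sketch of one descend step in a fanout-16 weighted-sampling tree.
const FANOUT: usize = 16;

/// Given one node's child-subtree sums and a value r in [0, total),
/// return the child index to descend into and the residual r.
fn pick_child(child_sums: &[u64; FANOUT], mut r: u64) -> (usize, u64) {
    for (i, &sum) in child_sums.iter().enumerate() {
        if r < sum {
            return (i, r);
        }
        r -= sum;
    }
    unreachable!("r must be less than the total of child_sums");
}

fn main() {
    let mut sums = [0u64; FANOUT];
    sums[0] = 5;
    sums[1] = 3;
    sums[2] = 7;
    assert_eq!(pick_child(&sums, 4), (0, 4));
    assert_eq!(pick_child(&sums, 5), (1, 0));
    assert_eq!(pick_child(&sums, 9), (2, 1));
    println!("ok");
}
```

The trade-off: with fanout 16 a tree over 20,000 weights is only about 4 levels deep versus roughly 15 for a binary tree, so far fewer cache lines are touched per query even though each node does a short linear scan.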
This is a partial port of firedancer's implementation of weighted shuffle:
https://github.com/firedancer-io/firedancer/blob/3401bfc26/src/ballet/wsample/fd_wsample.c
Though Fenwick trees use less space, inverse queries require an
additional O(log n) factor for binary search, resulting in an overall
O(n log n log n) performance for weighted shuffle.
This commit instead uses a binary tree where each node contains the sum
of all weights in its left sub-tree. The weights themselves are
implicitly stored at the leaves. Inverse queries and updates to the tree
can all be done in O(log n), resulting in an overall O(n log n) weighted
shuffle implementation.
Based on benchmarks, this results in a 24% improvement in
WeightedShuffle::shuffle:
Fenwick tree:
test bench_weighted_shuffle_new ... bench: 36,686 ns/iter (+/- 191)
test bench_weighted_shuffle_shuffle ... bench: 342,625 ns/iter (+/- 4,067)
Binary tree:
test bench_weighted_shuffle_new ... bench: 59,131 ns/iter (+/- 362)
test bench_weighted_shuffle_shuffle ... bench: 260,194 ns/iter (+/- 11,195)
Though WeightedShuffle::new is now slower, it generally can be cached
and reused as in Turbine:
https://github.com/anza-xyz/agave/blob/b3fd87fe8/turbine/src/cluster_nodes.rs#L68
Additionally, the new code has better asymptotic performance. For
example with 20_000 weights WeightedShuffle::shuffle is 31% faster:
Fenwick tree:
test bench_weighted_shuffle_new ... bench: 255,071 ns/iter (+/- 9,591)
test bench_weighted_shuffle_shuffle ... bench: 2,466,058 ns/iter (+/- 9,873)
Binary tree:
test bench_weighted_shuffle_new ... bench: 830,727 ns/iter (+/- 10,210)
test bench_weighted_shuffle_shuffle ... bench: 1,696,160 ns/iter (+/- 75,271)
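The tree-based sampling described above can be sketched as follows. For simplicity this sketch stores the full subtree sum at every node of an array-backed complete binary tree rather than only the left-subtree sum as the commit describes; the traversal and the O(log n) query/update bounds are the same. All names are illustrative, not agave's actual WeightedShuffle API:

```rust
// Sketch: weights live implicitly at the leaves; each internal node
// stores the sum of the weights in its subtree. Sampling descends from
// the root in O(log n); removal subtracts along the path in O(log n).
struct WeightedSampler {
    size: usize,    // number of leaf slots (power of two)
    tree: Vec<u64>, // tree[1] is the root; leaves are tree[size..2*size]
}

impl WeightedSampler {
    fn new(weights: &[u64]) -> Self {
        let size = weights.len().next_power_of_two();
        let mut tree = vec![0u64; 2 * size];
        tree[size..size + weights.len()].copy_from_slice(weights);
        for i in (1..size).rev() {
            tree[i] = tree[2 * i] + tree[2 * i + 1];
        }
        Self { size, tree }
    }

    fn total(&self) -> u64 {
        self.tree[1]
    }

    /// Inverse query: find the index whose cumulative-weight interval
    /// contains r, then remove it by zeroing its weight. O(log n).
    fn pop(&mut self, mut r: u64) -> usize {
        debug_assert!(r < self.total());
        let mut i = 1;
        while i < self.size {
            let left = 2 * i;
            if r < self.tree[left] {
                i = left;
            } else {
                r -= self.tree[left];
                i = left + 1;
            }
        }
        let index = i - self.size;
        let weight = self.tree[i];
        // Subtract the removed weight along the path back to the root.
        loop {
            self.tree[i] -= weight;
            if i == 1 {
                break;
            }
            i /= 2;
        }
        index
    }
}

fn main() {
    // Weights [1, 2, 3, 4]: cumulative intervals [0,1), [1,3), [3,6), [6,10).
    let mut sampler = WeightedSampler::new(&[1, 2, 3, 4]);
    assert_eq!(sampler.total(), 10);
    assert_eq!(sampler.pop(0), 0); // r = 0 falls in index 0's interval
    assert_eq!(sampler.pop(2), 2); // after removal: [0,2), [2,5), [5,9)
    assert_eq!(sampler.total(), 6);
    println!("ok");
}
```

A full shuffle repeatedly draws r uniformly from [0, total) and pops, yielding the O(n log n) bound the commit cites.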
The name was previously hard-coded to solReceiver. The use of the same
name makes it hard to figure out which thread is which when these
threads are handling many services (Gossip, Tvu, etc.).
* gossip: notify state machine of duplicate proofs
* Add feature flag for ingesting duplicate proofs from Gossip.
* Use the Epoch the shred is in instead of the root bank epoch.
* Fix unittest by activating the feature.
* Add a test for feature disabled case.
* EpochSchedule is now not copyable, clone it explicitly.
* pr feedback: read epoch schedule on startup, add guard for ff recache
* pr feedback: bank_forks lock, -cached_slots_in_epoch, init ff
* pr feedback: bank.forks_try_read() -> read()
* pr feedback: fix local-cluster setup
* local-cluster: do not expose gossip internals, use retry mechanism instead
* local-cluster: split out case 4b into separate test and ignore
* pr feedback: avoid taking lock if ff is already found
* pr feedback: do not cache ff epoch
* pr feedback: bank_forks lock, revert to cached_slots_in_epoch
* pr feedback: move local variable into helper function
* pr feedback: use let else, remove epoch 0 hack
---------
Co-authored-by: Wen <crocoxu@gmail.com>
* Add RestartHeaviestFork to Gossip.
* Add a test for out of bound value.
* Send observed_stake and total_epoch_stake in RestartHeaviestFork.
* Remove total_epoch_stake from RestartHeaviestFork.
* Forgot to update ABI digest.
* Remove checking of whether stake is zero.
* Remove unnecessary new function and make new_rand pub(crate).
* handle ContactInfo in places where only LegacyContactInfo was used
* missed a spot
* missed a spot
* import contact info for crds lookup
* cargo fmt
* rm contactinfo from crds_entry. not supported yet
* typo
* remove crds.nodes insert for ContactInfo. not supported yet
* forgot to remove clusterinfo in remove()
* move around contactinfo match arm
* remove contactinfo updating crds.shred_version
* Add push and get methods for RestartLastVotedForkSlots
* Improve expression format.
* Remove fill() from RestartLastVotedForkSlots and move into constructor.
* Update ABI signature.
* Use flate2 compress directly instead of relying on CompressedSlots.
* Make constructor of RestartLastVotedForkSlots return error if necessary.
* Use minmax and remove unnecessary code.
* Replace flate2 with run-length encoding in RestartLastVotedForkSlots.
* Remove accidentally added file.
* The passed-in last_voted_fork doesn't need to be mutable anymore.
* Switch to different type of run-length encoding.
* Fix typo.
* Move constant into RestartLastVotedForkSlots.
* Use BitVec in RawOffsets.
* Remove the unnecessary clone.
* Use iter functions for RLE.
* Use take_while instead of loop.
* Change Run length encoding to iterator implementation.
* Allow one slot in RestartLastVotedForkSlots.
* Various simplifications.
* Fix various errors and use customized error type.
* Various simplifications.
* Return error from push_get_restart_last_voted_fork_slots and
remove unnecessary constraints in to_slots.
* Allow 81k slots on RestartLastVotedForkSlots.
* Limit MAX_SLOTS to 65535 so we can go back to u16.
* Use u16::MAX instead of 65535.
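The run-length-encoding idea behind these changes can be sketched as alternating u16 run lengths over a bit vector of slot presence. The actual wire format of RestartLastVotedForkSlots may differ; this only illustrates why capping MAX_SLOTS at 65535 lets every run length fit in a u16:

```rust
// Sketch: encode a bit vector as alternating run lengths, starting with
// a run of `true` bits (length 0 if the vector starts with `false`).
// With at most 65535 slots, no run can exceed u16::MAX.
fn encode(bits: &[bool]) -> Vec<u16> {
    let mut runs = Vec::new();
    let mut current = true;
    let mut len: u16 = 0;
    for &bit in bits {
        if bit == current {
            len += 1;
        } else {
            runs.push(len);
            current = bit;
            len = 1;
        }
    }
    runs.push(len);
    runs
}

fn decode(runs: &[u16]) -> Vec<bool> {
    let mut bits = Vec::new();
    let mut current = true;
    for &len in runs {
        bits.extend(std::iter::repeat(current).take(len as usize));
        current = !current;
    }
    bits
}

fn main() {
    let bits = vec![true, true, false, false, false, true];
    let runs = encode(&bits);
    assert_eq!(runs, vec![2, 3, 1]);
    assert_eq!(decode(&runs), bits);
    // Leading-false case: the first (true) run has length 0.
    assert_eq!(encode(&[false, true]), vec![0, 1, 1]);
    println!("ok");
}
```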
The Blockstore currently maintains a RwLock<Slot> of the maximum root
it has seen inserted. The value is initialized during
Blockstore::open() and updated during calls to Blockstore::set_roots().
The max root is queried fairly often for several use cases, and caching
the value is cheaper than constructing an iterator to look it up every
time.
However, the access patterns of this RwLock match those of an atomic.
That is, there is no critical section of code that is run while the
lock is held. Rather, read/write locks are acquired only in order to
read/update, respectively. So, change the RwLock<u64> to an AtomicU64.
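The pattern described above can be sketched as follows; the names are illustrative, not the Blockstore API. Readers just `load`, and writers publish a new maximum with `fetch_max`, which keeps concurrent updates race-free without a lock:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Sketch of the RwLock<Slot> -> AtomicU64 change: no critical section,
// so plain atomic loads and fetch_max suffice.
struct MaxRoot(AtomicU64);

impl MaxRoot {
    fn new(initial: u64) -> Self {
        Self(AtomicU64::new(initial))
    }

    /// Cheap read, replacing a read-lock acquisition.
    fn get(&self) -> u64 {
        self.0.load(Ordering::Relaxed)
    }

    /// Publish a new root; fetch_max ignores stale (smaller) values even
    /// when several writers race, replacing a write-lock acquisition.
    fn update(&self, root: u64) {
        self.0.fetch_max(root, Ordering::Relaxed);
    }
}

fn main() {
    let max_root = MaxRoot::new(10);
    max_root.update(42);
    max_root.update(7); // stale root: ignored by fetch_max
    assert_eq!(max_root.get(), 42);
    println!("ok");
}
```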
* Initialize fork graph in program cache during bank_forks creation
* rename BankForks::new to BankForks::new_rw_arc
* fix compilation
* no need to set fork_graph on insert()
* fix partition tests
Push message propagation has improved in recent versions of the gossip
code and we don't rely on pull requests as much as before. Handling pull
requests is also inefficient and expensive.
The commit reduces the number of outgoing pull requests by downsampling.
* Add RestartLastVotedForkSlots and RestartHeaviestFork for wen_restart.
* Fix linter errors.
* Revert RestartHeaviestFork, it will be added in another PR.
* Update frozen abi message.
* Fix wrong number in test generation, change to pub(crate) to limit scope.
* Separate push_epoch_slots and push_restart_last_voted_fork_slots.
* Add RestartLastVotedForkSlots data structure.
* Remove unused parts to make PR smaller.
* Remove unused clone.
* Use CompressedSlotsVec to share code between EpochSlots and RestartLastVotedForkSlots.
* Add total_messages to show how many messages are there.
* Reduce RestartLastVotedForkSlots to one packet (16k slots).
* Replace last_vote_slot with shred_version, revert CompressedSlotsVec.
* Add wen_restart module:
- Implement reading LastVotedForkSlots from blockstore.
- Add proto file to record the intermediate results.
- Also link wen_restart into validator.
- Move recreation of tower outside replay_stage so we can get last_vote.
* Update lock file.
* Fix linter errors.
* Fix dependencies order.
* Update wen_restart explanation and small fixes.
* Generate tower outside tvu.
* Update validator/src/cli.rs
Co-authored-by: Tyera <teulberg@gmail.com>
* Update wen-restart/protos/wen_restart.proto
Co-authored-by: Tyera <teulberg@gmail.com>
* Update wen-restart/build.rs
Co-authored-by: Tyera <teulberg@gmail.com>
* Update wen-restart/src/wen_restart.rs
Co-authored-by: Tyera <teulberg@gmail.com>
* Rename proto directory.
* Rename InitRecord to MyLastVotedForkSlots, add imports.
* Update wen-restart/Cargo.toml
Co-authored-by: Tyera <teulberg@gmail.com>
* Update wen-restart/src/wen_restart.rs
Co-authored-by: Tyera <teulberg@gmail.com>
* Move prost-build dependency to project toml.
* No need to continue if the distance between slot and last_vote is
already larger than MAX_SLOTS_ON_VOTED_FORKS.
* Use 16k slots instead of 81k slots, a few more wording changes.
* Use AncestorIterator which does the same thing.
* Update Cargo.lock
* Update Cargo.lock
---------
Co-authored-by: Tyera <teulberg@gmail.com>
* Move vote related code to its own crate
* Update imports in code and tests
* update programs/sbf/Cargo.lock
* fix check errors
* update abi_digest
* rebase fixes
* fixes after rebase
removes the outdated matches crate from the dependencies.
std::matches has been stable since Rust 1.42.0.
Other use-cases are covered by the assert_matches crate.
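For reference, the standard-library macro covers the crate's functionality directly:

```rust
// std's matches! macro (stable since Rust 1.42) replaces the external
// `matches` crate: it tests a value against a pattern, with optional guard.
fn main() {
    let value: Option<u32> = Some(3);
    assert!(matches!(value, Some(n) if n > 1)); // pattern plus guard
    assert!(!matches!(value, None));
    println!("ok");
}
```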
* allow pedantic invalid cast lint
* allow lint with false-positive triggered by `test-case` crate
* nightly `fmt` correction
* adapt to rust layout changes
* remove dubious test
* Use transmute instead of pointer cast and de/ref when check_aligned is false.
* Renames clippy::integer_arithmetic to clippy::arithmetic_side_effects.
* bump rust nightly to 2023-08-25
* Upgrades Rust to 1.72.0
---------
Co-authored-by: Trent Nelson <trent@solana.com>