ReplayStage owning the pool allows subsequent work to configure the
pool's size; configuring the size inside the lazy_static would have
been a little messy.
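A minimal sketch of the ownership pattern described above; the `WorkerPool` and `ReplayStage` internals here are illustrative stand-ins, not the actual agave code:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::mpsc::{channel, Sender};
use std::sync::Arc;
use std::thread::{self, JoinHandle};

type Job = Box<dyn FnOnce() + Send + 'static>;

// A pool owned by the stage; the size arrives through the constructor.
struct WorkerPool {
    txs: Vec<Sender<Job>>,
    handles: Vec<JoinHandle<()>>,
    next: usize,
}

impl WorkerPool {
    fn new(size: usize) -> Self {
        let mut txs = Vec::with_capacity(size);
        let mut handles = Vec::with_capacity(size);
        for _ in 0..size {
            let (tx, rx) = channel::<Job>();
            txs.push(tx);
            // Each worker drains its queue until the sender is dropped.
            handles.push(thread::spawn(move || {
                for job in rx {
                    job();
                }
            }));
        }
        Self { txs, handles, next: 0 }
    }

    fn execute(&mut self, job: Job) {
        // Round-robin dispatch across workers.
        let idx = self.next % self.txs.len();
        self.txs[idx].send(job).unwrap();
        self.next += 1;
    }

    fn join(self) {
        drop(self.txs); // closing the channels ends the worker loops
        for h in self.handles {
            h.join().unwrap();
        }
    }
}

struct ReplayStage {
    pool: WorkerPool,
}

impl ReplayStage {
    // The size is an ordinary constructor parameter, so follow-up work can
    // plumb it from configuration; a lazy_static fixes it at first use.
    fn new(pool_size: usize) -> Self {
        Self {
            pool: WorkerPool::new(pool_size),
        }
    }
}

fn main() {
    let counter = Arc::new(AtomicUsize::new(0));
    let mut stage = ReplayStage::new(4);
    for _ in 0..8 {
        let c = Arc::clone(&counter);
        stage.pool.execute(Box::new(move || {
            c.fetch_add(1, Ordering::SeqCst);
        }));
    }
    stage.pool.join();
    println!("jobs run: {}", counter.load(Ordering::SeqCst));
}
```

The design point is simply that an owned field is reachable from a constructor argument, while a lazy_static initializer is not.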
RuntimeConfig doesn't use anything SVM-specific and logically belongs
in program-runtime rather than SVM. This change moves the definition
of the RuntimeConfig struct from the SVM crate to program-runtime and
adjusts `use` statements accordingly.
* Combine builtin and BPF execution costs into programs_execution_cost, since the VM has started to consume CUs uniformly
* update tests
* apply suggestions from code review
* Push and aggregate RestartLastVotedForkSlots.
* Fix API and lint errors.
* Reduce clutter.
* Put my own LastVotedForkSlots into the aggregate.
* Write LastVotedForkSlots aggregate progress into local file.
* Fix typo and name constants.
* Fix flaky test.
* Clarify the comments.
* - Use constant for wait_for_supermajority
- Avoid waiting after first shred when repair is in wen_restart
* Fix delay_after_first_shred and remove loop in wen_restart.
* Read wen_restart slots inside the loop instead.
* Discard turbine shreds while in wen_restart in window insert rather than
shred_fetch_stage.
* Use the new Gossip API.
* Rename slots_to_repair_for_wen_restart and a few others.
* Rename a few more and list all states.
* Pipe exit down to aggregate loop so we can exit early.
* Fix import of RestartLastVotedForkSlots.
* Use the new method to generate test bank.
* Make linter happy.
* Use new bank constructor for tests.
* Fix a bad merge.
* - add new const for wen_restart
- fix the test to cover more cases
- add generate_repairs_for_slot_not_throttled_by_tick and
generate_repairs_for_slot_throttled_by_tick for readability
* Add initialize and put the main logic into a loop.
* Change aggregate interface and other fixes.
* Add failure tests and tests for state transition.
* Add more tests and add ability to recover from written records in
last_voted_fork_slots_aggregate.
* Various name changes.
* We don't really care what type of error is returned.
* Wait on expected progress message in proto file instead of sleep.
* Code reorganization and cleanup.
* Make linter happy.
* Add WenRestartError.
* Split WenRestartErrors into separate errors per state.
* Revert "Split WenRestartErrors into separate erros per state."
This reverts commit 4c920cb8f8d492707560441912351cca779129f6.
* Use individual functions when testing for failures.
* Move initialization errors into initialize().
* Use anyhow instead of thiserror to generate backtraces for errors.
* Add missing Cargo.lock.
* Add error log when last_vote is missing in the tower storage.
* Change error log info.
* Change test to match exact error.
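The commits above describe a recurring pattern: initialize once, then drive a persisted state machine in a loop that honors an exit flag so the aggregate loop can stop early. A minimal sketch under assumed names (`State`, `persist`, and `run` are hypothetical simplifications, not the real wen_restart types):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

#[derive(Debug, Clone, PartialEq)]
enum State {
    Init,
    AggregateLastVotedForkSlots,
    Done,
}

// Persisting progress is stubbed out; the real code writes progress
// records (e.g. a protobuf file) so a restart can resume mid-protocol.
fn persist(_state: &State) {}

fn run(exit: &AtomicBool) -> State {
    // initialize() would validate inputs and recover written records here.
    let mut state = State::Init;
    loop {
        if exit.load(Ordering::Relaxed) {
            return state; // exit is piped down so the loop can stop early
        }
        state = match state {
            State::Init => State::AggregateLastVotedForkSlots,
            State::AggregateLastVotedForkSlots => State::Done,
            State::Done => return State::Done,
        };
        persist(&state);
    }
}

fn main() {
    let exit = AtomicBool::new(false);
    let final_state = run(&exit);
    println!("{:?}", final_state);
}
```

Keeping all transitions in one loop, with persistence after each step, is what makes the recovery-from-written-records commit above possible.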
The name was previously hard-coded to solReceiver. Using the same name
for every thread makes it hard to figure out which thread is which when
these threads are handling many services (Gossip, Tvu, etc.).
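A small sketch of the fix: spawn each receiver with a service-specific name via `std::thread::Builder` (the helper and the `solRcvrGossip` name below are illustrative, not the actual names in the code):

```rust
use std::thread;

// Spawn a receiver thread with a per-service name instead of a single
// hard-coded "solReceiver" shared by every service.
fn spawn_named_receiver<F, T>(name: &str, f: F) -> thread::JoinHandle<T>
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    thread::Builder::new()
        .name(name.to_string()) // visible in debuggers and thread dumps
        .spawn(f)
        .unwrap()
}

fn main() {
    // Hypothetical per-service name; the thread reports its own name back.
    let handle = spawn_named_receiver("solRcvrGossip", || {
        thread::current().name().unwrap().to_string()
    });
    println!("{}", handle.join().unwrap());
}
```

Note that OS-level thread names are often length-limited (15 bytes on Linux), so short per-service suffixes are the usual convention.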
Currently, ReplayStage sends new roots to BlockstoreCleanupService, and
BlockstoreCleanupService decides when to clean based on advancement of
the latest root. This is totally unnecessary as the latest root is
cached by the Blockstore, and this value can simply be fetched.
This change removes the channel completely, and instead just fetches
the latest root from the Blockstore directly. Moreover, some logic is
added to check the latest root less frequently, based on the set purge
interval.
All in all, we went from sending > 100 slots/min across a crossbeam
channel to reading an atomic roughly 3 times/min, while also removing
the need for an additional thread that read from the channel.
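A sketch of the replacement pattern (the `Blockstore` stub, `cleanup_tick`, and its signature are illustrative): the cleanup loop reads the cached root atomically on a throttled schedule instead of draining a channel.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{Duration, Instant};

// Stand-in for the Blockstore, which caches the latest root in an atomic.
struct Blockstore {
    max_root: AtomicU64,
}

impl Blockstore {
    fn max_root(&self) -> u64 {
        self.max_root.load(Ordering::Relaxed)
    }
}

// Called from the cleanup loop; returns the new root when a purge is due.
fn cleanup_tick(
    blockstore: &Blockstore,
    last_check: &mut Option<Instant>,
    last_root: &mut u64,
    check_interval: Duration,
) -> Option<u64> {
    // Throttle: only look at the root once per check_interval.
    if let Some(t) = *last_check {
        if t.elapsed() < check_interval {
            return None;
        }
    }
    *last_check = Some(Instant::now());
    let root = blockstore.max_root();
    if root > *last_root {
        *last_root = root;
        Some(root) // caller would purge slots older than this root
    } else {
        None
    }
}

fn main() {
    let bs = Blockstore {
        max_root: AtomicU64::new(100),
    };
    let mut last_check = None;
    let mut last_root = 0u64;
    let first = cleanup_tick(&bs, &mut last_check, &mut last_root, Duration::from_secs(20));
    println!("{:?}", first);
}
```

Compared with the channel, this trades per-root messages for a periodic atomic load, which matches the ">100 slots/min down to ~3 reads/min" observation above.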
The directory is currently named with the expected_shred_version;
however, the backup contains shreds that do NOT match the
expected_shred_version. So, use the found (incorrect) shred version in
the name instead.
* Add metrics for prioritization fee min/max per thread
* Add scheduled transaction prioritization fees to the metrics
* Changes after Andrew's comments
* Fixes after Tao's comments
* Add metrics to the new scheduler
* Fix getting of min/max for TransactionStateContainer
* Fix clippy CI issue
* Changes after Andrew's comments about min/max for the new scheduler
* Create a new structure to store prio fee metrics
* Report prio fee stats in banking_stage_scheduler_counts
* Merge prioritization stats into SchedulerCountMetrics
* Minor changes after Andrew's review
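The min/max tracking described above can be sketched as a small stats struct folded into the scheduler's metrics; `PrioFeeStats` and its fields here are hypothetical, not the struct the commits actually introduce:

```rust
// Tracks the min and max prioritization fee observed by one thread.
#[derive(Default, Debug)]
struct PrioFeeStats {
    min: Option<u64>,
    max: Option<u64>,
}

impl PrioFeeStats {
    fn update(&mut self, fee: u64) {
        // Option lets "no transactions seen" be distinct from fee 0.
        self.min = Some(self.min.map_or(fee, |m| m.min(fee)));
        self.max = Some(self.max.map_or(fee, |m| m.max(fee)));
    }

    // Fold another thread's stats into an aggregate, as when merging
    // per-thread stats into a single scheduler-wide metrics struct.
    fn merge(&mut self, other: &PrioFeeStats) {
        if let Some(m) = other.min {
            self.min = Some(self.min.map_or(m, |x| x.min(m)));
        }
        if let Some(m) = other.max {
            self.max = Some(self.max.map_or(m, |x| x.max(m)));
        }
    }
}

fn main() {
    let mut thread_a = PrioFeeStats::default();
    for fee in [5u64, 1, 9] {
        thread_a.update(fee);
    }
    let mut total = PrioFeeStats::default();
    total.merge(&thread_a);
    println!("min={:?} max={:?}", total.min, total.max);
}
```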
During a cluster upgrade, when only half of the cluster can ingest the new
shred variant, sending shreds of the new variant can cause nodes to diverge.
This commit adds a feature to enable chained Merkle shreds explicitly.
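An illustrative sketch of the gate (the `ShredVariant` enum and selection function are simplified stand-ins for the real shred code):

```rust
#[derive(Debug, PartialEq)]
enum ShredVariant {
    MerkleCode,
    ChainedMerkleCode,
}

// Only emit the new variant once the cluster has explicitly activated the
// feature, so nodes that cannot ingest it are never sent one mid-upgrade.
fn variant_to_send(chained_merkle_feature_active: bool) -> ShredVariant {
    if chained_merkle_feature_active {
        ShredVariant::ChainedMerkleCode
    } else {
        ShredVariant::MerkleCode
    }
}

fn main() {
    println!("{:?}", variant_to_send(false));
    println!("{:?}", variant_to_send(true));
}
```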