The core/src/ directory is already pretty crowded, and moving these
items into the subdirectory more clearly identifies that they are tied
to banking_stage.
* Allow banking_stage to update prioritization_fee_cache
* Update core/src/banking_stage/committer.rs
Co-authored-by: Andrew Fitzgerald <apfitzge@gmail.com>
* Move `use` to top
---------
Co-authored-by: Andrew Fitzgerald <apfitzge@gmail.com>
* Add fully-reproducible online tracer for banking
* Don't use eprintln!()...
* Update programs/sbf/Cargo.lock...
* Remove meaningless assert_eq
* Group test-only code under aptly named mod
* Remove needless overflow handling in receive_until
* Delay stat aggregation now that it's possible
* Use Cow to avoid needless heap allocs
* Properly consume metrics action as soon as it's held
* Trace UnprocessedTransactionStorage::len() instead
* Loosen joining api over type safety for replaystage
* Introduce hash event to override these when simulating
* Use serde_with/serde_as instead of hacky workaround
* Update another Cargo.lock...
* Add detailed comment for Packet::buffer serialize
* Rename sender_overhead_minimized_receiver_loop()
* Use type inference for TraceError
* Another minor rename
* Retire now useless ForEach to simplify code
* Use type alias as much as possible
* Properly translate and propagate tracing errors
* Clarify --enable-banking-trace with better naming
* Consider unclean (signal-based) node restarts..
* Tweak logging and cli
* Remove Bank events as they're not needed anymore
* Make tpu own banking tracer thread
* Reduce diff a bit..
* Use latest serde_with
* Finally use the published rolling-file crate
* Make test code change more consistent
* Revive dead and non-terminating test code path...
* Dispose of batches early now that it's possible
* Split off thread handle very early at ::new()
* Tweak message for TooSmallDirByteLimit
* Remove an unnecessary layer of indirection
* Remove needless pub from ::channel()
* Clarify test comments
* Avoid needless event creation if tracer is disabled
* Write tests around file rotation and spill-over
* Remove unneeded PathBuf::clone()s...
* Introduce inner struct instead of tuple...
* Remove unused enum BankStatus...
* Avoid .unwrap() for the case of disabled tracer...
* Update cost model to use requested_cu instead of estimated cu #27608
* remove CostUpdate and CostModel from replay/tvu
* revive cost update service to send cost tracker stats
* CostModel is now static
* remove unused package
Co-authored-by: Tao Zhu <tao@solana.com>
* Move ConnectionCache back to solana-client, and duplicate ThinClient, TpuClient there
* Dedupe thin_client modules
* Dedupe tpu_client modules
* Move TpuClient to TpuConnectionCache
* Move ThinClient to TpuConnectionCache
* Move TpuConnection and quic/udp trait implementations back to solana-client
* Remove enum_dispatch from solana-tpu-client
* Move udp-client to its own crate
* Move quic-client to its own crate
* Split out voting and banking threads in banking stage
Additionally, this allows us to aggressively prune the buffer for voting threads,
since with the new vote state only the latest vote from each validator is
necessary.
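A minimal sketch of that pruning idea (illustrative only, not the actual banking_stage types; `ValidatorId` and `VotePacket` are hypothetical stand-ins): a map keyed by the voting validator keeps just the newest vote and drops older ones on insert.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for illustration; the real banking_stage types differ.
type ValidatorId = [u8; 32];

struct VotePacket {
    slot: u64,
    // ...payload elided
}

/// Buffer that keeps only the latest vote per validator, so older votes can
/// be pruned aggressively as soon as a newer one arrives.
#[derive(Default)]
struct LatestVotes {
    votes: HashMap<ValidatorId, VotePacket>,
}

impl LatestVotes {
    fn insert(&mut self, validator: ValidatorId, vote: VotePacket) {
        match self.votes.get(&validator) {
            // An equal-or-newer vote is already buffered: drop the incoming one.
            Some(existing) if existing.slot >= vote.slot => {}
            // Otherwise replace whatever was buffered for this validator.
            _ => {
                self.votes.insert(validator, vote);
            }
        }
    }
}
```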
* Update local cluster test to use new Vote ix
* Encapsulate transaction storage filtering better
* Address pr comments
* Commit cargo lock change
* clippy
* Remove unsafe impls
* pr comments
* compute_sanitized_transaction -> build_sanitized_transaction
* &Arc -> Arc
* Move test
* Refactor metrics enums
* clippy
- Batch-filter invalid transactions (those that fail to sanitize, are too old, or are already processed) before forwarding
- Combine packet filtering and forwarding to share sanitized transactions
- `iter_desc` is no longer needed; remove it
- Add a method to share the logic of removing packets from the buffer after they were removed from MinMaxHeap
- Add test coverage for forward_packet_batches_by_accounts
- rebase, resolve conflicts
Given the 32:32 erasure recovery schema, the current implementation requires
exactly 32 data shreds to generate coding shreds for the batch (except
for the final erasure batch in each slot).
As a result, when serializing ledger entries to data shreds, if the
number of data shreds is not a multiple of 32, the coding shreds for the
last batch cannot be generated until there are more data shreds to
complete the batch to 32 data shreds. This adds latency in generating
and broadcasting coding shreds.
In addition, with Merkle variants for shreds, data shreds cannot be
signed and broadcast until coding shreds are also generated. As a result,
*both* code and data shreds will be delayed before broadcast if
we still require exactly 32 data shreds for each batch.
This commit instead always generates and broadcasts coding shreds as soon
as any number of data shreds is available. When serializing entries
to shreds:
* if the number of resulting data shreds is less than 32, then more
coding shreds will be generated so that the resulting erasure batch
has the same recovery probabilities as a 32:32 batch.
* if the number of data shreds is more than 32, then the data shreds are
split uniformly into erasure batches with _at least_ 32 data shreds in
each batch. Each erasure batch will have the same number of code and
data shreds.
For example:
* If there are 19 data shreds, 27 coding shreds are generated. The
resulting 19(data):27(code) erasure batch has the same recovery
probabilities as a 32:32 batch.
* If there are 107 data shreds, they are split into 3 batches of 36:36,
36:36 and 35:35 data:code shreds each.
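As a rough sketch of the splitting rule above (illustrative only, not the actual shredder code; `DATA_SHREDS_PER_FEC_BLOCK` here just names the 32-shred threshold and `erasure_batch_sizes` is a hypothetical helper), the data-shred counts per erasure batch could be computed like this:

```rust
/// The 32-shred threshold referenced above (named here for illustration).
const DATA_SHREDS_PER_FEC_BLOCK: usize = 32;

/// Number of data shreds in each erasure batch, split as uniformly as
/// possible with at least 32 data shreds per batch (when 32 or more exist).
fn erasure_batch_sizes(num_data_shreds: usize) -> Vec<usize> {
    if num_data_shreds == 0 {
        return Vec::new();
    }
    if num_data_shreds < DATA_SHREDS_PER_FEC_BLOCK {
        // Fewer than 32 data shreds form a single smaller batch; extra coding
        // shreds (e.g. 19 data : 27 code) restore 32:32 recovery probabilities.
        return vec![num_data_shreds];
    }
    let num_batches = num_data_shreds / DATA_SHREDS_PER_FEC_BLOCK;
    let base = num_data_shreds / num_batches;
    let remainder = num_data_shreds % num_batches;
    (0..num_batches)
        .map(|i| base + if i < remainder { 1 } else { 0 })
        .collect()
}

fn main() {
    assert_eq!(erasure_batch_sizes(19), vec![19]); // paired with 27 coding shreds
    assert_eq!(erasure_batch_sizes(107), vec![36, 36, 35]); // 36:36, 36:36, 35:35
}
```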
A consequence of this change is that code and data shred indices will no
longer align, as there will be more coding shreds than data shreds (not
only in the last batch in each slot but also in the intermediate ones).