Refactor a few functions on the load-from-snapshot path to make it easier to
add incremental snapshots later.
Additionally, add some tests and doc comments.
* Add BlockHeight CF to blockstore
* Rename CacheBlockTimeService to be more general
* Cache block-height using service
* Fixup previous proto mishandling
* Add block_height to block structs
* Add block-height to solana block
* Fall back to BankForks if block time or block height are not yet written to Blockstore (see the sketch after this list)
* Add docs
* Address review comments
* Add blockstore-root-scan for API nodes on boot
* Ensure cluster-confirmed root and parents are set as root in blockstore in load_frozen_forks()
* Plumb rpc-scan-and-fix-roots validator flag
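A minimal sketch of the BankForks fallback above, assuming simplified signatures and eliding error handling; `block_time_with_fallback` is an illustrative name, not the actual RPC method:

```rust
use std::sync::{Arc, RwLock};

use solana_ledger::blockstore::Blockstore;
use solana_runtime::bank_forks::BankForks;
use solana_sdk::clock::{Slot, UnixTimestamp};

// Sketch only: error handling is elided and this is not the real RPC code.
fn block_time_with_fallback(
    blockstore: &Blockstore,
    bank_forks: &Arc<RwLock<BankForks>>,
    slot: Slot,
) -> Option<UnixTimestamp> {
    // Prefer the value the caching service has already written.
    if let Ok(Some(time)) = blockstore.get_block_time(slot) {
        return Some(time);
    }
    // A recently frozen slot may not be cached yet; read the live bank.
    // Block height can fall back the same way via Bank::block_height().
    let bank_forks = bank_forks.read().unwrap();
    bank_forks
        .get(slot)
        .map(|bank| bank.clock().unix_timestamp)
}
```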
* Upgrade Rust to 1.52.0
Update nightly_version to the newly pushed Docker image
Fix clippy lint errors
1.52 comes with grcov 0.8.0; include this version in the script
* Upgrade to Rust 1.52.1
* Disable Serum in downstream projects until it is upgraded to Rust 1.52.1
* Require only that the blockstore block-time slot be a recognized slot, instead of a root
* Move cache_block_time to after Bank freeze
* Single use statement
* Pass transaction_status_sender by reference
* Remove the unnecessary slot-existence check before caching block time altogether
* Move the block-time existence check into Blockstore::cache_block_time; Blockstore is no longer needed in the blockstore_processor helper
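A hedged sketch of where the existence check now lives, written as a free function so it stands alone; the SlotMeta-based "recognized slot" test and the error variant are approximations of the Blockstore internals:

```rust
use solana_ledger::{blockstore::Blockstore, blockstore_db::BlockstoreError};
use solana_sdk::clock::{Slot, UnixTimestamp};

// Approximates the guard inside Blockstore::cache_block_time: the slot only
// needs to be recognized (have a SlotMeta), not rooted, so callers can cache
// the block time as soon as the Bank freezes.
fn cache_block_time_checked(
    blockstore: &Blockstore,
    slot: Slot,
    timestamp: UnixTimestamp,
) -> Result<(), BlockstoreError> {
    if blockstore.meta(slot)?.is_none() {
        // Placeholder variant; the real method reports its own error.
        return Err(BlockstoreError::SlotNotRooted);
    }
    blockstore.cache_block_time(slot, timestamp)
}
```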
CodingShredHeader.position is always equal to
ShredCommonHeader.index - ShredCommonHeader.fec_set_index
and is therefore redundant. The extra position field can introduce bugs if it
falls out of sync with index and fec_set_index.
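The identity as a self-contained sketch; the struct here is a simplified stand-in for the real shred headers:

```rust
// Simplified stand-in for the real shred common header.
struct ShredCommonHeader {
    index: u32,
    fec_set_index: u32,
}

// A coding shred's position within its FEC set is fully derivable, so a
// stored copy can only fall out of sync with index and fec_set_index.
fn coding_shred_position(common: &ShredCommonHeader) -> u32 {
    common.index - common.fec_set_index
}

fn main() {
    let common = ShredCommonHeader { index: 37, fec_set_index: 32 };
    assert_eq!(coding_shred_position(&common), 5);
}
```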
Strip the zero-padding from data shreds before insertion into the blockstore
Co-authored-by: Stephen Akridge <sakridge@gmail.com>
Co-authored-by: Nathan Hawkins <utsl@utsl.org>
The number of parity coding shreds is always less than the number of data
shreds in an FEC block:
https://github.com/solana-labs/solana/blob/6907a2366/ledger/src/shred.rs#L719
Data shreds are batched in chunks of 32 shreds each:
https://github.com/solana-labs/solana/blob/6907a2366/ledger/src/shred.rs#L714
However, the very last batch of data shreds in a slot can be small, in which
case the loss rate can be exacerbated.
This commit expands the number of coding shreds in the last FEC block of a
slot to 64 - (number of data shreds), so that FEC blocks always contain 64
data and parity coding shreds in total.
As a consequence of this, the last FEC block has more parity coding
shreds than data shreds. So for some shred indices we will have a coding
shred but no corresponding data shred. This should not cause any kind of overlapping
FEC blocks as in:
https://github.com/solana-labs/solana/pull/10095
since this is done only for the very last batch in a slot, and the next
slot will reset the shred index.
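A worked sketch of the sizing rule, where DATA_SHREDS_PER_FEC_BLOCK mirrors the 32-shred batching above; the function is illustrative rather than the actual shredder code:

```rust
const DATA_SHREDS_PER_FEC_BLOCK: usize = 32;

// Parity shred count for one FEC set under the rule described above.
fn num_coding_shreds(num_data_shreds: usize, is_last_in_slot: bool) -> usize {
    if is_last_in_slot {
        // Pad the final FEC set of the slot up to 64 shreds total, so a
        // small trailing data batch still gets strong erasure coverage.
        2 * DATA_SHREDS_PER_FEC_BLOCK - num_data_shreds
    } else {
        // Ordinary sets: one parity shred per data shred, capped at 32.
        num_data_shreds.min(DATA_SHREDS_PER_FEC_BLOCK)
    }
}

fn main() {
    assert_eq!(num_coding_shreds(32, false), 32); // full batch: 32 + 32
    assert_eq!(num_coding_shreds(5, true), 59);   // last batch: 5 + 59 = 64
}
```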
* Track transaction check time separately from account loads
* Add banking packet processing metrics
* Remove signature clone in status cache lookup
* Reduce allocations when converting packets to transactions
* Add blake3 hash of transaction messages in status cache (see the sketch after this list)
* Bug fixes
* fix tests and run fmt
* Address feedback
* fix simd tx entry verification
* Fix rebase
* Feedback
* clean up
* Add tests
* Remove feature switch and fall back to signature check
* Bump programs/bpf Cargo.lock
* clippy
* nudge benches
* Bump `BankSlotDelta` frozen ABI hash
* Add blake3 to sdk/programs/Cargo.lock
* nudge bpf tests
* Short-circuit status cache checks
Co-authored-by: Trent Nelson <trent@solana.com>
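A sketch of the blake3 message hashing mentioned in the list above; blake3::hash and Message::serialize are real APIs, but the actual implementation may differ (for example by adding a domain prefix), and `message_hash` is an illustrative name:

```rust
use solana_sdk::{hash::Hash, message::Message};

// Key a status-cache entry by a blake3 hash of the serialized message, so
// deduplication no longer depends on cloning the first signature.
fn message_hash(message: &Message) -> Hash {
    let serialized = message.serialize();
    Hash::new_from_array(*blake3::hash(&serialized).as_bytes())
}
```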
If the vector is pinned and has a recycler, the From<PinnedVec>
implementation for Vec should clone (instead of consume) the underlying
vector, so that the next allocation of a PinnedVec will recycle an
already-pinned one.
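A self-contained sketch of that rule with simplified stand-in types; the real PinnedVec in solana-perf tracks CUDA pinning and holds a recycler handle, and the conversion is trait-based rather than an inherent method:

```rust
// Simplified stand-ins for the solana-perf types.
struct PinnedVec<T: Clone> {
    inner: Vec<T>,
    pinned: bool,
    has_recycler: bool,
}

impl<T: Clone> PinnedVec<T> {
    // The conversion rule from the commit message, as an inherent method.
    fn into_vec(self) -> Vec<T> {
        if self.pinned && self.has_recycler {
            // Clone so the pinned buffer survives: dropping `self` returns
            // it to the recycler, letting the next PinnedVec allocation
            // reuse an already-pinned buffer instead of pinning a new one.
            self.inner.clone()
        } else {
            // Nothing to recycle: consume the buffer without copying.
            self.inner
        }
    }
}
```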
* Update blockstore method to allow return of unfinalized signatures
* Support confirmed sigs in getConfirmedSignaturesForAddress2
* Add deprecated comments
* Update docs
* Enable confirmed transaction-history in cli
* Return the real confirmation_status; fill in the not-yet-finalized block time if possible
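A hedged sketch of how the confirmation status in the last bullet might be derived; TransactionConfirmationStatus is the real enum in solana-transaction-status, but the boolean inputs are simplified assumptions:

```rust
use solana_transaction_status::TransactionConfirmationStatus;

// Map a signature's progress to the status reported by the RPC API.
fn confirmation_status(
    is_finalized: bool,
    is_optimistically_confirmed: bool,
) -> TransactionConfirmationStatus {
    if is_finalized {
        TransactionConfirmationStatus::Finalized
    } else if is_optimistically_confirmed {
        TransactionConfirmationStatus::Confirmed
    } else {
        // Slot processed but not yet confirmed by the cluster.
        TransactionConfirmationStatus::Processed
    }
}
```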