Currently, the cleanup service counts the number of shreds in the
database by iterating the entire SlotMeta column and reading the number
of received shreds for each slot. This gives us a fairly accurate count
at the expense of performing a good amount of IO.
Instead of counting the individual slots, use the live_files()
rust-rocksdb entrypoint that we expose in Blockstore. This API allows us
to get the number of entries (shreds) in the data shred column family by
reading file metadata. This is much more efficient from an IO perspective.
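A minimal sketch of the approach, assuming rust-rocksdb's live_files() API; the column family name "data_shred" is an illustration, and the actual Blockstore wrapper differs:

```rust
// Estimate shred count from SST file metadata instead of scanning SlotMeta.
use rocksdb::DB;

fn estimate_data_shred_count(db: &DB) -> u64 {
    db.live_files()
        .unwrap_or_default()
        .iter()
        .filter(|file| file.column_family_name == "data_shred")
        .map(|file| file.num_entries.saturating_sub(file.num_deletions))
        .sum()
}
```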
When broadcasting shreds, turbine excludes the slot leader from the
random shuffle, so shreds should never loop back to the leader.
If shreds reaching the retransmit stage are from the node's own leader
slots, they should not be retransmitted to any nodes.
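A hedged sketch of the check with hypothetical names; the real retransmit stage resolves the slot leader through the leader schedule:

```rust
use solana_sdk::pubkey::Pubkey;

// Drop shreds from our own leader slots: if they reached retransmit,
// they looped back and must not be retransmitted.
fn should_retransmit(slot_leader: Option<&Pubkey>, my_pubkey: &Pubkey) -> bool {
    slot_leader != Some(my_pubkey)
}
```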
* First draft of ingesting duplicate proofs in Gossip into blockstore.
* Add more unit tests.
* Add more unit tests for bad cases.
* Fix lint errors for tests.
* More linter fixes for tests.
* Lint fixes
* Rename get_entries, move location of comment.
* Some renaming changes and comment fixes.
* Fix compile warning, this enum is not used.
* Fix lint errors.
* Slow down cleanup because this could potentially be expensive.
* Forgot to reset cleanup count.
* Add protection against attackers when constructing the chunk map as
we ingest Gossip proofs.
* Use duplicate shred index instead of get_entries.
* Rename ClusterInfoDuplicateShredListener and fix a few small problems.
* Use into_shreds to piece together the proof.
* Remove redundant code.
* Address a few small errors.
* Discard slots too far in the future.
* - Use oldest proof for each pubkey
- Limit number of pubkeys in each slot to 100
* Disable duplicate shred handling for now.
* Revert "Disable duplicate shred handling for now."
This reverts commit c3fcf403876cfbf90afe4d2265a826f21a5e24ab.
* Increase turbine propagation const
The value is used as a delay threshold for issuing shred repairs, and analysis shows we are overly aggressive in requesting repairs: shreds show up via turbine before the repair completes the vast majority of the time.
* Use Duration type for MAX_TURBINE_PROPAGATION
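For illustration, the constant expressed as a Duration; the value shown is a placeholder, not the actual threshold:

```rust
use std::time::Duration;

// Delay threshold before issuing shred repairs (placeholder value).
const MAX_TURBINE_PROPAGATION: Duration = Duration::from_millis(100);
```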
Store non-vote transaction counts that are now recorded by the banks
into the `blockstore`.
`SamplePerformanceService` now populates `PerfSampleV2` with counts from
the banks.
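A sketch of the extended sample shape; the exact field set and layout are assumed here:

```rust
// PerfSampleV2 adds a non-vote transaction count alongside the totals.
pub struct PerfSampleV2 {
    pub num_transactions: u64,
    pub num_non_vote_transactions: u64,
    pub num_slots: u64,
    pub sample_period_secs: u16,
}
```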
Problem
The plugins need to know when all transactions for a block have been notified in order to serve getBlock requests correctly. Because block and transaction notifications are sent asynchronously to each other, this is difficult.
Summary of Changes
Include the executed transaction count in the block notification, which can be used to check whether all transactions have been notified.
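A sketch of the idea with hypothetical field names: the plugin compares the count against the transaction notifications received so far.

```rust
// Hypothetical shape of a block notification carrying the executed
// transaction count so plugins can detect completeness.
pub struct BlockNotification {
    pub slot: u64,
    pub executed_transaction_count: u64,
    // ...other block meta elided
}

// Plugin side: the block is fully notified once the counts match.
fn block_complete(notification: &BlockNotification, notified_txs: u64) -> bool {
    notified_txs >= notification.executed_transaction_count
}
```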
The commit adds an associated SignedData type to the Shred trait so that
merkle and legacy shreds can return different types from the signed_data
method.
This allows legacy shreds to point to a section of the shred payload,
whereas merkle shreds compute and return the merkle root. Ultimately this
would allow removing the merkle root from the shreds' binary.
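A simplified sketch of the trait shape; the real definition uses lifetimes so legacy shreds can borrow from the payload without copying, and the helper and signed region below are hypothetical:

```rust
use solana_sdk::hash::Hash;

pub trait Shred {
    type SignedData: AsRef<[u8]>;
    fn signed_data(&self) -> Self::SignedData;
}

struct LegacyShred { payload: Vec<u8> }
struct MerkleShred { payload: Vec<u8> }

impl Shred for LegacyShred {
    // Real code returns a borrowed slice of the payload; owned here for brevity.
    type SignedData = Vec<u8>;
    fn signed_data(&self) -> Vec<u8> {
        self.payload[..32].to_vec() // hypothetical signed region
    }
}

impl Shred for MerkleShred {
    // Merkle shreds compute and return the merkle root instead.
    type SignedData = Hash;
    fn signed_data(&self) -> Hash {
        compute_merkle_root(&self.payload)
    }
}

fn compute_merkle_root(_payload: &[u8]) -> Hash { Hash::default() } // stub
```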
The commit allocates 2% of slots to running experiments with different
turbine fanouts based on the slot number.
The experiment is feature gated with an additional feature to disable
the experiment.
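A hedged sketch of the idea: deterministic bucketing by slot number puts ~2% of slots into experimental fanouts, so every node derives the same answer. The bucket choice and fanout values here are hypothetical:

```rust
const DEFAULT_FANOUT: usize = 200;

// 2 out of every 100 slots run an experiment; the bucket is derived from
// the slot number, so the cluster agrees on the fanout per slot.
fn turbine_fanout_for_slot(slot: u64, experiment_enabled: bool) -> usize {
    if !experiment_enabled {
        return DEFAULT_FANOUT;
    }
    match slot % 100 {
        0 => 64,   // hypothetical experimental values
        1 => 1024,
        _ => DEFAULT_FANOUT,
    }
}
```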
* Plumb dumps from replay_stage to repair
When dumping a slot from replay_stage as a result of duplicate or
ancestor hashes, properly update repair subtrees to keep weighting and
the forks view accurate.
* add test
* pr comments
* Support bi-directional quic communication, using the same endpoint for the quic server and client
This is needed to support using quic for repair (see the sketch after this list)
* Added comments on the bi-directional communication tests
* Removed some debug logs
* clippy issue
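A sketch of the shared endpoint against the quinn crate (0.9-style API); config construction is elided:

```rust
// One quinn Endpoint both accepts inbound connections and dials outbound
// ones, so repair requests and responses can share a socket.
async fn run(
    server_config: quinn::ServerConfig,
    client_config: quinn::ClientConfig,
    bind_addr: std::net::SocketAddr,
    peer_addr: std::net::SocketAddr,
) -> Result<(), Box<dyn std::error::Error>> {
    let mut endpoint = quinn::Endpoint::server(server_config, bind_addr)?;
    endpoint.set_default_client_config(client_config);

    // Outbound: dial a peer from the same endpoint/socket.
    let _outbound = endpoint.connect(peer_addr, "peer")?.await?;

    // Inbound: accept connections on the same endpoint.
    while let Some(connecting) = endpoint.accept().await {
        let _connection = connecting.await?;
        // ...handle connection
    }
    Ok(())
}
```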
The num_repair field is the only blockstore insertion metric updated
outside of the Blockstore::insert() call chain; move the update into
insert() with the rest of the fields in the BlockstoreInsertionMetrics
struct.
* Add dump_node to update stake for heaviest subtrees
Additionally refactor subtrees to store children as a hashset
* Add a more complicated forks test
* chose -> choose
* remove is_dumped flag and reuse latest_invalid_ancestor instead
* Update cost model to use requested_cu instead of estimated cu #27608
* remove CostUpdate and CostModel from replay/tvu
* revive cost update service to send cost tracker stats
* CostModel is now static
* remove unused package
Co-authored-by: Tao Zhu <tao@solana.com>
* Move ConnectionCache back to solana-client, and duplicate ThinClient, TpuClient there
* Dedupe thin_client modules
* Dedupe tpu_client modules
* Move TpuClient to TpuConnectionCache
* Move ThinClient to TpuConnectionCache
* Move TpuConnection and quic/udp trait implementations back to solana-client
* Remove enum_dispatch from solana-tpu-client
* Move udp-client to its own crate
* Move quic-client to its own crate
In the quic server's handle_connection, when we time out receiving chunks, we loop forever waiting for the next chunk. If the client never provides another chunk, the server waits hopelessly, wasting server resources. Instead, WAIT_FOR_CHUNK_TIMEOUT_MS is introduced to bound the wait to at most 10 seconds. The stream is dropped if it times out.
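A sketch of the bounded wait, assuming quinn's read_chunk and tokio's timeout; the constant name comes from the commit, and the surrounding server loop is elided:

```rust
use std::time::Duration;

const WAIT_FOR_CHUNK_TIMEOUT_MS: u64 = 10_000;

async fn read_next_chunk(stream: &mut quinn::RecvStream) -> Option<quinn::Chunk> {
    match tokio::time::timeout(
        Duration::from_millis(WAIT_FOR_CHUNK_TIMEOUT_MS),
        stream.read_chunk(usize::MAX, false),
    )
    .await
    {
        Ok(Ok(maybe_chunk)) => maybe_chunk, // None => stream finished cleanly
        Ok(Err(_read_error)) => None,       // stream errored
        Err(_elapsed) => None,              // timed out: caller drops the stream
    }
}
```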
The manual Blockstore compaction that was being initiated from
LedgerCleanupService has been disabled for quite some time in favor of
several optimizations.
Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
* Split out voting and banking threads in banking stage
Additionally, this allows us to aggressively prune the buffer for voting
threads, since with the new vote state only the latest vote from each
validator is necessary.
* Update local cluster test to use new Vote ix
* Encapsulate transaction storage filtering better
* Address pr comments
* Commit cargo lock change
* clippy
* Remove unsafe impls
* pr comments
* compute_sanitized_transaction -> build_sanitized_transaction
* &Arc -> Arc
* Move test
* Refactor metrics enums
* clippy
https://github.com/solana-labs/solana/pull/27193
added a hash domain to the ping-pong protocol.
For backward compatibility, responses both with and without the domain
were generated and accepted.
Now that all clusters have upgraded, this commit enforces the hash domain
by removing the response without the domain.
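A sketch of the domain-separated response; the prefix constant is an assumption for illustration:

```rust
use solana_sdk::hash::{self, Hash};

// Domain separation: the pong hashes the ping token together with a
// protocol-specific prefix, so signed responses cannot be replayed
// across contexts. Prefix value assumed.
const PING_PONG_HASH_PREFIX: &[u8] = b"SOLANA_PING_PONG";

fn pong_hash(ping_token: &[u8; 32]) -> Hash {
    hash::hashv(&[PING_PONG_HASH_PREFIX, ping_token])
}
```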
We already count transactions in replay-slot-end-to-end-stats, but that
metric is broken down per thread.
So, also report total_transactions for the entire slot (all threads) in
replay-slot-stats.
- Batch-filter invalid transactions (those that fail to sanitize, are too old, or were already processed) before forwarding
- Combine packet filtering and forwarding to share sanitized transactions (see the sketch after this list)
- `iter_desc` is no longer needed; remove it
- Add a method to share the logic of removing packets from the buffer after they were removed from the MinMaxHeap
- Add test coverage for forward_packet_batches_by_accounts
- rebase, resolve conflicts
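A minimal sketch of the combined pass, with hypothetical types and stub predicates standing in for the real bank checks:

```rust
struct Packet(Vec<u8>);
struct SanitizedTx; // stand-in for a sanitized transaction

fn sanitize(_p: &Packet) -> Option<SanitizedTx> { Some(SanitizedTx) }
fn is_too_old(_tx: &SanitizedTx) -> bool { false }
fn already_processed(_tx: &SanitizedTx) -> bool { false }

// Filter the whole batch once, keeping the sanitized form next to the
// packet so forwarding reuses it instead of sanitizing again.
fn filter_forwardable(packets: Vec<Packet>) -> Vec<(Packet, SanitizedTx)> {
    packets
        .into_iter()
        .filter_map(|p| sanitize(&p).map(|tx| (p, tx)))
        .filter(|(_, tx)| !is_too_old(tx) && !already_processed(tx))
        .collect()
}
```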
Separate storage for voting and transaction threads:
- Voting threads utilize a shared reference in order to dedup extraneous
votes
- Transactions have thread local storage like before
* Add structure to collect and coalesce vote packets
Will be used in banking stage to throw out extraneous vote packets
before processing (see the sketch after this list)
* pr comments
* Update inner lock to an Arc to improve performance
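A hedged sketch of the coalescing structure; the types and the "latest vote wins" rule are assumptions for illustration:

```rust
use std::collections::HashMap;
use solana_sdk::pubkey::Pubkey;

struct VotePacket { slot: u64 /* , packet bytes... */ }

// Keep only the newest vote per validator; older votes are extraneous
// under the new vote state rules.
#[derive(Default)]
struct LatestValidatorVotes {
    latest: HashMap<Pubkey, VotePacket>,
}

impl LatestValidatorVotes {
    fn insert(&mut self, validator: Pubkey, vote: VotePacket) {
        match self.latest.get(&validator) {
            Some(existing) if existing.slot >= vote.slot => {} // drop stale vote
            _ => {
                self.latest.insert(validator, vote);
            }
        }
    }
}
```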
--tpu-enable-udp is introduced. When it is on, transaction receive and
transaction forward are enabled over udp.
Except for a few tests that are hard-coded to send transactions over udp,
most tests run with the udp-based tpu disabled.
* Plumb priority_fee_cache into rpc
* Add PrioritizationFeeCache api
* Add getRecentPrioritizationFees rpc endpoint
* Use MAX_TX_ACCOUNT_LOCKS to limit input keys
* Remove unused cache apis
* Map fee data by slot, and make rpc account inputs optional
* Add priority_fee_cache to rpc test framework, and add test
* Add endpoint to jsonrpc docs
* Update docs/src/developing/clients/jsonrpc-api.md
* Update docs/src/developing/clients/jsonrpc-api.md
In kin-sim, we found that the bounded channel causes a halt of the
account background services. As the number of accounts grows, the time
for pruning and cleaning increases, which leads to longer intervals
between prunings of dead bank slots. With 1.7B accounts, we exceed the
10K bounded-channel threshold, which halts the account background
services. Without pruning, the node eventually runs out of memory.
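The fix, sketched with crossbeam (which solana uses for channels): an unbounded channel so the dropped-bank sender never blocks. Variable names are illustrative:

```rust
use crossbeam_channel::unbounded;

fn main() {
    // Unbounded: sending dropped-bank slots never blocks, even when the
    // account background services fall behind on pruning.
    let (dropped_banks_sender, dropped_banks_receiver) = unbounded::<u64>();
    dropped_banks_sender.send(42).unwrap(); // never blocks on capacity
    assert_eq!(dropped_banks_receiver.recv().unwrap(), 42);
}
```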
* Check overflow on vote tx compaction boundary
Check for overflow during the conversion between VoteStateUpdate and
CompactVoteStateUpdate (see the sketch after this list).
* Try removing clippy suppress
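A sketch of the kind of check involved, under the assumption that the compact form stores lockouts as narrow slot offsets; checked arithmetic surfaces overflow instead of wrapping:

```rust
// Offsets in the compact form are narrow integers (u8 assumed here);
// checked_sub + try_into returns None on overflow rather than truncating.
fn compact_slot_offset(slot: u64, root: u64) -> Option<u8> {
    slot.checked_sub(root)?.try_into().ok()
}
```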
Tenets:
1. Limit thread names to 15 characters
2. Prefix all Solana-controlled threads with "sol"
3. Use Camel case. It's more character-dense than Snake or Kebab case
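Applying the tenets with std's thread builder (the thread name is an example):

```rust
use std::thread;

fn main() -> std::io::Result<()> {
    // "sol" prefix + CamelCase, 14 chars: within the 15-character limit
    // that Linux imposes on thread names.
    let handle = thread::Builder::new()
        .name("solReplayStage".to_string())
        .spawn(|| { /* thread body */ })?;
    handle.join().unwrap();
    Ok(())
}
```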
* Create a new function cleanup_accounts_paths, a trivial change
* Remove account files asynchronously
* Update and simplify the implementation after the validator test runs.
* Fixes after testing on the dev device
* Discard tokio. Use thread instead
* Fix comments format
* Fix config type to pass the github test
* Fix failed tests. Handle the case of a non-existing path
* Final cleanup, addressing the review comments
Avoided OsString.
Made the function more generic with "impl AsRef<Path>"
Co-authored-by: Jeff Washington <jeff.washington@solana.com>
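A sketch of the approach described above: rename synchronously (a cheap same-filesystem operation), then delete on a background thread. The helper and thread names are hypothetical:

```rust
use std::{fs, io, path::Path, thread};

// Move the directory aside with a cheap rename, then let a background
// thread do the slow recursive delete off the critical path.
fn remove_dir_all_async(path: impl AsRef<Path>) -> io::Result<()> {
    let target = path.as_ref().with_extension("to_be_deleted");
    fs::rename(&path, &target)?;
    thread::Builder::new()
        .name("solRmDir".to_string())
        .spawn(move || {
            let _ = fs::remove_dir_all(&target);
        })?;
    Ok(())
}
```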