Commit Graph

308 Commits

Brooks Prumo d1ba42180d
clippy for rust 1.65.0 (#28765) 2022-11-09 19:39:38 +00:00
steviez 2272fd807e
Remove Blockstore manual compaction code (#28409)
The manual Blockstore compaction that was being initiated from
LedgerCleanupService has been disabled for quite some time in favor of
several optimizations.

Co-authored-by: Ryo Onodera <ryoqun@gmail.com>
2022-10-28 10:39:00 +02:00
Justin Starry 2d8665d307
Record inner instruction stack height (#28430)
* Record inner instruction stack height

* fix sbf tests

* feedback
2022-10-26 10:37:44 +08:00
steviez 60f6e24b76
Make Blockstore::get_entries_in_data_block() use multi_get() (#28245) 2022-10-09 15:34:03 -04:00
steviez 49dbae7e53
Use VecDeque as a queue instead of Vec (#28190) 2022-10-03 22:55:59 -05:00
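A minimal illustration (not the PR's code) of the difference: popping from the front of a Vec shifts every remaining element, while VecDeque::pop_front() is O(1).

```
use std::collections::VecDeque;

fn main() {
    // Vec as a queue: remove(0) is O(n) because every remaining element shifts left.
    let mut vec_queue = vec![1, 2, 3];
    assert_eq!(vec_queue.remove(0), 1);

    // VecDeque as a queue: pop_front() is O(1).
    let mut deque = VecDeque::from(vec![1, 2, 3]);
    assert_eq!(deque.pop_front(), Some(1));
}
```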
Yueh-Hsuan Chiang 6b17bee5a8
Remove the const default for RocksFifo (#27965)
#### Summary of Changes
Removes the constant default for ShredStorageType::RocksFifo
as the shred storage size is either user-specified or derived
from --limit-ledger-size in #27459.
2022-10-01 15:10:54 -07:00
steviez f38ed1c266
Use more descriptive variable names in blockstore chaining tests (#28131) 2022-09-29 10:24:09 -05:00
behzad nouri 72537e7e07
bypasses rayon thread-pool for single entry batches (#28077)
With no parallelization, the thread-pool only adds overhead.
2022-09-26 21:32:58 +00:00
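A hedged sketch of the dispatch described above (function and type names are illustrative, not the PR's code): hand work to the rayon pool only when there is more than one entry to process.

```
use rayon::prelude::*;
use rayon::ThreadPool;

// Illustrative only: process a batch inline when it has a single entry,
// otherwise fan out over the rayon thread-pool.
fn process_entries(thread_pool: &ThreadPool, entries: &[Vec<u8>]) -> Vec<u64> {
    if entries.len() <= 1 {
        // No parallelism to exploit; skip the thread-pool overhead.
        entries.iter().map(|entry| checksum(entry)).collect()
    } else {
        thread_pool.install(|| entries.par_iter().map(|entry| checksum(entry)).collect())
    }
}

// Stand-in for the real per-entry work (e.g. hashing/verification).
fn checksum(bytes: &[u8]) -> u64 {
    bytes.iter().map(|&b| u64::from(b)).sum()
}
```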
behzad nouri f49beb0cbc
caches reed-solomon encoder/decoder instance (#27510)
ReedSolomon::new(...) initializes a matrix and a data-decode-matrix cache:
https://github.com/rust-rse/reed-solomon-erasure/blob/273ebbced/src/core.rs#L460-L466

In order to cache this computation, this commit caches the reed-solomon
encoder/decoder instance for each (data_shards, parity_shards) pair.
2022-09-25 18:09:47 +00:00
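A minimal sketch of such a cache, assuming the reed-solomon-erasure crate; the struct and locking strategy here are illustrative, not the actual implementation.

```
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

use reed_solomon_erasure::galois_8::ReedSolomon;

// Illustrative cache: one encoder/decoder instance per (data_shards, parity_shards) pair.
#[derive(Default)]
struct ReedSolomonCache(Mutex<HashMap<(usize, usize), Arc<ReedSolomon>>>);

impl ReedSolomonCache {
    fn get(
        &self,
        data_shards: usize,
        parity_shards: usize,
    ) -> Result<Arc<ReedSolomon>, reed_solomon_erasure::Error> {
        let key = (data_shards, parity_shards);
        let mut cache = self.0.lock().unwrap();
        if let Some(entry) = cache.get(&key) {
            return Ok(Arc::clone(entry));
        }
        // Cache miss: pay the matrix initialization cost once and memoize it.
        let entry = Arc::new(ReedSolomon::new(data_shards, parity_shards)?);
        cache.insert(key, Arc::clone(&entry));
        Ok(entry)
    }
}
```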
behzad nouri 45e26574f3
removes redundant shred.sanitize() from blockstore (#28016)
Shreds received from other nodes over the socket are sanitized when the
payload is deserialized:
https://github.com/solana-labs/solana/blob/315707504/ledger/src/shred/legacy.rs#L137
https://github.com/solana-labs/solana/blob/315707504/ledger/src/shred/legacy.rs#L77
https://github.com/solana-labs/solana/blob/315707504/ledger/src/shred/merkle.rs#L355
https://github.com/solana-labs/solana/blob/315707504/ledger/src/shred/merkle.rs#L439

Similarly, shreds recovered from erasure codes are also sanitized at
deserialization:
https://github.com/solana-labs/solana/blob/f02fe9c7e/ledger/src/shredder.rs#L330
or explicitly so for Merkle shreds:
https://github.com/solana-labs/solana/blob/f02fe9c7e/ledger/src/shred/merkle.rs#L753

Shreds generated locally by the node itself during its leader slots do
not need to be sanitized.

So sanitizing shreds in the blockstore is redundant and wasteful. In
particular, this becomes even more wasteful with Merkle shreds because
sanitizing them would require verifying the Merkle proof.
As such, the commit removes the redundant shred.sanitize() from the blockstore.
2022-09-24 16:31:50 +00:00
behzad nouri 97c9af4c6b plumbs through flag to generate merkle variant of shreds 2022-09-23 16:45:18 +00:00
steviez e4affb9fea
Add Blockstore::highest_slot() method (#27981) 2022-09-23 04:53:43 -05:00
steviez eaa4787201
Cleanup blockstore test (#27999) 2022-09-22 23:37:47 +00:00
behzad nouri 9a57c64f21
patches clippy errors from new rust nightly release (#27996) 2022-09-22 22:23:03 +00:00
Yueh-Hsuan Chiang cccade42b3
Optimize get_slots_since() using the batched version of multi_get() (#27686)
#### Problem
The current implementation of get_slots_since() invokes multiple rocksdb::get() calls.
As a result, each get() operation may end up requiring one disk read.  This leads
to the poor performance of get_slots_since() described in #24878.

#### Summary of Changes
This PR makes get_slots_since() use the batched version of multi_get() instead,
which allows multiple get operations to be processed in batch so that they can
be answered with fewer disk reads.
2022-09-19 21:52:13 -07:00
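A schematic, self-contained sketch of the access-pattern change (the real LedgerColumn API and key encoding differ): collect all slot keys up front and resolve them with one batched lookup instead of one get() per slot.

```
use std::collections::{BTreeMap, HashMap};

// Stand-in for the SlotMeta column; the real code goes through LedgerColumn
// and RocksDB's batched multi-get.
struct SlotMetaColumn(BTreeMap<u64, Vec<u64>>); // slot -> next_slots

impl SlotMetaColumn {
    // Batched lookup: every requested key is answered in one pass.
    fn multi_get(&self, slots: &[u64]) -> Vec<Option<&Vec<u64>>> {
        slots.iter().map(|slot| self.0.get(slot)).collect()
    }
}

// get_slots_since: map each requested slot to the child slots chaining off it.
// Before the change this issued one point lookup per slot.
fn get_slots_since(column: &SlotMetaColumn, slots: &[u64]) -> HashMap<u64, Vec<u64>> {
    column
        .multi_get(slots)
        .into_iter()
        .zip(slots)
        .filter_map(|(next, slot)| next.map(|next| (*slot, next.clone())))
        .collect()
}
```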
Yueh-Hsuan Chiang ba3d9cd325
Add LedgerColumn::multi_get() (#26354)
#### Problem
Blockstore operations such as get_slots_since() issue multiple rocksdb::get()
calls at once, which is not optimal for performance.

#### Summary of Changes
This PR adds LedgerColumn::multi_get() based on rocksdb::batched_multi_get(),
the optimized version of multi_get() where get requests are processed in batch
to minimize read I/O.
2022-09-12 15:01:22 -07:00
Yueh-Hsuan Chiang ed00365101
Add ledger tool command print-file-metadata (#26790)
Add ledger-tool command print-file-metadata

#### Summary of Changes
This PR adds a ledger tool subcommand print-file-metadata.
```
USAGE:
    solana-ledger-tool print-file-metadata [FLAGS] [OPTIONS] [SST_FILE_NAME]

    Prints the metadata of the specified ledger-store file.
    If no file name is specified, then it will print the metadata of all ledger files
```
2022-09-06 21:46:35 -07:00
Yueh-Hsuan Chiang c62aef6e02
Add code comments for lowest_cleanup_slot functions (#27497)
#### Summary of Changes
Add code comments for lowest_cleanup_slot related functions to improve
code readability regarding the consistency between the blockstore purge logic
and the read side.
2022-09-06 16:03:42 -07:00
Yueh-Hsuan Chiang 6d070cee08
Fix the boundary inconsistency between delete_file_in_range and delete_range (#27201)
#### Problem
RocksDB's delete_range applies to [from, to) while delete_file_in_range
applies to [from, to] by default, and the rust-rocksdb API does not include
the option to make delete_file_in_range apply to [from, to).  Such an inconsistency
might cause `blockstore::run_purge` to produce an inconsistent result as it
invokes both delete_range and delete_file_in_range.

#### Summary of Changes
This PR makes all our purge / delete related functions inclusive of both the
starting and ending slots.
2022-08-30 21:38:54 -07:00
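A hedged sketch of the boundary rule (names are illustrative, not the Blockstore code): translate one inclusive [from_slot, to_slot] purge request into the bounds each RocksDB primitive expects.

```
// Illustrative only: delete_range excludes its upper bound, while
// delete_file_in_range includes it, so an inclusive purge of
// [from_slot, to_slot] must hand them different end points.
fn purge_bounds(from_slot: u64, to_slot: u64) -> ((u64, u64), (u64, u64)) {
    // delete_range(from, to) covers [from, to): bump the end by one slot.
    let delete_range_bounds = (from_slot, to_slot.saturating_add(1));
    // delete_file_in_range(from, to) already covers [from, to].
    let delete_file_in_range_bounds = (from_slot, to_slot);
    (delete_range_bounds, delete_file_in_range_bounds)
}

fn main() {
    assert_eq!(purge_bounds(10, 20), ((10, 21), (10, 20)));
}
```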
Brennan Watt e4a7d01e10
Rust v1.63 (#27303)
* Upgrade to Rust v1.63.0

* Add nightly_clippy_allows

* Resolve some new clippy nightly lints

* Increase QUIC packets completion timeout

* Update quinn-udp crate

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-08-22 18:01:03 -07:00
Michael Vines 3f4731b37f Standardize thread names
Tenets:
1. Limit thread names to 15 characters
2. Prefix all Solana-controlled threads with "sol"
3. Use Camel case. It's more character dense than Snake or Kebab case
2022-08-20 07:49:39 -07:00
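A small example of the convention (the thread name here is made up, not necessarily an actual Solana thread): a "sol"-prefixed, camel-cased name of at most 15 characters set via std::thread::Builder.

```
use std::thread;

fn main() {
    // "solLedgerClean" is illustrative: "sol" prefix, camel case, 14 characters.
    let handle = thread::Builder::new()
        .name("solLedgerClean".to_string())
        .spawn(|| {
            // thread body goes here
        })
        .unwrap();
    handle.join().unwrap();
}
```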
behzad nouri c0b63351ae
recovers merkle shreds from erasure codes (#27136)
The commit
* Identifies Merkle shreds when recovering from erasure codes and
  dispatches specialized code to reconstruct shreds.
* Coding shred headers are added to recovered erasure shards.
* Merkle tree is reconstructed for the erasure batch and added to
  recovered shreds.
* The common signature (for the root of Merkle tree) is attached to all
  recovered shreds.
2022-08-19 21:07:32 +00:00
apfitzge 40b9f2f2be
slots_connected: check if the range is connected (>= ending_slot) (#27152) 2022-08-19 09:33:50 -05:00
Brennan Watt 7573000d87
Revert "Rust v1.63.0 (#27148)" (#27245)
This reverts commit a2e7bdf50a.
2022-08-19 09:19:44 +01:00
HaoranYi 4634fb944c
Use from_secs api to create duration (#27222)
use from_secs api to create duration
2022-08-18 11:06:52 -05:00
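The change in a nutshell:

```
use std::time::Duration;

fn main() {
    // Before: build the duration from (secs, nanos) parts.
    let a = Duration::new(5, 0);
    // After: the dedicated constructor states the intent directly.
    let b = Duration::from_secs(5);
    assert_eq!(a, b);
}
```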
Yueh-Hsuan Chiang 6d12bb6ec3
Fix a corner-case panic in get_entries_in_data_block() (#27195)
#### Problem
get_entries_in_data_block() panics when there is an inconsistency between
slot_meta and data_shred.

However, as we don't lock on reads, reading across multiple column families is
not atomic (especially for older slots) and thus does not guarantee consistency
as the background cleanup service could purge the slot in the middle.  Such a
panic was reported in #26980 when the validator serves a high load of RPC calls.

#### Summary of Changes
This PR makes get_entries_in_data_block() panic only when the inconsistency
between slot-meta and data-shred happens on a slot newer than lowest_cleanup_slot,
i.e. one that the cleanup service could not have purged.
2022-08-18 02:37:19 -07:00
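A schematic sketch of the guard (names and control flow are illustrative, not the actual Blockstore code): treat a missing shred as a bug only if the slot could not have been purged by the cleanup service.

```
// Illustrative only.
fn check_missing_data_shred(slot: u64, lowest_cleanup_slot: u64, shred_found: bool) {
    if shred_found {
        return;
    }
    if slot > lowest_cleanup_slot {
        // The slot should still be intact, so the slot_meta/data_shred
        // inconsistency is a genuine bug.
        panic!("missing data shred for un-purged slot {slot}");
    }
    // Otherwise the cleanup service likely purged the slot between the
    // slot_meta read and the shred read; the real code surfaces an error
    // instead of panicking.
}
```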
Brennan Watt a2e7bdf50a
Rust v1.63.0 (#27148)
* Upgrade to Rust v1.63.0

* Add nightly_clippy_allows

* Resolve some new clippy nightly lints

* Increase QUIC packets completion timeout

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-08-17 15:48:33 -07:00
behzad nouri ac91cdab74
removes buffering when generating coding shreds in broadcast (#25807)
Given the 32:32 erasure recovery schema, the current implementation requires
exactly 32 data shreds to generate coding shreds for the batch (except
for the final erasure batch in each slot).
As a result, when serializing ledger entries to data shreds, if the
number of data shreds is not a multiple of 32, the coding shreds for the
last batch cannot be generated until there are more data shreds to
complete the batch to 32 data shreds. This adds latency in generating
and broadcasting coding shreds.

In addition, with Merkle variants for shreds, data shreds cannot be
signed and broadcasted until coding shreds are also generated. As a
result *both* code and data shreds will be delayed before broadcast if
we still require exactly 32 data shreds for each batch.

This commit instead always generates and broadcasts coding shreds as soon
as any number of data shreds is available. When serializing entries
to shreds:
* if the number of resulting data shreds is less than 32, then more
  coding shreds will be generated so that the resulting erasure batch
  has the same recovery probabilities as a 32:32 batch.
* if the number of data shreds is more than 32, then the data shreds are
  split uniformly into erasure batches with _at least_ 32 data shreds in
  each batch. Each erasure batch will have the same number of code and
  data shreds.

For example:
* If there are 19 data shreds, 27 coding shreds are generated. The
  resulting 19(data):27(code) erasure batch has the same recovery
  probabilities as a 32:32 batch.
* If there are 107 data shreds, they are split into 3 batches of 36:36,
  36:36 and 35:35 data:code shreds each.

A consequence of this change is that code and data shred indices will
no longer align, as there will be more coding shreds than data shreds
(not only in the last batch in each slot but also in the intermediate
ones).
2022-08-11 12:44:27 +00:00
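A sketch of the uniform splitting rule for the >= 32 case (constant and function names are illustrative; per the commit, each batch also gets an equal number of coding shreds).

```
// Illustrative only: split data shreds uniformly into the largest number of
// batches that still leaves at least 32 data shreds per batch.
fn split_data_shreds(num_data_shreds: usize) -> Vec<usize> {
    const MIN_DATA_SHREDS_PER_BATCH: usize = 32;
    assert!(num_data_shreds >= MIN_DATA_SHREDS_PER_BATCH);
    let num_batches = num_data_shreds / MIN_DATA_SHREDS_PER_BATCH;
    let base = num_data_shreds / num_batches;
    let remainder = num_data_shreds % num_batches;
    // Distribute the remainder one extra shred at a time.
    (0..num_batches)
        .map(|i| base + if i < remainder { 1 } else { 0 })
        .collect()
}

fn main() {
    // Matches the example above: 107 data shreds -> batches of 36, 36 and 35.
    assert_eq!(split_data_shreds(107), vec![36, 36, 35]);
}
```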
Richard Patel 270315a7f6
transaction-status, storage-proto: add compute_units_consumed (#26528)
* transaction-status, storage-proto: add compute_units_consumed

* fix bpf test

Co-authored-by: Justin Starry <justin@solana.com>
2022-08-06 17:14:31 +00:00
Yueh-Hsuan Chiang dcbda2c6c5
Allow ledger tool to automatically detect the shred compaction style (#26182)
#### Problem
Ledger-tool doesn't support any shred-compaction-type other than the default RocksDB level compaction.

#### Summary of Changes
This PR enables ledger-tool to automatically detect the shred-compaction-type of the specified ledger.

#### Test Plan
New ledger-tool tests are added for both level and fifo compactions.
2022-07-19 01:19:17 +08:00
Yueh-Hsuan Chiang 493108f026
Move Blockstore::blockstore_directory() to ShredStorageType (#26179)
#### Summary of Changes
The Blockstore::blockstore_directory() function takes a ShredStorageType and returns a path.
As it is a ShredStorageType-related function, moving it under ShredStorageType as a
member function is a better fit.

In addition, as we start doing automatic shred-compaction-type selection under ledger-tool,
it becomes more convenient to reuse the function and const under the blockstore_options
namespace instead of blockstore.
2022-07-13 00:05:54 +08:00
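A schematic of the resulting shape (variant payloads and directory constants are simplified from the real code):

```
// Illustrative only: the directory lookup is now a method on ShredStorageType.
enum ShredStorageType {
    RocksLevel,
    RocksFifo, // the real variant carries FIFO options
}

impl ShredStorageType {
    fn blockstore_directory(&self) -> &'static str {
        match self {
            Self::RocksLevel => "rocksdb",
            Self::RocksFifo => "rocksdb_fifo",
        }
    }
}

fn main() {
    assert_eq!(ShredStorageType::RocksFifo.blockstore_directory(), "rocksdb_fifo");
}
```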
apfitzge 1c2f6a4c41
Bugfix: slots_connected snapshot_slot not full (#26506)
* slots_connected check used by ledger-tool should not require a full slot for snapshot slot

* Cleaner Result<Option<>> unwrap/default

* return false if no meta for starting slot

* Add clarifying comments
2022-07-11 13:45:12 -05:00
behzad nouri ba785cf8ab
removes erroneous uses of std::mem::swap (#26536)
All instances should be replaced by std::mem::{replace, take},
or just plain assignment.
2022-07-11 11:33:15 +00:00
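A small example of the suggested replacements (not code from the PR):

```
use std::mem;

fn main() {
    // mem::take moves the value out and leaves Default::default() behind.
    let mut buffer = vec![1, 2, 3];
    let drained = mem::take(&mut buffer);
    assert!(buffer.is_empty());
    assert_eq!(drained, vec![1, 2, 3]);

    // mem::replace swaps in a specific new value and returns the old one.
    // Both avoid the temporary-variable dance that mem::swap requires.
    let old = mem::replace(&mut buffer, vec![4, 5]);
    assert!(old.is_empty());
    assert_eq!(buffer, vec![4, 5]);
}
```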
Jeff Washington (jwash) 1f2e830391
helpful error message when mmap limit can't change (#26450) 2022-07-06 17:31:10 -05:00
steviez 54cd31e9e2
Reduce repeated discard_shreds checks (#26329) 2022-06-30 14:45:10 -05:00
steviez d964774fde
Removing pub keyword from blockstore unit tests (#26334) 2022-06-30 14:41:04 -05:00
behzad nouri 88599fd760
skips shreds deserialization before retransmit (#26230)
Fully deserializing shreds in window-service before sending them to
retransmit stage adds latency to shred propagation.
This commit instead channels through the payload and relies on only
partial deserialization of a few required fields: slot, shred-index,
shred-type.
2022-06-30 12:13:00 +00:00
behzad nouri 348fe9ebe2
verifies shred slot and parent in fetch stage (#26225)
Shred slot and parent are not verified until window-service, where resources
have already been wasted to sig-verify and deserialize shreds.
This commit moves the above verification earlier in the pipeline, to fetch
stage.
2022-06-28 12:45:50 +00:00
Ryo Onodera cd2878acf9
Avoid missing roots for local slots before the hard fork (#19912)
* Make sure to root local slots even with hard fork

* Address review comments

* Cleanup a bit

* Further clean up

* Further clean up a bit

* Add comment

* Tweak hard fork reconciliation code placement
2022-06-26 15:14:17 +09:00
Jeff Washington (jwash) bf97a99dca
increase mmap file limit to 1M to match published instructions (#26203) 2022-06-25 19:04:54 -05:00
behzad nouri f534b8981b
maps number of data shreds to erasure batch size (#25917)
In preparation for
https://github.com/solana-labs/solana/pull/25807
which reworks erasure batch sizes, this commit:
* adds a helper function mapping the number of data shreds to the
  erasure batch size.
* adds ProcessShredsStats to Shredder::entries_to_shreds in order to
  replace and remove entries_to_data_shreds from the public interface.
2022-06-23 13:27:54 +00:00
apfitzge f4189c0305
ledger-tool minimized snapshots (#25334)
* working on local snapshot

* Parallelization for slot storage minimization

* Additional clean-up and fixes

* make --minimize an option of create-snapshot

* remove now unnecessary function

* Parallelize parts of minimized account set generation

* clippy fixes

* Add rent collection accounts and voting node_pubkeys

* Simplify programdata_accounts generation

* Loop over storages to get slot set

* Parallelize minimized slot set generation

* Parallelize adding owners and programdata_accounts

* Remove some now unnecessary checks on the blockstore

* Add a warning for minimized snapshots across epoch boundary

* Simplify ledger-tool minimize

* Clarify names of bank's minimization helper functions

* Remove unnecessary function, fix line spacing

* Use DashSets instead of HashSets for minimized account and slot sets

* Filter storages uses all threads instead of thread_pool

* Add some additional comments on functions for minimization

* Moved more into bank and parallelized

* Update programs/bpf/Cargo.lock for dashmap in ledger

* Clippy fix

* ledger-tool: convert minimize_bank_for_snapshot Measure into measure!

* bank.rs: convert minimize_bank_for_snapshot Measure into measure!

* accounts_db.rs: convert minimize_accounts_db Measure into measure!

* accounts_db.rs: add comment about use of minimize_accounts_db

* ledger-tool: CLI argument clarification

* minimization functions: make infos unique

* bank.rs: Add test_get_rent_collection_accounts_between_slots

* bank.rs: Add test_minimization_add_vote_accounts

* bank.rs: Add test_minimization_add_stake_accounts

* bank.rs: Add test_minimization_add_owner_accounts

* bank.rs: Add test_minimization_add_programdata_accounts

* accounts_db.rs: Add test_minimize_accounts_db

* bank.rs: Add negative case and comments in test_get_rent_collection_accounts_between_slots

* bank.rs: Negative test in test_minimization_add_programdata_accounts

* use new static runtime and sdk ids

* bank comments to doc comments

* Only need to insert the maximum slot a key is found in

* rename remove_pubkeys to purge_pubkeys

* add comment on builtins::get_pubkeys

* prevent excessive logging of removed dead slots

* don't need to remove slot from shrink slot candidates

* blockstore.rs: get_accounts_used_in_range shouldn't return Result

* blockstore.rs: get_accounts_used_in_range: parallelize slot loop

* report filtering progress on time instead of count

* parallelize loop over snapshot storages

* WIP: move some bank minimization functionality into a new class

* WIP: move some accounts_db minimization functionality into SnapshotMinimizer

* WIP: Use new SnapshotMinimizer

* SnapshotMinimizer: fix use statements

* remove bank and accounts_db minimization code, where possible

* measure! doesn't take a closure

* fix use statement in blockstore

* log_dead_slots does not need pub(crate)

* get_unique_accounts_from_storages does not need pub(crate)

* different way to get stake accounts/nodes

* fix tests

* move rent collection account functionality to snapshot minimizer

* move accounts_db minimize behavior to snapshot minimizer

* clean up

* Use bank reference instead of Arc. Additional comments

* Add a comment to blockstore function

* Additional clarifying comments

* Moved all non-transaction account accumulation into the SnapshotMinimizer.

* transaction_account_set does not need to be mutable now

* Add comment about load_to_collect_rent_eagerly

* Update log_dead_slots comment

* remove duplicate measure/print of get_minimized_slot_set
2022-06-22 13:17:43 -04:00
Michael Vines f3639b76ce Remove some clippy lints 2022-06-22 09:23:22 -07:00
behzad nouri 1f0f5dc03e verifies shred-version in fetch stage
Shred versions are not verified until window-service, where resources have
already been wasted to sig-verify and deserialize shreds.
The commit verifies shred-version earlier in the pipeline, in fetch stage.
2022-06-22 12:17:37 +00:00
steviez 1165a7f3fc
Make clear_unconfirmed_slot() update parent's next_slots list (#26085)
A slot may be purged from the blockstore with clear_unconfirmed_slot().
If the slot is added back, the slot should only exist once in its
parent's SlotMeta::next_slots Vec. Prior to this change, repeated clearing
and re-adding of a slot could result in the slot existing in the parent's
next_slots multiple times. The result is that the next time the parent
slot is processed (node restart or ledger-tool replay), the slot could be
added to the queue of slots to replay multiple times.

Added a test that failed before the change and passes now.
2022-06-21 22:49:30 -05:00
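An illustrative version of the fix (struct and function names are simplified from the real blockstore code): only chain the slot to its parent if it is not already present in next_slots.

```
// Illustrative only.
struct SlotMeta {
    next_slots: Vec<u64>,
}

fn chain_slot_to_parent(parent: &mut SlotMeta, slot: u64) {
    if !parent.next_slots.contains(&slot) {
        parent.next_slots.push(slot);
    }
}

fn main() {
    let mut parent = SlotMeta { next_slots: Vec::new() };
    // Repeatedly clearing and re-adding the slot must not duplicate it.
    chain_slot_to_parent(&mut parent, 42);
    chain_slot_to_parent(&mut parent, 42);
    assert_eq!(parent.next_slots, vec![42]);
}
```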
Brian Anderson db9004bd0f
Fix doc warnings (#25953) 2022-06-14 21:55:08 -06:00
behzad nouri 81231a89b9 adds support for different variants of ShredCode and ShredData
The commit implements two new types:
    pub enum ShredCode {
        Legacy(legacy::ShredCode),
    }
    pub enum ShredData {
        Legacy(legacy::ShredData),
    }

Following commits will extend these types by adding merkle variants:
    pub enum ShredCode {
        Legacy(legacy::ShredCode),
        Merkle(merkle::ShredCode),
    }
    pub enum ShredData {
        Legacy(legacy::ShredData),
        Merkle(merkle::ShredData),
    }
2022-06-02 18:55:50 +00:00
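Once both variants exist, shared accessors can dispatch with a simple match; a sketch with simplified field layouts (the real types are more involved):

```
mod legacy {
    pub struct ShredData { pub slot: u64 }
}
mod merkle {
    pub struct ShredData { pub slot: u64 }
}

pub enum ShredData {
    Legacy(legacy::ShredData),
    Merkle(merkle::ShredData),
}

impl ShredData {
    // Common accessor dispatching over the variant.
    pub fn slot(&self) -> u64 {
        match self {
            Self::Legacy(shred) => shred.slot,
            Self::Merkle(shred) => shred.slot,
        }
    }
}
```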
apfitzge 934da5ef99
Fix pre-check of blockstore slots during load_bank_forks (#25632)
Fix pre-check of blockstore slots during load_bank_forks. Now iterates from starting_slot to halt_slot via slot_meta.next_slots to confirm they are connected.
2022-06-01 20:19:42 -05:00
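A schematic of the connectivity walk (the real check reads SlotMeta from the blockstore and also considers whether slots are full): follow next_slots links from starting_slot and report whether halt_slot is reachable.

```
use std::collections::{HashMap, HashSet, VecDeque};

// Illustrative only: next_slots maps a slot to the child slots chaining off it.
fn slots_connected(next_slots: &HashMap<u64, Vec<u64>>, starting_slot: u64, halt_slot: u64) -> bool {
    let mut visited = HashSet::new();
    visited.insert(starting_slot);
    let mut queue = VecDeque::new();
    queue.push_back(starting_slot);
    while let Some(slot) = queue.pop_front() {
        if slot == halt_slot {
            return true;
        }
        for &child in next_slots.get(&slot).into_iter().flatten() {
            if child <= halt_slot && visited.insert(child) {
                queue.push_back(child);
            }
        }
    }
    false
}
```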
steviez 17995c7e67
Cleanup BlockstoreInsertionMetrics (#25618)
* Move BlockstoreInsertionMetrics to blockstore_metrics.rs

* Specify unit (us) in metric fields
2022-06-01 10:54:11 -05:00
Yueh-Hsuan Chiang 5b67960c76
(Refactor) Move blocktore options related stuff to blockstore_options.rs (#25509)
#### Problem
blockstore_db.rs and blockstore_metrics.rs have a mutual dependency.

#### Summary of Changes
This PR removes the mutual dependency by moving the option-related stuff
out from blockstore_db.rs to its new home --- blockstore_options.rs.

By doing this, we address the mutual dependency and also make the code cleaner.
2022-05-26 16:59:26 -07:00