* replay: do not start leader for a block we already have shreds for (#2416)
* replay: do not start leader for a block we already have shreds for
* pr feedback: comment, move existing check to blockstore fn
* move blockstore read after tick height check
* pr feedback: reuse blockstore fn in next_leader_slot
(cherry picked from commit 15dbe7fb0f)
# Conflicts:
# poh/src/poh_recorder.rs
* fix conflicts
---------
Co-authored-by: Ashwin Sekar <ashwin@anza.xyz>
Co-authored-by: Ashwin Sekar <ashwin@solana.com>
blockstore: only consume duplicate proofs from root_slot + 1 on startup (#1971)
* blockstore: only consume duplicate proofs from root_slot + 1 on startup
* pr feedback: update test comments
* pr feedback: add pub behind dcou for test fns
(cherry picked from commit 2a48564b59)
Co-authored-by: Ashwin Sekar <ashwin@anza.xyz>
* Refactor cost tracking (#1954)
* Refactor and additional metrics for cost tracking (#1888)
* Refactor and add metrics:
- Combine remove_* and update_* functions to reduce locking on cost-tracker and iteration.
- Add method to calculate executed transaction cost by directly using actual execution cost and loaded accounts size;
- Wire up histogram to report loaded accounts size;
- Report time of block limits checking;
- Move account counters from ExecuteDetailsTimings to ExecuteAccountsDetails;
* Move committed transactions adjustment into its own function
* remove histogram for loaded accounts size due to performance impact
(cherry picked from commit f8630a3522)
* rename cost_tracker.account_data_size to better describe its purpose: to track per-block new account allocation
---------
Co-authored-by: Tao Zhu <82401714+tao-stones@users.noreply.github.com>
Co-authored-by: Tao Zhu <tao@solana.com>
* Refactor and additional metrics for cost tracking (#1888)
* Refactor and add metrics:
- Combine remove_* and update_* functions to reduce locking on cost-tracker and iteration.
- Add method to calculate executed transaction cost by directly using actual execution cost and loaded accounts size;
- Wire up histogram to report loaded accounts size;
- Report time of block limits checking;
- Move account counters from ExecuteDetailsTimings to ExecuteAccountsDetails;
* Move committed transactions adjustment into its own function
(cherry picked from commit c3fadacf69)
* rename cost_tracker.account_data_size to better describe its purpose: to track per-block new account allocation
---------
Co-authored-by: Tao Zhu <82401714+tao-stones@users.noreply.github.com>
Co-authored-by: Tao Zhu <tao@solana.com>
* Adjust replay-related metrics for unified scheduler
* Fix grammar
* Don't compute slowest for unified scheduler
* Rename to is_unified_scheduler_enabled
* Hoist uses to top of file
* Conditionally disable replay-slot-end-to-end-stats
* Remove the misleading fairly balanced text
* Add num_partitions field to Rewards proto definition
* Add type to hold rewards plus num_partitions
* Add Bank method to get rewards plus num_partitions for recording
* Update Blockstore::write_rewards to use num_partitions
* Update RewardsRecorderService to handle num_partitions
* Populate num_partitions in ReplayStage::record_rewards
* Write num_partitions to Bigtable
* Reword KeyedRewardsAndNumPartitions method
* Clone immediately
* Determine epoch boundary by checking parent epoch
* Rename UiConfirmedBlock field
* nit: fix comment typo
* Add test_get_rewards_and_partitions
* Add pre-activation test
* Add should_record unit test
* Make unified scheduler abort on tx execution errors
* Fix typo and improve wording
* Indicate panic _should_ be enforced by trait impls
* Fix typo
* Avoid closure of closure...
* Rename: is_threads_joined => are_threads_joined
* Add some comments for trashed_scheduler_inners
* Ensure 100% coverage of end_session by more tests
* Document relation of aborted thread and is_trashed
* Replace sleep()s with sleepless_testing
* Fix lints from newer rust
* replay: only vote on blocks with >= 32 data shreds in last fec set
* pr feedback: pub(crate), inspect_err
* pr feedback: error variants, collapse function, dedup
* pr feedback: remove set_last_in_slot, rework test
* pr feedback: add metric, perform check regardless of ff
* pr feedback: mark block as dead rather than duplicate
* pr feedback: self.meta, const_assert, no collect
* pr feedback: cfg(test) assertion, remove expect and collect, error fmt
* Keep the collect to preserve error
* pr feedback: do not hold bank_forks lock for mark_dead_slot
* put most AbiExample derivations behind a cfg_attr
* feature gate all `extern crate solana_frozen_abi_macro;`
* use cfg_attr wherever we were deriving both AbiExample and AbiEnumVisitor
* fix cases where AbiEnumVisitor was still being derived unconditionally
* fix a case where AbiExample was derived unconditionally
* fix more cases where both AbiEnumVisitor and AbiExample were derived unconditionally
* two more cases where AbiExample and AbiEnumVisitor were unconditionally derived
* fix remaining unconditional derivations of AbiEnumVisitor
* fix cases where AbiExample is the first thing derived
* fix most remaining unconditional derivations of AbiExample
* move all `frozen_abi(digest =` behind cfg_attr
* replace incorrect cfg with cfg_attr
* fix one more unconditionally derived AbiExample
* feature gate AbiExample impls
* add frozen-abi feature to required Cargo.toml files
* make frozen-abi features activate recursively
* fmt
* add missing feature gating
* fix accidentally changed digest
* activate frozen-abi in relevant test scripts
* don't activate solana-program's frozen-abi in sdk dev-dependencies
* update to handle AbiExample derivation on new AppendVecFileBacking enum
* revert toml formatting
* remove unused frozen-abi entries from address-lookup-table Cargo.toml
* remove toml references to solana-address-lookup-table-program/frozen-abi
* update lock file
* remove no-longer-used generic param
Recovered shreds need to be resigned before being retransmitted to the
other nodes. Because shreds are resigned immediately after signature
verification, shreds belonging to the same erasure batch share the
same signature, so that signature can be attached to recovered shreds.
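The idea above can be sketched as follows. This is a simplified illustration with hypothetical types (`Shred`, `resign_recovered`, and its fields are stand-ins, not the real Agave API): because every shred in the batch carries the same signature, a verified sibling's signature can be copied onto a shred recovered via erasure coding, so it can be retransmitted without access to the leader's signing key.

```rust
// Hypothetical, simplified shred type for illustration only.
#[derive(Clone, Debug, PartialEq)]
struct Shred {
    erasure_set_index: u32,
    signature: [u8; 64],
    verified: bool,
}

// Attach a verified sibling's signature to a recovered (unsigned) shred.
// Both shreds must belong to the same erasure batch.
fn resign_recovered(recovered: &mut Shred, sibling: &Shred) -> Result<(), String> {
    if sibling.erasure_set_index != recovered.erasure_set_index {
        return Err("sibling from a different erasure batch".to_string());
    }
    if !sibling.verified {
        return Err("sibling signature not yet verified".to_string());
    }
    recovered.signature = sibling.signature;
    recovered.verified = true;
    Ok(())
}

fn main() {
    let sibling = Shred { erasure_set_index: 7, signature: [1u8; 64], verified: true };
    let mut recovered = Shred { erasure_set_index: 7, signature: [0u8; 64], verified: false };
    resign_recovered(&mut recovered, &sibling).unwrap();
    assert_eq!(recovered.signature, sibling.signature);
    println!("recovered shred resigned");
}
```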
This makes the slot subcommand use the same output method as the
bigtable block subcommand when printing blocks (-vv). Doing so creates
consistency between the two commands and removes a duplicate
implementation from the ledger-tool code. The shared type also supports
JSON output, which the ledger-tool implementation did not.
* blockstore: use erasure meta index field to find conflicting shreds
* pr feedback: error msg, let Some in case of cleanup
* pr feedback: add error if conflicting shred is not found
This is a private function and the callers should NOT be calling with an
empty completed range vector. Regardless, it is safer to just return an
empty Entry vector should an empty CompletedRanges be provided as input.
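The defensive change described above can be sketched as follows; the names (`CompletedRanges`, `entries_for_ranges`, `Entry`) are hypothetical stand-ins for the private blockstore function, not the real signatures. The point is simply the early return: an empty input yields an empty output instead of assuming the caller upheld the invariant.

```rust
// Hypothetical stand-in for the blockstore's completed-range type:
// (start_index, end_index) pairs of data shreds forming whole entries.
type CompletedRanges = Vec<(u32, u32)>;

#[derive(Debug, PartialEq)]
struct Entry(u32);

fn entries_for_ranges(ranges: &CompletedRanges) -> Vec<Entry> {
    // Defensive early return: callers should never pass an empty vector,
    // but if one does, return an empty Vec rather than indexing into
    // `ranges` and panicking.
    if ranges.is_empty() {
        return Vec::new();
    }
    ranges.iter().map(|&(start, _end)| Entry(start)).collect()
}

fn main() {
    assert!(entries_for_ranges(&Vec::new()).is_empty());
    assert_eq!(entries_for_ranges(&vec![(0, 31)]), vec![Entry(0)]);
    println!("ok");
}
```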
```
error: assigning the result of `Clone::clone()` may be inefficient
--> bucket_map/src/bucket.rs:979:17
|
979 | hashed = hashed_raw.clone();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: use `clone_from()`: `hashed.clone_from(&hashed_raw)`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#assigning_clones
= note: `-D clippy::assigning-clones` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::assigning_clones)]`
```
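A minimal example of the fix the lint suggests: assigning `x = y.clone()` drops `x`'s existing buffer and allocates a fresh one, while `x.clone_from(&y)` may reuse `x`'s allocation.

```rust
fn main() {
    let hashed_raw = vec![1u8, 2, 3];
    let mut hashed = vec![0u8; 3];

    // Before (flagged by clippy::assigning_clones):
    // hashed = hashed_raw.clone();

    // After: clone_from can reuse `hashed`'s existing capacity.
    hashed.clone_from(&hashed_raw);
    assert_eq!(hashed, hashed_raw);
    println!("{:?}", hashed);
}
```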
Currently, several intermediate Vec<_>'s are created to represent
the keys passed down to multi_get(). These intermediate Vec<_>'s are
not strictly necessary and incur extra allocations. We can instead pass
iterators through the several levels to avoid these allocations.
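A sketch of the pattern, with a hypothetical `Store`/`multi_get` standing in for the real API: by accepting `impl IntoIterator` instead of `Vec<_>`, callers can feed keys straight from their own collections and no intermediate key Vec is ever built.

```rust
use std::collections::HashMap;

// Hypothetical key-value store for illustration.
struct Store {
    map: HashMap<String, u64>,
}

impl Store {
    // Takes any iterator of key references; callers need not collect
    // their keys into an intermediate Vec<_> first.
    fn multi_get<'a, I>(&self, keys: I) -> Vec<Option<u64>>
    where
        I: IntoIterator<Item = &'a str>,
    {
        keys.into_iter().map(|k| self.map.get(k).copied()).collect()
    }
}

fn main() {
    let mut map = HashMap::new();
    map.insert("a".to_string(), 1);
    map.insert("b".to_string(), 2);
    let store = Store { map };
    // Keys flow through as an iterator; only the result Vec is allocated.
    let got = store.multi_get(["a", "missing", "b"]);
    assert_eq!(got, vec![Some(1), None, Some(2)]);
    println!("{:?}", got);
}
```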
Merkle shreds sign the Merkle root of the erasure batch, so all shreds
within the same erasure batch have the same signature. The commit
improves shreds signature verification by adding an LRU cache.
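The caching idea can be sketched as below. This is a simplified illustration, not the real Agave code: the types and the bounded `HashMap` (with crude whole-cache eviction in place of a true LRU) are stand-ins, and `expensive_verify` is a placeholder for actual ed25519 verification. Since every shred of a batch carries the same (signature, Merkle root) pair, that pair only needs to be verified once.

```rust
use std::collections::HashMap;

type Signature = [u8; 64];
type MerkleRoot = [u8; 32];

struct VerifyCache {
    cache: HashMap<(Signature, MerkleRoot), bool>,
    capacity: usize,
    verify_calls: usize, // counts expensive verifications actually performed
}

impl VerifyCache {
    fn new(capacity: usize) -> Self {
        Self { cache: HashMap::new(), capacity, verify_calls: 0 }
    }

    fn verify(&mut self, sig: Signature, root: MerkleRoot) -> bool {
        if let Some(&ok) = self.cache.get(&(sig, root)) {
            return ok; // cache hit: skip the expensive check
        }
        self.verify_calls += 1;
        let ok = expensive_verify(&sig, &root);
        if self.cache.len() >= self.capacity {
            self.cache.clear(); // crude eviction; the real code uses an LRU
        }
        self.cache.insert((sig, root), ok);
        ok
    }
}

// Placeholder for real (slow) signature verification.
fn expensive_verify(sig: &Signature, _root: &MerkleRoot) -> bool {
    sig[0] != 0
}

fn main() {
    let mut cache = VerifyCache::new(16);
    let sig = [1u8; 64];
    let root = [2u8; 32];
    // All 64 shreds of the batch share (sig, root): one real verification.
    for _ in 0..64 {
        assert!(cache.verify(sig, root));
    }
    assert_eq!(cache.verify_calls, 1);
    println!("verify_calls = {}", cache.verify_calls);
}
```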
Shred{Code,Data}::get_chained_merkle_root_offset is only applicable to
"chained" Merkle shreds; however, neither the code nor the API enforces
this. The commit explicitly checks for the "chained" variant in
Shred{Code,Data}::get_chained_merkle_root_offset.
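The shape of that check can be sketched as follows; the enum, error type, and offset value here are hypothetical simplifications, not the real shred layout. The accessor now returns an error for non-chained variants instead of producing a meaningless offset.

```rust
// Hypothetical, simplified shred variants for illustration.
#[derive(Clone, Copy, Debug)]
enum ShredVariant {
    LegacyData,
    MerkleData { chained: bool },
}

#[derive(Debug, PartialEq)]
enum Error {
    InvalidShredVariant,
}

fn get_chained_merkle_root_offset(variant: ShredVariant) -> Result<usize, Error> {
    match variant {
        // Only "chained" Merkle shreds carry a chained Merkle root.
        ShredVariant::MerkleData { chained: true } => Ok(1115), // illustrative offset
        _ => Err(Error::InvalidShredVariant),
    }
}

fn main() {
    assert!(get_chained_merkle_root_offset(ShredVariant::MerkleData { chained: true }).is_ok());
    assert_eq!(
        get_chained_merkle_root_offset(ShredVariant::LegacyData),
        Err(Error::InvalidShredVariant)
    );
    println!("ok");
}
```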