* add bench for ed25519 instruction
* add bench for secp256k1 instruction
* Apply suggestions from code review
Co-authored-by: Andrew Fitzgerald <apfitzge@gmail.com>
* prepare unique txs for benching
* use iter::Cycle for endless loop
---------
Co-authored-by: Andrew Fitzgerald <apfitzge@gmail.com>
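The endless-loop change above can be sketched with the standard library's `iter::Cycle`; the transaction type and setup here are hypothetical stand-ins, not the actual bench code:

```rust
fn main() {
    // Hypothetical stand-in for the prepared unique transactions.
    let txs: Vec<u32> = (0..3).collect();

    // cycle() yields the prepared items endlessly, so the bench loop
    // never runs out of inputs regardless of the iteration count.
    let mut endless = txs.iter().cycle();

    let first_five: Vec<u32> = endless.by_ref().take(5).copied().collect();
    assert_eq!(first_five, vec![0, 1, 2, 0, 1]);
    println!("{:?}", first_five);
}
```

Cycling avoids re-generating (and re-signing) transactions inside the timed loop, which would otherwise skew the benchmark.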
* Forbids all program replacements except for reloads and builtins.
* Adds test_assign_program_failure() and test_assign_program_success().
* Explicitly disallows LoadedProgramType::DelayVisibility to be inserted in the global cache.
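A minimal sketch of the insertion policy described above; the enum variants and the `is_insert_allowed` helper are simplified stand-ins for illustration, not the actual program-cache code:

```rust
// Simplified stand-ins for the cache's entry kinds.
#[derive(PartialEq)]
enum LoadedProgramType {
    Builtin,
    Reloaded,
    Compiled,
    DelayVisibility,
}

// Hypothetical helper: only reloads and builtins may replace an
// existing entry, and DelayVisibility may never enter the global cache.
fn is_insert_allowed(existing: Option<&LoadedProgramType>, new: &LoadedProgramType) -> bool {
    if *new == LoadedProgramType::DelayVisibility {
        return false;
    }
    match existing {
        None => true,
        Some(_) => matches!(new, LoadedProgramType::Builtin | LoadedProgramType::Reloaded),
    }
}

fn main() {
    assert!(!is_insert_allowed(None, &LoadedProgramType::DelayVisibility));
    assert!(is_insert_allowed(Some(&LoadedProgramType::Compiled), &LoadedProgramType::Reloaded));
    assert!(!is_insert_allowed(Some(&LoadedProgramType::Compiled), &LoadedProgramType::Compiled));
    println!("ok");
}
```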
There are several cases for fetching entries from the Blockstore:
- Fetching entries for block replay
- Fetching entries for CompletedDataSetService
- Fetching entries to service RPC getBlock requests
All of these operations occur in a different calling thread. However,
the current implementation utilizes a shared thread pool within the
Blockstore function. There are several problems with this:
- The thread pool is shared between all of the listed cases, despite
  block replay being the most critical. These other services shouldn't
  be able to interfere with block replay.
- The thread pool is overprovisioned for the average use; thread
  utilization on both regular validators and RPC nodes shows that many
  of the threads see very little activity. But these idle threads still
  introduce "accounting" overhead.
- rocksdb exposes an API to fetch multiple items at once, potentially
  with some parallelization under the hood. Parallelizing in both our
  API and the underlying rocksdb is overkill and does more harm than
  good.
This change removes that thread pool completely, and instead fetches
all of the desired entries in a single call. This has been observed
to cause a minor degradation in the time spent within the Blockstore
get_slot_entries_with_shred_info() function. Namely, some buffer
copying and deserialization that previously occurred in parallel now
occur serially.
However, the metric that tracks the amount of time spent replaying
blocks (inclusive of fetch) is unchanged. Thus, despite spending
marginally more time to fetch/copy/deserialize with only a single
thread, the gains from not thrashing everything else with the pool
keep us at parity.
Working towards adding a new Merkle shred variant with the retransmitter's
signature, the commit uses a struct instead of a tuple to describe the
Merkle shred variant.
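The struct-over-tuple change can be illustrated as below; the field names are illustrative only, not the actual shred code:

```rust
// Before: a tuple leaves the meaning of each field to the reader.
// type ShredVariant = (u8, bool);

// After: a struct names each field, so adding a retransmitter-signature
// flag later is a readable, self-documenting change.
struct MerkleShredVariant {
    proof_size: u8,
    chained: bool,
}

fn main() {
    let v = MerkleShredVariant { proof_size: 6, chained: false };
    assert_eq!(v.proof_size, 6);
    assert!(!v.chained);
    println!("ok");
}
```

Named fields also make match arms and constructors self-describing at every call site, which matters when a new field is about to be added.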
The `sha2` and `sha3` crates have already moved to `generic-array` 0.14.7,
which means that we can safely convert the hash result to a sized
array just by calling `finalize().into()`, which doesn't return
any errors.
* use --commitment-config <commitment-level> for setting blockhash commitment level for sending transactions with rpc-client
* clarify default
* leave get_balance_with_commitment at processed()
* rm unused variable
* refactor commitment_config flag read in
* update cli and change send_batch's get_latest_blockhash() to get_latest_blockhash_with_client_commitment() and use client's internal commitment level
* fix some nits based on PR comments
* rm unused import
* Always limit effective slot to the beginning of the current epoch.
* Adds comments.
* Optimizes to avoid having two entries if there is no relevant feature activation.
* Adds test_feature_activation_loaded_programs_epoch_transition().
solana-genesis currently includes a list of accounts that exist in
MainnetBeta genesis. These accounts are added for all cluster types,
including Development clusters.
There is no need for these accounts to get added to dev clusters so
skip adding them for ClusterType::Development case
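The skip described above can be sketched as a simple guard; the enum and helper here are simplified stand-ins, not the actual solana-genesis code:

```rust
// Simplified stand-in for the cluster types genesis knows about.
#[derive(PartialEq)]
enum ClusterType {
    MainnetBeta,
    Testnet,
    Devnet,
    Development,
}

// Hypothetical helper mirroring the described behavior: skip the
// MainnetBeta genesis accounts for Development clusters.
fn should_add_genesis_accounts(cluster_type: &ClusterType) -> bool {
    *cluster_type != ClusterType::Development
}

fn main() {
    assert!(should_add_genesis_accounts(&ClusterType::MainnetBeta));
    assert!(should_add_genesis_accounts(&ClusterType::Testnet));
    assert!(!should_add_genesis_accounts(&ClusterType::Development));
    println!("ok");
}
```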
There are lots of operations that could fail, including many of the
Blockstore calls. The old code matched on Ok(_) or called unwrap(),
which cluttered the code and increased indentation.
This change wraps the entire command in a function that returns a
Result. The wrapper then does a single unwrap_or_else() and prints
any error message. Everywhere else is now free to use the ? operator.
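The shape of that refactor can be sketched as follows; the error type and function names are hypothetical, not the actual command code:

```rust
use std::fmt;

#[derive(Debug)]
struct CommandError(String);

impl fmt::Display for CommandError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}

// A fallible step; with the wrapper in place, callers use `?`
// instead of matching on Ok(_) or calling unwrap().
fn fetch_meta(slot: u64) -> Result<u64, CommandError> {
    if slot == 0 {
        return Err(CommandError("no meta for slot 0".to_string()));
    }
    Ok(slot * 2)
}

// The whole command returns a Result, keeping the happy path flat.
fn run_command(slot: u64) -> Result<(), CommandError> {
    let meta = fetch_meta(slot)?;
    println!("meta: {meta}");
    Ok(())
}

fn main() {
    // Single point where errors are reported.
    run_command(1).unwrap_or_else(|err| eprintln!("command failed: {err}"));
}
```

Centralizing the error report in one `unwrap_or_else()` keeps every interior call site to a single `?`, which is where the indentation savings come from.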