Add CPU utilization metrics such as: number of vCPUs, clock frequency, average load across different time intervals, and total number of threads
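For illustration, a minimal Linux-only sketch of gathering these metrics from the standard library and procfs; the function and parsing details are hypothetical, not this change's code:

```rust
use std::fs;

// Hypothetical metrics collector (Linux-only): reads vCPU count, clock
// frequency, load averages, and total thread count from std and procfs.
fn main() -> std::io::Result<()> {
    // Number of logical CPUs (vCPUs).
    let num_vcpus = std::thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);

    // Clock frequency: first "cpu MHz" entry in /proc/cpuinfo.
    let cpuinfo = fs::read_to_string("/proc/cpuinfo")?;
    let mhz = cpuinfo
        .lines()
        .find(|line| line.starts_with("cpu MHz"))
        .and_then(|line| line.split(':').nth(1))
        .map(str::trim)
        .unwrap_or("unknown");

    // /proc/loadavg looks like "0.52 0.58 0.59 1/1270 12345":
    // 1/5/15-minute load averages, then runnable/total task counts.
    let loadavg = fs::read_to_string("/proc/loadavg")?;
    let fields: Vec<&str> = loadavg.split_whitespace().collect();
    let total_threads = fields[3].split('/').nth(1).unwrap_or("0");

    println!(
        "vcpus={num_vcpus} mhz={mhz} load={}/{}/{} threads={total_threads}",
        fields[0], fields[1], fields[2]
    );
    Ok(())
}
```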
#### Problem
When FIFO compaction is used, the size ratio between data shreds and coding
shreds is set to 1:1 based on the `--rocksdb_fifo_shred_storage_size` arg.
However, BlockstoreRocksFifoOptions::default() uses a slightly optimized
5:4 ratio instead, and the default() function is only used in benchmarks.
#### Summary of Changes
This PR makes both the validator argument and BlockstoreRocksFifoOptions::default()
use a 1:1 ratio between data and coding shred sizes.
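For context, a sketch of the 1:1 split; the constructor is hypothetical, and the field names merely mirror the data/coding column families described above:

```rust
// Illustrative options struct: the total FIFO budget from
// --rocksdb_fifo_shred_storage_size is split evenly between the
// data-shred and coding-shred column families.
struct BlockstoreRocksFifoOptions {
    shred_data_cf_size: u64, // FIFO size limit for data shreds
    shred_code_cf_size: u64, // FIFO size limit for coding shreds
}

impl BlockstoreRocksFifoOptions {
    // Hypothetical constructor applying the 1:1 ratio.
    fn from_total_storage_size(total: u64) -> Self {
        Self {
            shred_data_cf_size: total / 2,
            shred_code_cf_size: total / 2,
        }
    }
}
```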
* Remove the args param from Measure::this since we don't ever use it
* banking_stage.rs: convert to measure!
* poh_recorder.rs: convert to measure!
* cost_update_service.rs: convert to measure!
* poh_service.rs: convert to measure!
* bank.rs: convert to measure!
* measure.rs: Remove Measure::this now that all call sites have been converted to measure! (see the sketch below)
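For reference, a simplified sketch of the measure! pattern these call sites adopt; the macro body here is an approximation, not the exact solana-measure implementation (which returns a stopped Measure rather than a raw Duration):

```rust
use std::time::Instant;

// Simplified stand-in for the measure! macro: time an expression and
// return its result together with the label and elapsed time.
macro_rules! measure {
    ($expr:expr, $name:expr) => {{
        let start = Instant::now();
        let result = $expr;
        (result, $name, start.elapsed())
    }};
}

fn main() {
    let (sum, name, elapsed) = measure!((0u64..1_000_000).sum::<u64>(), "sum");
    println!("{name} took {elapsed:?}, result {sum}");
}
```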
* Award one credit per dequeued vote when processing a VoteStateUpdate instruction,
to match the vote rewards of the Vote instruction.
* Update feature pubkey to one owned by cc (ashwin)
Co-authored-by: Ashwin Sekar <ashwin@solana.com>
A previous commit separated the durable nonce and blockhash domains with a
feature gate. A second feature, added in this commit, enables durable nonces
at least one epoch after the first feature.
By the time the second feature is activated, some nonce accounts may still hold an
old blockhash, but no nonce account can hold a recent blockhash.
As a result, no transaction (durable or normal) can be executed twice.
The AdvanceNonceAccount instruction sets the nonce to a blockhash. This makes it
possible for a durable transaction to be executed twice, once as a normal
transaction and once as a nonce transaction, if it uses a blockhash (as opposed to a nonce)
for its recent_blockhash field.
This commit prevents such double execution by separating the nonce and blockhash
domains: when advancing a nonce account, the blockhash is hashed with a fixed string.
As a result, a blockhash can never be a valid nonce value, so a transaction
once executed as a normal transaction cannot be re-executed as a durable
transaction, and vice versa.
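A minimal sketch of the domain separation, assuming SHA-256 via the sha2 crate; the prefix constant and helper name are illustrative:

```rust
use sha2::{Digest, Sha256};

// Fixed domain-separation prefix (illustrative value).
const DURABLE_NONCE_PREFIX: &[u8] = b"DURABLE_NONCE";

/// Derives the value stored in a nonce account from a blockhash.
/// Hashing the blockhash together with a fixed prefix guarantees the
/// result can never equal a plain recent blockhash, so the nonce and
/// blockhash domains cannot collide.
fn durable_nonce_from_blockhash(blockhash: &[u8; 32]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(DURABLE_NONCE_PREFIX);
    hasher.update(blockhash);
    hasher.finalize().into()
}
```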
* Bank: Batch account stores in collect_rent_eagerly
Previously, store() calls were made one-by-one, which led to suboptimal
performance (see the sketch after this list).
* Accounts: Remove store_slow_cached()
* clippy
Co-authored-by: Jeff Washington (jwash) <wash678@gmail.com>
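A self-contained sketch of the batching idea; the Mutex-guarded map stands in for the accounts store and is not the Bank/Accounts API:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Stand-in for the accounts store: pubkey -> lamports.
struct AccountsStore {
    map: Mutex<HashMap<[u8; 32], u64>>,
}

impl AccountsStore {
    // One-by-one: pays the locking overhead once per account.
    fn store(&self, pubkey: [u8; 32], lamports: u64) {
        self.map.lock().unwrap().insert(pubkey, lamports);
    }

    // Batched: pays the locking overhead once for the whole batch.
    fn store_batch(&self, accounts: impl IntoIterator<Item = ([u8; 32], u64)>) {
        let mut map = self.map.lock().unwrap();
        for (pubkey, lamports) in accounts {
            map.insert(pubkey, lamports);
        }
    }
}
```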
Packets sit at the boundary of the system where, the vast majority of the
time, they are received from an untrusted source. Raw indexing into the
data buffer can open attack vectors if the offsets are invalid, and
validating offsets beforehand is verbose and error prone.
This commit updates the Packet::data() API to take a SliceIndex and always
return an Option. Call sites are thereby forced to explicitly handle the
case where the offsets are invalid.
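A simplified sketch of the new accessor shape (the real Packet also carries metadata such as a discard flag; the buffer size and field names here are illustrative):

```rust
use std::slice::SliceIndex;

struct Packet {
    buffer: [u8; 1232], // fixed receive buffer (size illustrative)
    size: usize,        // number of valid bytes in buffer
}

impl Packet {
    // Generic over SliceIndex, so callers may pass a single index or any
    // range; out-of-bounds offsets yield None instead of panicking.
    fn data<I>(&self, index: I) -> Option<&I::Output>
    where
        I: SliceIndex<[u8]>,
    {
        self.buffer.get(..self.size)?.get(index)
    }
}
```

A call site such as packet.data(..4) then receives Some only when the requested range lies within the valid region, and must handle None otherwise.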
In preparation for
https://github.com/solana-labs/solana/pull/25237
which adds a new shred variant with Merkle tree branches, this commit
embeds versioning into the shred binary by encoding a new ShredVariant type
at byte 65 of the payload, replacing the ShredType previously stored at this offset.
enum ShredVariant {
    LegacyCode, // 0b0101_1010
    LegacyData, // 0b1010_0101
}
* 0b0101_1010 indicates a legacy coding shred, which is also equal to
ShredType::Code for backward compatibility.
* 0b1010_0101 indicates a legacy data shred, which is also equal to
ShredType::Data for backward compatibility.
Following commits will add Merkle variants to this type:
enum ShredVariant {
    LegacyCode,                     // 0b0101_1010
    LegacyData,                     // 0b1010_0101
    MerkleCode(/*proof_size:*/ u8), // 0b0100_????
    MerkleData(/*proof_size:*/ u8), // 0b1000_????
}
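A sketch of mapping the variant to and from byte 65 of the payload, following the bit patterns above; the helper functions are illustrative:

```rust
#[derive(Debug, PartialEq, Eq)]
enum ShredVariant {
    LegacyCode,     // 0b0101_1010, same byte as ShredType::Code
    LegacyData,     // 0b1010_0101, same byte as ShredType::Data
    MerkleCode(u8), // 0b0100_????, low nibble = proof_size
    MerkleData(u8), // 0b1000_????, low nibble = proof_size
}

fn variant_to_byte(variant: &ShredVariant) -> u8 {
    match variant {
        ShredVariant::LegacyCode => 0b0101_1010,
        ShredVariant::LegacyData => 0b1010_0101,
        ShredVariant::MerkleCode(proof_size) => 0b0100_0000 | (proof_size & 0x0F),
        ShredVariant::MerkleData(proof_size) => 0b1000_0000 | (proof_size & 0x0F),
    }
}

fn variant_from_byte(byte: u8) -> Option<ShredVariant> {
    match byte {
        0b0101_1010 => Some(ShredVariant::LegacyCode),
        0b1010_0101 => Some(ShredVariant::LegacyData),
        _ => match byte & 0xF0 {
            0b0100_0000 => Some(ShredVariant::MerkleCode(byte & 0x0F)),
            0b1000_0000 => Some(ShredVariant::MerkleData(byte & 0x0F)),
            _ => None, // unrecognized variant byte
        },
    }
}
```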
* Spawn QUIC server to receive forwarded txs
* Update validator port range
* Forward votes using UDP
* No forwarding from unstaked nodes
* Forwarding stats in banking stage
* Fix test builds
* Fix lifetime of forward sender