JsonRpcRequestProcessor::check_blockstore_root() contained some logic
that performed duplicate sanity checking on a Blockstore fetch result.
The checking involved creating rocksdb iterators, an operation with
non-trivial overhead.
This PR removes the duplicate checking, and also adds comments to help
reason about how JsonRpcRequestProcessor interprets the Blockstore
result.
Instead of returning Result<Option<UnixTimestamp>>, return
Result<UnixTimestamp> and map None to an error. This makes the return
type similar to that of Blockstore::get_rooted_block().
* Initialize fork graph in program cache during bank_forks creation
* rename BankForks::new to BankForks::new_rw_arc
* fix compilation
* no need to set fork_graph on insert()
* fix partition tests
This macro is used a lot for tests to create a ledger path in order to
open a Blockstore. Files will be left on disk unless the test remembers
to call Blockstore::destroy() on the directory. So, instead of requiring
this, use the get_tmp_ledger_path_auto_delete!() macro that creates a
TempDir (which automatically deletes itself when it goes out of scope).
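A minimal sketch of the intended test pattern, assuming the solana-ledger
macro and Blockstore API (details may differ slightly by version):

```rust
use solana_ledger::{blockstore::Blockstore, get_tmp_ledger_path_auto_delete};

#[test]
fn open_blockstore_without_manual_cleanup() {
    // The macro returns a TempDir, so the directory is removed automatically
    // when `ledger_path` goes out of scope -- no Blockstore::destroy() needed.
    let ledger_path = get_tmp_ledger_path_auto_delete!();
    let blockstore = Blockstore::open(ledger_path.path()).expect("open blockstore");
    // ... exercise the blockstore ...
    drop(blockstore);
}
```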
The current getHealth mechanism checks a local accounts hash slot vs.
those of other nodes as specified by --known-validator. This is a
very coarse comparison given that the default interval is 100 slots.
Moreover, any node using a value larger than the default
(e.g. --incremental-snapshot-interval 500) will likely see getHealth
report a behind status at some point.
Change the underlying mechanism of how health is computed. Instead of
using the accounts hash slots published in gossip, use the latest
optimistically confirmed slot from the cluster. Even when a node is
behind, it can observe the cluster's optimistically confirmed slots
by viewing votes published in gossip.
Thus, the latest cluster optimistically confirmed slot can be compared
against the latest optimistically confirmed bank from replay to
determine health. This new comparison is much more granular, and not
needing to depend on individual known validators is also a plus.
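A hypothetical sketch of the comparison; the function names and the
distance threshold below are illustrative, not the actual RpcHealth
implementation:

```rust
// Illustrative only: names and the slot-distance threshold are assumptions.
type Slot = u64;

const HEALTH_CHECK_SLOT_DISTANCE: Slot = 128;

fn health_status(
    my_latest_optimistically_confirmed_slot: Slot,      // from local replay
    cluster_latest_optimistically_confirmed_slot: Slot, // from votes seen in gossip
) -> &'static str {
    if my_latest_optimistically_confirmed_slot
        >= cluster_latest_optimistically_confirmed_slot
            .saturating_sub(HEALTH_CHECK_SLOT_DISTANCE)
    {
        "ok"
    } else {
        "behind"
    }
}
```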
* token: Update to 4.0.0
* token-2022: Bump and support new account and instruction types
* Update token-2022 in fetch_spl / program-test
* Fixup downstream uses
* Mint and destination were flipped in 0.9.0
* Don't use `convert_pubkey`
* Bump spl dependencies to versions which avoid recompilations
* Move vote related code to its own crate
* Update imports in code and tests
* update programs/sbf/Cargo.lock
* fix check errors
* update abi_digest
* rebase fixes
* fixes after rebase
* Adds a module `address_lookup_table` to the SDK.
* Adds a module `address_lookup_table::instruction` to the SDK.
* Adds a module `address_lookup_table::error` to the SDK.
* Adds a module `address_lookup_table::state` to the SDK.
* Moves AddressLookupTable into SDK as well.
* Moves AddressLookupTableAccount into address_lookup_table.
* Adds deprecation messages.
* Disentangles dependencies across cargo files.
* allow pedantic invalid cast lint
* allow lint with false-positive triggered by `test-case` crate
* nightly `fmt` correction
* adapt to rust layout changes
* remove dubious test
* Use transmute instead of pointer cast and de/ref when check_aligned is false.
* Renames clippy::integer_arithmetic to clippy::arithmetic_side_effects.
* bump rust nightly to 2023-08-25
* Upgrades Rust to 1.72.0
---------
Co-authored-by: Trent Nelson <trent@solana.com>
* remove unnecessary hashes around raw string literals
* remove unnecessary literal `unwrap()`s
* remove panicking `unwrap()`
* remove unnecessary `unwrap()`
* use `[]` instead of `vec![]` where applicable
* remove (more) unnecessary explicit `into_iter()` calls
* remove redundant pattern matching
* don't cast to same type and constness
* do not `cfg(any(...` a single item
* remove needless pass by `&mut`
* prefer `or_default()` to `or_insert_with(T::default())`
* `filter_map()` better written as `filter()`
* incorrect `PartialOrd` impl on `Ord` type
* replace "slow zero-filled `Vec` initializations"
* remove redundant local bindings
* add required lifetime to associated constant
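For a couple of the items above, a hedged before/after illustration (not
the actual diff hunks from this change):

```rust
use std::collections::HashMap;

fn lint_examples(map: &mut HashMap<&'static str, Vec<u8>>) {
    // prefer `or_default()` to `or_insert_with(T::default())`
    map.entry("key").or_default().push(1);

    // replace "slow zero-filled `Vec` initializations":
    // vec![0u8; 1024] instead of Vec::new() followed by resize(1024, 0)
    let zeros = vec![0u8; 1024];
    assert_eq!(zeros.len(), 1024);

    // use `[]` instead of `vec![]` where applicable (no allocation needed)
    for byte in [1u8, 2, 3] {
        let _ = byte;
    }
}
```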
Created a variant of get_leader_tpus, get_leader_tpus_with_slots, to get the tpu address along with the slot, and put that information into the debug log. This information can be compared with where the transaction was confirmed to measure leader accuracy, which helps us better understand the txn latency.
In most cases, either a &Bank or an Arc<Bank> is more appropriate.
- &Bank is used if the function only needs a momentary reference
- Arc<Bank> is used if the function needs its own copy
This PR leaves several instances of &Arc<Bank> around; these instances
are situations where a clone may only happen conditionally.
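An illustrative sketch of the guideline (the types and functions below
are stand-ins, not solana-runtime code):

```rust
use std::sync::Arc;

struct Bank; // stand-in for the real Bank

// Needs only a momentary reference: take &Bank.
fn read_only(bank: &Bank) {
    let _ = bank;
}

// Needs to keep its own handle (e.g. to move it into a thread): take Arc<Bank>.
fn keep_handle(bank: Arc<Bank>) {
    std::thread::spawn(move || {
        let _owned = bank; // this function owns its own Arc
    });
}

fn caller(bank: &Arc<Bank>) {
    read_only(bank);           // deref coercion: &Arc<Bank> -> &Bank
    keep_handle(bank.clone()); // clone only where ownership is actually needed
}
```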
The previous implementation is a little tricky to follow, and repeats
some functionality that is already implemented in
Bank::parents_inclusive(). So, swap in bank.ancestors.
* stake: deprecate on chain warmup/cooldown rate and config
* Pr feedback: Deprecate since 1.16.7
Co-authored-by: Jon Cinque <me@jonc.dev>
---------
Co-authored-by: Jon Cinque <me@jonc.dev>
* Fix send-transaction-service when using quic; the tpu address was wrong
* Use Protocol field instead of bool for passing protocol info (see the sketch below)
* Address code review comments from Behzad: get_leader_tpus per protocol
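A small sketch of the bool-to-enum change in the second bullet; the enum
here is a local stand-in rather than the actual Protocol type:

```rust
use std::net::SocketAddr;

// Illustrative only: a stand-in enum, not the actual Protocol type.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Protocol {
    UDP,
    QUIC,
}

// Before: something like `get_leader_tpus(max, use_quic: bool)`, where a
// misplaced bool silently selects the wrong address. After: the protocol
// is explicit at every call site.
fn select_tpu_addr(udp: SocketAddr, quic: SocketAddr, protocol: Protocol) -> SocketAddr {
    match protocol {
        Protocol::UDP => udp,
        Protocol::QUIC => quic,
    }
}
```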
`Arc` is already a reference internally, so it does not seem to be
beneficial to pass a reference to it; doing so just adds an extra
layer of indirection.
Functions that need to be able to increment the `Arc` reference count
need to take `Arc<AtomicBool>`, but those that just want to read the
`AtomicBool` value can accept `&AtomicBool`, making them a bit more
generic.
This change focuses specifically on `Arc<AtomicBool>`. There are other
uses of `&Arc<T>` in the code base that could be converted in a similar
manner. But it would make the change even larger.
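A minimal sketch of the pattern (function names are illustrative):

```rust
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};

// Only reads the flag: a plain &AtomicBool is enough, and callers holding
// an Arc<AtomicBool> can still pass it thanks to deref coercion.
fn should_exit(exit: &AtomicBool) -> bool {
    exit.load(Ordering::Relaxed)
}

// Needs to bump the reference count (e.g. to move the flag into a thread):
// take the Arc by value.
fn spawn_loop(exit: Arc<AtomicBool>) -> std::thread::JoinHandle<()> {
    std::thread::spawn(move || {
        while !should_exit(&exit) {
            std::thread::yield_now();
        }
    })
}
```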
* Use spl-token ids directly in program-id checks
* Remove id redefinitions
* Deprecate pubkey_from_spl_token and remove usage
* Deprecate spl_token_pubkey and remove usage
* Deprecate native mint helpers and remove usage
* Deprecate spl_token_instruction and remove usage
Condense and rename pubsub counters into a single datapoint
The counters have been combined into fields within a struct that are
accumulated locally and submitted on a set interval (10s). Doing so
decreases overhead on the system and makes it easier to compare
these similar fields in the metrics explorer by having them all under
the same measurement.
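A rough sketch of the accumulate-then-submit-on-interval shape described
above; the struct, field names, and the plain println! are stand-ins for
the actual metrics datapoint submission:

```rust
use std::{
    sync::atomic::{AtomicU64, Ordering},
    time::{Duration, Instant},
};

#[derive(Default)]
struct PubsubNotificationStats {
    // Hypothetical fields; the real struct's fields differ.
    notifications_sent: AtomicU64,
    notifications_dropped: AtomicU64,
}

struct StatsReporter {
    stats: PubsubNotificationStats,
    last_report: Instant,
}

impl StatsReporter {
    const REPORT_INTERVAL: Duration = Duration::from_secs(10);

    fn maybe_report(&mut self) {
        if self.last_report.elapsed() >= Self::REPORT_INTERVAL {
            // One datapoint with several fields instead of many separate
            // counters; the real code submits via the metrics crate rather
            // than printing.
            println!(
                "pubsub_notifications sent={} dropped={}",
                self.stats.notifications_sent.swap(0, Ordering::Relaxed),
                self.stats.notifications_dropped.swap(0, Ordering::Relaxed),
            );
            self.last_report = Instant::now();
        }
    }
}
```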