Now that CRDS supports incremental snapshot hashes,
SnapshotPackagerService needs to push 'em!
This commit does two main things:
1. SnapshotPackagerService now knows about incremental snapshot hashes,
and will push SnapshotPackage::IncrementalSnapshot hashes to CRDS.
2. At startup, when loading from a full + incremental snapshot, the
hashes need to be passed all the way to SnapshotPackagerService so it
can push these starting hashes to CRDS. Those values have been piped
through.
Fixes #20441 and #20423
#### Summary of Changes
Create a plugin mechanism in the accounts update path so that accounts data can be streamed out to external data stores (be it Kafka or Postgres). The plugin mechanism allows:
- Data stores' connection strings/credentials to be configured
- Accounts with patterns to be streamed
- Streaming implementations for different destination stores to be plugged in (a PostgreSQL implementation is provided)
The code comprises 4 major parts:
- `accountsdb-plugin-intf`: defines the plugin interface that concrete plugins should implement.
- `accountsdb-plugin-manager`: manages loading/unloading of plugins and provides the interfaces through which the validator notifies plugins of account updates.
- `accountsdb-plugin-postgres`: the concrete plugin implementation for PostgreSQL.
- The validator integrations: accounts are streamed right after snapshot restore and after each account update from transaction processing or other real updates.
The plugin is optionally loaded on demand via a new validator CLI argument -- there is no impact if no plugin is loaded.
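For a rough illustration of the shape of such an interface (the trait and type names below are hypothetical sketches, not the actual accountsdb-plugin-intf API):

```rust
// Hypothetical account view handed to plugins on each update.
pub struct ReplicaAccountInfo<'a> {
    pub pubkey: &'a [u8],
    pub lamports: u64,
    pub data: &'a [u8],
}

pub type PluginResult<T> = Result<T, Box<dyn std::error::Error>>;

// Hypothetical plugin interface a concrete plugin (e.g. the PostgreSQL
// one) would implement.
pub trait AccountsDbPlugin: Send + Sync {
    /// Name used in logs and for matching config entries.
    fn name(&self) -> &'static str;

    /// Called once when the plugin is loaded; `config_file` points at the
    /// config holding connection strings/credentials.
    fn on_load(&mut self, config_file: &str) -> PluginResult<()>;

    /// Called before the plugin is unloaded so it can flush and disconnect.
    fn on_unload(&mut self);

    /// Called for every account update the validator observes.
    fn update_account(&mut self, account: ReplicaAccountInfo<'_>, slot: u64) -> PluginResult<()>;
}
```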
Use TempDir::path() instead of TempDir::into_path(); into_path() returns
the underlying PathBuf and voids the promise to delete the directory
when TempDir goes out of scope.
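For illustration, a minimal sketch using the tempfile crate (`use_dir` is a hypothetical stand-in for test code):

```rust
use tempfile::TempDir;

fn main() -> std::io::Result<()> {
    let temp_dir = TempDir::new()?;

    // Borrow the path; the directory is deleted when `temp_dir` drops.
    use_dir(temp_dir.path());

    // By contrast, `into_path()` consumes the guard and hands back the raw
    // PathBuf, so nothing deletes the directory afterwards:
    // let leaked: std::path::PathBuf = temp_dir.into_path();

    Ok(())
} // `temp_dir` dropped here; the directory is removed.

fn use_dir(path: &std::path::Path) {
    // Hypothetical stand-in for test code that works inside the directory.
    println!("working in {}", path.display());
}
```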
This is a bugfix.
When `load_frozen_forks()` called `snapshot_bank()`, the accounts hash
was not computed. So when a snapshot archive was eventually
created, its hash would be all 1s, which is clearly wrong. The fix is
to update the bank's accounts hash before taking the bank snapshot.
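A rough sketch of the shape of the fix, with stubs standing in for the real `Bank` and snapshot helpers (the method and helper names are assumptions, not the actual runtime API):

```rust
// Stub standing in for the real Bank type.
struct Bank;

impl Bank {
    // In the real Bank this computes and stores the accounts hash.
    fn update_accounts_hash(&self) {}
}

// Stub standing in for the bank-snapshot helper.
fn add_bank_snapshot(_bank: &Bank) -> Result<(), String> {
    Ok(())
}

// The fix: compute the accounts hash before taking the bank snapshot, so
// the archive built later carries a real hash instead of a default one.
fn snapshot_bank(bank: &Bank) -> Result<(), String> {
    bank.update_accounts_hash();
    add_bank_snapshot(bank)
}

fn main() {
    snapshot_bank(&Bank).unwrap();
}
```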
* Add TransactionMemos column family
* Traitify extract_memos
* Write TransactionMemos in TransactionStatusService
* Populate memos from column
* Dedupe and add unit test
Some of the blockstore_processor tests weren't using
`test_process_blockstore()`, but could. I updated those tests, and also
switched to `..` for ignoring return values instead of `_`, since I'll be
adding another return value here shortly (thus reducing the need to change
these tests again).
```text
error: all if blocks contain the same code at the end
--> ledger/src/blockstore.rs:3473:5
|
3473 | / Ok(insert_map.get(&slot).unwrap().clone())
3474 | | }
| |_____^
|
= note: `-D clippy::branches-sharing-code` implied by `-D warnings`
= note: The end suggestion probably needs some adjustments to use the expression result correctly
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#branches_sharing_code
help: consider moving the end statements out like this
|
3473 ~ }
3474 + Ok(insert_map.get(&slot).unwrap().clone())
|
error: could not compile `solana-ledger` due to previous error
```
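For context, this lint fires when every branch of an `if` ends with identical code; a minimal standalone illustration of the suggested fix (unrelated to the blockstore code itself):

```rust
fn do_a() {}
fn do_b() {}
fn compute() -> u32 {
    42
}

fn before(condition: bool) -> u32 {
    // Both branches end in the same tail expression, which trips
    // `clippy::branches-sharing-code` under `-D warnings`.
    if condition {
        do_a();
        compute()
    } else {
        do_b();
        compute()
    }
}

fn after(condition: bool) -> u32 {
    // Hoist the shared tail out of the branches to satisfy the lint.
    if condition {
        do_a();
    } else {
        do_b();
    }
    compute()
}

fn main() {
    assert_eq!(before(true), after(true));
}
```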
#### Problem
Snapshot names are overloaded, and there are multiple terms that mean the same thing. This is confusing. Here's a list of ones in the codebase that I've found:
```
- snapshot_dir
- snapshots_dir
- snapshot_path
- snapshot_output_dir
- snapshot_package_output_path
- snapshot_archives_dir
```
#### Summary of Changes
For all the ones that refer to the directory where snapshot archives are stored, ensure they are `snapshot_archives_dir`. For the ones that refer to the (bank) snapshots directory, use `bank_snapshots_dir`.
Co-authored-by: Michael Vines <mvines@gmail.com>
* increase block_cost_max by 10 times to accommodate updated account read and write costs; otherwise, during performance tests, banking_stage will start rejecting and retrying transactions in the next block, resulting in regressed performance.
* fix typo
Shreds recovered from erasure codes have not been received from turbine
and have not been retransmitted to other nodes downstream. This results
in more repairs across the cluster, which is slower.
This commit channels recovered shreds through to the retransmit stage in
order to further broadcast them to downstream nodes in the tree.
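A minimal sketch of that pattern, assuming a crossbeam channel between shred recovery and the retransmit stage (the `Shred` type and function names are placeholders, not the real ones):

```rust
use crossbeam_channel::{unbounded, Receiver, Sender};

// Placeholder for the real shred type.
struct Shred {
    payload: Vec<u8>,
}

// Hypothetical: after erasure recovery, forward recovered shreds to the
// retransmit stage in addition to handling them locally.
fn handle_recovered(recovered: Vec<Shred>, retransmit_sender: &Sender<Vec<Shred>>) {
    // ... insert shreds into the blockstore here (elided) ...
    let _ = retransmit_sender.send(recovered);
}

fn retransmit_stage(receiver: Receiver<Vec<Shred>>) {
    // Rebroadcast each batch to downstream nodes in the turbine tree.
    for batch in receiver.iter() {
        for shred in batch {
            let _ = shred.payload; // send over the network (elided)
        }
    }
}

fn main() {
    let (tx, rx) = unbounded();
    let handle = std::thread::spawn(move || retransmit_stage(rx));
    handle_recovered(vec![Shred { payload: vec![0u8; 8] }], &tx);
    drop(tx); // close the channel so the stage thread exits
    handle.join().unwrap();
}
```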
* Handle cleaning zero-lamport accounts
Handle cleaning zero-lamport accounts in slots higher than the last full
snapshot slot. This is part of the Incremental Snapshot work.
Fixes #18825
* added realtime cost-checking logic to reject a block that would exceed the max limit (see the sketch after this list):
- defines max limits in block_cost_limits.rs
- right after each batch's execution, accumulates its cost and checks it
against the limit, returning an error if the limit is exceeded
* update abi that changed due to adding additional TransactionError
* To avoid counting stats multiple times, only accumulate execute-timing when a bank is completed
* gate it by a feature
* move cost const def into block_cost_limits.rs
* redefine the cost for signature and account access, removed signer part as it is not well defined for now
* check the per_program_timings of execute_timings before sending
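As a rough sketch of that accumulate-and-check pattern (the struct name, error type, and limit value here are hypothetical, not the actual block_cost_limits.rs definitions):

```rust
// Hypothetical max, standing in for the constants in block_cost_limits.rs.
const MAX_BLOCK_COST: u64 = 48_000_000;

#[derive(Debug)]
struct WouldExceedMaxBlockCostLimit;

#[derive(Default)]
struct BlockCostTracker {
    accumulated: u64,
}

impl BlockCostTracker {
    // Called right after each batch executes: add the batch's cost and
    // fail if the running total would exceed the block limit.
    fn try_add(&mut self, batch_cost: u64) -> Result<(), WouldExceedMaxBlockCostLimit> {
        let new_total = self.accumulated.saturating_add(batch_cost);
        if new_total > MAX_BLOCK_COST {
            return Err(WouldExceedMaxBlockCostLimit);
        }
        self.accumulated = new_total;
        Ok(())
    }
}

fn main() {
    let mut tracker = BlockCostTracker::default();
    assert!(tracker.try_add(40_000_000).is_ok());
    assert!(tracker.try_add(10_000_000).is_err()); // would exceed the limit
}
```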