#### Summary of Changes
To avoid mixing the use of different shred storage types, each shred storage type
will have its blockstore in a different directory.
This PR still keeps the RocksFifo setting hidden. The default ShredStorageType and
blockstore directory remain RocksLevel and `rocksdb`.
Follow-up PRs will make the FIFO option public in ledger-tool and the validator.
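For illustration, a minimal sketch of the directory selection this change implies; ShredStorageType matches the enum named above, but the helper methods here are illustrative assumptions (the `rocksdb-fifo` name comes from the test plan below):

```rust
use std::path::{Path, PathBuf};

// Sketch only: mirrors the two storage types named in this PR.
pub enum ShredStorageType {
    RocksLevel,
    RocksFifo,
}

impl ShredStorageType {
    // Illustrative helper: each storage type keeps its blockstore in its own
    // directory so the two formats are never mixed.
    pub fn blockstore_directory(&self) -> &str {
        match self {
            ShredStorageType::RocksLevel => "rocksdb",     // the default
            ShredStorageType::RocksFifo => "rocksdb-fifo", // hidden for now
        }
    }

    // Resolve the full blockstore path under a ledger directory.
    pub fn blockstore_path(&self, ledger_path: &Path) -> PathBuf {
        ledger_path.join(self.blockstore_directory())
    }
}
```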
#### Test Plan
* Added a new test to verify the existence of the `rocksdb-fifo` directory when FIFO compaction is used.
* Updated an existing test to verify that the current setting still stores the ledger under the `rocksdb` directory.
* Manually ran ledger_cleanup_test with both level and fifo compaction and verified the resulting ledger.
* Ran a validator with this PR.
* Make ledger-tool analyze-storage use Blockstore::open()
Opening a large ledger may require setting a larger open file descriptor
limit. Blockstore::open() does this whereas the underlying Database
object that analyze-storage was opening does not.
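A rough sketch of that kind of adjustment using the `libc` crate; the real Blockstore::open() logic differs in detail, this just shows raising the soft file descriptor limit toward the hard limit before opening RocksDB:

```rust
// Sketch: raise the soft open-file limit to the hard limit before opening a
// RocksDB database that may have many SST files.
fn raise_fd_limit() -> std::io::Result<()> {
    unsafe {
        let mut limit = libc::rlimit {
            rlim_cur: 0,
            rlim_max: 0,
        };
        if libc::getrlimit(libc::RLIMIT_NOFILE, &mut limit) != 0 {
            return Err(std::io::Error::last_os_error());
        }
        limit.rlim_cur = limit.rlim_max; // push soft limit up to the hard limit
        if libc::setrlimit(libc::RLIMIT_NOFILE, &limit) != 0 {
            return Err(std::io::Error::last_os_error());
        }
    }
    Ok(())
}
```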
* Move the key_size lookup to take advantage of traits
* Fix a typo where analyze worked on the wrong column
* Make analyze-storage analyze all columns
* 1. Made load_credentials accept the credential path as a parameter. 2. Partially implemented the Bigtable comparison function
* Find missing blocks in Bigtable within a specified range
* Refactor compare-blocks, add a unit test for missing_blocks (sketched below), and run fmt
* Fix a bug in compare-blocks with the last block
* Refactor compare-blocks and improve wording
* Update ledger-tool/src/bigtable.rs
Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
* Update the compare-blocks command-line description
* Style: improve wording/naming/code style
Co-authored-by: Tyera Eulberg <teulberg@gmail.com>
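A minimal sketch of what a missing_blocks helper like the one tested above might look like, assuming both slot lists are sorted ascending; the signature is an assumption, not the actual ledger-tool code:

```rust
// Sketch: return the slots present in `reference` but absent from `owned`,
// assuming both lists are sorted ascending.
fn missing_blocks(reference: &[u64], owned: &[u64]) -> Vec<u64> {
    reference
        .iter()
        .copied()
        .filter(|slot| owned.binary_search(slot).is_err())
        .collect()
}

#[test]
fn test_missing_blocks() {
    assert_eq!(missing_blocks(&[0, 1, 2, 3], &[0, 2, 3]), vec![1]);
    assert_eq!(missing_blocks(&[0, 1, 2], &[0, 1, 2]), Vec::<u64>::new());
}
```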
* Add commas for readability and fix plurality in a printout
* Pass verbose_level into the slot subcommand instead of defaulting to max
This is useful when a full printout isn't necessary, such as when just
trying to determine the slot blockhash. The equivalent behavior before
this change would have been using "-vv" with the command.
It takes a long time to print the contents of all the accounts. I'll be
adding more to the "accounts" subcommand and would like to skip printing
the account contents.
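A sketch of how the verbosity might be threaded through, using clap v2-style flag counting; output_slot and its printout levels here are hypothetical stand-ins:

```rust
use clap::{App, Arg};

fn main() {
    // clap v2-style: each repeated -v raises the verbosity, so "-vv" => 2.
    let matches = App::new("ledger-tool-sketch")
        .arg(
            Arg::with_name("verbose")
                .short("v")
                .multiple(true)
                .help("Increase output verbosity"),
        )
        .get_matches();

    // Pass the counted level down instead of defaulting to the maximum.
    let verbose_level = matches.occurrences_of("verbose");
    output_slot(42, "<blockhash>", verbose_level);
}

// Hypothetical printer: cheap summary by default, full contents only at -vv.
fn output_slot(slot: u64, blockhash: &str, verbose_level: u64) {
    println!("slot: {}", slot);
    if verbose_level >= 1 {
        println!("  blockhash: {}", blockhash);
    }
    if verbose_level >= 2 {
        println!("  (full entry and account contents printed here)");
    }
}
```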
- decouple cost_model from cost_tracker, allowing one cost_model
instance to be shared within a validator;
- update the cost_model API to calculate_cost(&self, ...) -> transaction_cost
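A rough sketch of the shape this API change suggests; all type and field names below are illustrative stand-ins, not the actual runtime definitions:

```rust
use std::sync::Arc;

// Illustrative stand-ins for the real types.
pub struct Transaction;
pub struct TransactionCost {
    pub execution_cost: u64,
}

pub struct CostModel {
    instruction_cost: u64,
}

impl CostModel {
    // Taking `&self` (rather than `&mut self`) is what lets a single instance
    // be shared read-only across the validator.
    pub fn calculate_cost(&self, _tx: &Transaction) -> TransactionCost {
        TransactionCost {
            execution_cost: self.instruction_cost,
        }
    }
}

// The cost_tracker consumes costs instead of owning the model.
pub struct CostTracker {
    total_cost: u64,
}

impl CostTracker {
    pub fn try_add(&mut self, cost: &TransactionCost) {
        self.total_cost += cost.execution_cost;
    }
}

fn main() {
    // One shared cost model, decoupled from per-bank cost trackers.
    let cost_model = Arc::new(CostModel {
        instruction_cost: 200,
    });
    let mut tracker = CostTracker { total_cost: 0 };

    let cost = cost_model.calculate_cost(&Transaction);
    tracker.try_add(&cost);
    println!("total cost so far: {}", tracker.total_cost);
}
```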
* Add filler accounts to bloat the validator and predict failure
* Assert that no accounts match filler
* Clean up magic numbers
* Panic if loading from a snapshot fails with filler accounts specified
* Some renames
* Renames
* into_par_iter
* Clean filler accounts, too
#### Summary of Changes
Create a plugin mechanism in the accounts update path so that accounts data can be streamed out to external data stores (be it Kafka or Postgres). The plugin mechanism allows:
* data stores' connection strings/credentials to be configured,
* accounts with patterns to be streamed,
* streaming implementations for different destination stores (PostgreSQL first) to be plugged in.
The code comprises 4 major parts:
* accountsdb-plugin-intf: defines the plugin interface that concrete plugins should implement.
* accountsdb-plugin-manager: manages the loading/unloading of plugins and provides the interfaces through which the validator notifies plugins of account updates.
* accountsdb-plugin-postgres: the concrete plugin implementation for PostgreSQL.
* The validator integrations: updates are streamed right after snapshot restore and after account updates from transaction processing or other real updates.
The plugin is optionally loaded on demand via a new validator CLI argument; there is no impact if the plugin is not loaded.
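For a sense of the shape of that interface, here is a minimal sketch; the trait name, method names, and the ReplicaAccountInfo struct below are illustrative assumptions, not the published accountsdb-plugin-intf API:

```rust
// Illustrative account payload handed to plugins.
pub struct ReplicaAccountInfo<'a> {
    pub pubkey: &'a [u8],
    pub lamports: u64,
    pub data: &'a [u8],
}

// Sketch of a plugin interface: implemented by concrete plugins (e.g. a
// PostgreSQL writer) and driven by the plugin manager inside the validator.
pub trait AccountsDbPlugin: Send + Sync {
    /// Called once when the manager loads the plugin; `config_file` points at
    /// the connection strings/credentials for the destination data store.
    fn on_load(&mut self, config_file: &str) -> Result<(), Box<dyn std::error::Error>>;

    /// Called for each account update: right after snapshot restore, and
    /// after updates from transaction processing or other real updates.
    fn update_account(
        &mut self,
        account: &ReplicaAccountInfo,
        slot: u64,
    ) -> Result<(), Box<dyn std::error::Error>>;

    /// Called when the plugin is unloaded, to flush and close connections.
    fn on_unload(&mut self) {}
}
```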
This commit only adds args to specify the maximum number of full/incremental
snapshot archives to retain when purging old snapshot archives. It
purposely does *not* add a way to generate new incremental snapshots.
Fixes #19857
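A small sketch of the retention rule such args would control; the function name and flag semantics are assumptions for illustration:

```rust
// Sketch: keep only the newest `max_to_retain` archives (by slot) and return
// the slots that would be purged. `max_to_retain` stands in for the value of
// one of the new retention args.
fn slots_to_purge(mut archive_slots: Vec<u64>, max_to_retain: usize) -> Vec<u64> {
    archive_slots.sort_unstable_by(|a, b| b.cmp(a)); // newest (highest slot) first
    if archive_slots.len() > max_to_retain {
        archive_slots.split_off(max_to_retain)
    } else {
        Vec::new()
    }
}

fn main() {
    // Retaining the two newest full snapshot archives purges the rest.
    assert_eq!(slots_to_purge(vec![100, 300, 200, 400], 2), vec![200, 100]);
    println!("retention sketch ok");
}
```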