* Modify db_ledger to support per-slot metadata, add a signal for updates, and add chaining between slots
* Modify replay stage to ask db_ledger for updates based on slots
* Add repair send/receive metrics
* Add repair service, remove old repair code
* Fix tmp_copy_ledger and test setup to account for multiple slots and tick limits within slots
* Modify replay stage to ask db_ledger for updates instead of reading from the upstream channel
* Add a signal for db_ledger to notify listeners of updates
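A minimal sketch of what such a signal could look like, assuming one std::sync::mpsc channel per listener; `SlotUpdate` and the method names are illustrative, not the actual db_ledger API:

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

// Hypothetical update signal: db_ledger keeps one Sender per listener and
// notifies them whenever a slot receives new data.
struct SlotUpdate {
    slot: u64,
}

struct DbLedger {
    listeners: Vec<Sender<SlotUpdate>>,
}

impl DbLedger {
    // A listener (e.g. the replay stage) registers here and gets a Receiver
    // to block on, replacing the old upstream entry channel.
    fn new_listener(&mut self) -> Receiver<SlotUpdate> {
        let (tx, rx) = channel();
        self.listeners.push(tx);
        rx
    }

    fn insert_blob(&mut self, slot: u64) {
        // ... write the blob and per-slot metadata to RocksDB here ...
        // Then notify listeners; drop senders whose receiver has hung up.
        self.listeners
            .retain(|tx| tx.send(SlotUpdate { slot }).is_ok());
    }
}
```

With this shape, the replay stage can `recv()` on its `Receiver` and then query db_ledger for the updated slots, rather than consuming entries directly from a channel.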
* Fix flaky test
* Connect TPU's broadcast service with TVU's blob fetch stage
- This is needed since the ledger is now written only by the TVU
* Fix clippy warnings
* Fix failing test
* Fix broken tests
* Fix failing tests
* Add timeout to Replicator::new; used when polling for leader
* Add timeout functionality to replicator ledger download
- Shares the same timeout as polling for the leader; defaults to 30 seconds
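The timeout pattern described here could look like the sketch below; `poll_until` and its signature are assumptions, and only the 30-second default comes from the log:

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

// Illustrative polling loop with a deadline, shareable by "poll for leader"
// and the ledger download. Only the 30-second default is from the commit log.
fn poll_until<F: FnMut() -> bool>(mut done: F, timeout: Option<Duration>) -> Result<(), String> {
    let deadline = Instant::now() + timeout.unwrap_or(Duration::from_secs(30));
    while !done() {
        if Instant::now() > deadline {
            return Err("timed out".to_string());
        }
        sleep(Duration::from_millis(100));
    }
    Ok(())
}
```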
* Add docs for Replicator::new
* Add RpcRequestHandler trait for RpcClient and add MockRpcClient for unit tests
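This is trait-based mocking: code under test depends only on the trait, so a mock can stand in for the real client. A hedged sketch, with the method name and signature assumed rather than taken from the real client:

```rust
// Production code depends on the trait, so unit tests can substitute
// MockRpcClient for RpcClient. Signatures here are illustrative.
trait RpcRequestHandler {
    fn make_rpc_request(&self, method: &str, params: Option<String>) -> Result<String, String>;
}

struct RpcClient; // would hold the node's RPC address, HTTP client, etc.

impl RpcRequestHandler for RpcClient {
    fn make_rpc_request(&self, _method: &str, _params: Option<String>) -> Result<String, String> {
        unimplemented!("issue a JSON-RPC request over HTTP")
    }
}

struct MockRpcClient {
    canned_response: String,
}

impl RpcRequestHandler for MockRpcClient {
    fn make_rpc_request(&self, _method: &str, _params: Option<String>) -> Result<String, String> {
        Ok(self.canned_response.clone())
    }
}
```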
* Add request_airdrop integration test
* Add timestamp_tx, witness_tx, and cancel_tx to wallet integration tests; add wallet integration tests to test-stable
* Add test cases
* Ignore plentiful sleeps in unit tests
* Also implement more storage contract logic
* Add transactions for proof validation
* Move storage state members into system storage account userdata
* Remove logging init from storage program: a test crash indicated the logger was being initialized twice
* Add entry_height to the mining proof to indicate which segment the result is for
* Add an interface to get storage miner pubkeys for a given entry_height
* Add an interface to get the current storage mining entry_height
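Taken together, these interfaces might look like the sketch below; the types and names are stand-ins, and only the concepts (proofs tagged with an entry_height, plus the two queries) come from the entries above:

```rust
// Stand-in types: pubkeys and signatures are raw byte arrays here so the
// sketch is self-contained.
struct MiningProof {
    entry_height: u64,   // which ledger segment this proof covers
    signature: [u8; 64], // the replicator's proof-of-replication signature
}

trait StorageProgramState {
    // Pubkeys of storage miners that submitted proofs for a given segment.
    fn storage_miner_pubkeys(&self, entry_height: u64) -> Vec<[u8; 32]>;
    // The entry height the storage program is currently mining against.
    fn current_mining_entry_height(&self) -> u64;
}
```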
* Set the tvu socket to 0.0.0.0:0 in the replicator to stop receiving entries after the desired ledger segment is downloaded.
* Use the signature of the PoH height to determine which block to download for the replicator.
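One plausible way to map such a signature to a segment, shown only as a sketch (the actual derivation is not specified in these notes):

```rust
// Reduce the signature to a u64 and take it modulo the segment count, so each
// replicator is deterministically but unpredictably assigned a segment.
fn pick_segment(signature: &[u8; 64], num_segments: u64) -> u64 {
    let mut prefix = [0u8; 8];
    prefix.copy_from_slice(&signature[..8]);
    u64::from_le_bytes(prefix) % num_segments.max(1)
}
```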
* Move more of the replicator logic into the replicator class
* Add support for the RPC interface to query the storage last_id value that the replicator would sign and use to pick a block.
* Fix replicator connecting to gossip and change test to exercise that scenario.
* Add db_window module for windowing functions from RocksDb
* Replace window with db_window functions in window_service
* Fix tests
* Make note of change in db_window
* Create RocksDb ledger in bin/fullnode
* Make db_ledger functions generic
* Add db_ledger to bin/replicator
* Cluster Replicated Data Store
Separate the data storage and merge strategy from the network IO boundary.
Implement an eager push overlay for transporting recent messages.
Simulation shows fast convergence with 20k nodes.
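A minimal sketch of the two pieces named here, assuming a wallclock-based merge rule and a fixed push fanout; the real store and overlay are richer:

```rust
use std::collections::HashMap;

const PUSH_FANOUT: usize = 6; // assumed, not from the log

struct CrdsValue {
    origin: [u8; 32], // node that produced the value
    wallclock: u64,   // producer's timestamp, used as the merge tiebreaker
    data: Vec<u8>,
}

// Data storage + merge strategy, independent of any network IO.
struct Crds {
    table: HashMap<[u8; 32], CrdsValue>,
}

impl Crds {
    // Merge rule: keep the newer value; report whether it was new to us.
    fn insert(&mut self, v: CrdsValue) -> bool {
        let is_newer = self
            .table
            .get(&v.origin)
            .map_or(true, |old| v.wallclock > old.wallclock);
        if is_newer {
            self.table.insert(v.origin, v);
        }
        is_newer
    }
}

// Network IO boundary: eagerly push only values that were new, to a few peers.
fn on_push(crds: &mut Crds, v: CrdsValue, peers: &[[u8; 32]]) -> Vec<[u8; 32]> {
    if crds.insert(v) {
        peers.iter().take(PUSH_FANOUT).cloned().collect()
    } else {
        Vec::new() // already seen: don't re-flood the network
    }
}
```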
* Add first leader to genesis entries, consume in genesis.sh
* Set bootstrap leader in the bank on startup, remove instantiation of bootstrap leader from bin/fullnode
* Remove need to initialize bootstrap leader in leader_scheduler, now can be read from genesis entries
* Add separate interface new_with_leader() in mint for creating genesis leader entries
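A sketch of what that interface might look like; the field and constructor shapes are assumptions based on the entries above:

```rust
// Hypothetical Mint shape: new_with_leader() records a bootstrap leader id so
// the genesis entries carry it and the bank can set it at startup.
struct Mint {
    tokens: u64,
    bootstrap_leader: Option<[u8; 32]>,
}

impl Mint {
    fn new(tokens: u64) -> Self {
        Mint { tokens, bootstrap_leader: None }
    }

    fn new_with_leader(tokens: u64, leader_id: [u8; 32]) -> Self {
        Mint { tokens, bootstrap_leader: Some(leader_id) }
    }
}
```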
* Add Vote Contract
* Move ownership of LeaderScheduler from Fullnode to the bank
* Modify ReplicateStage to consume leader information from the bank
* Restart RPC services in the leader-to-validator transition
* Make VoteContract context-free
* Remove voting from ClusterInfo and Tpu
* Remove dependency on ActiveValidators in LeaderScheduler
* Switch VoteContract to two steps: 1) Register, 2) Vote. Change the thin client to create and register a voting account on fullnode startup
* Remove the check in the leader_to_validator transition for unique references to the bank, because the jsonrpc service and rpcpubsub hold references through jsonhttpserver
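The two-step flow could be modeled as an instruction enum like the sketch below; the variant names and payloads are illustrative, not the actual VoteContract wire format:

```rust
// Illustrative instruction set for the two-step flow.
enum VoteInstruction {
    // Step 1: the thin client creates and registers a voting account for the
    // fullnode at startup.
    RegisterAccount,
    // Step 2: votes are submitted against that account during validation.
    Vote { tick_height: u64 },
}
```

Per the entries above, the Register step happens once at startup via the thin client, while ReplicateStage issues Vote as it validates entries.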
* Add PoH height to process_ledger()
* Move broadcast_stage leader scheduling logic to use PoH height instead of entry_height
* Move LeaderScheduler logic to PoH in ReplicateStage
* Fix leader scheduling tests to use PoH instead of entry height
* Change is_leader detection in repair() to use PoH instead of entry height
* Add tests to LeaderScheduler for new functionality
* Fix Entry::new and genesis block PoH counts
* Move LeaderScheduler to PoH ticks
* Cleanup to resolve PR comments
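A sketch of the tick-based scheduling these entries converge on, with the constant and function names assumed:

```rust
const TICKS_PER_SLOT: u64 = 64; // assumed geometry, not from the log

// The leader slot is a pure function of PoH tick height, so the schedule
// advances even when no transaction entries are produced.
fn slot_for_tick(tick_height: u64) -> u64 {
    tick_height / TICKS_PER_SLOT
}

// `schedule` is the epoch's leader order (assumed non-empty); wrap around
// for simplicity.
fn leader_for_tick(schedule: &[[u8; 32]], tick_height: u64) -> [u8; 32] {
    schedule[(slot_for_tick(tick_height) as usize) % schedule.len()]
}
```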