* Cluster Replicated Data Store
Separate the data storage and merge strategy from the network IO boundary.
Implement an eager push overlay for transporting recent messages.
Simulation shows fast convergence with 20k nodes.
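A minimal sketch of the storage/merge split described above, with illustrative names rather than the crate's actual types: each value carries a wallclock version, and `insert` applies a deterministic last-write-wins merge, so any transport (eager push, pull, repair) can feed the same store.

```rust
use std::collections::HashMap;

/// A versioned value; the merge rule looks only at this metadata,
/// never at how the bytes arrived (illustrative, not the real crds types).
#[derive(Clone, Debug)]
struct VersionedValue {
    data: Vec<u8>,
    wallclock: u64, // sender-reported timestamp used for last-write-wins
}

#[derive(Default)]
struct ReplicatedStore {
    table: HashMap<String, VersionedValue>,
}

impl ReplicatedStore {
    /// Merge an incoming value: keep whichever copy has the newer wallclock.
    /// Returns true if the table changed, i.e. the value is new locally and
    /// is a candidate for eager push to peers.
    fn insert(&mut self, key: String, value: VersionedValue) -> bool {
        match self.table.get(&key) {
            Some(existing) if existing.wallclock >= value.wallclock => false,
            _ => {
                self.table.insert(key, value);
                true
            }
        }
    }
}

fn main() {
    let mut store = ReplicatedStore::default();
    let older = VersionedValue { data: b"v1".to_vec(), wallclock: 1 };
    let newer = VersionedValue { data: b"v2".to_vec(), wallclock: 2 };
    assert!(store.insert("node-1/contact-info".into(), older));
    assert!(store.insert("node-1/contact-info".into(), newer)); // newer wins
    // A stale copy arriving later is ignored and not re-pushed.
    let stale = VersionedValue { data: b"v0".to_vec(), wallclock: 0 };
    assert!(!store.insert("node-1/contact-info".into(), stale));
}
```

Because the merge is a pure function of the stored metadata, the network layer only decides when to exchange values, not which copy wins.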
That's supposed to be an ASCII format, but we're not making use
of it. We can switch back to that some day, but if we do, it shouldn't
be JSON-encoded.
* Add first leader to genesis entries, consume in genesis.sh
* Set bootstrap leader in the bank on startup, remove instantiation of bootstrap leader from bin/fullnode
* Remove the need to initialize the bootstrap leader in leader_scheduler; it can now be read from the genesis entries
* Add separate interface new_with_leader() in mint for creating genesis leader entries (see the Mint sketch after this list)
* Add Vote Contract
* Move ownership of LeaderScheduler from Fullnode to the bank
* Modify ReplicateStage to consume leader information from the bank
* Restart RPC Services in Leader To Validator Transition
* Make VoteContract Context Free
* Remove voting from ClusterInfo and Tpu
* Remove dependency on ActiveValidators in LeaderScheduler
* Switch VoteContract to two steps: 1) Register, 2) Vote. Change the thin client to create and register a voting account on fullnode startup (see the vote-instruction sketch after this list)
* Remove the check in the leader_to_validator transition for unique references to the bank, because the jsonrpc service and rpcpubsub hold references through jsonhttpserver
* Add PoH height to process_ledger()
* Move broadcast_stage leader scheduling logic to use PoH height instead of entry_height
* Move LeaderScheduler logic in ReplicateStage to use PoH
* Fix leader scheduling tests to use PoH instead of entry height
* Change is_leader detection in repair() to use PoH instead of entry height
* Add tests to LeaderScheduler for new functionality
* Fix Entry::new and genesis block PoH counts
* Move LeaderScheduler to PoH ticks (see the tick-height scheduling sketch after this list)
* Cleanup to resolve PR comments
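For the genesis-leader bullets above, here is one plausible shape for a mint helper like new_with_leader(); the struct fields and signature are assumptions for illustration, not the actual mint API.

```rust
/// Hypothetical mint sketch: like the plain constructor, but the genesis
/// configuration also names the first (bootstrap) leader, so the leader
/// scheduler can read it from the genesis entries instead of being
/// initialized by hand.
#[derive(Debug)]
struct Mint {
    tokens: u64,
    bootstrap_leader: Option<String>, // leader pubkey, abbreviated to a String here
    bootstrap_leader_tokens: u64,
}

impl Mint {
    fn new(tokens: u64) -> Self {
        Self { tokens, bootstrap_leader: None, bootstrap_leader_tokens: 0 }
    }

    /// Assumed interface: same as new(), plus the first leader and its stake.
    fn new_with_leader(tokens: u64, leader_pubkey: String, leader_tokens: u64) -> Self {
        Self {
            tokens,
            bootstrap_leader: Some(leader_pubkey),
            bootstrap_leader_tokens: leader_tokens,
        }
    }
}

fn main() {
    let _plain = Mint::new(10_000);
    let mint = Mint::new_with_leader(10_000, "BootstrapLeader11111".to_string(), 500);
    println!("{mint:?}");
}
```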
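An illustrative, context-free sketch of the two-step Register/Vote flow; the instruction and state types are assumptions, not the actual VoteContract definitions.

```rust
/// Step 1 registers a dedicated voting account; step 2 votes against it.
/// Splitting the steps lets the thin client create and register the account
/// once at fullnode startup and then only send votes afterwards.
#[derive(Debug)]
enum VoteInstruction {
    RegisterAccount,
    Vote { tick_height: u64 },
}

#[derive(Default, Debug)]
struct VoteState {
    registered: bool,
    votes: Vec<u64>,
}

impl VoteState {
    /// Context-free processing: only the instruction and this account's state
    /// are consulted; nothing is read from the surrounding runtime.
    fn process(&mut self, ix: VoteInstruction) -> Result<(), &'static str> {
        match ix {
            VoteInstruction::RegisterAccount => {
                self.registered = true;
                Ok(())
            }
            VoteInstruction::Vote { tick_height } if self.registered => {
                self.votes.push(tick_height);
                Ok(())
            }
            VoteInstruction::Vote { .. } => Err("voting account not registered"),
        }
    }
}

fn main() {
    let mut state = VoteState::default();
    // Voting before registering fails; register once, then vote freely.
    assert!(state.process(VoteInstruction::Vote { tick_height: 1 }).is_err());
    state.process(VoteInstruction::RegisterAccount).unwrap();
    state.process(VoteInstruction::Vote { tick_height: 2 }).unwrap();
    println!("{state:?}");
}
```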
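A minimal sketch of leader selection keyed off the PoH tick height rather than entry height, as the scheduling bullets above describe; the rotation logic and names are illustrative assumptions, not the real LeaderScheduler.

```rust
/// Leaders rotate per slot, and a slot is a fixed number of PoH ticks, so the
/// current leader is a pure function of the tick height (illustrative only).
struct LeaderScheduler {
    ticks_per_slot: u64,
    schedule: Vec<&'static str>, // leader ids, abbreviated as strings
}

impl LeaderScheduler {
    /// Which leader owns the slot containing `tick_height`.
    fn leader_at(&self, tick_height: u64) -> &'static str {
        let slot = (tick_height / self.ticks_per_slot) as usize;
        self.schedule[slot % self.schedule.len()]
    }

    /// What repair()/broadcast would ask: "am I the leader at this tick?"
    fn is_leader(&self, my_id: &str, tick_height: u64) -> bool {
        self.leader_at(tick_height) == my_id
    }
}

fn main() {
    let scheduler = LeaderScheduler {
        ticks_per_slot: 64,
        schedule: vec!["leader-a", "leader-b"],
    };
    assert_eq!(scheduler.leader_at(0), "leader-a");  // ticks 0..63 fall in slot 0
    assert_eq!(scheduler.leader_at(64), "leader-b"); // ticks 64..127 fall in slot 1
    assert!(scheduler.is_leader("leader-a", 130));   // slot 2 wraps back to leader-a
}
```

Counting in ticks rather than entries keeps the schedule stable even when the number of entries produced per slot varies.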