* add `ParseError` in `zk-token-elgamal`
* implement `FromStr` for `ElGamalPubkey` and `ElGamalCiphertext`
* implement `FromStr` for `AeCiphertext`
* fix target
* cargo fmt
* use constants for byte length check
* make `FromStr` functions available on chain
* use macros for the `FromStr` implementations
* restrict `from_str` macro to `pub(crate)`
* decode directly into array
* cargo fmt
* Apply suggestions from code review
Co-authored-by: Jon C <me@jonc.dev>
* remove unnecessary imports
* remove the need for `ParseError` dependency
---------
Co-authored-by: Jon C <me@jonc.dev>
#### Problem
tiered_storage/writer.rs was added when we planned to support multiple
tiers in the tiered-storage (i.e., at least hot and cold). However, as we
changed our plan to handle cold accounts as state-compressed accounts,
we no longer need a general-purpose tiered-storage writer.
#### Summary of Changes
Remove tiered_storage/writer.rs as we currently don't have plans to develop cold storage.
#### Test Plan
Existing tiered-storage tests.
#### Problem
As we further optimize the HotStorageMeta in #146, there is a need
for a HotAccount struct that contains all the hot account information.
Meanwhile, we currently have no plans to develop a cold account
format. As a result, it is desirable to repurpose
TieredReadableAccount as HotAccount.
#### Summary of Changes
Repurpose TieredReadableAccount to HotAccount.
#### Test Plan
Existing tiered-storage tests.
The default value was previously determined down where the thread
pool is created. Providing a default value at the CLI level is
consistent with other args, and gives an operator better visibility into
what the default will actually be.
The threadpool used to replay multiple transactions in parallel is
currently global state via a lazy_static definition. Making this pool
owned by ReplayStage will enable subsequent work to make the pool
size configurable on the CLI.
This makes `ReplayStage` create and hold the threadpool which is passed
down to blockstore_processor::confirm_slot().
blockstore_processor::process_blockstore_from_root() now creates its
own threadpool as well; however, this pool only lives for the scope of
that function and does not persist for the lifetime of the process.
In cluster restart scenarios, an important step is scanning the
Blockstore for blocks that occur after the chosen restart slot with an
incorrect shred version. This check ensures that any blocks that
occurred pre-cluster restart and after the chosen restart slot get
deleted. If a node skips this step, the node can encounter problems when
that block is created again, after the cluster has restarted.
This check only occurs if --wait-for-supermajority AND
--expected-shred-version are set; however, --expected-shred-version is
currently optional when using --wait-for-supermajority.
Our restart instructions typically mention that one should specify
--expected-shred-version as well, but we should just enforce it at the
CLI level to prevent mistakes / wasted time debugging.
* Support min_context_slot field in getBlocksWithLimit input
* Use min_context_slot in get_blocks_with_limit
* Support min_context_slot field in getBlocks input
* Use min_context_slot in get_blocks
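With the changes above, a getBlocks request can carry `minContextSlot` in its config object, asking the node not to serve the request until it has processed at least that slot. The request body below is a sketch; the slot numbers are illustrative, and getBlocksWithLimit takes `[start_slot, limit, config]` with the same config shape.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "getBlocks",
  "params": [
    170000000,
    170000010,
    { "commitment": "finalized", "minContextSlot": 170000005 }
  ]
}
```

If the node's current context slot is below `minContextSlot`, the request fails rather than returning a potentially stale range of blocks.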
ReplayStage owning the pool allows for subsequent work to configure
the size of the pool; configuring the size of the pool inside of the
lazy_static would have been a little messy