Commit Graph

52 Commits

Author SHA1 Message Date
Jon Cinque 0fe902ced7
Bump rand to 0.8, rand_chacha to 0.3, getrandom to 0.2 (#32871)
* sdk: Add concurrent support for rand 0.7 and 0.8

* Update rand, rand_chacha, and getrandom versions

* Run command to replace `gen_range`

Run `git grep -l gen_range | xargs sed -i'' -e 's/gen_range(\(\S*\), /gen_range(\1../'`

* sdk: Fix users of older `gen_range`

* Replace `hash::new_rand` with `hash::new_with_thread_rng`

Run:
```
git grep -l hash::new_rand | xargs sed -i'' -e 's/hash::new_rand([^)]*/hash::new_with_thread_rng(/'
```

* perf: Use `Keypair::new()` instead of `generate`

* Use older rand version in zk-token-sdk

* program-runtime: Inline random key generation

* bloom: Fix clippy warnings in tests

* streamer: Scope rng usage correctly

* perf: Fix clippy warning

* accounts-db: Map to char to generate a random string

* Remove `from_secret_key_bytes`, it's just `keypair_from_seed`

* ledger: Generate keypairs by hand

* ed25519-tests: Use new rand

* runtime: Use new rand in all tests

* gossip: Clean up clippy and inline keypair generators

* core: Inline keypair generation for tests

* Push sbf lockfile change

* sdk: Sort dependencies correctly

* Remove `hash::new_with_thread_rng`, use `Hash::new_unique()`

* Use Keypair::new where chacha isn't used

* sdk: Fix build by marking rand 0.7 optional

* Hardcode secret key length, add static assertion

* Unify `getrandom` crate usage to fix linking errors

* bloom: Fix tests that require a random hash

* Remove some dependencies, try to unify others

* Remove unnecessary uses of rand and rand_core

* Update lockfiles

* Add back some dependencies to reduce rebuilds

* Increase max rebuilds from 14 to 15

* frozen-abi: Remove `getrandom`

* Bump rebuilds to 17

* Remove getrandom from zk-token-proof
2023-08-21 19:11:21 +02:00
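For reference, a hedged sketch (not part of the PR; the function name is illustrative) of the `gen_range` change that the sed commands above apply mechanically: rand 0.8 takes a single range argument where rand 0.7 took two bounds.

```
use rand::Rng;

fn sample_indices() -> (u64, u64) {
    let mut rng = rand::thread_rng();
    // rand 0.7 style (before): rng.gen_range(0, 100)
    // rand 0.8 style (after): pass a range instead of two arguments
    let exclusive = rng.gen_range(0..100);
    let inclusive = rng.gen_range(0..=255);
    (exclusive, inclusive)
}
```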
dependabot[bot] c2d605b884
Bump bitflags from 1.3.2 to 2.3.3 (#32438)
* Bump bitflags from 1.3.2 to 2.3.3

Bumps [bitflags](https://github.com/bitflags/bitflags) from 1.3.2 to 2.3.3.
- [Release notes](https://github.com/bitflags/bitflags/releases)
- [Changelog](https://github.com/bitflags/bitflags/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bitflags/bitflags/compare/1.3.2...2.3.3)

---
updated-dependencies:
- dependency-name: bitflags
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* updates programs/sbf/Cargo.lock

* derives necessary traits

* replaces from_bits_unchecked with from_bits_retain

* patches test

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: behzad nouri <behzadnouri@gmail.com>
2023-07-24 12:56:55 +00:00
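A minimal sketch, using hypothetical flag names, of what the bitflags 2.x changes above look like: common traits must now be derived explicitly inside the macro, and the safe `from_bits_retain` stands in for the removed `from_bits_unchecked`.

```
use bitflags::bitflags;

bitflags! {
    // bitflags 2.x no longer provides these traits automatically.
    #[derive(Clone, Copy, Debug, PartialEq, Eq)]
    pub struct ExampleFlags: u32 {
        const DISCARD   = 1 << 0;
        const FORWARDED = 1 << 1;
    }
}

fn from_stored_bits(bits: u32) -> ExampleFlags {
    // Keeps unknown bits instead of invoking the old unsafe constructor.
    ExampleFlags::from_bits_retain(bits)
}
```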
behzad nouri d54b6204be
removes instances of clippy::manual_let_else (#32417) 2023-07-09 21:41:36 +00:00
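For context, a hedged illustration (not code from the PR) of the rewrite that clippy::manual_let_else asks for:

```
fn first_even(values: &[u32]) -> Option<u32> {
    // Before, the manual form the lint flags:
    //     let v = match values.iter().copied().find(|&v| v % 2 == 0) {
    //         Some(v) => v,
    //         None => return None,
    //     };
    // After, the equivalent let-else form:
    let Some(v) = values.iter().copied().find(|&v| v % 2 == 0) else {
        return None;
    };
    Some(v)
}
```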
behzad nouri 25b7811869
moves shreds deduper to shred-sigverify stage (#30786)
Shreds arriving at tvu/tvu_forward/repair sockets are each processed in
a separate thread, and since each thread has its own deduper, duplicates
across these sockets are not filtered out.
Using a common deduper across these threads would require an RwLock
wrapper and may introduce lock contention.
The commit instead moves the shred deduper to shred-sigverify stage,
where all these shreds arrive through the same channel.
2023-03-22 13:19:16 +00:00
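For illustration only (this is not the solana-perf Deduper API), a minimal false-positive dedup filter of the kind discussed above; keeping a single instance in the shred-sigverify stage, where all shreds arrive on one channel, avoids both the per-socket dedupers and an RwLock wrapper.

```
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Deduper {
    seed: u64,
    bits: Vec<u64>,
}

impl Deduper {
    fn new(num_bits: usize, seed: u64) -> Self {
        Self { seed, bits: vec![0u64; (num_bits + 63) / 64] }
    }

    /// Returns true if the payload was (probably) seen before.
    fn dedup(&mut self, payload: &[u8]) -> bool {
        let mut hasher = DefaultHasher::new();
        self.seed.hash(&mut hasher);
        payload.hash(&mut hasher);
        let bit = (hasher.finish() as usize) % (self.bits.len() * 64);
        let (word, mask) = (bit / 64, 1u64 << (bit % 64));
        let duplicate = self.bits[word] & mask != 0;
        self.bits[word] |= mask;
        duplicate
    }
}
```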
behzad nouri 4b595ebaaf
adds metrics tracking deduper saturations (#30779) 2023-03-20 15:33:36 +00:00
behzad nouri 06a8419423
reduces MAX_CODE_SHREDS_PER_SLOT (#30543)
https://github.com/solana-labs/solana/pull/26070
introduced MAX_CODE_SHREDS_PER_SLOT which, because of how coding shreds
are generated, was later set larger than MAX_DATA_SHREDS_PER_SLOT.

The old code was discarding shreds with index >= MAX_DATA_SHREDS_PER_SLOT
regardless of whether the shred was code or data:
https://github.com/solana-labs/solana/blob/v1.13.6/core/src/window_service.rs#L193-L195

Since that many code (or data) shreds will probably never result in a
rooted slot anyway, the commit reduces MAX_CODE_SHREDS_PER_SLOT to what
was previously the effective limit.
2023-03-04 20:56:26 +00:00
sakridge 7a8563f2c8
Panic when shred index exceeds the max per slot (#30555)
Assert when shred index exceeds the max per slot
2023-03-04 02:49:23 +01:00
behzad nouri 06bb71c79f
adds hash domain to merkle shreds (#29339) 2023-02-15 21:46:30 +00:00
behzad nouri cf0a149add
removes the merkle root from shreds binary (#29427)
https://github.com/solana-labs/solana/pull/29445
makes it unnecessary to embed merkle roots into shreds binary. This
commit removes the merkle root from shreds binary.

This adds 20 bytes to shreds capacity to store more data.
Additionally, since we no longer need to truncate the merkle root, the
signature is over the full 32 bytes of the hash as opposed to the
truncated one.

Also, signature verification now effectively verifies the merkle proof as
well, so we no longer need to verify the merkle proof in the sanitize
implementation.
2023-02-15 21:17:24 +00:00
behzad nouri 5b5a3ebce8
adds metrics for num merkle shreds on the receiving end (#29710) 2023-01-14 23:07:42 +00:00
behzad nouri 9db25655f7
recovers merkle roots from shreds binary in {verify,sign}_shreds_gpu (#29445)
{verify,sign}_shreds_gpu need to point to offsets within the packets for
the signed data. For merkle shreds this signed data is the merkle root
of the erasure batch and this would necessitate embedding the merkle
roots in the shreds payload.
However this is wasteful and reduces shreds capacity to store data
because the merkle root can already be recovered from the encoded merkle
proof.

Instead of pointing to offsets within the shreds payload, this commit
recovers merkle roots from the merkle proofs and stores them in an
allocated buffer. {verify,sign}_shreds_gpu would then point to offsets
within this new buffer for the respective signed data.

This would unblock us from removing merkle roots from shreds payload
which would save capacity to send more data with each shred.
2023-01-04 14:20:05 +00:00
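A hedged sketch of the buffering idea (names are illustrative, not the actual solana-ledger API): recover each packet's merkle root from its encoded proof and lay the roots out in a side buffer, so {verify,sign}_shreds_gpu can point at offsets into that buffer instead of into the packets.

```
type MerkleRoot = [u8; 32];

fn collect_signed_data(
    packets: &[Vec<u8>],
    recover_root: impl Fn(&[u8]) -> Option<MerkleRoot>,
) -> (Vec<u8>, Vec<usize>) {
    let mut buffer = Vec::new();   // recovered merkle roots, back to back
    let mut offsets = Vec::new();  // per-packet offset into `buffer`
    for packet in packets {
        let root = recover_root(packet).unwrap_or([0u8; 32]);
        offsets.push(buffer.len());
        buffer.extend_from_slice(&root);
    }
    (buffer, offsets)
}
```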
behzad nouri 754ecf467b
generalizes the return type of Shred::get_signed_data (#29446)
The commit adds an associated SignedData type to Shred trait so that
merkle and legacy shreds can return different types for signed_data
method.
This would allow legacy shreds to point to a section of the shred
payload, whereas merkle shreds would compute and return the merkle root.
Ultimately this would allow removing the merkle root from the shreds
binary.
2022-12-31 17:08:25 +00:00
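A minimal sketch of the associated-type idea described above (simplified types, not the exact solana-ledger trait): legacy shreds can hand back a slice of their payload while merkle shreds compute and return the merkle root.

```
trait SignedShred {
    type SignedData: AsRef<[u8]>;
    fn signed_data(&self) -> Self::SignedData;
}

struct LegacyShred { payload: Vec<u8> }
struct MerkleShred { merkle_root: [u8; 32] }

impl SignedShred for LegacyShred {
    // An owned copy keeps the sketch simple; the real code can borrow.
    type SignedData = Vec<u8>;
    fn signed_data(&self) -> Vec<u8> {
        self.payload.clone()
    }
}

impl SignedShred for MerkleShred {
    // Merkle shreds sign (and return) the merkle root itself.
    type SignedData = [u8; 32];
    fn signed_data(&self) -> [u8; 32] {
        self.merkle_root
    }
}
```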
behzad nouri 70c901792e
removes merkle root comparison in erasure_mismatch (#29447)
Merkle shreds within the same erasure batch have the same merkle root.
The root of the merkle tree is signed. So either the signatures match
or one fails sigverify, and the comparison of merkle roots is redundant.
2022-12-31 14:21:05 +00:00
behzad nouri 50afb80f52
adds shred::layout::get_signed_data (#29438)
Working towards removing the merkle root from the shreds payload, the
commit implements an API to obtain signed data from the shreds binary.
2022-12-30 18:52:10 +00:00
behzad nouri db3d926633
implements shred::layout::get_merkle_root (#29437)
In preparation for removing merkle roots from the shreds binary, the
commit adds an API to recover the root from the merkle proof embedded in
the shreds payload.
2022-12-29 23:32:58 +00:00
behzad nouri 92ca725c6b
patches shred::merkle::make_shreds_from_data when data is empty (#29295)
If data is empty, make_shreds_from_data will now return one data shred
with empty data. This preserves invariants verified in tests regardless
of data size.
2022-12-16 18:56:21 +00:00
behzad nouri 73c42dde6e
sanitizes shreds recovered from erasure shards (#29286)
Instead of sanitizing shreds late in recover code:
https://github.com/solana-labs/solana/blob/50ad0390f/ledger/src/shred/merkle.rs#L786

Shred{Code,Data}::from_recovered_shard should sanitize shreds before
returning:
https://github.com/solana-labs/solana/blob/50ad0390f/ledger/src/shred/merkle.rs#L192-L216
https://github.com/solana-labs/solana/blob/50ad0390f/ledger/src/shred/merkle.rs#L324-L350
2022-12-15 20:09:15 +00:00
Brooks Prumo d1ba42180d
clippy for rust 1.65.0 (#28765) 2022-11-09 19:39:38 +00:00
behzad nouri d9ef04772d
moves merkle proof size sanity check to Shred{Code,Data}::merkle_branch (#28266) 2022-10-06 18:54:24 +00:00
behzad nouri a6512016a7
uses references for MerkleBranch root and proof fields (#28243) 2022-10-06 15:41:55 +00:00
behzad nouri 9e7a0e7420
rolls out merkle shreds to ~5% of testnet (#28199) 2022-10-04 19:36:16 +00:00
behzad nouri 72537e7e07
bypasses rayon thread-pool for single entry batches (#28077)
With no parallelization, thread-pool only adds overhead.
2022-09-26 21:32:58 +00:00
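A hedged sketch of the bypass (assumed shape, not the broadcast-stage code): only hand work to the rayon thread-pool when there is more than one entry batch.

```
use rayon::prelude::*;

fn process_batches<T, R, F>(batches: Vec<T>, f: F) -> Vec<R>
where
    T: Send,
    R: Send,
    F: Fn(T) -> R + Sync + Send,
{
    if batches.len() <= 1 {
        // Single batch: run inline, skipping thread-pool overhead.
        batches.into_iter().map(&f).collect()
    } else {
        batches.into_par_iter().map(&f).collect()
    }
}
```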
behzad nouri b9849179c9
bypasses merkle proof verification for recovered merkle shreds (#28076)
Merkle proofs for shreds recovered from erasure codes are generated
locally, so it is superfluous to verify them when sanitizing recovered
shreds:
https://github.com/solana-labs/solana/blob/a0f49c2e4/ledger/src/shred/merkle.rs#L727-L760
2022-09-26 21:28:15 +00:00
behzad nouri f49beb0cbc
caches reed-solomon encoder/decoder instance (#27510)
ReedSolomon::new(...) initializes a matrix and a data-decode-matrix cache:
https://github.com/rust-rse/reed-solomon-erasure/blob/273ebbced/src/core.rs#L460-L466

In order to cache this computation, this commit caches the reed-solomon
encoder/decoder instance for each (data_shards, parity_shards) pair.
2022-09-25 18:09:47 +00:00
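A hedged sketch of the caching idea (simplified relative to the actual module): keep one ReedSolomon instance per (data_shards, parity_shards) pair so the matrix and data-decode-matrix cache are built only once.

```
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

use reed_solomon_erasure::galois_8::ReedSolomon;

#[derive(Default)]
struct ReedSolomonCache {
    cache: Mutex<HashMap<(usize, usize), Arc<ReedSolomon>>>,
}

impl ReedSolomonCache {
    fn get(
        &self,
        data_shards: usize,
        parity_shards: usize,
    ) -> Result<Arc<ReedSolomon>, reed_solomon_erasure::Error> {
        let key = (data_shards, parity_shards);
        if let Some(entry) = self.cache.lock().unwrap().get(&key) {
            return Ok(entry.clone());
        }
        // Expensive: builds the encoding matrix for this shape.
        let entry = Arc::new(ReedSolomon::new(data_shards, parity_shards)?);
        self.cache.lock().unwrap().insert(key, entry.clone());
        Ok(entry)
    }
}
```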
behzad nouri 45e26574f3
removes redundant shred.sanitize() from blockstore (#28016)
Shreds received from other nodes over the socket are sanitized when the
payload is deserialized:
https://github.com/solana-labs/solana/blob/315707504/ledger/src/shred/legacy.rs#L137
https://github.com/solana-labs/solana/blob/315707504/ledger/src/shred/legacy.rs#L77
https://github.com/solana-labs/solana/blob/315707504/ledger/src/shred/merkle.rs#L355
https://github.com/solana-labs/solana/blob/315707504/ledger/src/shred/merkle.rs#L439

Similarly, shreds recovered from erasure codes are also sanitized at
deserialization:
https://github.com/solana-labs/solana/blob/f02fe9c7e/ledger/src/shredder.rs#L330
or explicitly so for Merkle shreds:
https://github.com/solana-labs/solana/blob/f02fe9c7e/ledger/src/shred/merkle.rs#L753

Shreds generated locally by the node itself during its leader slots do
not need to be sanitized.

So sanitizing shreds in blockstore is redundant and wasteful. In
particular this becomes more wasteful with Merkle shreds because
sanitizing them would require verifying the Merkle proof.
As such the commit removes the redundant shred.sanitize() from blockstore.
2022-09-24 16:31:50 +00:00
behzad nouri 7d3f3b2f7d generates merkle shreds from ledger entries
The commit adds methods to convert &[Entry] to a vector of Merkle shreds.
2022-09-23 16:45:18 +00:00
behzad nouri 37296caee8
removes set_{slot,index,last_in_slot} implementations for merkle shreds (#27689)
These methods are only used in tests, but invoked on a merkle shred they
will always invalidate the shred because the merkle proof will no longer
verify. As a result the shred will not sanitize and blockstore will
avoid inserting it. Their use in tests results in spurious test
coverage because the shreds will never be ingested.

The commit removes implementation of these methods for merkle shreds.
Follow up commits will entirely remove these methods from shreds api.
2022-09-09 17:58:04 +00:00
Jeff Biseda 269eb519dd
track time to coalesce entries in recv_slot_entries (#27525) 2022-09-06 16:07:17 -07:00
Brennan Watt dfdb422fb1
Minor shred constant cleanup (#27472)
* Minor shred constant cleanup to eliminate magic number
2022-08-30 18:53:05 -07:00
Jeff Biseda d1522fc790
coalesce entries in recv_slot_entries to target byte count (#27321) 2022-08-25 13:51:55 -07:00
Brennan Watt e4a7d01e10
Rust v1.63 (#27303)
* Upgrade to Rust v1.63.0

* Add nightly_clippy_allows

* Resolve some new clippy nightly lints

* Increase QUIC packets completion timeout

* Update quinn-udp crate

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-08-22 18:01:03 -07:00
behzad nouri c0b63351ae
recovers merkle shreds from erasure codes (#27136)
The commit:
* identifies Merkle shreds when recovering from erasure codes and
  dispatches specialized code to reconstruct shreds.
* adds coding shred headers to recovered erasure shards.
* reconstructs the Merkle tree for the erasure batch and adds it to
  recovered shreds.
* attaches the common signature (for the root of the Merkle tree) to all
  recovered shreds.
2022-08-19 21:07:32 +00:00
Brennan Watt 7573000d87
Revert "Rust v1.63.0 (#27148)" (#27245)
This reverts commit a2e7bdf50a.
2022-08-19 09:19:44 +01:00
Brennan Watt a2e7bdf50a
Rust v1.63.0 (#27148)
* Upgrade to Rust v1.63.0

* Add nightly_clippy_allows

* Resolve some new clippy nightly lints

* Increase QUIC packets completion timeout

Co-authored-by: Michael Vines <mvines@gmail.com>
2022-08-17 15:48:33 -07:00
behzad nouri c813d75944
renames size_of_erasure_encoded_slice to ShredCode::capacity (#27157)
Maintain symmetry between code and data shreds:
* ShredData::capacity -> data buffer capacity
* ShredCode::capacity -> erasure code capacity
2022-08-16 12:30:38 +00:00
behzad nouri 0e30609394
adds Shred{Code,Data}::SIZE_OF_HEADERS trait constants (#27144) 2022-08-15 19:02:32 +00:00
behzad nouri b3b57a0f07
adjusts max coding shreds per slot (#27083)
As a consequence of removing buffering when generating coding shreds:
https://github.com/solana-labs/solana/pull/25807
more coding shreds are generated than data shreds, and so
MAX_CODE_SHREDS_PER_SLOT needs to be adjusted accordingly.

The respective value is tied to ERASURE_BATCH_SIZE.
2022-08-12 18:02:01 +00:00
behzad nouri ac91cdab74
removes buffering when generating coding shreds in broadcast (#25807)
Given the 32:32 erasure recovery schema, current implementation requires
exactly 32 data shreds to generate coding shreds for the batch (except
for the final erasure batch in each slot).
As a result, when serializing ledger entries to data shreds, if the
number of data shreds is not a multiple of 32, the coding shreds for the
last batch cannot be generated until there are more data shreds to
complete the batch to 32 data shreds. This adds latency in generating
and broadcasting coding shreds.

In addition, with Merkle variants for shreds, data shreds cannot be
signed and broadcasted until coding shreds are also generated. As a
result *both* code and data shreds will be delayed before broadcast if
we still require exactly 32 data shreds for each batch.

This commit instead always generates and broadcasts coding shreds as
soon as there are any data shreds available. When serializing entries
to shreds:
* if the number of resulting data shreds is less than 32, then more
  coding shreds will be generated so that the resulting erasure batch
  has the same recovery probabilities as a 32:32 batch.
* if the number of data shreds is more than 32, then the data shreds are
  split uniformly into erasure batches with _at least_ 32 data shreds in
  each batch. Each erasure batch will have the same number of code and
  data shreds.

For example:
* If there are 19 data shreds, 27 coding shreds are generated. The
  resulting 19(data):27(code) erasure batch has the same recovery
  probabilities as a 32:32 batch.
* If there are 107 data shreds, they are split into 3 batches of 36:36,
  36:36 and 35:35 data:code shreds each.

A consequence of this change is that code and data shred indices will
no longer align, as there will be more coding shreds than data shreds
(not only in the last batch in each slot but also in the intermediate
ones).
2022-08-11 12:44:27 +00:00
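A small, hedged sketch of the uniform split described above (the function name is illustrative; the real code also picks the number of coding shreds per batch):

```
// Split data shreds into erasure batches of at least 32 data shreds
// each, as uniformly as possible.
fn split_data_shreds_into_batches(num_data_shreds: usize) -> Vec<usize> {
    let num_batches = (num_data_shreds / 32).max(1);
    let base = num_data_shreds / num_batches;
    let remainder = num_data_shreds % num_batches;
    (0..num_batches)
        .map(|i| base + usize::from(i < remainder))
        .collect()
}

// split_data_shreds_into_batches(107) == [36, 36, 35]
// split_data_shreds_into_batches(19)  == [19] (paired with 27 coding shreds)
```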
behzad nouri e2a2d271f2
adds number of coding shreds to broadcast metrics (#27006) 2022-08-09 13:59:40 +00:00
behzad nouri 403b2e4841
records num data shreds obtained from serializing entries (#26888) 2022-08-03 17:07:40 +00:00
Jeff Biseda 857be1e237
sign repair requests (#26833) 2022-07-31 15:48:51 -07:00
behzad nouri 348fe9ebe2
verifies shred slot and parent in fetch stage (#26225)
Shred slot and parent are not verified until window-service where
resources are already wasted to sig-verify and deserialize shreds.
This commit moves the above verification earlier in the pipeline, to fetch
stage.
2022-06-28 12:45:50 +00:00
Michael Vines f3639b76ce Remove some clippy lints 2022-06-22 09:23:22 -07:00
behzad nouri 1f0f5dc03e verifies shred-version in fetch stage
Shred versions are not verified until window-service where resources are
already wasted to sig-verify and deserialize shreds.
The commit verifies shred-version earlier in the pipeline in fetch stage.
2022-06-22 12:17:37 +00:00
behzad nouri 31b3e0e15a
adds metric tracking wasted data buffer in shreds (#25972) 2022-06-16 16:14:00 +00:00
behzad nouri 5f04512d3a
adds a new shred variant embedding merkle tree hashes of the erasure batch (#25237)
Coding shreds can only be signed once erasure codings are already
generated. Therefore coding shreds recovered from erasure codings lack
the slot leader's signature and so cannot be retransmitted to the rest of
the cluster.

shred/merkle.rs implements a new shred variant where we generate merkle
tree for each erasure encoded batch and each shred includes:
* root of the merkle tree (Hash truncated to 20 bytes).
* slot leader's signature of the root of the merkle tree.
* merkle tree nodes along the branch the shred belongs to, where hashes
  are trimmed to 20 bytes during tree construction.

This schema results in the same signature for all shreds within an
erasure batch.

When recovering shreds from erasure codes, we can reconstruct merkle
tree for the batch and for each recovered shred also recover respective
merkle tree branch; then snap the slot leader's signature from any of
the shreds received from turbine and retransmit all recovered code or
data shreds.

Backward compatibility is achieved by encoding shred variant at byte 65
of payload (previously shred-type at this position):
* 0b0101_1010 indicates a legacy coding shred, which is also equal to
  ShredType::Code for backward compatibility.
* 0b1010_0101 indicates a legacy data shred, which is also equal to
  ShredType::Data for backward compatibility.
* 0b0100_???? indicates a merkle coding shred with merkle branch size
  indicated by the last 4 bits.
* 0b1000_???? indicates a merkle data shred with merkle branch size
  indicated by the last 4 bits.

Merkle root and branch are encoded at the end of the shred payload.
2022-06-07 22:41:03 +00:00
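A hedged sketch (names are illustrative) of decoding the variant byte described above: byte 65 of the payload either equals one of the legacy constants or carries the merkle proof size in its low 4 bits.

```
#[derive(Debug, PartialEq, Eq)]
enum ShredVariant {
    LegacyCode,
    LegacyData,
    MerkleCode(/*proof_size:*/ u8),
    MerkleData(/*proof_size:*/ u8),
}

fn parse_shred_variant(byte: u8) -> Option<ShredVariant> {
    match byte {
        0b0101_1010 => Some(ShredVariant::LegacyCode),
        0b1010_0101 => Some(ShredVariant::LegacyData),
        _ => match byte & 0b1111_0000 {
            0b0100_0000 => Some(ShredVariant::MerkleCode(byte & 0b0000_1111)),
            0b1000_0000 => Some(ShredVariant::MerkleData(byte & 0b0000_1111)),
            _ => None,
        },
    }
}
```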
behzad nouri 6c9f2eac78
removes fec_set_offset from UnfinishedSlotInfo (#25815)
If the blockstore has shreds for a slot, it should not recreate the
slot:
https://github.com/solana-labs/solana/blob/ff68bf6c2/ledger/src/leader_schedule_cache.rs#L142-L146
https://github.com/solana-labs/solana/pull/15849/files#r596657314

Therefore in broadcast stage if UnfinishedSlotInfo is None, then
fec_set_offset will be zero:
https://github.com/solana-labs/solana/blob/ff68bf6c2/core/src/broadcast_stage/standard_broadcast_run.rs#L111-L120

As a result fec_set_offset will always be zero, and so is redundant and
can be removed.
2022-06-07 22:17:37 +00:00
behzad nouri 81231a89b9 adds support for different variants of ShredCode and ShredData
The commit implements two new types:
    pub enum ShredCode {
        Legacy(legacy::ShredCode),
    }
    pub enum ShredData {
        Legacy(legacy::ShredData),
    }

Following commits will extend these types by adding merkle variants:
    pub enum ShredCode {
        Legacy(legacy::ShredCode),
        Merkle(merkle::ShredCode),
    }
    pub enum ShredData {
        Legacy(legacy::ShredData),
        Merkle(merkle::ShredData),
    }
2022-06-02 18:55:50 +00:00
behzad nouri a913068512 embeds versioning into shred binary
In preparation for
https://github.com/solana-labs/solana/pull/25237
which adds a new shred variant with merkle tree branches, the commit
embeds versioning into the shred binary by encoding a new ShredVariant type
at byte 65 of the payload, replacing the previous ShredType at this offset.
    enum ShredVariant {
        LegacyCode, // 0b0101_1010
        LegacyData, // 0b1010_0101
    }

* 0b0101_1010 indicates a legacy coding shred, which is also equal to
  ShredType::Code for backward compatibility.
* 0b1010_0101 indicates a legacy data shred, which is also equal to
  ShredType::Data for backward compatibility.

Following commits will add merkle variants to this type:
    enum ShredVariant {
        LegacyCode, // 0b0101_1010
        LegacyData, // 0b1010_0101
        MerkleCode(/*proof_size:*/ u8), // 0b0100_????
        MerkleData(/*proof_size:*/ u8), // 0b1000_????
    }
2022-06-02 18:55:50 +00:00
behzad nouri 29cfa04c05
records number of residual data shreds which don't make a full batch (#25693)
Data shreds are batched into MAX_DATA_SHREDS_PER_FEC_BLOCK shreds for
each erasure batch. If there are residual shreds not making a full
batch, then we cannot generate coding shreds and need to buffer shreds
until there is a full batch; this may add latency to coding shreds
generation and broadcast.
In order to evaluate upcoming changes removing this buffering logic,
this commit adds metrics tracking the number of residual data shreds
which don't make a full batch.
2022-06-02 00:32:32 +00:00
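A tiny illustrative calculation of the residual count this metric tracks, assuming the 32-shred erasure batch size referenced in the commits above:

```
const MAX_DATA_SHREDS_PER_FEC_BLOCK: usize = 32;

// Data shreds left over after filling complete 32-shred erasure batches.
fn residual_data_shreds(num_data_shreds: usize) -> usize {
    num_data_shreds % MAX_DATA_SHREDS_PER_FEC_BLOCK
}
```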