Remove obsolete references to Blob (#6957)

* Remove the name "blob" from archivers
* Remove the name "blob" from broadcast
* Remove the name "blob" from Cluster Info
* Remove the name "blob" from Repair
* Remove the name "blob" from a bunch more places
* Remove the name "blob" from tests and book

This commit is contained in:
parent e7f63cd336
commit 79d7090867
@@ -7,7 +7,7 @@
 | TVU | |
 | | |
 | .-------. .------------. .----+---. .---------. |
-.------------. | | Blob | | Retransmit | | Replay | | Storage | |
+.------------. | | Shred | | Retransmit | | Replay | | Storage | |
 | Upstream +----->| Fetch +-->| Stage +-->| Stage +-->| Stage | |
 | Validators | | | Stage | | | | | | | |
 `------------` | `-------` `----+-------` `----+---` `---------` |
@@ -1,30 +1,30 @@
-.--------------------------------------.
+.---------------------------------------.
 | Validator |
 | |
 .--------. | .-------------------. |
 | |---->| | |
 | Client | | | JSON RPC Service | |
 | |<----| | |
 `----+---` | `-------------------` |
 | | ^ |
 | | | .----------------. | .------------------.
-| | | | Gossip Service |<----------| Validators |
+| | | | Gossip Service |<-----------| Validators |
 | | | `----------------` | | |
 | | | ^ | | |
 | | | | | | .------------. |
-| | .---+---. .----+---. .-----------. | | | | |
+| | .---+---. .----+---. .------------. | | | | |
-| | | Bank |<-+ Replay | | BlobFetch |<------+ Upstream | |
+| | | Bank |<-+ Replay | | ShredFetch |<------+ Upstream | |
 | | | Forks | | Stage | | Stage | | | | Validators | |
-| | `-------` `--------` `--+--------` | | | | |
+| | `-------` `--------` `--+---------` | | | | |
 | | ^ ^ | | | `------------` |
 | | | | v | | |
 | | | .--+--------. | | |
 | | | | Blocktree | | | |
 | | | `-----------` | | .------------. |
 | | | ^ | | | | |
 | | | | | | | Downstream | |
 | | .--+--. .-------+---. | | | Validators | |
-`-------->| TPU +---->| Broadcast +--------------->| | |
+`-------->| TPU +---->| Broadcast +---------------->| | |
 | `-----` | Stage | | | `------------` |
 | `-----------` | `------------------`
-`--------------------------------------`
+`---------------------------------------`
@@ -12,7 +12,7 @@ Nodes take turns being leader and generating the PoH that encodes state changes.
 2. Leader filters valid transactions.
 3. Leader executes valid transactions updating its state.
 4. Leader packages transactions into entries based off its current PoH slot.
-5. Leader transmits the entries to validator nodes \(in signed blobs\) 1. The PoH stream includes ticks; empty entries that indicate liveness of
+5. Leader transmits the entries to validator nodes \(in signed shreds\) 1. The PoH stream includes ticks; empty entries that indicate liveness of

    the leader and the passage of time on the cluster.
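Step 5 above distinguishes transaction-bearing entries from ticks. A minimal sketch of that distinction, using simplified stand-in types rather than the actual `solana_ledger::entry` definitions:

```rust
// Illustrative only: a tick is just an entry that carries no transactions,
// recording PoH work (num_hashes) and therefore the passage of time.
struct Entry {
    num_hashes: u64,           // PoH hashes performed since the previous entry
    hash: [u8; 32],            // the PoH hash after those num_hashes
    transactions: Vec<String>, // stand-in for real transactions
}

impl Entry {
    fn is_tick(&self) -> bool {
        self.transactions.is_empty()
    }
}

fn main() {
    let tick = Entry { num_hashes: 12_500, hash: [0; 32], transactions: vec![] };
    let entry = Entry { num_hashes: 300, hash: [1; 32], transactions: vec!["transfer".into()] };
    assert!(tick.is_tick() && !entry.is_tick());
}
```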
@@ -30,7 +30,7 @@ Gossip is designed for efficient propagation of state. Messages that are sent th
 ## Performance

 1. Worst case propagation time to the next leader is Log\(N\) hops with a base depending on the fanout. With our current default fanout of 6, it is about 6 hops to 20k nodes.
-2. The leader should receive 20k validation votes aggregated by gossip-push into 64kb blobs. Which would reduce the number of packets for 20k network to 80 blobs.
+2. The leader should receive 20k validation votes aggregated by gossip-push into MTU-sized shreds. Which would reduce the number of packets for 20k network to 80 shreds.
 3. Each validators votes is replicated across the entire network. To maintain a queue of 5 previous votes the Crds table would grow by 25 megabytes. `(20,000 nodes * 256 bytes * 5)`.

 ## Two step implementation rollout
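The figures above follow from simple arithmetic. A quick back-of-the-envelope sketch; the ~256-byte vote and ~64 KiB aggregate payload are assumptions taken from the surrounding text, not measurements:

```rust
fn main() {
    // Worst-case propagation is roughly log_fanout(N) hops.
    let nodes: f64 = 20_000.0;
    let fanout: f64 = 6.0;
    println!("~{:.1} hops", nodes.ln() / fanout.ln()); // ~5.5, i.e. about 6 hops

    // 20k votes of ~256 bytes aggregated into ~64 KiB messages.
    let vote_bytes: u64 = 256;
    let payload_bytes: u64 = 64 * 1024;
    let total = 20_000 * vote_bytes;
    println!("~{} aggregated messages", (total + payload_bytes - 1) / payload_bytes); // ~79, i.e. about 80

    // Crds growth for a queue of 5 votes per node.
    println!("~{} MB of Crds growth", 20_000 * vote_bytes * 5 / 1_000_000); // ~25 MB
}
```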
@@ -44,7 +44,7 @@ Initially the network can perform reliably with just 1 vote transmitted and main
 3. Fanout of 6.
 4. Worst case 256kb memory overhead per node.
 5. Worst case 4 hops to propagate to every node.
-6. Leader should receive the entire validator vote set in 4 push message blobs.
+6. Leader should receive the entire validator vote set in 4 push message shreds.

 ### Sub 20k network
@@ -55,5 +55,5 @@ Everything above plus the following:
 3. Increase fanout to 20.
 4. Worst case 25mb memory overhead per node.
 5. Sub 4 hops worst case to deliver to the entire network.
-6. 80 blobs received by the leader for all the validator messages.
+6. 80 shreds received by the leader for all the validator messages.
@@ -26,7 +26,7 @@ We unwrap the many abstraction layers and build a single pipeline that can toggl
 * TPU moves to new socket-free crate called solana-tpu.
 * TPU's BankingStage absorbs ReplayStage
 * TVU goes away
-* New RepairStage absorbs Blob Fetch Stage and repair requests
+* New RepairStage absorbs Shred Fetch Stage and repair requests
 * JSON RPC Service is optional - used for debugging. It should instead be part

   of a separate `solana-blockstreamer` executable.
@@ -261,20 +261,20 @@ impl Archiver {
         };

         let repair_socket = Arc::new(node.sockets.repair);
-        let blob_sockets: Vec<Arc<UdpSocket>> =
+        let shred_sockets: Vec<Arc<UdpSocket>> =
             node.sockets.tvu.into_iter().map(Arc::new).collect();
-        let blob_forward_sockets: Vec<Arc<UdpSocket>> = node
+        let shred_forward_sockets: Vec<Arc<UdpSocket>> = node
             .sockets
             .tvu_forwards
             .into_iter()
             .map(Arc::new)
             .collect();
-        let (blob_fetch_sender, blob_fetch_receiver) = channel();
+        let (shred_fetch_sender, shred_fetch_receiver) = channel();
         let fetch_stage = ShredFetchStage::new(
-            blob_sockets,
+            shred_sockets,
-            blob_forward_sockets,
+            shred_forward_sockets,
             repair_socket.clone(),
-            &blob_fetch_sender,
+            &shred_fetch_sender,
             &exit,
         );
         let (slot_sender, slot_receiver) = channel();
@@ -299,7 +299,7 @@ impl Archiver {
             &node_info,
             &storage_keypair,
             repair_socket,
-            blob_fetch_receiver,
+            shred_fetch_receiver,
             slot_sender,
         ) {
             Ok(window_service) => window_service,
@@ -448,7 +448,7 @@ impl Archiver {
         node_info: &ContactInfo,
         storage_keypair: &Arc<Keypair>,
         repair_socket: Arc<UdpSocket>,
-        blob_fetch_receiver: PacketReceiver,
+        shred_fetch_receiver: PacketReceiver,
         slot_sender: Sender<u64>,
     ) -> Result<(WindowService)> {
         let slots_per_segment =
@@ -492,7 +492,7 @@ impl Archiver {
         let (verified_sender, verified_receiver) = unbounded();

         let _sigverify_stage = SigVerifyStage::new(
-            blob_fetch_receiver,
+            shred_fetch_receiver,
             verified_sender.clone(),
             DisabledSigVerifier::default(),
         );
@@ -845,7 +845,7 @@ impl Archiver {
     /// Return the slot at the start of the archiver's segment
     ///
     /// It is recommended to use a temporary blocktree for this since the download will not verify
-    /// blobs received and might impact the chaining of blobs across slots
+    /// shreds received and might impact the chaining of shreds across slots
     pub fn download_from_archiver(
         cluster_info: &Arc<RwLock<ClusterInfo>>,
         archiver_info: &ContactInfo,
@@ -853,7 +853,7 @@ impl Archiver {
         slots_per_segment: u64,
     ) -> Result<(u64)> {
         // Create a client which downloads from the archiver and see that it
-        // can respond with blobs.
+        // can respond with shreds.
         let start_slot = Self::get_archiver_segment_slot(archiver_info.storage_addr);
         info!("Archiver download: start at {}", start_slot);
@@ -1,5 +1,5 @@
 //! A stage to broadcast data from a leader node to validators
-use self::broadcast_fake_blobs_run::BroadcastFakeBlobsRun;
+use self::broadcast_fake_shreds_run::BroadcastFakeShredsRun;
 use self::fail_entry_verification_broadcast_run::FailEntryVerificationBroadcastRun;
 use self::standard_broadcast_run::StandardBroadcastRun;
 use crate::cluster_info::{ClusterInfo, ClusterInfoError};
@@ -15,7 +15,7 @@ use std::sync::{Arc, RwLock};
 use std::thread::{self, Builder, JoinHandle};
 use std::time::Instant;

-mod broadcast_fake_blobs_run;
+mod broadcast_fake_shreds_run;
 pub(crate) mod broadcast_utils;
 mod fail_entry_verification_broadcast_run;
 mod standard_broadcast_run;
@@ -31,7 +31,7 @@ pub enum BroadcastStageReturnType {
 pub enum BroadcastStageType {
     Standard,
     FailEntryVerification,
-    BroadcastFakeBlobs,
+    BroadcastFakeShreds,
 }

 impl BroadcastStageType {
@@ -65,13 +65,13 @@ impl BroadcastStageType {
                 FailEntryVerificationBroadcastRun::new(),
             ),

-            BroadcastStageType::BroadcastFakeBlobs => BroadcastStage::new(
+            BroadcastStageType::BroadcastFakeShreds => BroadcastStage::new(
                 sock,
                 cluster_info,
                 receiver,
                 exit_sender,
                 blocktree,
-                BroadcastFakeBlobsRun::new(0),
+                BroadcastFakeShredsRun::new(0),
             ),
         }
     }
@@ -141,8 +141,8 @@ impl BroadcastStage {
     /// * `sock` - Socket to send from.
     /// * `exit` - Boolean to signal system exit.
     /// * `cluster_info` - ClusterInfo structure
-    /// * `window` - Cache of blobs that we have broadcast
+    /// * `window` - Cache of Shreds that we have broadcast
-    /// * `receiver` - Receive channel for blobs to be retransmitted to all the layer 1 nodes.
+    /// * `receiver` - Receive channel for Shreds to be retransmitted to all the layer 1 nodes.
     /// * `exit_sender` - Set to true when this service exits, allows rest of Tpu to exit cleanly.
     /// Otherwise, when a Tpu closes, it only closes the stages that come after it. The stages
     /// that come before could be blocked on a receive, and never notice that they need to
@@ -3,12 +3,12 @@ use solana_ledger::entry::Entry;
 use solana_ledger::shred::{Shredder, RECOMMENDED_FEC_RATE};
 use solana_sdk::hash::Hash;

-pub(super) struct BroadcastFakeBlobsRun {
+pub(super) struct BroadcastFakeShredsRun {
     last_blockhash: Hash,
     partition: usize,
 }

-impl BroadcastFakeBlobsRun {
+impl BroadcastFakeShredsRun {
     pub(super) fn new(partition: usize) -> Self {
         Self {
             last_blockhash: Hash::default(),
@@ -17,7 +17,7 @@ impl BroadcastFakeBlobsRun {
         }
     }
 }

-impl BroadcastRun for BroadcastFakeBlobsRun {
+impl BroadcastRun for BroadcastFakeShredsRun {
     fn run(
         &mut self,
         cluster_info: &Arc<RwLock<ClusterInfo>>,
@@ -82,7 +82,7 @@ impl BroadcastRun for BroadcastFakeBlobsRun {
         let peers = cluster_info.read().unwrap().tvu_peers();
         peers.iter().enumerate().for_each(|(i, peer)| {
             if i <= self.partition {
-                // Send fake blobs to the first N peers
+                // Send fake shreds to the first N peers
                 fake_data_shreds
                     .iter()
                     .chain(fake_coding_shreds.iter())
@@ -23,7 +23,7 @@ impl BroadcastRun for FailEntryVerificationBroadcastRun {
         let bank = receive_results.bank.clone();
         let last_tick_height = receive_results.last_tick_height;

-        // 2) Convert entries to blobs + generate coding blobs. Set a garbage PoH on the last entry
+        // 2) Convert entries to shreds + generate coding shreds. Set a garbage PoH on the last entry
         // in the slot to make verification fail on validators
         if last_tick_height == bank.max_tick_height() {
             let mut last_entry = receive_results.entries.last_mut().unwrap();
@@ -164,7 +164,7 @@ mod tests {
         let mut hasher = Hasher::default();
         hasher.hash(&buf[..size]);

-        // golden needs to be updated if blob stuff changes....
+        // golden needs to be updated if shred structure changes....
         let golden: Hash = "HLzH7Nrh4q2K5WTh3e9vPNFZ1QVYhVDRMN9u5v51GqpJ"
             .parse()
             .unwrap();
@@ -804,14 +804,14 @@ impl ClusterInfo {
         Ok(())
     }

-    pub fn window_index_request_bytes(&self, slot: Slot, blob_index: u64) -> Result<Vec<u8>> {
+    pub fn window_index_request_bytes(&self, slot: Slot, shred_index: u64) -> Result<Vec<u8>> {
-        let req = Protocol::RequestWindowIndex(self.my_data().clone(), slot, blob_index);
+        let req = Protocol::RequestWindowIndex(self.my_data().clone(), slot, shred_index);
         let out = serialize(&req)?;
         Ok(out)
     }

-    fn window_highest_index_request_bytes(&self, slot: Slot, blob_index: u64) -> Result<Vec<u8>> {
+    fn window_highest_index_request_bytes(&self, slot: Slot, shred_index: u64) -> Result<Vec<u8>> {
-        let req = Protocol::RequestHighestWindowIndex(self.my_data().clone(), slot, blob_index);
+        let req = Protocol::RequestHighestWindowIndex(self.my_data().clone(), slot, shred_index);
         let out = serialize(&req)?;
         Ok(out)
     }
@@ -837,21 +837,21 @@ impl ClusterInfo {
     }
     pub fn map_repair_request(&self, repair_request: &RepairType) -> Result<Vec<u8>> {
         match repair_request {
-            RepairType::Shred(slot, blob_index) => {
+            RepairType::Shred(slot, shred_index) => {
                 datapoint_debug!(
                     "cluster_info-repair",
                     ("repair-slot", *slot, i64),
-                    ("repair-ix", *blob_index, i64)
+                    ("repair-ix", *shred_index, i64)
                 );
-                Ok(self.window_index_request_bytes(*slot, *blob_index)?)
+                Ok(self.window_index_request_bytes(*slot, *shred_index)?)
             }
-            RepairType::HighestBlob(slot, blob_index) => {
+            RepairType::HighestShred(slot, shred_index) => {
                 datapoint_debug!(
                     "cluster_info-repair_highest",
                     ("repair-highest-slot", *slot, i64),
-                    ("repair-highest-ix", *blob_index, i64)
+                    ("repair-highest-ix", *shred_index, i64)
                 );
-                Ok(self.window_highest_index_request_bytes(*slot, *blob_index)?)
+                Ok(self.window_highest_index_request_bytes(*slot, *shred_index)?)
             }
             RepairType::Orphan(slot) => {
                 datapoint_debug!("cluster_info-repair_orphan", ("repair-orphan", *slot, i64));
@@ -1165,7 +1165,7 @@ impl ClusterInfo {
         packets: Packets,
         response_sender: &PacketSender,
     ) {
-        // iter over the blobs, collect pulls separately and process everything else
+        // iter over the packets, collect pulls separately and process everything else
         let mut gossip_pull_data: Vec<PullData> = vec![];
         packets.packets.iter().for_each(|packet| {
             let from_addr = packet.meta.addr();
@@ -1404,7 +1404,7 @@ impl ClusterInfo {

         let (res, label) = {
             match &request {
-                Protocol::RequestWindowIndex(from, slot, blob_index) => {
+                Protocol::RequestWindowIndex(from, slot, shred_index) => {
                     inc_new_counter_debug!("cluster_info-request-window-index", 1);
                     (
                         Self::run_window_request(
@@ -1413,7 +1413,7 @@ impl ClusterInfo {
                             blocktree,
                             &my_info,
                             *slot,
-                            *blob_index,
+                            *shred_index,
                         ),
                         "RequestWindowIndex",
                     )
@@ -1944,7 +1944,7 @@ mod tests {
         assert!(one && two);
     }

-    /// test window requests respond with the right blob, and do not overrun
+    /// test window requests respond with the right shred, and do not overrun
     #[test]
     fn run_window_request() {
         solana_logger::setup();
@@ -2009,7 +2009,7 @@ mod tests {
         Blocktree::destroy(&ledger_path).expect("Expected successful database destruction");
     }

-    /// test run_window_requestwindow requests respond with the right blob, and do not overrun
+    /// test run_window_requestwindow requests respond with the right shred, and do not overrun
     #[test]
     fn run_highest_window_request() {
         solana_logger::setup();
@@ -2061,7 +2061,7 @@ mod tests {
         let rv = ClusterInfo::run_orphan(&socketaddr_any!(), Some(&blocktree), 2, 0);
         assert!(rv.is_empty());

-        // Create slots 1, 2, 3 with 5 blobs apiece
+        // Create slots 1, 2, 3 with 5 shreds apiece
         let (shreds, _) = make_many_slot_entries(1, 3, 5);

         blocktree
@@ -2072,7 +2072,7 @@ mod tests {
         let rv = ClusterInfo::run_orphan(&socketaddr_any!(), Some(&blocktree), 4, 5);
         assert!(rv.is_empty());

-        // For slot 3, we should return the highest blobs from slots 3, 2, 1 respectively
+        // For slot 3, we should return the highest shreds from slots 3, 2, 1 respectively
         // for this request
         let rv: Vec<_> = ClusterInfo::run_orphan(&socketaddr_any!(), Some(&blocktree), 3, 5)
             .packets
@@ -29,47 +29,47 @@ pub const NUM_SLOTS_PER_UPDATE: usize = 2;
 pub const REPAIR_SAME_SLOT_THRESHOLD: u64 = 5000;
 use solana_sdk::timing::timestamp;

-// Represents the blobs that a repairman is responsible for repairing in specific slot. More
+// Represents the shreds that a repairman is responsible for repairing in specific slot. More
-// specifically, a repairman is responsible for every blob in this slot with index
+// specifically, a repairman is responsible for every shred in this slot with index
-// `(start_index + step_size * i) % num_blobs_in_slot`, for all `0 <= i <= num_blobs_to_send - 1`
+// `(start_index + step_size * i) % num_shreds_in_slot`, for all `0 <= i <= num_shreds_to_send - 1`
 // in this slot.
-struct BlobIndexesToRepairIterator {
+struct ShredIndexesToRepairIterator {
     start_index: usize,
-    num_blobs_to_send: usize,
+    num_shreds_to_send: usize,
     step_size: usize,
-    num_blobs_in_slot: usize,
+    num_shreds_in_slot: usize,
-    blobs_sent: usize,
+    shreds_sent: usize,
 }

-impl BlobIndexesToRepairIterator {
+impl ShredIndexesToRepairIterator {
     fn new(
         start_index: usize,
-        num_blobs_to_send: usize,
+        num_shreds_to_send: usize,
         step_size: usize,
-        num_blobs_in_slot: usize,
+        num_shreds_in_slot: usize,
     ) -> Self {
         Self {
             start_index,
-            num_blobs_to_send,
+            num_shreds_to_send,
             step_size,
-            num_blobs_in_slot,
+            num_shreds_in_slot,
-            blobs_sent: 0,
+            shreds_sent: 0,
         }
     }
 }

-impl Iterator for BlobIndexesToRepairIterator {
+impl Iterator for ShredIndexesToRepairIterator {
     type Item = usize;

     fn next(&mut self) -> Option<Self::Item> {
-        if self.blobs_sent == self.num_blobs_to_send {
+        if self.shreds_sent == self.num_shreds_to_send {
             None
         } else {
-            let blob_index = Some(
+            let shred_index = Some(
-                (self.start_index + self.step_size * self.blobs_sent) % self.num_blobs_in_slot,
+                (self.start_index + self.step_size * self.shreds_sent) % self.num_shreds_in_slot,
             );
-            self.blobs_sent += 1;
+            self.shreds_sent += 1;
-            blob_index
+            shred_index
         }
     }
 }
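To make the stride concrete: each repairman starts at its own offset and steps by the number of repairmen assigned to the slot, so together they cover every shred index. A minimal sketch with assumed numbers (4 repairmen, a 10-shred slot, 3 shreds sent each), separate from the iterator above:

```rust
// Illustrative only: reproduces the `(start_index + step_size * i) % num_shreds_in_slot`
// formula from the comment above with small, made-up numbers.
fn repair_indexes(start: usize, num_to_send: usize, step: usize, num_in_slot: usize) -> Vec<usize> {
    (0..num_to_send).map(|i| (start + step * i) % num_in_slot).collect()
}

fn main() {
    let num_shreds_in_slot = 10;
    let step_size = 4; // repairmen assigned to this slot
    for start_index in 0..step_size {
        // repairman 0 -> [0, 4, 8], repairman 1 -> [1, 5, 9], ...
        println!(
            "repairman {}: {:?}",
            start_index,
            repair_indexes(start_index, 3, step_size, num_shreds_in_slot)
        );
    }
}
```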
@@ -295,8 +295,8 @@ impl ClusterInfoRepairListener {

         let mut slot_iter = slot_iter?;

-        let mut total_data_blobs_sent = 0;
+        let mut total_data_shreds_sent = 0;
-        let mut total_coding_blobs_sent = 0;
+        let mut total_coding_shreds_sent = 0;
         let mut num_slots_repaired = 0;
         let max_confirmed_repairee_epoch =
             epoch_schedule.get_leader_schedule_epoch(repairee_epoch_slots.root);
@@ -318,23 +318,23 @@ impl ClusterInfoRepairListener {
                 break;
             }
             if !repairee_epoch_slots.slots.contains(&slot) {
-                // Calculate the blob indexes this node is responsible for repairing. Note that
+                // Calculate the shred indexes this node is responsible for repairing. Note that
                 // because we are only repairing slots that are before our root, the slot.received
-                // should be equal to the actual total number of blobs in the slot. Optimistically
+                // should be equal to the actual total number of shreds in the slot. Optimistically
-                // this means that most repairmen should observe the same "total" number of blobs
+                // this means that most repairmen should observe the same "total" number of shreds
                 // for a particular slot, and thus the calculation in
                 // calculate_my_repairman_index_for_slot() will divide responsibility evenly across
                 // the cluster
-                let num_blobs_in_slot = slot_meta.received as usize;
+                let num_shreds_in_slot = slot_meta.received as usize;

                 // Check if I'm responsible for repairing this slots
                 if let Some(my_repair_indexes) = Self::calculate_my_repairman_index_for_slot(
                     my_pubkey,
                     &eligible_repairmen,
-                    num_blobs_in_slot,
+                    num_shreds_in_slot,
                     REPAIR_REDUNDANCY,
                 ) {
-                    // If I've already sent blobs >= this slot before, then don't send them again
+                    // If I've already sent shreds >= this slot before, then don't send them again
                     // until the timeout has expired
                     if slot > last_repaired_slot
                         || timestamp() - last_repaired_ts > REPAIR_SAME_SLOT_THRESHOLD
@@ -343,27 +343,27 @@ impl ClusterInfoRepairListener {
                             "Serving repair for slot {} to {}. Repairee slots: {:?}",
                             slot, repairee_pubkey, repairee_epoch_slots.slots
                         );
-                        // Repairee is missing this slot, send them the blobs for this slot
+                        // Repairee is missing this slot, send them the shreds for this slot
-                        for blob_index in my_repair_indexes {
+                        for shred_index in my_repair_indexes {
-                            // Loop over the blob indexes and query the database for these blob that
+                            // Loop over the shred indexes and query the database for these shred that
                             // this node is reponsible for repairing. This should be faster than using
                             // a database iterator over the slots because by the time this node is
-                            // sending the blobs in this slot for repair, we expect these slots
+                            // sending the shreds in this slot for repair, we expect these slots
                             // to be full.
-                            if let Some(blob_data) = blocktree
+                            if let Some(shred_data) = blocktree
-                                .get_data_shred(slot, blob_index as u64)
+                                .get_data_shred(slot, shred_index as u64)
-                                .expect("Failed to read data blob from blocktree")
+                                .expect("Failed to read data shred from blocktree")
                             {
-                                socket.send_to(&blob_data[..], repairee_addr)?;
+                                socket.send_to(&shred_data[..], repairee_addr)?;
-                                total_data_blobs_sent += 1;
+                                total_data_shreds_sent += 1;
                             }

                             if let Some(coding_bytes) = blocktree
-                                .get_coding_shred(slot, blob_index as u64)
+                                .get_coding_shred(slot, shred_index as u64)
-                                .expect("Failed to read coding blob from blocktree")
+                                .expect("Failed to read coding shred from blocktree")
                             {
                                 socket.send_to(&coding_bytes[..], repairee_addr)?;
-                                total_coding_blobs_sent += 1;
+                                total_coding_shreds_sent += 1;
                             }
                         }

@@ -371,11 +371,11 @@ impl ClusterInfoRepairListener {
                         Self::report_repair_metrics(
                             slot,
                             repairee_pubkey,
-                            total_data_blobs_sent,
+                            total_data_shreds_sent,
-                            total_coding_blobs_sent,
+                            total_coding_shreds_sent,
                         );
-                        total_data_blobs_sent = 0;
+                        total_data_shreds_sent = 0;
-                        total_coding_blobs_sent = 0;
+                        total_coding_shreds_sent = 0;
                     }
                     num_slots_repaired += 1;
                 }
@@ -388,16 +388,16 @@ impl ClusterInfoRepairListener {
     fn report_repair_metrics(
         slot: u64,
         repairee_id: &Pubkey,
-        total_data_blobs_sent: u64,
+        total_data_shreds_sent: u64,
-        total_coding_blobs_sent: u64,
+        total_coding_shreds_sent: u64,
     ) {
-        if total_data_blobs_sent > 0 || total_coding_blobs_sent > 0 {
+        if total_data_shreds_sent > 0 || total_coding_shreds_sent > 0 {
             datapoint!(
                 "repairman_activity",
                 ("slot", slot, i64),
                 ("repairee_id", repairee_id.to_string(), String),
-                ("data_sent", total_data_blobs_sent, i64),
+                ("data_sent", total_data_shreds_sent, i64),
-                ("coding_sent", total_coding_blobs_sent, i64)
+                ("coding_sent", total_coding_shreds_sent, i64)
             );
         }
     }
@@ -418,21 +418,21 @@ impl ClusterInfoRepairListener {
             eligible_repairmen.shuffle(&mut rng);
         }

-    // The calculation should partition the blobs in the slot across the repairmen in the cluster
+    // The calculation should partition the shreds in the slot across the repairmen in the cluster
-    // such that each blob in the slot is the responsibility of `repair_redundancy` or
+    // such that each shred in the slot is the responsibility of `repair_redundancy` or
     // `repair_redundancy + 1` number of repairmen in the cluster.
     fn calculate_my_repairman_index_for_slot(
         my_pubkey: &Pubkey,
         eligible_repairmen: &[&Pubkey],
-        num_blobs_in_slot: usize,
+        num_shreds_in_slot: usize,
         repair_redundancy: usize,
-    ) -> Option<BlobIndexesToRepairIterator> {
+    ) -> Option<ShredIndexesToRepairIterator> {
-        let total_blobs = num_blobs_in_slot * repair_redundancy;
+        let total_shreds = num_shreds_in_slot * repair_redundancy;
-        let total_repairmen_for_slot = cmp::min(total_blobs, eligible_repairmen.len());
+        let total_repairmen_for_slot = cmp::min(total_shreds, eligible_repairmen.len());

-        let blobs_per_repairman = cmp::min(
+        let shreds_per_repairman = cmp::min(
-            (total_blobs + total_repairmen_for_slot - 1) / total_repairmen_for_slot,
+            (total_shreds + total_repairmen_for_slot - 1) / total_repairmen_for_slot,
-            num_blobs_in_slot,
+            num_shreds_in_slot,
         );

         // Calculate the indexes this node is responsible for
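A worked example of the partition arithmetic above, using the same numbers as the first test case further down (10 repairmen, 42 shreds, redundancy 3); the figures are illustrative only:

```rust
fn main() {
    let num_shreds_in_slot: usize = 42;
    let repair_redundancy: usize = 3;
    let num_repairmen: usize = 10;

    let total_shreds = num_shreds_in_slot * repair_redundancy;      // 126 sends required
    let total_repairmen_for_slot = total_shreds.min(num_repairmen); // 10
    // Ceiling division, capped at the slot size.
    let shreds_per_repairman = ((total_shreds + total_repairmen_for_slot - 1)
        / total_repairmen_for_slot)
        .min(num_shreds_in_slot);                                   // 13

    // 10 repairmen * 13 shreds = 130 sends spread over 42 shreds, so every shred
    // is covered 3 or 4 times -- `repair_redundancy` or `repair_redundancy + 1`.
    println!("{} shreds per repairman", shreds_per_repairman);
}
```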
@@ -440,15 +440,15 @@ impl ClusterInfoRepairListener {
             .iter()
             .position(|id| *id == my_pubkey)
         {
-            let start_index = my_position % num_blobs_in_slot;
+            let start_index = my_position % num_shreds_in_slot;
-            Some(BlobIndexesToRepairIterator::new(
+            Some(ShredIndexesToRepairIterator::new(
                 start_index,
-                blobs_per_repairman,
+                shreds_per_repairman,
                 total_repairmen_for_slot,
-                num_blobs_in_slot,
+                num_shreds_in_slot,
             ))
         } else {
-            // If there are more repairmen than `total_blobs`, then some repairmen
+            // If there are more repairmen than `total_shreds`, then some repairmen
             // will not have any responsibility to repair this slot
             None
         }
@@ -797,7 +797,7 @@ mod tests {
         let mut received_shreds: Vec<Packets> = vec![];

         // This repairee was missing exactly `num_slots / 2` slots, so we expect to get
-        // `(num_slots / 2) * num_shreds_per_slot * REPAIR_REDUNDANCY` blobs.
+        // `(num_slots / 2) * num_shreds_per_slot * REPAIR_REDUNDANCY` shreds.
         let num_expected_shreds = (num_slots / 2) * num_shreds_per_slot * REPAIR_REDUNDANCY as u64;
         while (received_shreds
             .iter()
@@ -833,7 +833,7 @@ mod tests {
         let slots_per_epoch = stakers_slot_offset * 2;
         let epoch_schedule = EpochSchedule::custom(slots_per_epoch, stakers_slot_offset, false);

-        // Create blobs for first two epochs and write them to blocktree
+        // Create shreds for first two epochs and write them to blocktree
         let total_slots = slots_per_epoch * 2;
         let (shreds, _) = make_many_slot_entries(0, total_slots, 1);
         blocktree.insert_shreds(shreds, None, false).unwrap();
@@ -853,7 +853,7 @@ mod tests {
         // 1) They are missing all of the second epoch, but have all of the first epoch.
         // 2) The root only confirms epoch 1, so the leader for epoch 2 is unconfirmed.
         //
-        // Thus, no repairmen should send any blobs to this repairee b/c this repairee
+        // Thus, no repairmen should send any shreds to this repairee b/c this repairee
         // already has all the slots for which they have a confirmed leader schedule
         let repairee_root = 0;
         let repairee_slots: BTreeSet<_> = (0..=slots_per_epoch).collect();
@@ -898,7 +898,7 @@ mod tests {
             )
             .unwrap();

-        // Make sure some blobs get sent this time
+        // Make sure some shreds get sent this time
         sleep(Duration::from_millis(1000));
         assert!(mock_repairee.receiver.try_recv().is_ok());
@@ -932,79 +932,79 @@ mod tests {

     #[test]
     fn test_calculate_my_repairman_index_for_slot() {
-        // Test when the number of blobs in the slot > number of repairmen
+        // Test when the number of shreds in the slot > number of repairmen
         let num_repairmen = 10;
-        let num_blobs_in_slot = 42;
+        let num_shreds_in_slot = 42;
         let repair_redundancy = 3;

         run_calculate_my_repairman_index_for_slot(
             num_repairmen,
-            num_blobs_in_slot,
+            num_shreds_in_slot,
             repair_redundancy,
         );

-        // Test when num_blobs_in_slot is a multiple of num_repairmen
+        // Test when num_shreds_in_slot is a multiple of num_repairmen
         let num_repairmen = 12;
-        let num_blobs_in_slot = 48;
+        let num_shreds_in_slot = 48;
         let repair_redundancy = 3;

         run_calculate_my_repairman_index_for_slot(
             num_repairmen,
-            num_blobs_in_slot,
+            num_shreds_in_slot,
             repair_redundancy,
         );

-        // Test when num_repairmen and num_blobs_in_slot are relatively prime
+        // Test when num_repairmen and num_shreds_in_slot are relatively prime
         let num_repairmen = 12;
-        let num_blobs_in_slot = 47;
+        let num_shreds_in_slot = 47;
         let repair_redundancy = 12;

         run_calculate_my_repairman_index_for_slot(
             num_repairmen,
-            num_blobs_in_slot,
+            num_shreds_in_slot,
             repair_redundancy,
         );

         // Test 1 repairman
         let num_repairmen = 1;
-        let num_blobs_in_slot = 30;
+        let num_shreds_in_slot = 30;
         let repair_redundancy = 3;

         run_calculate_my_repairman_index_for_slot(
             num_repairmen,
-            num_blobs_in_slot,
+            num_shreds_in_slot,
             repair_redundancy,
         );

-        // Test when repair_redundancy is 1, and num_blobs_in_slot does not evenly
+        // Test when repair_redundancy is 1, and num_shreds_in_slot does not evenly
         // divide num_repairmen
         let num_repairmen = 12;
-        let num_blobs_in_slot = 47;
+        let num_shreds_in_slot = 47;
         let repair_redundancy = 1;

         run_calculate_my_repairman_index_for_slot(
             num_repairmen,
-            num_blobs_in_slot,
+            num_shreds_in_slot,
             repair_redundancy,
         );

-        // Test when the number of blobs in the slot <= number of repairmen
+        // Test when the number of shreds in the slot <= number of repairmen
         let num_repairmen = 10;
-        let num_blobs_in_slot = 10;
+        let num_shreds_in_slot = 10;
         let repair_redundancy = 3;
         run_calculate_my_repairman_index_for_slot(
             num_repairmen,
-            num_blobs_in_slot,
+            num_shreds_in_slot,
             repair_redundancy,
         );

-        // Test when there are more repairmen than repair_redundancy * num_blobs_in_slot
+        // Test when there are more repairmen than repair_redundancy * num_shreds_in_slot
         let num_repairmen = 42;
-        let num_blobs_in_slot = 10;
+        let num_shreds_in_slot = 10;
         let repair_redundancy = 3;
         run_calculate_my_repairman_index_for_slot(
             num_repairmen,
-            num_blobs_in_slot,
+            num_shreds_in_slot,
             repair_redundancy,
         );
     }
@@ -1059,7 +1059,7 @@ mod tests {

     fn run_calculate_my_repairman_index_for_slot(
         num_repairmen: usize,
-        num_blobs_in_slot: usize,
+        num_shreds_in_slot: usize,
         repair_redundancy: usize,
     ) {
         let eligible_repairmen: Vec<_> = (0..num_repairmen).map(|_| Pubkey::new_rand()).collect();
@@ -1071,13 +1071,13 @@ mod tests {
             ClusterInfoRepairListener::calculate_my_repairman_index_for_slot(
                 pk,
                 &eligible_repairmen_ref[..],
-                num_blobs_in_slot,
+                num_shreds_in_slot,
                 repair_redundancy,
             )
         {
-            for blob_index in my_repair_indexes {
+            for shred_index in my_repair_indexes {
                 results
-                    .entry(blob_index)
+                    .entry(shred_index)
                     .and_modify(|e| *e += 1)
                     .or_insert(1);
             }
@@ -1089,7 +1089,7 @@ mod tests {

         // Analyze the results:

-        // 1) If there are a sufficient number of repairmen, then each blob should be sent
+        // 1) If there are a sufficient number of repairmen, then each shred should be sent
         // `repair_redundancy` OR `repair_redundancy + 1` times.
         let num_expected_redundancy = cmp::min(num_repairmen, repair_redundancy);
         for b in results.keys() {
@@ -1098,18 +1098,18 @@ mod tests {
             );
         }

-        // 2) The number of times each blob is sent should be evenly distributed
+        // 2) The number of times each shred is sent should be evenly distributed
-        let max_times_blob_sent = results.values().min_by(|x, y| x.cmp(y)).unwrap();
+        let max_times_shred_sent = results.values().min_by(|x, y| x.cmp(y)).unwrap();
-        let min_times_blob_sent = results.values().max_by(|x, y| x.cmp(y)).unwrap();
+        let min_times_shred_sent = results.values().max_by(|x, y| x.cmp(y)).unwrap();
-        assert!(*max_times_blob_sent <= *min_times_blob_sent + 1);
+        assert!(*max_times_shred_sent <= *min_times_shred_sent + 1);

         // 3) There should only be repairmen who are not responsible for repairing this slot
-        // if we have more repairman than `num_blobs_in_slot * repair_redundancy`. In this case the
+        // if we have more repairman than `num_shreds_in_slot * repair_redundancy`. In this case the
-        // first `num_blobs_in_slot * repair_redundancy` repairmen would send one blob, and the rest
+        // first `num_shreds_in_slot * repair_redundancy` repairmen would send one shred, and the rest
         // would not be responsible for sending any repairs
         assert_eq!(
             none_results,
-            num_repairmen.saturating_sub(num_blobs_in_slot * repair_redundancy)
+            num_repairmen.saturating_sub(num_shreds_in_slot * repair_redundancy)
         );
     }
 }
@@ -1,7 +1,7 @@
 //! Crds Gossip
 //! This module ties together Crds and the push and pull gossip overlays. The interface is
 //! designed to run with a simulator or over a UDP network connection with messages up to a
-//! packet::BLOB_DATA_SIZE size.
+//! packet::PACKET_DATA_SIZE size.

 use crate::crds::{Crds, VersionedCrdsValue};
 use crate::crds_gossip_error::CrdsGossipError;
@@ -1,5 +1,5 @@
 //! The `repair_service` module implements the tools necessary to generate a thread which
-//! regularly finds missing blobs in the ledger and sends repair requests for those blobs
+//! regularly finds missing shreds in the ledger and sends repair requests for those shreds
 use crate::{
     cluster_info::ClusterInfo, cluster_info_repair_listener::ClusterInfoRepairListener,
     result::Result,
@@ -36,7 +36,7 @@ pub enum RepairStrategy {
 #[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq)]
 pub enum RepairType {
     Orphan(u64),
-    HighestBlob(u64, u64),
+    HighestShred(u64, u64),
     Shred(u64, u64),
 }
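For orientation, the repair logic further down (`generate_repairs_for_slot`) maps the state of a slot to these variants. A simplified, self-contained sketch of that mapping, with a stand-in for the blocktree's SlotMeta:

```rust
#[derive(Debug, PartialEq)]
enum RepairType {
    Orphan(u64),
    HighestShred(u64, u64),
    Shred(u64, u64),
}

// Stand-in for the real SlotMeta; only the fields used here.
struct SlotMeta {
    consumed: u64, // contiguous shreds held starting from index 0
    received: u64, // highest shred index seen so far
    is_full: bool,
}

fn repairs_for_slot(slot: u64, meta: &SlotMeta, missing: &[u64]) -> Vec<RepairType> {
    if meta.is_full {
        vec![] // nothing to repair
    } else if meta.consumed == meta.received {
        // No holes yet: ask for whatever comes after the highest shred we hold.
        vec![RepairType::HighestShred(slot, meta.received)]
    } else {
        // Ask for the specific missing indexes (the real code finds them with
        // blocktree.find_missing_data_indexes; here they are passed in).
        missing.iter().map(|i| RepairType::Shred(slot, *i)).collect()
    }
}

fn main() {
    let meta = SlotMeta { consumed: 3, received: 7, is_full: false };
    println!("{:?}", repairs_for_slot(5, &meta, &[3, 4, 6]));
}
```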
@@ -187,7 +187,7 @@ impl RepairService {
         max_repairs: usize,
         repair_range: &RepairSlotRange,
     ) -> Result<(Vec<RepairType>)> {
-        // Slot height and blob indexes for blobs we want to repair
+        // Slot height and shred indexes for shreds we want to repair
         let mut repairs: Vec<RepairType> = vec![];
         for slot in repair_range.start..=repair_range.end {
             if repairs.len() >= max_repairs {
@@ -219,7 +219,7 @@ impl RepairService {
         root: u64,
         max_repairs: usize,
     ) -> Result<(Vec<RepairType>)> {
-        // Slot height and blob indexes for blobs we want to repair
+        // Slot height and shred indexes for shreds we want to repair
         let mut repairs: Vec<RepairType> = vec![];
         Self::generate_repairs_for_fork(blocktree, &mut repairs, max_repairs, root);

@@ -242,7 +242,7 @@ impl RepairService {
         if slot_meta.is_full() {
             vec![]
         } else if slot_meta.consumed == slot_meta.received {
-            vec![RepairType::HighestBlob(slot, slot_meta.received)]
+            vec![RepairType::HighestShred(slot, slot_meta.received)]
         } else {
             let reqs = blocktree.find_missing_data_indexes(
                 slot,
@@ -322,7 +322,7 @@ impl RepairService {

         // Safe to set into gossip because by this time, the leader schedule cache should
         // also be updated with the latest root (done in blocktree_processor) and thus
-        // will provide a schedule to window_service for any incoming blobs up to the
+        // will provide a schedule to window_service for any incoming shreds up to the
         // last_confirmed_epoch.
         cluster_info
             .write()
@@ -414,7 +414,7 @@ mod test {
         blocktree.insert_shreds(shreds, None, false).unwrap();
         assert_eq!(
             RepairService::generate_repairs(&blocktree, 0, 2).unwrap(),
-            vec![RepairType::HighestBlob(0, 0), RepairType::Orphan(2)]
+            vec![RepairType::HighestShred(0, 0), RepairType::Orphan(2)]
         );
     }
@@ -429,14 +429,14 @@ mod test {

         let (shreds, _) = make_slot_entries(2, 0, 1);

-        // Write this blob to slot 2, should chain to slot 0, which we haven't received
+        // Write this shred to slot 2, should chain to slot 0, which we haven't received
-        // any blobs for
+        // any shreds for
         blocktree.insert_shreds(shreds, None, false).unwrap();

         // Check that repair tries to patch the empty slot
         assert_eq!(
             RepairService::generate_repairs(&blocktree, 0, 2).unwrap(),
-            vec![RepairType::HighestBlob(0, 0)]
+            vec![RepairType::HighestShred(0, 0)]
         );
     }
     Blocktree::destroy(&blocktree_path).expect("Expected successful database destruction");
@@ -451,12 +451,12 @@ mod test {
         let nth = 3;
         let num_slots = 2;

-        // Create some blobs
+        // Create some shreds
         let (mut shreds, _) = make_many_slot_entries(0, num_slots as u64, 150 as u64);
         let num_shreds = shreds.len() as u64;
         let num_shreds_per_slot = num_shreds / num_slots;

-        // write every nth blob
+        // write every nth shred
         let mut shreds_to_write = vec![];
         let mut missing_indexes_per_slot = vec![];
         for i in (0..num_shreds).rev() {
@@ -476,7 +476,7 @@ mod test {
             .flat_map(|slot| {
                 missing_indexes_per_slot
                     .iter()
-                    .map(move |blob_index| RepairType::Shred(slot as u64, *blob_index))
+                    .map(move |shred_index| RepairType::Shred(slot as u64, *shred_index))
             })
             .collect();

@@ -501,7 +501,7 @@ mod test {

         let num_entries_per_slot = 100;

-        // Create some blobs
+        // Create some shreds
         let (mut shreds, _) = make_slot_entries(0, 0, num_entries_per_slot as u64);
         let num_shreds_per_slot = shreds.len() as u64;

@@ -510,9 +510,9 @@ mod test {

         blocktree.insert_shreds(shreds, None, false).unwrap();

-        // We didn't get the last blob for this slot, so ask for the highest blob for that slot
+        // We didn't get the last shred for this slot, so ask for the highest shred for that slot
         let expected: Vec<RepairType> =
-            vec![RepairType::HighestBlob(0, num_shreds_per_slot - 1)];
+            vec![RepairType::HighestShred(0, num_shreds_per_slot - 1)];

         assert_eq!(
             RepairService::generate_repairs(&blocktree, 0, std::usize::MAX).unwrap(),
@@ -551,7 +551,7 @@ mod test {
                 if slots.contains(&(slot_index as u64)) {
                     RepairType::Shred(slot_index as u64, 0)
                 } else {
-                    RepairType::HighestBlob(slot_index as u64, 0)
+                    RepairType::HighestShred(slot_index as u64, 0)
                 }
             })
             .collect();
@@ -582,7 +582,7 @@ mod test {
         let num_slots = 1;
         let start = 5;

-        // Create some blobs in slots 0..num_slots
+        // Create some shreds in slots 0..num_slots
         for i in start..start + num_slots {
             let parent = if i > 0 { i - 1 } else { 0 };
             let (shreds, _) = make_slot_entries(i, parent, num_entries_per_slot as u64);
@@ -592,9 +592,9 @@ mod test {

         let end = 4;
         let expected: Vec<RepairType> = vec![
-            RepairType::HighestBlob(end - 2, 0),
+            RepairType::HighestShred(end - 2, 0),
-            RepairType::HighestBlob(end - 1, 0),
+            RepairType::HighestShred(end - 1, 0),
-            RepairType::HighestBlob(end, 0),
+            RepairType::HighestShred(end, 0),
         ];

         let mut repair_slot_range = RepairSlotRange::default();
@@ -630,7 +630,7 @@ mod test {
         let fork2 = vec![8, 12];
         let fork2_shreds = make_chaining_slot_entries(&fork2, num_entries_per_slot);

-        // Remove the last blob from each slot to make an incomplete slot
+        // Remove the last shred from each slot to make an incomplete slot
         let fork2_incomplete_shreds: Vec<_> = fork2_shreds
             .into_iter()
             .flat_map(|(mut shreds, _)| {
@@ -511,7 +511,7 @@ impl ReplayStage {
let rooted_slots: Vec<_> = rooted_banks.iter().map(|bank| bank.slot()).collect();
// Call leader schedule_cache.set_root() before blocktree.set_root() because
// bank_forks.root is consumed by repair_service to update gossip, so we don't want to
// get blobs for repair on gossip before we update leader schedule, otherwise they may
// get shreds for repair on gossip before we update leader schedule, otherwise they may
// get dropped.
leader_schedule_cache.set_root(rooted_banks.last().unwrap());
blocktree

@@ -971,7 +971,7 @@ mod test {
let mut bank_forks = BankForks::new(0, bank0);
bank_forks.working_bank().freeze();

// Insert blob for slot 1, generate new forks, check result
// Insert shred for slot 1, generate new forks, check result
let (shreds, _) = make_slot_entries(1, 0, 8);
blocktree.insert_shreds(shreds, None, false).unwrap();
assert!(bank_forks.get(1).is_none());

@@ -982,7 +982,7 @@ mod test {
);
assert!(bank_forks.get(1).is_some());

// Insert blob for slot 3, generate new forks, check result
// Insert shred for slot 3, generate new forks, check result
let (shreds, _) = make_slot_entries(2, 0, 8);
blocktree.insert_shreds(shreds, None, false).unwrap();
assert!(bank_forks.get(2).is_none());

@@ -1208,7 +1208,7 @@ mod test {
);
}

// Given a blob and a fatal expected error, check that replaying that blob causes causes the fork to be
// Given a shred and a fatal expected error, check that replaying that shred causes the fork to be
// marked as dead. Returns the error for caller to verify.
fn check_dead_fork<F>(shred_to_insert: F) -> Result<()>
where
@@ -29,7 +29,6 @@ pub enum Error {
BlockError(block_error::BlockError),
BlocktreeError(blocktree::BlocktreeError),
FsExtra(fs_extra::error::Error),
ToBlobError,
SnapshotError(snapshot_utils::SnapshotError),
}
@@ -1,4 +1,4 @@
//! The `retransmit_stage` retransmits blobs between validators
//! The `retransmit_stage` retransmits shreds between validators

use crate::{
cluster_info::{compute_retransmit_peers, ClusterInfo, DATA_PLANE_FANOUT},

@@ -143,11 +143,11 @@ fn retransmit(
/// Service to retransmit messages from the leader or layer 1 to relevant peer nodes.
/// See `cluster_info` for network layer definitions.
/// # Arguments
/// * `sock` - Socket to read from. Read timeout is set to 1.
/// * `sockets` - Sockets to read from.
/// * `exit` - Boolean to signal system exit.
/// * `bank_forks` - The BankForks structure
/// * `leader_schedule_cache` - The leader schedule to verify shreds
/// * `cluster_info` - This structure needs to be updated and populated by the bank and via gossip.
/// * `recycler` - Blob recycler.
/// * `r` - Receive channel for shreds to be retransmitted to all the layer 1 nodes.
/// * `r` - Receive channel for blobs to be retransmitted to all the layer 1 nodes.
pub fn retransmitter(
sockets: Arc<Vec<UdpSocket>>,
bank_forks: Arc<RwLock<BankForks>>,
@@ -269,7 +269,7 @@ impl Validator {
assert_eq!(
blocktree.new_shreds_signals.len(),
1,
"New blob signal for the TVU should be the same as the clear bank signal."
"New shred signal for the TVU should be the same as the clear bank signal."
);

let ip_echo_server = solana_net_utils::ip_echo_server(node.sockets.ip_echo.unwrap());
@@ -1,4 +1,4 @@
//! `window_service` handles the data plane incoming blobs, storing them in
//! `window_service` handles the data plane incoming shreds, storing them in
//! blocktree and retransmitting where required
//!
use crate::cluster_info::ClusterInfo;

@@ -34,8 +34,8 @@ fn verify_shred_slot(shred: &Shred, root: u64) -> bool {
}
}

/// drop blobs that are from myself or not from the correct leader for the
/// drop shreds that are from myself or not from the correct leader for the
/// blob's slot
/// shred's slot
pub fn should_retransmit_and_persist(
shred: &Shred,
bank: Option<Arc<Bank>>,
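Aside: the doc comment above describes the filter applied to incoming shreds before they are persisted and retransmitted. A minimal standalone sketch of that rule, with invented types and parameters (not the actual solana-core signature), might look like:

// Hedged sketch of the filter described above; `Pubkey`, `expected_leader`, and the
// parameters are assumptions for illustration only, not the real solana-core API.
type Pubkey = [u8; 32];

fn should_keep_shred(
    shred_slot: u64,
    expected_leader: Option<Pubkey>, // leader for the shred's slot, if known
    my_pubkey: Pubkey,
    root: u64,
) -> bool {
    match expected_leader {
        None => false,                                // no idea who the leader is: drop the shred
        Some(leader) if leader == my_pubkey => false, // the shred is "from myself": drop it
        Some(_) => shred_slot > root,                 // otherwise keep only slots newer than the root
    }
}

The tests in the next hunks exercise exactly these branches (unknown leader, slot at or below the root, and a shred that came back from the local node).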
@@ -347,7 +347,7 @@ mod test {

let mut shreds = local_entries_to_shred(&[Entry::default()], 0, 0, &leader_keypair);

// with a Bank for slot 0, blob continues
// with a Bank for slot 0, shred continues
assert_eq!(
should_retransmit_and_persist(&shreds[0], Some(bank.clone()), &cache, &me_id, 0,),
true

@@ -371,14 +371,14 @@ mod test {
false
);

// with a Bank and no idea who leader is, blob gets thrown out
// with a Bank and no idea who leader is, shred gets thrown out
shreds[0].set_slot(MINIMUM_SLOTS_PER_EPOCH as u64 * 3);
assert_eq!(
should_retransmit_and_persist(&shreds[0], Some(bank.clone()), &cache, &me_id, 0),
false
);

// with a shred where shred.slot() == root, blob gets thrown out
// with a shred where shred.slot() == root, shred gets thrown out
let slot = MINIMUM_SLOTS_PER_EPOCH as u64 * 3;
let shreds = local_entries_to_shred(&[Entry::default()], slot, slot - 1, &leader_keypair);
assert_eq!(

@@ -386,7 +386,7 @@ mod test {
false
);

// with a shred where shred.parent() < root, blob gets thrown out
// with a shred where shred.parent() < root, shred gets thrown out
let slot = MINIMUM_SLOTS_PER_EPOCH as u64 * 3;
let shreds =
local_entries_to_shred(&[Entry::default()], slot + 1, slot - 1, &leader_keypair);

@@ -395,7 +395,7 @@ mod test {
false
);

// if the blob came back from me, it doesn't continue, whether or not I have a bank
// if the shred came back from me, it doesn't continue, whether or not I have a bank
assert_eq!(
should_retransmit_and_persist(&shreds[0], None, &cache, &me_id, 0),
false
@@ -19,10 +19,10 @@ fn num_threads() -> usize {
}

/// Search for a node with the given balance
fn find_insert_blob(id: &Pubkey, blob: i32, batches: &mut [Nodes]) {
fn find_insert_shred(id: &Pubkey, shred: i32, batches: &mut [Nodes]) {
batches.par_iter_mut().for_each(|batch| {
if batch.contains_key(id) {
let _ = batch.get_mut(id).unwrap().1.insert(blob);
let _ = batch.get_mut(id).unwrap().1.insert(shred);
}
});
}

@@ -32,7 +32,7 @@ fn retransmit(
senders: &HashMap<Pubkey, Sender<(i32, bool)>>,
cluster: &ClusterInfo,
fanout: usize,
blob: i32,
shred: i32,
retransmit: bool,
) -> i32 {
let mut seed = [0; 32];

@@ -47,22 +47,22 @@ fn retransmit(
true
}
});
seed[0..4].copy_from_slice(&blob.to_le_bytes());
seed[0..4].copy_from_slice(&shred.to_le_bytes());
let shuffled_indices = (0..shuffled_nodes.len()).collect();
let (neighbors, children) = compute_retransmit_peers(fanout, my_index, shuffled_indices);
children.into_iter().for_each(|i| {
let s = senders.get(&shuffled_nodes[i].id).unwrap();
let _ = s.send((blob, retransmit));
let _ = s.send((shred, retransmit));
});

if retransmit {
neighbors.into_iter().for_each(|i| {
let s = senders.get(&shuffled_nodes[i].id).unwrap();
let _ = s.send((blob, false));
let _ = s.send((shred, false));
});
}

blob
shred
}

fn run_simulation(stakes: &[u64], fanout: usize) {

@@ -107,8 +107,8 @@ fn run_simulation(stakes: &[u64], fanout: usize) {
});
let c_info = cluster_info.clone();

let blobs_len = 100;
let shreds_len = 100;
let shuffled_peers: Vec<Vec<ContactInfo>> = (0..blobs_len as i32)
let shuffled_peers: Vec<Vec<ContactInfo>> = (0..shreds_len as i32)
.map(|i| {
let mut seed = [0; 32];
seed[0..4].copy_from_slice(&i.to_le_bytes());

@@ -128,10 +128,10 @@ fn run_simulation(stakes: &[u64], fanout: usize) {
})
.collect();

// create some "blobs".
// create some "shreds".
(0..blobs_len).into_iter().for_each(|i| {
(0..shreds_len).into_iter().for_each(|i| {
let broadcast_table = &shuffled_peers[i];
find_insert_blob(&broadcast_table[0].id, i as i32, &mut batches);
find_insert_shred(&broadcast_table[0].id, i as i32, &mut batches);
});

assert!(!batches.is_empty());

@@ -165,7 +165,7 @@ fn run_simulation(stakes: &[u64], fanout: usize) {
}

//send and recv
if recv.len() < blobs_len {
if recv.len() < shreds_len {
loop {
match r.try_recv() {
Ok((data, retx)) => {

@@ -179,7 +179,7 @@ fn run_simulation(stakes: &[u64], fanout: usize) {
retx,
);
}
if recv.len() == blobs_len {
if recv.len() == shreds_len {
remaining -= 1;
break;
}
@@ -790,7 +790,7 @@ impl Blocktree {
);
return false;
}
// Check that we do not receive a blob with "last_index" true, but shred_index
// Check that we do not receive a shred with "last_index" true, but shred_index
// less than our current received
if last_in_slot && shred_index < slot_meta.received {
datapoint_error!(

@@ -1430,7 +1430,7 @@ fn get_slot_meta_entry<'a>(
) -> &'a mut SlotMetaWorkingSetEntry {
let meta_cf = db.column::<cf::SlotMeta>();

// Check if we've already inserted the slot metadata for this blob's slot
// Check if we've already inserted the slot metadata for this shred's slot
slot_meta_working_set.entry(slot).or_insert_with(|| {
// Store a 2-tuple of the metadata (working copy, backup copy)
if let Some(mut meta) = meta_cf.get(slot).expect("Expect database get to succeed") {

@@ -1823,7 +1823,7 @@ pub fn verify_shred_slots(slot: Slot, parent_slot: Slot, last_root: u64) -> bool
return false;
}

// Ignore blobs that chain to slots before the last root
// Ignore shreds that chain to slots before the last root
if parent_slot < last_root {
return false;
}
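Aside: the hunk above shows only part of `verify_shred_slots`; the visible rule is that a shred whose parent slot is older than the last root is rejected. A standalone restatement of just that visible check (the full function has further conditions not shown in this hunk):

// Restates only the check visible above: shreds chaining to slots before the
// last root are ignored. Plain u64 slot numbers are assumed.
fn chains_at_or_after_root(parent_slot: u64, last_root: u64) -> bool {
    parent_slot >= last_root
}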
@@ -3602,7 +3602,7 @@ pub mod tests {
));
}

// Trying to insert with set_index with num_coding that would imply the last blob
// Trying to insert with set_index with num_coding that would imply the last shred
// has index > u32::MAX should fail
{
let mut coding_shred = Shred::new_empty_from_header(
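Aside: the test above exercises the bound named in the comment: a coding set whose `set_index` plus `num_coding` would push the last shred's index past `u32::MAX` must be rejected. A small arithmetic sketch of that bound (the helper name is invented, and `num_coding >= 1` is assumed):

// Invented helper illustrating the bound the test above checks: the last coding
// shred in a set has index set_index + num_coding - 1, which must still fit in a u32.
fn last_coding_index_fits_u32(set_index: u64, num_coding: u64) -> bool {
    set_index + num_coding - 1 <= u32::MAX as u64
}

// e.g. last_coding_index_fits_u32(u32::MAX as u64, 2) == false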
@@ -10,7 +10,7 @@ pub struct SlotMeta {
// The number of slots above the root (the genesis block). The first
// slot has slot 0.
pub slot: Slot,
// The total number of consecutive blobs starting from index 0
// The total number of consecutive shreds starting from index 0
// we have received for this slot.
pub consumed: u64,
// The index *plus one* of the highest shred received for this slot. Useful
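Aside: the two counters above are easy to conflate. A small worked example, with assumed shred arrivals, shows how they diverge when shreds arrive out of order:

// Worked example (values assumed) of the SlotMeta counters described above.
// Shreds received for a slot, by index: 0, 1, 2, 5
//   consumed = count of consecutive shreds starting at index 0 -> 3 (indexes 0, 1, 2)
//   received = highest received index plus one                 -> 6 (index 5, plus one)
fn main() {
    let indexes = [0u64, 1, 2, 5];
    let received = indexes.iter().max().map(|i| i + 1).unwrap_or(0);
    let mut consumed = 0u64;
    while indexes.contains(&consumed) {
        consumed += 1;
    }
    assert_eq!((consumed, received), (3, 6));
}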
@@ -1,14 +1,14 @@
//! # Erasure Coding and Recovery
//!
//! Blobs are logically grouped into erasure sets or blocks. Each set contains 16 sequential data
//! Shreds are logically grouped into erasure sets or blocks. Each set contains 16 sequential data
//! blobs and 4 sequential coding blobs.
//! shreds and 4 sequential coding shreds.
//!
//! Coding blobs in each set starting from `start_idx`:
//! Coding shreds in each set starting from `start_idx`:
//! For each erasure set:
//! generate `NUM_CODING` coding_blobs.
//! generate `NUM_CODING` coding_shreds.
//! index the coding blobs from `start_idx` to `start_idx + NUM_CODING - 1`.
//! index the coding shreds from `start_idx` to `start_idx + NUM_CODING - 1`.
//!
//! model of an erasure set, with top row being data blobs and second being coding
//! model of an erasure set, with top row being data shreds and second being coding
//! |<======================= NUM_DATA ==============================>|
//! |<==== NUM_CODING ===>|
//! +---+ +---+ +---+ +---+ +---+ +---+ +---+ +---+ +---+ +---+

@@ -17,10 +17,10 @@
//! | C | | C | | C | | C | | | | | | | | | | | | |
//! +---+ +---+ +---+ +---+ +---+ +---+ +---+ +---+ +---+ +---+
//!
//! blob structure for coding blobs
//! shred structure for coding shreds
//!
//! + ------- meta is set and used by transport, meta.size is actual length
//! | of data in the byte array blob.data
//! | of data in the byte array shred.data
//! |
//! | + -- data is stuff shipped over the wire, and has an included
//! | | header

@@ -30,14 +30,14 @@
//! |+---+-- |+---+---+---+---+------------------------------------------+|
//! || s | . || i | | f | s | ||
//! || i | . || n | i | l | i | ||
//! || z | . || d | d | a | z | blob.data(), or blob.data_mut() ||
//! || z | . || d | d | a | z | shred.data(), or shred.data_mut() ||
//! || e | || e | | g | e | ||
//! |+---+-- || x | | s | | ||
//! | |+---+---+---+---+------------------------------------------+|
//! +----------+------------------------------------------------------------+
//! | |<=== coding blob part for "coding" =======>|
//! | |<=== coding shred part for "coding" =======>|
//! | |
//! |<============== data blob part for "coding" ==============>|
//! |<============== data shred part for "coding" ==============>|
//!
//!

@@ -46,9 +46,9 @@ use reed_solomon_erasure::ReedSolomon;
use serde::{Deserialize, Serialize};

//TODO(sakridge) pick these values
/// Number of data blobs
/// Number of data shreds
pub const NUM_DATA: usize = 8;
/// Number of coding blobs; also the maximum number that can go missing.
/// Number of coding shreds; also the maximum number that can go missing.
pub const NUM_CODING: usize = 8;

#[derive(Clone, Copy, Debug, Eq, PartialEq, Serialize, Deserialize)]
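Aside: per the module docs at the top of this file, each erasure set carries `NUM_CODING` coding shreds indexed consecutively from `start_idx`. A toy illustration of that indexing rule (the helper is invented for this example; `NUM_CODING` mirrors the constant above):

// Toy illustration of the indexing rule from the module docs above; the helper
// name is invented and not part of the erasure module.
const NUM_CODING: usize = 8;

fn coding_indexes(start_idx: u64) -> impl Iterator<Item = u64> {
    start_idx..start_idx + NUM_CODING as u64
}

fn main() {
    let idx: Vec<u64> = coding_indexes(16).collect();
    assert_eq!(idx, vec![16, 17, 18, 19, 20, 21, 22, 23]);
}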
@@ -86,7 +86,7 @@ impl ErasureConfig {
type Result<T> = std::result::Result<T, reed_solomon_erasure::Error>;

/// Represents an erasure "session" with a particular configuration and number of data and coding
/// blobs
/// shreds
#[derive(Debug, Clone)]
pub struct Session(ReedSolomon<Field>);

@@ -134,11 +134,11 @@ pub mod test {
use log::*;
use solana_sdk::clock::Slot;

/// Specifies the contents of a 16-data-blob and 4-coding-blob erasure set
/// Specifies the contents of a 16-data-shred and 4-coding-shred erasure set
/// Exists to be passed to `generate_blocktree_with_coding`
#[derive(Debug, Copy, Clone)]
pub struct ErasureSpec {
/// Which 16-blob erasure set this represents
/// Which 16-shred erasure set this represents
pub set_index: u64,
pub num_data: usize,
pub num_coding: usize,
@@ -128,7 +128,7 @@ impl LeaderScheduleCache {
if *pubkey == leader_schedule[i] {
if let Some(blocktree) = blocktree {
if let Some(meta) = blocktree.meta(current_slot).unwrap() {
// We have already sent a blob for this slot, so skip it
// We have already sent a shred for this slot, so skip it
if meta.received > 0 {
continue;
}

@@ -435,7 +435,7 @@ mod tests {
1
);

// Write a blob into slot 2 that chains to slot 1,
// Write a shred into slot 2 that chains to slot 1,
// but slot 1 is empty so should not be skipped
let (shreds, _) = make_slot_entries(2, 1, 1);
blocktree.insert_shreds(shreds, None, false).unwrap();

@@ -447,7 +447,7 @@ mod tests {
1
);

// Write a blob into slot 1
// Write a shred into slot 1
let (shreds, _) = make_slot_entries(1, 0, 1);

// Check that slot 1 and 2 are skipped
@@ -22,7 +22,7 @@ use std::{
};

/// Start the cluster with the given configuration and wait till the archivers are discovered
/// Then download blobs from one of them.
/// Then download shreds from one of them.
fn run_archiver_startup_basic(num_nodes: usize, num_archivers: usize) {
solana_logger::setup();
info!("starting archiver test");
@@ -574,8 +574,8 @@ fn test_fail_entry_verification_leader() {
#[test]
#[allow(unused_attributes)]
#[ignore]
fn test_fake_blobs_broadcast_leader() {
fn test_fake_shreds_broadcast_leader() {
test_faulty_node(BroadcastStageType::BroadcastFakeBlobs);
test_faulty_node(BroadcastStageType::BroadcastFakeShreds);
}

fn test_faulty_node(faulty_node_type: BroadcastStageType) {

@@ -741,7 +741,7 @@ fn run_repairman_catchup(num_repairmen: u64) {
);

// Start up a new node, wait for catchup. Backwards repair won't be sufficient because the
// leader is sending blobs past this validator's first two confirmed epochs. Thus, the repairman
// leader is sending shreds past this validator's first two confirmed epochs. Thus, the repairman
// protocol will have to kick in for this validator to repair.
cluster.add_validator(&validator_config, repairee_stake);