The commit implements a new ContactInfo (sketched below) where
* Ports and IP addresses are specified separately so that each unique IP
address needs to be specified only once.
* Different sockets (tvu, tpu, etc) are specified by opaque u8 tags so
that adding and removing sockets is backward and forward compatible.
* solana_version::Version is also embedded in the struct so that it won't
need to be gossiped separately.
* NodeInstance is also rolled in by adding a field identifying when the
instance was first created so that it won't need to be gossiped
separately.
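For illustration, a minimal sketch of such a layout; the field names,
types, and tag values below are assumptions rather than the actual
implementation:

```rust
use solana_sdk::pubkey::Pubkey;
use std::net::IpAddr;

/// Illustrative contact-info layout: unique IP addresses are stored once,
/// and sockets reference them by index and are keyed by an opaque u8 tag.
pub struct ContactInfo {
    pubkey: Pubkey,
    wallclock: u64,
    // Identifies when this node instance was first created, folding the
    // role of NodeInstance into ContactInfo.
    outset: u64,
    shred_version: u16,
    version: solana_version::Version,
    // Each unique IP address appears only once.
    addrs: Vec<IpAddr>,
    // Sockets are identified by opaque u8 tags (e.g. 0 = gossip, 1 = tvu,
    // ...), so tags can be added or removed without breaking older or
    // newer nodes.
    sockets: Vec<SocketEntry>,
}

pub struct SocketEntry {
    key: u8,   // opaque socket tag
    index: u8, // index into `addrs`
    port: u16,
}
```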
Update plan:
* Once the cluster is able to ingest the new type (i.e. this patch), a
2nd patch will start gossiping the new ContactInfo along with the
LegacyContactInfo.
* Once all nodes in the cluster gossip the new ContactInfo, a 3rd patch
will start solely using the new ContactInfo while still gossiping the
old LegacyContactInfo.
* Once all nodes in the cluster solely use the new ContactInfo, a 4th
patch will stop gossiping the old LegacyContactInfo.
* introduce workspace.package
* introduce workspace.dependencies
* read version from root Cargo.toml
* pass check when version = { workspace = true }
* don't bump version when version = { workspace = true }
* include workspace Cargo.toml when bumping version
* programs/sbf use workspace inheritance
* fix the version bump to ignore programs/sbf/Cargo.toml
Retransmit code has moved to core/src/cluster_nodes.rs and has been
significantly revised.
gossip/tests/cluster_info.rs is testing the old code which is no longer
relevant.
* First draft of ingesting duplicate proofs in Gossip into blockstore.
* Add more unittests.
* Add more unittests for bad cases.
* Fix lint errors for tests.
* More linter fixes for tests.
* Lint fixes
* Rename get_entries, move location of comment.
* Some renaming changes and comment fixes.
* Fix compile warning, this enum is not used.
* Fix lint errors.
* Slow down cleanup because this could potentially be expensive.
* Forgot to reset cleanup count.
* Add protection against attackers when constructing the chunk map while
ingesting Gossip proofs.
* Use duplicate shred index instead of get_entries.
* Rename ClusterInfoDuplicateShredListener and fix a few small problems.
* Use into_shreds to piece together the proof.
* Remove redundant code.
* Address a few small errors.
* Discard slots too far in the future.
* - Use the oldest proof for each pubkey
- Limit the number of pubkeys in each slot to 100 (see the sketch after
this list)
* Disable duplicate shred handling for now.
* Revert "Disable duplicate shred handling for now."
This reverts commit c3fcf403876cfbf90afe4d2265a826f21a5e24ab.
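A rough sketch of the bound described in the list above; the names and
structure are assumptions, and only the 100-pubkey cap and the
keep-the-oldest rule come from the commit messages:

```rust
use std::collections::HashMap;

type Slot = u64;
type Pubkey = [u8; 32]; // stand-in for solana_sdk::pubkey::Pubkey

const MAX_PUBKEYS_PER_SLOT: usize = 100; // cap from the commit message

struct PartialProof {
    wallclock: u64,               // when the first chunk was received
    chunks: Vec<Option<Vec<u8>>>, // duplicate-shred proof chunks, by index
}

#[derive(Default)]
struct DuplicateProofBuffer {
    // slot -> (origin pubkey -> partially assembled proof)
    proofs: HashMap<Slot, HashMap<Pubkey, PartialProof>>,
}

impl DuplicateProofBuffer {
    /// Decide whether to accept chunks for a (slot, pubkey) pair: keep the
    /// proof seen first (the oldest) for each pubkey, and refuse new
    /// pubkeys once a slot already tracks MAX_PUBKEYS_PER_SLOT of them.
    fn admit(&mut self, slot: Slot, from: Pubkey, wallclock: u64, num_chunks: usize) -> bool {
        let slot_map = self.proofs.entry(slot).or_default();
        if slot_map.contains_key(&from) {
            // Already tracking a proof from this pubkey; keep feeding it.
            return true;
        }
        if slot_map.len() >= MAX_PUBKEYS_PER_SLOT {
            // Protect against an attacker flooding a single slot with
            // proofs from many distinct pubkeys.
            return false;
        }
        let proof = PartialProof {
            wallclock,
            chunks: vec![None; num_chunks],
        };
        slot_map.insert(from, proof);
        true
    }
}
```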
Amplifying gossip peer sampling weights by the time since the last
pull-request has the undesired consequence that a node coming back online
will see a huge number of pull requests all at once.
This "time since last request" term is also unnecessary to include in the
weights because, as long as sampling probabilities are non-zero, a node
will almost surely be selected into the sample periodically.
The commit reworks peer sampling probabilities by just using (dampened)
stakes as weights.
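For illustration, a minimal sketch of stake-weighted peer sampling with
the time-since-last-request term dropped; the square-root dampening and
the helper name are assumptions:

```rust
use rand::distributions::{Distribution, WeightedIndex};
use rand::Rng;

/// Sample `num` peer indices with probabilities proportional to a
/// dampened stake; no time-since-last-request amplification.
fn sample_peers<R: Rng>(stakes: &[u64], num: usize, rng: &mut R) -> Vec<usize> {
    // Dampen stakes so very high-staked nodes are not overwhelmingly
    // favored, and keep every weight non-zero so every node is almost
    // surely selected periodically.
    let weights: Vec<f64> = stakes.iter().map(|&s| ((s + 1) as f64).sqrt()).collect();
    let dist = WeightedIndex::new(&weights).expect("non-empty, non-zero weights");
    dist.sample_iter(rng).take(num).collect()
}
```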
It was not immediately clear why the second `CrdsValue` insertion in the
test must always succeed. It turns out the test was relying on the hash
values having a specific relationship, which is confusing to someone not
deeply familiar with the test.
As overwriting based on the hash value is not part of the behavior we
consider valuable, we just remove that check.
Unified the assertions of the two checks into one.
Gossip push samples nodes by stake. This is unnecessarily wasteful and
creates too much congestion at high-staked nodes if the CRDS value to be
propagated is from a node with low or zero stake.
This commit instead maintains several active-sets for push, each
corresponding to a stake bucket. Peer sampling weights are accordingly
capped by the respective bucket stake.
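A sketch of the bucketing idea; the number of buckets, the log2
bucketing, and the weight cap below are assumptions:

```rust
/// Map a stake to a bucket index, e.g. by the logarithm of the stake in
/// SOL (assumed bucketing).
fn stake_bucket(stake_lamports: u64) -> usize {
    const LAMPORTS_PER_SOL: u64 = 1_000_000_000;
    const NUM_BUCKETS: usize = 25; // assumed number of buckets
    let sol = stake_lamports / LAMPORTS_PER_SOL;
    let bucket = (sol + 1).ilog2() as usize;
    bucket.min(NUM_BUCKETS - 1)
}

/// When building the active-set for a given bucket, cap each peer's
/// sampling weight by that bucket's stake so that values originating from
/// low-staked nodes are not funneled mostly through high-staked nodes.
fn bucket_weight(peer_stake: u64, bucket_stake: u64) -> u64 {
    peer_stake.min(bucket_stake).max(1)
}
```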
Since
https://github.com/solana-labs/solana/pull/20480
turbine includes all epoch-staked nodes in tree construction and no
longer relies on obtaining their contact-info from gossip. As a result,
distinguishing between is_valid_address and is_valid_tvu_address is no
longer necessary and the latter can be removed.
dedups gossip addresses, keeping only the one with the highest weight
In order to avoid traffic congestion and duplicate packets when sampling
gossip nodes, if several nodes share the same gossip address (e.g.
because they are behind the same relayer), they need to be deduplicated
down to one.
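A minimal sketch of the deduplication (hypothetical helper, with the
weight standing in for stake and `T` for the node's contact-info):

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

/// Keep only one node per gossip socket address: the one with the
/// highest weight.
fn dedup_by_gossip_addr<T>(nodes: Vec<(u64, SocketAddr, T)>) -> Vec<(u64, SocketAddr, T)> {
    let mut best: HashMap<SocketAddr, (u64, T)> = HashMap::new();
    for (weight, addr, node) in nodes {
        let keep_new = match best.get(&addr) {
            Some(&(old_weight, _)) => weight > old_weight,
            None => true,
        };
        if keep_new {
            best.insert(addr, (weight, node));
        }
    }
    best.into_iter()
        .map(|(addr, (weight, node))| (weight, addr, node))
        .collect()
}
```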
As described here:
https://github.com/solana-labs/solana/issues/28642#issuecomment-1337449607
current gossip pruning code fails to maintain spanning trees across the
cluster.
This commit instead implements pruning based on the timeliness of
delivered messages. If a message is delivered early enough (in terms of
the number of duplicates already observed for that value), it counts
towards the respective node's score. Once there are enough CRDS upserts
from a specific origin, redundant nodes are pruned based on the tracked
score.
Since the pruning leaves some configurable redundancy and the scores are
reset frequently, it should better tolerate active-set rotations.
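A rough sketch of the scoring and pruning described above; the
thresholds, names, and prune/reset policy are assumptions:

```rust
use std::collections::HashMap;

type Pubkey = [u8; 32]; // stand-in for solana_sdk::pubkey::Pubkey

// A delivery counts as timely if at most this many duplicates of the
// value were already observed (assumed value).
const MAX_NUM_DUPS: u8 = 2;
// Number of upserts from an origin before pruning kicks in (assumed).
const PRUNE_MIN_UPSERTS: usize = 20;
// How many top-scored relayers to keep for redundancy (assumed).
const KEEP_REDUNDANCY: usize = 2;

#[derive(Default)]
struct ReceivedCache {
    // origin -> (number of upserts, per-relayer timeliness score)
    records: HashMap<Pubkey, (usize, HashMap<Pubkey, usize>)>,
}

impl ReceivedCache {
    /// Record that `relayer` pushed a CRDS value from `origin` which we
    /// had already seen `num_dups` times.
    fn record(&mut self, origin: Pubkey, relayer: Pubkey, num_dups: u8) {
        let (num_upserts, scores) = self.records.entry(origin).or_default();
        if num_dups == 0 {
            *num_upserts += 1; // a brand new value counts as an upsert
        }
        if num_dups <= MAX_NUM_DUPS {
            *scores.entry(relayer).or_default() += 1; // timely enough
        }
    }

    /// Once enough upserts are observed for an origin, prune all relayers
    /// except the top-scored ones and reset the record.
    fn prune(&mut self, origin: &Pubkey) -> Vec<Pubkey> {
        match self.records.get(origin) {
            Some((num_upserts, _)) if *num_upserts >= PRUNE_MIN_UPSERTS => (),
            _ => return Vec::new(),
        }
        let (_, scores) = self.records.remove(origin).unwrap();
        let mut ranked: Vec<(Pubkey, usize)> = scores.into_iter().collect();
        ranked.sort_unstable_by_key(|&(_, score)| std::cmp::Reverse(score));
        ranked
            .into_iter()
            .skip(KEEP_REDUNDANCY)
            .map(|(node, _)| node)
            .collect()
    }
}
```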
The commit tracks the number of times duplicates of a CRDS value are
received from gossip push. Following commits will utilize this metric to
score gossip nodes in terms of the timeliness of their push messages, in
order to better pick which nodes to prune.
* Move ConnectionCache back to solana-client, and duplicate ThinClient, TpuClient there
* Dedupe thin_client modules
* Dedupe tpu_client modules
* Move TpuClient to TpuConnectionCache
* Move ThinClient to TpuConnectionCache
* Move TpuConnection and quic/udp trait implementations back to solana-client
* Remove enum_dispatch from solana-tpu-client
* Move udp-client to its own crate
* Move quic-client to its own crate
https://github.com/solana-labs/solana/pull/27193
added a hash domain to the ping-pong protocol.
For backward compatibility, responses both with and without the domain
were generated and accepted.
Now that all clusters are upgraded, this commit enforces the hash domain
by removing the response without the domain.
Currently, gossip packets are counted after excess packets are dropped.
This makes it difficult to debug gossip traffic spikes if the majority
of the packets are dropped.
This commit instead counts gossip packets received before excess packets
are dropped.
* Fix test_bench_tps_local_cluster_solana
* Remove #[ignore] annotations from dos tests (which are also fixed by this change)
* Remove #[ignore] annotations from local cluster tests (which are also fixed by this change)
Tenets:
1. Limit thread names to 15 characters
2. Prefix all Solana-controlled threads with "sol"
3. Use Camel case; it's more character-dense than Snake or Kebab case (see the example below)
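For example, a thread named under these tenets (the specific name and
body are illustrative):

```rust
use std::thread::Builder;

fn main() {
    // "sol" prefix, Camel case, and at most 15 characters.
    let handle = Builder::new()
        .name("solGossipRecv".to_string())
        .spawn(|| {
            // ... thread body ...
        })
        .unwrap();
    handle.join().unwrap();
}
```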
In order to maintain backward compatibility, for now the responding node
will hash the token both with and without domain so that the other node
will accept the response regardless of its upgrade status.
Once the cluster has upgraded to the new code, we will remove the legacy
domain = false case.
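A sketch of this compatibility window; the domain prefix constant below
is an assumption, and only hashv/Hash come from solana_sdk:

```rust
use solana_sdk::hash::{hashv, Hash};

// Assumed domain separator; the actual constant lives in the gossip crate.
const PING_PONG_HASH_PREFIX: &[u8] = b"SOLANA_PING_PONG";

fn hash_token_legacy(token: &[u8]) -> Hash {
    hashv(&[token])
}

fn hash_token_with_domain(token: &[u8]) -> Hash {
    hashv(&[PING_PONG_HASH_PREFIX, token])
}

/// During the transition the responding node hashes the token both ways,
/// and a node verifying a pong accepts either form; the legacy form goes
/// away once the cluster has upgraded.
fn verify_pong(token: &[u8], pong_hash: &Hash) -> bool {
    *pong_hash == hash_token_with_domain(token) || *pong_hash == hash_token_legacy(token)
}
```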
cargo test --package solana-gossip --release test_push_votes_with_tower
occasionally fails because with --release all votes are generated at
the same wallclock (milliseconds resolution) and so the new ones will
not necessarily override existing entries in the table.
The commit ensures that the new vote is pushed with a wallclock later
than existing entries.
* add three gossip metrics measuring gossip loop times
* add 5 metrics
* rm space
* rm space
* Update SECURITY.md
- fix nav link
- add bounty split policy for duplicate reports
* Add transaction index in slot to geyser plugin TransactionInfo (#25688)
* Define shuffle to prep using same shuffle for multiple slices
* Determine transaction indexes and plumb to execute_batch
* Pair transaction_index with transaction in TransactionStatusService
* Add new ReplicaTransactionInfoVersion
* Plumb transaction_indexes through BankingStage
* Prepare BankingStage to receive transaction indexes from PohRecorder
* Determine transaction indexes in PohRecorder; add field to WorkingBank
* Add PohRecorder::record unit test
* Only pass starting_transaction_index around PohRecorder
* Add helper structs to simplify test DashMap
* Pass entry and starting-index into process_entries_with_callback together
* Add tx-index checks to test_rebatch_transactions
* Revert shuffle definition and use zip/unzip
* Only zip/unzip if randomize
* Add confirm_slot_entries test
* Review nits
* Add type alias to make sender docs more clear
* Update SECURITY.md
finish filling out the table.
* rpc: fix possible deadlock in rpc (#26051)
* Add StatusCache::root_slot_deltas() and use it (#26170)
* Remove InMemAccountsIndex::map() and use map_internal directly (#26189)
* [quic]Decrement total_streams correctly (#26158)
* remove comment
* alphabetical metrics. no abbreviations
* remove trailing white space
* cargo fmt to update code format/readability
Co-authored-by: Trent Nelson <trent@solana.com>
Co-authored-by: Tyera Eulberg <tyera@solana.com>
Co-authored-by: Boqin Qin(秦 伯钦) <Bobbqqin@gmail.com>
Co-authored-by: Brooks Prumo <brooks@solana.com>
Co-authored-by: Miles Obare <bdhobare@gmail.com>
In preparation for
https://github.com/solana-labs/solana/pull/25807
which reworks erasure batch sizes, this commit:
* adds a helper function mapping the number of data shreds to the
erasure batch size (sketched below).
* adds ProcessShredsStats to Shredder::entries_to_shreds in order to
replace and remove entries_to_data_shreds from the public interface.
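For illustration, the rough shape of such a helper; the batch sizes
below are placeholders, not the values used by the shredder:

```rust
/// Map the number of data shreds in an erasure set to the total erasure
/// batch size (data + coding shreds). Values here are placeholders.
fn get_erasure_batch_size(num_data_shreds: usize) -> usize {
    // Hypothetical lookup for small sets, falling back to a fixed ratio.
    const BATCH_SIZES: [usize; 4] = [0, 4, 6, 8]; // placeholder values
    match BATCH_SIZES.get(num_data_shreds) {
        Some(&size) => size,
        None => 2 * num_data_shreds, // placeholder ratio
    }
}
```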
* client: Remove static connection cache, plumb it instead
* Add TpuClient::new_with_connection_cache to not break downstream
* Refactor get_connection and RwLock into ConnectionCache
* Fix merge conflicts from new async TpuClient
* Remove `ConnectionCache::set_use_quic`
* Move DEFAULT_TPU_USE_QUIC to client, use ConnectionCache::default()
Packets are at the boundary of the system where, the vast majority of
the time, they are received from an untrusted source. Raw indexing into
the data buffer can open attack vectors if the offsets are invalid.
Validating offsets beforehand is verbose and error prone.
The commit updates the Packet::data() API to take a SliceIndex and to
always return an Option. The call sites are thus forced to explicitly
handle the case where the offsets are invalid.
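A sketch of the reworked accessor over a simplified Packet shape (the
buffer size and struct layout are assumptions):

```rust
use std::slice::SliceIndex;

pub struct Meta {
    pub size: usize,
}

pub struct Packet {
    buffer: [u8; 1232], // PACKET_DATA_SIZE; exact size assumed here
    pub meta: Meta,
}

impl Packet {
    /// Immutable access into the valid region of the buffer; returns None
    /// if the index is out of bounds with respect to Packet.meta.size.
    pub fn data<I>(&self, index: I) -> Option<&I::Output>
    where
        I: SliceIndex<[u8]>,
    {
        self.buffer.get(..self.meta.size)?.get(index)
    }
}
```

Call sites such as packet.data(..) or packet.data(offset..offset + 2)
then get an Option back and must handle the out-of-range case explicitly.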
* Spawn QUIC server to receive forwarded txs
* Update validator port range
* forward votes using UDP
* no forwarding from unstaked nodes
* forwarding stats in banking stage
* fix test builds
* fix lifetime of forward sender
test_pull_from_entrypoint_if_not_present relies on a deterministic
ordering for the entries when generating gossip pull requests.
https://github.com/solana-labs/solana/pull/25460
changed an intermediate type for gossip pull-requests from Vec to
HashMap, and so the entries are no longer deterministically ordered.
This causes the test to be flaky.
The commit updates the test so that it no longer relies on the ordering.
Each time a node generates gossip pull-requests, it sends out all the
requests to a single randomly selected peer:
https://github.com/solana-labs/solana/blob/fd7ad31ee/gossip/src/crds_gossip_pull.rs#L253-L266
This causes a burst of pull-requests at a single node at once. In order
to make gossip inbound traffic less bursty, this commit fans out gossip
pull-requests to several randomly selected peers.
This should reduce spikes in inbound gossip traffic without changing the
average load, which may help reduce the number of times the outbound data
budget is exhausted when responding to gossip pull-requests at the
receiving node, and reduce the number of pull-requests dropped.
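A sketch of the fan-out; the number of peers, the sampling with
replacement, and the round-robin assignment of filters are assumptions:

```rust
use rand::distributions::{Distribution, WeightedIndex};
use rand::Rng;
use std::collections::HashMap;

/// Spread pull-request filters over several sampled peers instead of
/// sending all of them to a single peer.
fn fan_out_pull_requests<F, R: Rng>(
    filters: Vec<F>,
    peer_weights: &[u64],
    num_peers: usize,
    rng: &mut R,
) -> HashMap<usize, Vec<F>> {
    let num_peers = num_peers.max(1);
    // Sample a handful of peers by weight (with replacement, for brevity).
    let dist = WeightedIndex::new(peer_weights).expect("non-empty, non-zero weights");
    let peers: Vec<usize> = dist.sample_iter(&mut *rng).take(num_peers).collect();
    // Assign each filter to one of the sampled peers.
    let mut requests: HashMap<usize, Vec<F>> = HashMap::new();
    for (i, filter) in filters.into_iter().enumerate() {
        let peer = peers[i % peers.len()];
        requests.entry(peer).or_default().push(filter);
    }
    requests
}
```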
Bytes past Packet.meta.size are not valid to read from.
The commit makes the buffer field private and instead provides two
methods:
* Packet::data() which returns an immutable reference to the underlying
buffer up to Packet.meta.size. The rest of the buffer is not valid to
read from.
* Packet::buffer_mut() which returns a mutable reference to the entirety
of the underlying buffer to write into. The caller is responsible for
updating Packet.meta.size after writing to the buffer.
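A sketch of the two accessors over a simplified Packet shape (buffer
size and layout assumed):

```rust
pub struct Meta {
    pub size: usize,
}

pub struct Packet {
    buffer: [u8; 1232], // PACKET_DATA_SIZE; exact size assumed here
    pub meta: Meta,
}

impl Packet {
    /// Immutable access to the valid region of the buffer only.
    pub fn data(&self) -> &[u8] {
        &self.buffer[..self.meta.size]
    }

    /// Mutable access to the entire buffer; the caller must update
    /// meta.size after writing into it.
    pub fn buffer_mut(&mut self) -> &mut [u8] {
        &mut self.buffer[..]
    }
}
```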
Upcoming changes to PacketBatch to support variable-sized packets will
modify the internals of PacketBatch. So, this change removes usage of the
internal packet struct and instead uses accessors (which are currently
just wrappers of Vec functions but will change down the road).
Refactor the thin_client::create_client to take addresses separately instead of as a tuple
Co-authored-by: Bijie Zhu <bijiezhu@Bijies-MBP.cable.rcn.com>
* Add quic-client module to send transactions via quic, abstracted behind the TpuConnection trait (along with a legacy UDP implementation of TpuConnection) and change thin-client to use TpuConnection
The commit adjusts CRDS_SHARDS_BITS up to be in line with mask_bits in
gossip pull requests. This will avoid redundant filtering of irrelevant
crds entries when responding to pull requests.
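For illustration, a sketch of why aligning the two helps; the bit widths
and the mask convention below are assumptions:

```rust
const CRDS_SHARDS_BITS: u32 = 12; // assumed value

/// Shard index: the first CRDS_SHARDS_BITS bits of the value's hash
/// (represented here by its first 8 bytes as a u64).
fn shard_index(hash_prefix: u64) -> usize {
    (hash_prefix >> (64 - CRDS_SHARDS_BITS)) as usize
}

/// A pull-request filter matches values whose first `mask_bits` bits
/// agree with `mask`. When CRDS_SHARDS_BITS >= mask_bits, each shard
/// either matches the mask entirely or not at all, so irrelevant shards
/// can be skipped without filtering individual crds entries.
fn shard_matches_mask(shard: usize, mask: u64, mask_bits: u32) -> bool {
    debug_assert!(0 < mask_bits && mask_bits <= CRDS_SHARDS_BITS);
    let shard_prefix = (shard as u64) << (64 - CRDS_SHARDS_BITS);
    (shard_prefix ^ mask) >> (64 - mask_bits) == 0
}
```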
The indices for erasure coding shreds are tied to data shreds:
https://github.com/solana-labs/solana/blob/90f41fd9b/ledger/src/shred.rs#L921
However, with the upcoming changes to the erasure schema, there will be
more erasure coding shreds than data shreds and we can no longer infer
coding shred indices from data shreds.
The commit adds constructs to track coding shred indices explicitly.
next-shred-index is already readily available from returned data shreds.
The commit simplifies the API for upcoming changes to the erasure coding
schema, which will require explicit tracking of indices for coding shreds
as well as data shreds.