* introduce workspace.package
* introduce workspace.dependencies
* read version from root Cargo.toml
* pass check when version = { workspace = true }
* don't bump version when version = { workspace = true }
* include workspace Cargo.toml when bumping version
* programs/sbf use workspace inheritance
* fix cargo version bump ignoring programs/sbf/Cargo.toml (see the Cargo.toml example after this list)
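A minimal hypothetical example of the inheritance referred to above; the
version number, crate name, and dependency are placeholders:

```toml
# Root Cargo.toml: shared package metadata and dependency versions.
[workspace.package]
version = "1.15.0"        # single place the bump-version tooling edits

[workspace.dependencies]
log = "0.4.17"

# --- a member crate's Cargo.toml (e.g. under programs/) ---
[package]
name = "some-member-crate"
version = { workspace = true }   # version check passes; bump skips this file

[dependencies]
log = { workspace = true }
```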
The local-timestamp used in the current gossip eviction code:
https://github.com/solana-labs/solana/blob/a36e1b211/gossip/src/crds.rs#L469-L511
does not indicate how old the entry is, only how recently it was received.
The commit instead uses the origin's wallclock to identify old values. To
avoid cases where the wallclock on the entry is bogus, it is capped by the
local-timestamp.
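A minimal sketch of that capping, with illustrative names (not the actual
crds.rs code):

```rust
// Judge the age of a CRDS entry by the origin's wallclock, but cap it by the
// local receive timestamp so a bogus (far-future) wallclock cannot keep the
// entry alive past eviction.
fn eviction_timestamp(entry_wallclock: u64, local_timestamp: u64) -> u64 {
    std::cmp::min(entry_wallclock, local_timestamp)
}
```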
The duplicate-shreds handler uses a nested hash-map to buffer incomplete
chunks. This results in convoluted logic to limit the number of entries:
https://github.com/solana-labs/solana/blob/427bd6264/gossip/src/duplicate_shred_handler.rs#L62
This commit instead uses a flat buffer mapping (Slot, Pubkey) pairs to the
respective duplicate-shred chunks. The buffer is allowed to grow to twice
the intended capacity, at which point the extraneous entries are removed in
linear time, resulting in amortized O(1) performance.
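A rough sketch of the flat buffer, with placeholder types and capacity (not
the actual duplicate_shred_handler code):

```rust
use std::collections::HashMap;

type Slot = u64;
type Pubkey = [u8; 32];
type Chunks = Vec<Vec<u8>>; // buffered duplicate-shred chunks

const CAPACITY: usize = 512; // intended capacity (placeholder value)

fn insert(buffer: &mut HashMap<(Slot, Pubkey), Chunks>, key: (Slot, Pubkey), chunk: Vec<u8>) {
    buffer.entry(key).or_default().push(chunk);
    // Let the buffer grow to twice the intended capacity, then drop the
    // extraneous entries (here: those with the oldest slots) in one linear
    // pass, which amortizes to O(1) per insertion.
    if buffer.len() >= 2 * CAPACITY {
        let mut slots: Vec<Slot> = buffer.keys().map(|&(slot, _)| slot).collect();
        slots.select_nth_unstable(CAPACITY); // linear-time cutoff selection
        let cutoff = slots[CAPACITY];
        buffer.retain(|&(slot, _), _| slot > cutoff);
    }
}
```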
The commit implements a new ContactInfo (sketched after the list below) where
* Ports and IP addresses are specified separately so that unique IP
addresses can only be specified once.
* Different sockets (tvu, tpu, etc) are specified by opaque u8 tags so
that adding and removing sockets is backward and forward compatible.
* solana_version::Version is also embedded so that it won't need to be
gossiped separately.
* NodeInstance is also rolled in by adding a field identifying when the
instance was first created so that it won't need to be gossiped
separately.
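A hypothetical sketch of the layout described above; field names and types
are illustrative only, not the actual implementation:

```rust
use std::net::IpAddr;

// Placeholder aliases for types provided elsewhere (solana-sdk, solana-version).
type Pubkey = [u8; 32];
type Version = (u16, u16, u16);

struct SocketEntry {
    key: u8,   // opaque tag (tvu, tpu, ...); tags can be added or removed
               // without breaking older nodes
    index: u8, // index into `addrs`, so each unique IP is stored only once
    port: u16, // port of this socket
}

struct ContactInfo {
    pubkey: Pubkey,
    wallclock: u64,     // when this value was produced
    outset: u64,        // when the node instance was first created (subsumes NodeInstance)
    version: Version,   // node software version (no longer gossiped separately)
    addrs: Vec<IpAddr>, // unique IP addresses
    sockets: Vec<SocketEntry>,
}
```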
Update plan:
* Once the cluster is able to ingest the new type (i.e. this patch), a
2nd patch will start gossiping the new ContactInfo along with the
LegacyContactInfo.
* Once all nodes in the cluster gossip the new ContactInfo, a 3rd patch
will start solely using the new ContactInfo while still gossiping the
old LegacyContactInfo.
* Once all nodes in the cluster solely use the new ContactInfo, a 4th
patch will stop gossiping the old LegacyContactInfo.
Retransmit code has moved to core/src/cluster_nodes.rs and has been
significantly revised.
gossip/tests/cluster_info.rs tests the old code, which is no longer
relevant.
* First draft of ingesting duplicate proofs from Gossip into the blockstore.
* Add more unittests.
* Add more unittests for bad cases.
* Fix lint errors for tests.
* More linter fixes for tests.
* Lint fixes
* Rename get_entries, move location of comment.
* Some renaming changes and comment fixes.
* Fix compile warning, this enum is not used.
* Fix lint errors.
* Slow down cleanup because this could potentially be expensive.
* Forgot to reset cleanup count.
* Add protection against attackers when constructing the chunk map while
ingesting Gossip proofs.
* Use duplicate shred index instead of get_entries.
* Rename ClusterInfoDuplicateShredListener and fix a few small problems.
* Use into_shreds to piece together the proof.
* Remove redundant code.
* Address a few small errors.
* Discard slots too far in the future.
* Use oldest proof for each pubkey; limit number of pubkeys in each slot to
100 (see the sketch after this list).
* Disable duplicate shred handling for now.
* Revert "Disable duplicate shred handling for now."
This reverts commit c3fcf403876cfbf90afe4d2265a826f21a5e24ab.
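A hypothetical sketch of the admission checks listed above; constants,
types, and structure are placeholders, not the actual handler:

```rust
use std::collections::HashMap;

type Slot = u64;
type Pubkey = [u8; 32];

const MAX_PUBKEYS_PER_SLOT: usize = 100;
const MAX_SLOT_DISTANCE: Slot = 512; // "too far in the future" cutoff (placeholder)

// Decide whether to buffer an incoming duplicate-shred proof chunk.
fn should_accept(
    buffered: &HashMap<Slot, HashMap<Pubkey, u64>>, // slot -> pubkey -> first-seen wallclock
    slot: Slot,
    from: Pubkey,
    last_root: Slot,
) -> bool {
    // Discard slots too far ahead of our root.
    if slot > last_root + MAX_SLOT_DISTANCE {
        return false;
    }
    match buffered.get(&slot) {
        // Keep only the oldest proof per pubkey: once a proof from this pubkey
        // is buffered for the slot, later ones are dropped.
        Some(pubkeys) if pubkeys.contains_key(&from) => false,
        // Limit the number of distinct pubkeys buffered per slot.
        Some(pubkeys) => pubkeys.len() < MAX_PUBKEYS_PER_SLOT,
        None => true,
    }
}
```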
Amplifying gossip peer sampling weights by the time since the last
pull-request has the undesired consequence that a node coming back online
will see a huge number of pull requests all at once.
This "time since last request" is also unnecessary to include in the
weights because as long as sampling probabilities are non-zero, a node will
almost surely be selected periodically.
The commit reworks peer sampling probabilities by just using (dampened)
stakes as weights.
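A minimal sketch of such sampling, assuming the rand crate; the dampening
function (square root) is illustrative, not necessarily the one used:

```rust
use rand::distributions::{Distribution, WeightedIndex};
use rand::Rng;

// Pick a peer index with probability proportional to its dampened stake.
// No "time since last request" term; adding 1 keeps zero-stake nodes
// sampled with non-zero probability, so they are still periodically selected.
fn sample_peer(stakes: &[u64], rng: &mut impl Rng) -> usize {
    let weights: Vec<u64> = stakes
        .iter()
        .map(|&stake| (stake as f64).sqrt() as u64 + 1)
        .collect();
    WeightedIndex::new(&weights).unwrap().sample(rng)
}
```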
It was not immediately clear why the second `CrdsValue` insertion in the
test must always succeed. It turns out the test was relying on hash values
having a specific relationship, which is confusing to someone not deeply
familiar with the test.
Since overwriting based on the hash value is not part of the behavior we
consider valuable, that check is removed and the two remaining checks are
unified into a single assertion.
Gossip push samples nodes by stake. This is unnecessarily wasteful and
creates too much congestion at highly staked nodes if the CRDS value to be
propagated is from a node with low or zero stake.
This commit instead maintains several active-sets for push, each
corresponding to a stake bucket. Peer sampling weights are accordingly
capped by the respective bucket stake.
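A rough sketch of the bucketing idea, with illustrative constants (not the
actual implementation):

```rust
const NUM_BUCKETS: usize = 25; // e.g. one bucket per power of two of stake

// Bucket a stake by its bit length, saturating at the last bucket.
fn stake_bucket(stake: u64) -> usize {
    ((u64::BITS - stake.leading_zeros()) as usize).min(NUM_BUCKETS - 1)
}

// A value originating from a node with `origin_stake` is pushed through the
// active-set of that stake bucket; within it, each peer's sampling weight is
// capped by the bucket's stake so low-stake origins don't funnel all traffic
// through the highest-staked nodes.
fn push_weight(origin_stake: u64, peer_stake: u64) -> u64 {
    let bucket_stake = 1u64 << stake_bucket(origin_stake);
    peer_stake.min(bucket_stake).max(1)
}
```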
Since
https://github.com/solana-labs/solana/pull/20480
turbine includes all epoch staked nodes in tree construction and no
longer relies on obtaining their contact-info from gossip; and so
distinguishing between is_valid_address and is_valid_tvu_address is no
longer necessary and the latter can be removed.
dedups gossip addresses, keeping only the one with the highest weight
To avoid traffic congestion and duplicate packets when sampling gossip
nodes, if several nodes share the same gossip address (e.g. because they
are behind a relayer), they are deduplicated down to the one with the
highest weight.
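A minimal sketch of the dedup step; `N` stands in for the actual node type:

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

// Keep only the highest-weight node for each gossip socket address.
fn dedup_by_gossip_addr<N>(nodes: Vec<(u64, SocketAddr, N)>) -> Vec<(u64, SocketAddr, N)> {
    let mut best: HashMap<SocketAddr, (u64, SocketAddr, N)> = HashMap::new();
    for (weight, addr, node) in nodes {
        match best.get(&addr) {
            Some((w, _, _)) if *w >= weight => (), // existing entry wins
            _ => {
                best.insert(addr, (weight, addr, node));
            }
        }
    }
    best.into_values().collect()
}
```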
As described here:
https://github.com/solana-labs/solana/issues/28642#issuecomment-1337449607
current gossip pruning code fails to maintain spanning trees across the
cluster.
This commit instead implements pruning based on the timeliness of
delivered messages. If a message is delivered timely enough (in terms of
the number of duplicates already observed for that value), it counts
towards the respective node's score. Once enough CRDS upserts have been
received from a specific origin, redundant nodes are pruned based on the
tracked score.
Since the pruning leaves some configurable redundancy and the scores are
reset frequently, it should better tolerate active-set rotations.
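A rough sketch of the scoring and pruning, with illustrative thresholds and
names (not the actual implementation):

```rust
use std::collections::HashMap;

type Pubkey = [u8; 32];

const REDUNDANCY: usize = 2;       // timely relayers to keep per origin
const PRUNE_THRESHOLD: usize = 20; // upserts from an origin before pruning

#[derive(Default)]
struct ReceivedCache {
    // origin -> (number of upserts, per-relayer timely-delivery score)
    records: HashMap<Pubkey, (usize, HashMap<Pubkey, usize>)>,
}

impl ReceivedCache {
    // Record that `relayer` delivered a value from `origin` when `num_dups`
    // duplicates of that value had already been observed; only timely
    // deliveries count towards the relayer's score.
    fn record(&mut self, origin: Pubkey, relayer: Pubkey, num_dups: usize) {
        let (upserts, scores) = self.records.entry(origin).or_default();
        *upserts += 1;
        if num_dups < REDUNDANCY {
            *scores.entry(relayer).or_default() += 1;
        }
    }

    // Once enough upserts from an origin are observed, prune every relayer
    // outside the top-scoring REDUNDANCY ones, then drop the record so the
    // scores are effectively reset and tolerate active-set rotations.
    fn prune(&mut self, origin: &Pubkey) -> Vec<Pubkey> {
        match self.records.get(origin) {
            Some((upserts, _)) if *upserts >= PRUNE_THRESHOLD => (),
            _ => return Vec::new(),
        }
        let (_, scores) = self.records.remove(origin).unwrap();
        let mut ranked: Vec<(usize, Pubkey)> =
            scores.into_iter().map(|(node, score)| (score, node)).collect();
        ranked.sort_unstable_by(|a, b| b.0.cmp(&a.0));
        ranked.into_iter().skip(REDUNDANCY).map(|(_, node)| node).collect()
    }
}
```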
The commit tracks the number of times duplicates of a CRDS value are
received from gossip push. Following commits will use this metric to score
gossip nodes on the timeliness of their push messages, in order to better
pick which nodes to prune.
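A minimal sketch of such tracking, with a placeholder hash type (not the
actual crds table):

```rust
use std::collections::HashMap;

// value hash -> how many duplicates of that value were received via push.
#[derive(Default)]
struct DupCounter {
    num_dups: HashMap<[u8; 32], usize>,
}

impl DupCounter {
    // Returns how many duplicates had already been received before this one;
    // later commits use this to score the timeliness of push messages.
    fn record(&mut self, value_hash: [u8; 32]) -> usize {
        let count = self.num_dups.entry(value_hash).or_default();
        let seen = *count;
        *count += 1;
        seen
    }
}
```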