Fix a number of typos (#34385)

* Update vote-accounts.md

* Update what-is-a-validator.md

* Update what-is-a-validator.md

* Update accounts-db-replication.md

* Update blockstore-rocksdb-compaction.md

* Update rip-curl.md

* Update ledger-replication-to-implement.md

* Update optimistic_confirmation.md

* Update return-data.md

* Update handle-duplicate-block.md

* Update timely-vote-credits.md

* Update optimistic-transaction-propagation-signal.md

* Update simple-payment-and-state-verification.md

* Update off-chain-message-signing.md

* Update mod.rs

* Update elgamal.rs

* Update ledger.md

* Update deploy-a-program.md

* Update staking-rewards.md

* Update reliable-vote-transmission.md

* Update repair-service.md

* Update abi-management.md

* Update testing-programs.md

* Update docs/src/implemented-proposals/staking-rewards.md

Co-authored-by: Tyera <teulberg@gmail.com>

---------

Co-authored-by: Tyera <teulberg@gmail.com>
pandabadger 2023-12-12 20:27:29 +00:00 committed by GitHub
parent 05dae592f4
commit 549c3e7813
22 changed files with 43 additions and 43 deletions

deploy-a-program.md

@@ -279,7 +279,7 @@ $ sha256sum extended.so dump.so
 Instead of deploying directly to the program account, the program can be written
 to an intermediary buffer account. Intermediary accounts can be useful for
-things like multi-entity governed programs where the governing members fist
+things like multi-entity governed programs where the governing members first
 verify the intermediary buffer contents and then vote to allow an upgrade using
 it.

ledger.md

@@ -94,7 +94,7 @@ solana balance 7cvkjYAkUYs4W8XcXsca7cBrEGFeSUjeZmKoNBvEwyri
 You can also view the balance of any account address on the Accounts tab in the
 [Explorer](https://explorer.solana.com/accounts) and paste the address in the
-box to view the balance in you web browser.
+box to view the balance in your web browser.
 Note: Any address with a balance of 0 SOL, such as a newly created one on your
 Ledger, will show as "Not Found" in the explorer. Empty accounts and

abi-management.md

@@ -130,7 +130,7 @@ name suggests, there is no need to implement `AbiEnumVisitor` for other types.
 To summarize this interplay, `serde` handles the recursive serialization control
 flow in tandem with `AbiDigester`. The initial entry point in tests and child
 `AbiDigester`s use `AbiExample` recursively to create an example object
-hierarchal graph. And `AbiDigester` uses `AbiEnumVisitor` to inquiry the actual
+hierarchical graph. And `AbiDigester` uses `AbiEnumVisitor` to inquiry the actual
 ABI information using the constructed sample.
 `Default` isn't enough for `AbiExample`. Various collection's `::default()` is
@@ -142,7 +142,7 @@ On the other hand, ABI digesting can't be done only with `AbiExample`, either.
 `AbiEnumVisitor` is required because all variants of an `enum` cannot be
 traversed just with a single variant of it as a ABI example.
-Digestable information:
+Digestible information:
 - rust's type name
 - `serde`'s data type name
@@ -152,7 +152,7 @@ Digestable information:
 - `enum`: normal variants and `struct`- and `tuple`- styles.
 - attributes: `serde(serialize_with=...)` and `serde(skip)`
-Not digestable information:
+Not digestible information:
 - Any custom serialize code path not touched by the sample provided by
 `AbiExample`. (technically not possible)

reliable-vote-transmission.md

@@ -8,7 +8,7 @@ Validator votes are messages that have a critical function for consensus and con
 1. Leader rotation is triggered by PoH, which is clock with high drift. So many nodes are likely to have an incorrect view if the next leader is active in realtime or not.
 2. The next leader may be easily be flooded. Thus a DDOS would not only prevent delivery of regular transactions, but also consensus messages.
-3. UDP is unreliable, and our asynchronous protocol requires any message that is transmitted to be retransmitted until it is observed in the ledger. Retransmittion could potentially cause an unintentional _thundering herd_ against the leader with a large number of validators. Worst case flood would be `(num_nodes * num_retransmits)`.
+3. UDP is unreliable, and our asynchronous protocol requires any message that is transmitted to be retransmitted until it is observed in the ledger. Retransmission could potentially cause an unintentional _thundering herd_ against the leader with a large number of validators. Worst case flood would be `(num_nodes * num_retransmits)`.
 4. Tracking if the vote has been transmitted or not via the ledger does not guarantee it will appear in a confirmed block. The current observed block may be unrolled. Validators would need to maintain state for each vote and fork.
 ## Design

repair-service.md

@@ -54,7 +54,7 @@ The different protocol strategies to address the above challenges:
 Blockstore tracks the latest root slot. RepairService will then periodically
 iterate every fork in blockstore starting from the root slot, sending repair
 requests to validators for any missing shreds. It will send at most some `N`
-repair reqeusts per iteration. Shred repair should prioritize repairing
+repair requests per iteration. Shred repair should prioritize repairing
 forks based on the leader's fork weight. Validators should only send repair
 requests to validators who have marked that slot as completed in their
 EpochSlots. Validators should prioritize repairing shreds in each slot
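
Schematically, one iteration of the loop this hunk describes might look like the Rust sketch below. All types and helper names here are invented placeholders for illustration, not actual Blockstore APIs.

```rust
// Invented placeholder types; not actual Blockstore APIs.
type Slot = u64;

struct RepairRequest {
    slot: Slot,
    shred_index: u64,
}

// One iteration: walk forks from the root in leader-fork-weight order and
// request missing shreds, capped at `n` requests per iteration.
fn repair_iteration(
    forks_by_weight: &[Slot],
    missing_shreds: impl Fn(Slot) -> Vec<u64>,
    n: usize,
) -> Vec<RepairRequest> {
    let mut requests = Vec::new();
    for &slot in forks_by_weight {
        for shred_index in missing_shreds(slot) {
            if requests.len() == n {
                return requests; // hit the per-iteration cap `N`
            }
            requests.push(RepairRequest { slot, shred_index });
        }
    }
    requests
}
```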

docs/src/implemented-proposals/staking-rewards.md

@@ -30,4 +30,4 @@ Solana's trustless sense of time and ordering provided by its PoH data structure
 As discussed in the [Economic Design](ed_overview/ed_overview.md) section, annual validator interest rates are to be specified as a function of total percentage of circulating supply that has been staked. The cluster rewards validators who are online and actively participating in the validation process throughout the entirety of their _validation period_. For validators that go offline/fail to validate transactions during this period, their annual reward is effectively reduced.
-Similarly, we may consider an algorithmic reduction in a validator's active amount staked amount in the case that they are offline. I.e. if a validator is inactive for some amount of time, either due to a partition or otherwise, the amount of their stake that is considered active \(eligible to earn rewards\) may be reduced. This design would be structured to help long-lived partitions to eventually reach finality on their respective chains as the % of non-voting total stake is reduced over time until a supermajority can be achieved by the active validators in each partition. Similarly, upon re-engaging, the active amount staked will come back online at some defined rate. Different rates of stake reduction may be considered depending on the size of the partition/active set.
+Similarly, we may consider an algorithmic reduction in a validator's active staked amount in the case that they are offline. I.e. if a validator is inactive for some amount of time, either due to a partition or otherwise, the amount of their stake that is considered active \(eligible to earn rewards\) may be reduced. This design would be structured to help long-lived partitions to eventually reach finality on their respective chains as the % of non-voting total stake is reduced over time until a supermajority can be achieved by the active validators in each partition. Similarly, upon re-engaging, the active amount staked will come back online at some defined rate. Different rates of stake reduction may be considered depending on the size of the partition/active set.

testing-programs.md

@@ -32,7 +32,7 @@ trait SyncClient {
 }
 ```
-Users send transactions and asynchrounously and synchrounously await results.
+Users send transactions and asynchronously and synchronously await results.
 ### ThinClient for Clusters

vote-accounts.md

@@ -165,7 +165,7 @@ Rotating the vote account authority keys requires special handling when dealing
 with a live validator.
 Note that vote account key rotation has no effect on the stake accounts that
-have been delegate to the vote account. For example it is possible to use key
+have been delegated to the vote account. For example it is possible to use key
 rotation to transfer all authority of a vote account from one entity to another
 without any impact to staking rewards.

accounts-db-replication.md

@@ -88,7 +88,7 @@ During replication we also need to replicate the information of accounts that ha
 up due to zero lamports, i.e. we need to be able to tell the difference between an account in a
 given slot which was not updated and hence has no storage entry in that slot, and one that
 holds 0 lamports and has been cleaned up through the history. We may record this via some
-"Tombstone" mechanism -- recording the dead accounts cleaned up fora slot. The tombstones
+"Tombstone" mechanism -- recording the dead accounts cleaned up for a slot. The tombstones
 themselves can be removed after exceeding the retention period expressed as epochs. Any
 attempt to replicate slots with tombstones removed will fail and the replica should skip
 this slot and try later ones.
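
One possible shape for such a tombstone record, sketched in Rust purely for illustration; the type and field names below are ours, not an existing Solana type.

```rust
// Invented shape for a tombstone record; not an existing Solana type.
struct Tombstone {
    pubkey: [u8; 32],     // the account that was cleaned up
    cleaned_in_slot: u64, // slot whose replication must observe the deletion
}

// Tombstones may be purged once they exceed the retention period in epochs;
// a replica finding them missing must skip that slot and try later ones.
fn tombstone_expired(tombstone_epoch: u64, current_epoch: u64, retention_epochs: u64) -> bool {
    current_epoch.saturating_sub(tombstone_epoch) > retention_epochs
}
```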

blockstore-rocksdb-compaction.md

@@ -109,7 +109,7 @@ close to 1 read amplification. As each key is only inserted once, we have
 space amplification 1.
 ### Use Current Settings for Metadata Column Families
-The second type of the column families related to shred insertion is medadata
+The second type of the column families related to shred insertion is metadata
 column families. These metadata column families contributes ~1% of the shred
 insertion data in size. The largest metadata column family here is the Index
 column family, which occupies 0.8% of the shred insertion data.
@@ -160,7 +160,7 @@ in Solana's BlockStore use case:
 Here we discuss Level to FIFO and FIFO to Level migrations:
 ### Level to FIFO
-heoretically, FIFO compaction is the superset of all other compaction styles,
+Theoretically, FIFO compaction is the superset of all other compaction styles,
 as it does not have any assumption of the LSM tree structure. However, the
 current RocksDB implementation does not offer such flexibility while it is
 theoretically doable.
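
For concreteness, enabling FIFO compaction through the `rocksdb` Rust crate looks roughly like the minimal sketch below; any FIFO tuning beyond the compaction style itself is deliberately left out.

```rust
use rocksdb::{DBCompactionStyle, Options, DB};

// Open a database with FIFO compaction: the oldest SST files are evicted once
// the configured size cap is reached, which suits insert-once shred data.
fn open_fifo_db(path: &str) -> Result<DB, rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    opts.set_compaction_style(DBCompactionStyle::Fifo);
    DB::open(&opts, path)
}
```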

handle-duplicate-block.md

@@ -25,7 +25,7 @@ Before a duplicate slot `S` is `duplicate_confirmed`, it's first excluded from t
 Some notes about the `DUPLICATE_THRESHOLD`. In the cases below, assume `DUPLICATE_THRESHOLD = 52`:
 a) If less than `2 * DUPLICATE_THRESHOLD - 1` percentage of the network is malicious, then there can only be one such `duplicate_confirmed` version of the slot. With `DUPLICATE_THRESHOLD = 52`, this is
-a malcious tolerance of `4%`
+a malicious tolerance of `4%`
 b) The liveness of the network is at most `1 - DUPLICATE_THRESHOLD - SWITCH_THRESHOLD`. This is because if you need at least `SWITCH_THRESHOLD` percentage of the stake voting on a different fork in order to switch off of a duplicate fork that has `< DUPLICATE_THRESHOLD` stake voting on it, and is *not* `duplicate_confirmed`. For `DUPLICATE_THRESHOLD = 52` and `DUPLICATE_THRESHOLD = 38`, this implies a liveness tolerance of `10%`.
@@ -38,7 +38,7 @@ For example in the situation below, validators that voted on `2` can't vote any
 ```
-3. Switching proofs need to be extended to allow including vote hashes from different versions of the same same slot (detected through 1). Right now this is not supported since switching proofs can
+3. Switching proofs need to be extended to allow including vote hashes from different versions of the same slot (detected through 1). Right now this is not supported since switching proofs can
 only be built using votes from banks in BankForks, and two different versions of the same slot cannot
 simultaneously exist in BankForks. For instance:
@@ -73,7 +73,7 @@ This problem we need to solve is modeled simply by the below scenario:
 ```
 Assume the following:
-1. Due to gossiping duplciate proofs, we assume everyone will eventually see duplicate proofs for 2 and 4, so everyone agrees to remove them from fork choice until they are `duplicate_confirmed`.
+1. Due to gossiping duplicate proofs, we assume everyone will eventually see duplicate proofs for 2 and 4, so everyone agrees to remove them from fork choice until they are `duplicate_confirmed`.
 2. Due to lockouts, `> DUPLICATE_THRESHOLD` of the stake votes on 4, but not 2. This means at least `DUPLICATE_THRESHOLD` of people have the "correct" version of both slots 2 and 4.
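
The tolerances quoted in this hunk can be checked with back-of-envelope arithmetic; the Rust snippet below mirrors the prose, with both thresholds written as whole percentages of total stake (`SWITCH_THRESHOLD = 38` for the liveness figure).

```rust
// Back-of-envelope check of the quoted tolerances; constants mirror the
// prose above, not real source values.
fn main() {
    let duplicate_threshold: i32 = 52; // percent of stake to duplicate_confirm
    let switch_threshold: i32 = 38;    // percent of stake needed to switch forks

    // Two conflicting duplicate_confirmed versions each need >= 52%, so their
    // voter sets overlap in at least 2*52 - 100 = 4% of stake; that overlap
    // voted for both versions and is therefore malicious.
    assert_eq!(2 * duplicate_threshold - 100, 4);

    // Liveness tolerance: 100 - 52 - 38 = 10% of stake.
    assert_eq!(100 - duplicate_threshold - switch_threshold, 10);
}
```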

ledger-replication-to-implement.md

@@ -219,7 +219,7 @@ For each turn of the PoRep game, both Validators and Archivers evaluate each sta
 For any random seed, we force everyone to use a signature that is derived from a PoH hash at the turn boundary. Everyone uses the same count, so the same PoH hash is signed by every participant. The signatures are then each cryptographically tied to the keypair, which prevents a leader from grinding on the resulting value for more than 1 identity.
-Since there are many more client identities then encryption identities, we need to split the reward for multiple clients, and prevent Sybil attacks from generating many clients to acquire the same block of data. To remain BFT we want to avoid a single human entity from storing all the replications of a single chunk of the ledger.
+Since there are many more client identities than encryption identities, we need to split the reward for multiple clients, and prevent Sybil attacks from generating many clients to acquire the same block of data. To remain BFT we want to avoid a single human entity from storing all the replications of a single chunk of the ledger.
 Our solution to this is to force the clients to continue using the same identity. If the first round is used to acquire the same block for many client identities, the second round for the same client identities will force a redistribution of the signatures, and therefore PoRep identities and blocks. Thus to get a reward for archivers need to store the first block for free and the network can reward long lived client identities more than new ones.

off-chain-message-signing.md

@@ -64,7 +64,7 @@ This may be any arbitrary bytes. For instance the on-chain address of a program,
 DAO instance, Candy Machine, etc.
 This field **SHOULD** be displayed to users as a base58-encoded ASCII string rather
-than interpretted otherwise.
+than interpreted otherwise.
 #### Message Format
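
Rendering such a field for display with the `bs58` crate might look like the sketch below; the function name is ours.

```rust
// Display the application domain as base58 text rather than raw bytes.
fn display_application_domain(domain: &[u8; 32]) -> String {
    bs58::encode(domain).into_string()
}
```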

optimistic-transaction-propagation-signal.md

@@ -13,7 +13,7 @@ concatenating (1), (2), and (3)
 deduplicating this list of entries by pubkey favoring entries with contact info
 filtering this list by entries with contact info
-This list is then is randomly shuffled by stake weight.
+This list is then randomly shuffled by stake weight.
 Shreds are then retransmitted to up to FANOUT neighbors and up to FANOUT
 children.
@@ -37,7 +37,7 @@ First, only epoch staked nodes will be considered regardless of presence of
 contact info (and possibly including the validator node itself).
 A deterministic ordering of the epoch staked nodes will be created based on the
-derministic shred seed using weighted_shuffle.
+deterministic shred seed using weighted_shuffle.
 Let `neighbor_set` be selected from up to FANOUT neighbors of the current node.
 Let `child_set` be selected from up to FANOUT children of the current node.
@@ -73,7 +73,7 @@ distribution levels.
 distribution levels because of lack of contact info.
 - Current node was part of original epoch staked shuffle from retransmitter
 but was filtered out because of missing contact info. Current node subsequently
-receives retransmisison of shred and assumes that the retransmit was a result
+receives retransmission of shred and assumes that the retransmit was a result
 of the deterministic tree calculation and not from subsequent random selection.
 This should be benign because the current node will underestimate prior stake
 weight in the retransmission tree.
@@ -105,5 +105,5 @@ Practically, signals should fall into the following buckets:
 1.2. can signal layer 1 + subset of layer 2 when retransmit is sent
 3. layer 2
 3.1. can signal layer 2 when shred is received
-3.2. can signal layer 2 + subset of layer 3 when retrnasmit is sent
+3.2. can signal layer 2 + subset of layer 3 when retransmit is sent
 4. current node not a member of epoch staked nodes, no signal can be sent
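
The key property this file relies on is that a weighted shuffle seeded by the shred seed is deterministic: every node derives the same ordering. The toy Rust sketch below illustrates that property; the RNG and sampling are invented stand-ins for the validator's actual weighted_shuffle, not its real implementation.

```rust
// Tiny xorshift RNG standing in for the real seeded generator.
fn xorshift64(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

// `nodes` pairs a node id with its (positive) epoch stake. The same seed
// always yields the same ordering; higher stake tends to land earlier.
fn weighted_shuffle(seed: u64, mut nodes: Vec<(u64, u64)>) -> Vec<u64> {
    let mut rng = seed.max(1); // xorshift state must be non-zero
    let mut ordered = Vec::with_capacity(nodes.len());
    while !nodes.is_empty() {
        let total: u64 = nodes.iter().map(|&(_, stake)| stake).sum();
        let mut pick = xorshift64(&mut rng) % total;
        let idx = nodes
            .iter()
            .position(|&(_, stake)| {
                if pick < stake {
                    true
                } else {
                    pick -= stake;
                    false
                }
            })
            .expect("pick < total guarantees a match");
        ordered.push(nodes.remove(idx).0);
    }
    ordered
}
```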

optimistic_confirmation.md

@@ -86,7 +86,7 @@ the votes must satisfy:
 - `X <= S.last`, `X' <= S'.last`
 - All `s` in `S` are ancestors/descendants of one another,
-all `s'` in `S'` are ancsestors/descendants of one another,
+all `s'` in `S'` are ancestors/descendants of one another,
 -
 - `X == X'` implies `S` is parent of `S'` or `S'` is a parent of `S`
 - `X' > X` implies `X' > S.last` and `S'.last > S.last`
@@ -312,7 +312,7 @@ true that `B' > X`
 ```
 `Proof`: Let `Vote(X, S)` be a vote in the `Optimistic Votes` set. Then by
-definition, given the "optimistcally confirmed" block `B`, `X <= B <= S.last`.
+definition, given the "optimistically confirmed" block `B`, `X <= B <= S.last`.
 Because `X` is a parent of `B`, and `B'` is not a parent or ancestor of `B`,
 then:
@@ -322,7 +322,7 @@ then:
 Now consider if `B'` < `X`:
-`Case B' < X`: We wll show this is a violation of lockouts.
+`Case B' < X`: We will show this is a violation of lockouts.
 From above, we know `B'` is not a parent of `X`. Then because `B'` was rooted,
 and `B'` is not a parent of `X`, then the validator should not have been able
 to vote on the higher slot `X` that does not descend from `B'`.
@@ -361,7 +361,7 @@ By `Lemma 2` we know `B' > X`, and from above `S_v.last > B'`, so then
 From above, `S.last >= B >= X` so for all such "switching votes", `X_v > B`.
 Now ordering all these "switching votes" in time, let `V` to be the validator
-in `Optimistic Validators` that first submitted such a "swtching vote"
+in `Optimistic Validators` that first submitted such a "switching vote"
 `Vote(X', S')`, where `X' > B`. We know that such a validator exists because
 we know from above that all delinquent validators must have submitted such
 a vote, and the delinquent validators are a subset of the

return-data.md

@@ -136,7 +136,7 @@ strings in the [stable log](https://github.com/solana-labs/solana/blob/952928419
 Solidity on Ethereum allows the contract to return an error in the return data. In this case, all
 the account data changes for the account should be reverted. On Solana, any non-zero exit code
-for a SBF prorgram means the entire transaction fails. We do not wish to support an error return
+for a SBF program means the entire transaction fails. We do not wish to support an error return
 by returning success and then returning an error in the return data. This would mean we would have
 to support reverting the account data changes; this too expensive both on the VM side and the SBF
 contract side.

rip-curl.md

@@ -39,7 +39,7 @@ Easier for validators to support:
 has no significant resource constraints.
 - Transaction status is never stored in memory and cannot be polled for.
 - Signatures are only stored in memory until the desired commitment level or
-until the blockhash expires, which ever is later.
+until the blockhash expires, whichever is later.
 How it works:
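
The retention rule in this hunk reduces to taking the later of the two events; as a trivial sketch with invented names:

```rust
// A signature is held in memory until the later of: the desired commitment
// level being reached, or the blockhash expiring.
fn retain_signature_until(commitment_reached_slot: u64, blockhash_expiry_slot: u64) -> u64 {
    commitment_reached_slot.max(blockhash_expiry_slot)
}
```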

simple-payment-and-state-verification.md

@@ -90,7 +90,7 @@ code, but a single status bit to indicate the transaction's success.
 Currently, the Block-Merkle is not implemented, so to verify `E` was an entry
 in the block with bank hash `B`, we would need to provide all the entry hashes
-in the block. Ideally this Block-Merkle would be implmented, as the alternative
+in the block. Ideally this Block-Merkle would be implemented, as the alternative
 is very inefficient.
 #### Block Headers
@@ -138,7 +138,7 @@ https://github.com/solana-labs/solana/blob/b6bfed64cb159ee67bb6bdbaefc7f833bbed3
 Each vote is a signed transaction that includes the bank hash of the block the
 validator voted for, i.e. the `B` from the `Transaction Merkle` section above.
 Once a certain threshold `T` of the network has voted on a block, the block is
-considered optimistially confirmed. The votes made by this group of `T`
+considered optimistically confirmed. The votes made by this group of `T`
 validators is needed to show the block with bank hash `B` was optimistically
 confirmed.
@@ -150,11 +150,11 @@ vote, and vote account pubkey responsible for the vote.
 Together, the transaction merkle and optimistic confirmation proofs can be
 provided over RPC to subscribers by extending the existing signature
-subscrption logic. Clients who subscribe to the "Confirmed" confirmation
+subscription logic. Clients who subscribe to the "Confirmed" confirmation
 level are already notified when optimistic confirmation is detected, a flag
 can be provided to signal the two proofs above should also be returned.
-It is important to note that optimistcally confirming `B` also implies that all
+It is important to note that optimistically confirming `B` also implies that all
 ancestor blocks of `B` are also optimistically confirmed, and also that not
 all blocks will be optimistically confirmed.
@@ -164,7 +164,7 @@ B -> B'
 ```
-So in the example above if a block `B'` is optimisically confirmed, then so is
+So in the example above if a block `B'` is optimistically confirmed, then so is
 `B`. Thus if a transaction was in block `B`, the transaction merkle in the
 proof will be for block `B`, but the votes presented in the proof will be for
 block `B'`. This is why the headers in the `Block headers` section above are
@@ -174,10 +174,10 @@ important, the client will need to verify that `B` is indeed an ancestor of
 #### Proof of Stake Distribution
 Once presented with the transaction merkle and optimistic confirmation proofs
-above, a client can verify a transaction `T` was optimistially confirmed in a
+above, a client can verify a transaction `T` was optimistically confirmed in a
 block with bank hash `B`. The last missing piece is how to verify that the
 votes in the optimistic proofs above actually constitute the valid `T`
-percentage of the stake necessay to uphold the safety guarantees of
+percentage of the stake necessary to uphold the safety guarantees of
 "optimistic confirmation".
 One way to approach this might be for every epoch, when the stake set changes,
@@ -191,7 +191,7 @@ block `B` was optimistically confirmed/rooted.
 An account's state (balance or other data) can be verified by submitting a
 transaction with a **_TBD_** Instruction to the cluster. The client can then
 use a [Transaction Inclusion Proof](#transaction-inclusion-proof) to verify
-whether the cluster agrees that the acount has reached the expected state.
+whether the cluster agrees that the account has reached the expected state.
 ### Validator Votes
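
The merkle inclusion check at the heart of this file is standard: fold the leaf hash with each sibling along the path and compare against the root. The toy Rust sketch below shows the idea, using std's hasher rather than Solana's actual SHA-256 hashing, so it is illustrative only.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy hash standing in for the real entry/transaction hashing.
fn h(bytes: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    bytes.hash(&mut s);
    s.finish()
}

// Each proof element is (sibling hash, whether the sibling sits on the left).
fn verify_inclusion(leaf: &[u8], proof: &[(u64, bool)], root: u64) -> bool {
    let mut acc = h(leaf);
    for (sibling, is_left_sibling) in proof {
        let pair = if *is_left_sibling {
            [sibling.to_le_bytes(), acc.to_le_bytes()].concat()
        } else {
            [acc.to_le_bytes(), sibling.to_le_bytes()].concat()
        };
        acc = h(&pair);
    }
    acc == root
}
```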

timely-vote-credits.md

@@ -47,7 +47,7 @@ transmitted immediately and landed in an earlier slot.
 If landing a vote with 1 slot latency awarded more credit than landing that
 same vote in 2 slots latency, then validators who could land votes
-consistently wihthin 1 slot would have a credits earning advantage over those
+consistently within 1 slot would have a credits earning advantage over those
 who could not. Part of the latency when transmitting votes is unavoidable as
 it's a function of geographical distance between the sender and receiver of
 the vote. The Solana network is spread around the world but it is not evenly
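
To make the trade-off concrete, a latency-graded schedule with a grace window covering unavoidable network latency might look like the sketch below; the constants are invented for illustration and are not the proposal's actual values.

```rust
// Invented schedule: full credit within a grace window, then one credit less
// per additional slot of latency, floored at 1.
fn vote_credits(latency_slots: u64) -> u64 {
    const MAX_CREDITS: u64 = 8; // invented
    const GRACE_SLOTS: u64 = 2; // invented
    MAX_CREDITS
        .saturating_sub(latency_slots.saturating_sub(GRACE_SLOTS))
        .max(1)
}
```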

what-is-a-validator.md

@@ -6,13 +6,13 @@ A validator is a computer that helps to run the Solana network. Each validator e
 The more independent entities that run validators, the less vulnerable the cluster is to an attack or catastrophe that affects the cluster.
-> For an more in depth look at the health of the Solana network, see the [Solana Foundation Validator Health Report](https://solana.com/news/validator-health-report-march-2023).
+> For a more in depth look at the health of the Solana network, see the [Solana Foundation Validator Health Report](https://solana.com/news/validator-health-report-march-2023).
 By becoming a validator, you are helping to grow the network. You are also learning first hand how the Solana cluster functions at the lowest level. You will become part of an active community of operators that are passionate about the Solana ecosystem.
 ## Consensus vs RPC
-Before, we discuss validators in more detail, it's useful to make some distinctions. Using the same validator software, you have the option of running a voting/consensus node or choosing to instead run an RPC node. An RPC node helps Solana devs and others interact with the blockchain but for performance reasons should not vote. We go into more detail on RPC nodes in the next section, [what is an rpc node](./what-is-an-rpc-node.md).
+Before we discuss validators in more detail, it's useful to make some distinctions. Using the same validator software, you have the option of running a voting/consensus node or choosing to instead run an RPC node. An RPC node helps Solana devs and others interact with the blockchain but for performance reasons should not vote. We go into more detail on RPC nodes in the next section, [what is an rpc node](./what-is-an-rpc-node.md).
 For this document, when a validator is mentioned, we are talking about a voting/consensus node. Now, to better understand what your validator is doing, it would help to understand how the Solana network functions in more depth.
@@ -36,4 +36,4 @@ Understanding how PoH works is not necessary to run a good validator, but a very
 As a validator, you are helping to secure the network by producing and voting on blocks and to improve decentralization by running an independent node. You have the right to participate in discussions of changes on the network. You are also assuming a responsibility to keep your system running properly, to make sure your system is secure, and to keep it up to date with the latest software. As more individuals stake their tokens to your validator, you can reward their trust by running a high performing and reliable validator. Hopefully, your validator is performing well a majority of the time, but you should also have systems in place to respond to an outage at any time of the day. If your validator is not responding late at night, someone (either you or other team members) need to be available to investigate and fix the issues.
-Running a validator is a [technical and important task](./operations/prerequisites.md), but it can also be very rewarding. Good luck and welcome to the community.
+Running a validator is a [technical and important task](./operations/prerequisites.md), but it can also be very rewarding. Good luck and welcome to the community.

elgamal.rs

@@ -164,7 +164,7 @@ impl ElGamal {
 }
 /// On input a secret key and a ciphertext, the function returns the decrypted amount
-/// interpretted as a positive 32-bit number (but still of type `u64`).
+/// interpreted as a positive 32-bit number (but still of type `u64`).
 ///
 /// If the originally encrypted amount is not a positive 32-bit number, then the function
 /// returns `None`.
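
The contract in that doc comment can be illustrated with a small sketch; `recovered` stands in for the discrete-log recovery result, and the names here are ours, not the actual elgamal.rs internals.

```rust
// A decrypted value is only meaningful if it fits in 32 bits, even though it
// is carried as a `u64`; otherwise the decryption reports `None`.
fn decrypt_u32_as_u64(recovered: Option<u64>) -> Option<u64> {
    recovered.filter(|amount| *amount <= u32::MAX as u64)
}
```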

mod.rs

@@ -31,8 +31,8 @@ pub enum Role {
 }
 /// Takes in a 64-bit number `amount` and a bit length `bit_length`. It returns:
-/// - the `bit_length` low bits of `amount` interpretted as u64
-/// - the (64 - `bit_length`) high bits of `amount` interpretted as u64
+/// - the `bit_length` low bits of `amount` interpreted as u64
+/// - the (64 - `bit_length`) high bits of `amount` interpreted as u64
 #[cfg(not(target_os = "solana"))]
 pub fn split_u64(amount: u64, bit_length: usize) -> (u64, u64) {
     if bit_length == 64 {
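
The hunk cuts off mid-function; a minimal standalone version matching the doc comment might read as follows. This is our reconstruction for illustration, not the actual body from mod.rs.

```rust
// Reconstruction of the documented behavior: split `amount` into its low
// `bit_length` bits and the remaining high bits, both returned as u64.
fn split_u64_sketch(amount: u64, bit_length: usize) -> (u64, u64) {
    if bit_length == 64 {
        (amount, 0) // all 64 bits are "low" bits; the high part is empty
    } else {
        let lo = amount & ((1u64 << bit_length) - 1); // low `bit_length` bits
        let hi = amount >> bit_length;                // remaining high bits
        (lo, hi)
    }
}

fn main() {
    // Splitting 0xABCD_1234 at 16 bits: low half 0x1234, high half 0xABCD.
    assert_eq!(split_u64_sketch(0xABCD_1234, 16), (0x1234, 0xABCD));
}
```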