Progress on optimistic confirmation can be tracked here:

https://github.com/solana-labs/solana/projects/52
At the end of May, the mainnet-beta is moving to 1.1, and testnet is
moving to 1.2. With 1.2, testnet will behave as if it has optimistic
finality as long as no more than 4.66% of the validators are acting
maliciously. Applications can assume that 2/3+ votes observed in
gossip confirm a block or that at least 4.66% of the network is violating
the protocol.
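In client terms, that rule is a stake-weighted tally of the votes seen in gossip. A minimal sketch, assuming hypothetical `voted` and `stake` mappings (this is not Solana's RPC API):

```python
# Hypothetical client-side check of the 2/3+ confirmation rule above.
# `voted`: validator id -> True if a vote for the block was seen in gossip.
# `stake`: validator id -> fraction of the epoch stake (sums to 1.0).

def block_confirmed(voted, stake):
    confirming = sum(stake[v] for v, ok in voted.items() if ok)
    # 2/3+ of the stake voting for the block confirms it; for the block to
    # fail anyway, at least 4.66% of the network would have to violate the
    # protocol and be subject to slashing.
    return confirming > 2 / 3
```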
## How does it work?
The general idea is that validators must continue voting following their
last fork, unless the validator can construct a proof that their current
fork may not reach finality. The way validators construct this proof is
by collecting votes for all the forks excluding their own. If the set
of valid votes represents over 1/3+X of the epoch stake weight, there
may not be a way for the validator's current fork to reach 2/3+ finality.
The validator hashes the proof (creates a witness) and submits it with
their vote for the alternative fork. But if 2/3+ votes for the same
block, it is impossible for any of the validators to construct this proof,
and therefore no validator is able to switch forks and this block will
be eventually finalized.
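The switching condition can be sketched as follows. This is a hypothetical illustration of the rule described above, not Solana's actual implementation; the names and data structures are assumptions:

```python
# Hypothetical sketch of the switching proof described in the text.

SWITCH_X = 0.0466  # X = 4.66%, the safety margin from the text

def can_switch_forks(my_fork, votes, stake, epoch_stake):
    """votes: fork id -> set of validator ids voting for that fork;
    stake: validator id -> stake amount; epoch_stake: total epoch stake."""
    # Collect votes for all forks excluding the validator's own fork.
    other_stake = sum(
        stake[v]
        for fork, voters in votes.items()
        if fork != my_fork
        for v in voters
    )
    # If over 1/3 + X of the epoch stake is on other forks, the validator's
    # current fork may never reach 2/3+ finality, so switching is justified.
    return other_stake > (1 / 3 + SWITCH_X) * epoch_stake
```

Note that if 2/3+ of the stake voted for one block, at most 1/3 of the stake remains on all other forks combined, so the threshold can never be met and the block eventually finalizes.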
## Tradeoffs
The safety margin is 1/3+X, where X represents the minimum amount of stake
that will be slashed in case the protocol is violated. The tradeoff is
that liveness is now reduced by 2X in the worst case. If more than 1/3 -
2X of the network is unavailable, the network may stall and will only
resume finalizing blocks after the number of failing nodes recovers below
1/3 - 2X. So far, we haven't observed a large unavailability hit on our
mainnet, Cosmos, or Tezos. For our network, which is primarily composed
of high-availability systems, this seems unlikely. Currently, we have
set the threshold percentage to 4.66%, which means that if 23.68% of
the stake has failed, the network may stop finalizing blocks. A 23.68%
drop in availability seems unlikely: roughly 1:10^12 odds, assuming five
nodes with 4.7% stake each and 0.995 of uptime.
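The odds quoted above follow from treating the five nodes' failures as independent, and the arithmetic can be checked directly:

```python
# Back-of-the-envelope check of the availability estimate in the text.
uptime = 0.995
nodes = 5                 # five validators with ~4.7% stake each
stake_lost = 5 * 4.7      # ~23.5% of stake offline if all five fail

# Probability that all five independent nodes are down simultaneously:
# (1 - 0.995)^5 = 0.005^5, on the order of 1 in 10^12.
p_all_down = (1 - uptime) ** nodes
print(stake_lost, p_all_down)
```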
## Security
The long-term average has been 670,000,000 votes over 12,000,000 slots,
or 55 out of 64 voting validators per slot. This includes missed blocks
due to block producer failures. When a client sees 55/64, or ~86%, of
validators confirming a block, it can expect that ~24%, or `(86 - 66.666..
+ 4.666..)%`, of the network must be slashed for this block to fail full
finalization.
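That margin is just the observed vote share minus the 2/3 finality threshold, plus the X = 4.66% safety margin; as a quick check:

```python
from fractions import Fraction

finality = Fraction(2, 3) * 100    # 66.666..% of stake needed for finality
x = Fraction(466, 100)             # X = 4.66% safety margin

observed = Fraction(55, 64) * 100  # 55 of 64 validators, ~85.94%

# Stake that must be slashed for the observed block to fail finalization.
must_slash = observed - finality + x
print(float(must_slash))           # ~23.93, i.e. roughly 24%
```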
## Why Solana?
## Slashing roadmap
Slashing is a hard problem, and it becomes harder when the goal of
the network is to have the lowest possible latency. The tradeoffs are
especially apparent when optimizing for latency. For example, ideally
validators should cast and propagate their votes before the memory has
been synced to disk, which means that the risk of local state corruption
is much higher.

Fundamentally, our goal for slashing is to slash 100% in cases where
the node is maliciously trying to violate safety rules and 0% during