* Book nits

* nits
Jack May 2019-03-04 14:44:54 -08:00 committed by Greg Fitzgerald
parent 846fdd3b2d
commit 44013855d8
4 changed files with 10 additions and 8 deletions


@@ -73,7 +73,7 @@ cluster sizes change.
 The following diagram shows how two neighborhoods in different layers interact.
 What this diagram doesn't capture is that each neighbor actually receives
-blobs from 1 one validator per neighborhood above it. This means that, to
+blobs from one validator per neighborhood above it. This means that, to
 cripple a neighborhood, enough nodes (erasure codes +1 per neighborhood) from
 the layer above need to fail. Since multiple neighborhoods exist in the upper
 layer and a node will receive blobs from a node in each of those neighborhoods,
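The failure threshold this hunk describes can be sketched numerically. The following is a back-of-the-envelope illustration, not code from the repository; the neighborhood count and erasure-code parameters are made-up example values.

```python
def failures_to_cripple(num_upstream_neighborhoods, erasure_codes_per_neighborhood):
    """A node receives blobs from one validator in each neighborhood of the
    layer above, so making its data unrecoverable requires failing
    (erasure codes + 1) nodes in every one of those upstream neighborhoods."""
    return num_upstream_neighborhoods * (erasure_codes_per_neighborhood + 1)

# Example: 4 upstream neighborhoods, 8 erasure-coded blobs each (illustrative).
print(failures_to_cripple(4, 8))  # -> 36
```

The multiplication is the point of the passage: the more upstream neighborhoods feed a node, the more independent failures an attacker must cause.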


@@ -59,9 +59,11 @@ Validators vote based on a greedy choice to maximize their reward described in
 ### Validator's View
-#### Time Progression The diagram below represents a validator's view of the
-PoH stream with possible forks over time. L1, L2, etc. are leader slot, and
-`E`s represent entries from that leader during that leader's slot. The 'x's
+#### Time Progression
+The diagram below represents a validator's view of the
+PoH stream with possible forks over time. L1, L2, etc. are leader slots, and
+`E`s represent entries from that leader during that leader's slot. The `x`s
 represent ticks only, and time flows downwards in the diagram.


@@ -23,7 +23,7 @@ PoH ledger. Thus the segments stay in the exact same order for every PoRep and
 verification can stream the data and verify all the proofs in a single batch.
 This way we can verify multiple proofs concurrently, each one on its own CUDA
 core. The total space required for verification is `1_ledger_segment +
-2_cbc_blocks * number_of_identities` with core count of equal to
+2_cbc_blocks * number_of_identities` with core count equal to
 `number_of_identities`. We use a 64-byte chacha CBC block size.
## Network
@@ -86,7 +86,7 @@ ledger data, they have to rely on other full nodes (validators) for
 information. Any given validator may or may not be malicious and give incorrect
 information, although there are not any obvious attack vectors that this could
 accomplish besides having the replicator do extra wasted work. For many of the
-operations there are number of options depending on how paranoid a replicator
+operations there are a number of options depending on how paranoid a replicator
 is:
 - (a) replicator can ask a validator
 - (b) replicator can ask multiple validators
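As a sanity check on the space formula in the storage hunk above (`1_ledger_segment + 2_cbc_blocks * number_of_identities`), here is a small sketch. The 64-byte chacha CBC block size comes from the text; the ledger segment size is an assumed illustrative value, not one taken from the implementation.

```python
CBC_BLOCK_BYTES = 64      # chacha CBC block size stated in the text
SEGMENT_BYTES = 1 << 20   # assumed 1 MiB ledger segment (illustrative)

def verification_space_bytes(number_of_identities):
    """Total space: one ledger segment plus two CBC blocks per identity."""
    return SEGMENT_BYTES + 2 * CBC_BLOCK_BYTES * number_of_identities

# Example: 1024 identities -> 1 MiB + 128 KiB of working space.
print(verification_space_bytes(1024))  # -> 1179648
```

Note the segment term is paid once regardless of identity count, which is why the formula scales gently with `number_of_identities`.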


@@ -54,9 +54,9 @@ function](https://eprint.iacr.org/2018/601.pdf) or *VDF*.
 A desirable property of a VDF is that verification time is very fast. Solana's
 approach to verifying its delay function is proportional to the time it took to
 create it. Split over a 4000 core GPU, it is sufficiently fast for Solana's
-needs, but if you asked the authors the paper cited above, they might tell you
+needs, but if you asked the authors of the paper cited above, they might tell you
 ([and have](https://github.com/solana-labs/solana/issues/388)) that Solana's
-approach is algorithmically slow it shouldn't be called a VDF. We argue the
+approach is algorithmically slow and it shouldn't be called a VDF. We argue the
 term VDF should represent the category of verifiable delay functions and not
 just the subset with certain performance characteristics. Until that's
 resolved, Solana will likely continue using the term PoH for its