spec: more fixes

parent 8cca953590
commit 5b368252ac
# P2P Config

Here we describe configuration options around the Peer Exchange.
These can be set using flags or via the `$TMHOME/config/config.toml` file.

## Seed Mode
If we already have enough peers in the address book, we may never need to dial them.

Dial these peers and auto-redial them if the connection fails.
These are intended to be trusted persistent peers that can help
anchor us in the p2p network. The auto-redial uses exponential
backoff and will give up after a day of trying to connect.

**Note:** If `seeds` and `persistent_peers` intersect,
the user will be warned that seeds may auto-close connections
and that the node may not be able to keep the connection persistent.
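As a minimal sketch, these options can be set in the `[p2p]` section of `$TMHOME/config/config.toml`; the node IDs, addresses, and port below are placeholders, not real values:

```toml
[p2p]

# Comma-separated list of seed nodes to contact for peer addresses
seeds = "nodeid1@203.0.113.1:26656"

# Comma-separated list of peers to dial and auto-redial with exponential backoff
persistent_peers = "nodeid2@203.0.113.2:26656,nodeid3@203.0.113.3:26656"
```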
## Private Persistent Peers
# P2P Multiplex Connection

## MConnection

`MConnection` is a multiplex connection that supports multiple independent streams
with distinct quality of service guarantees atop a single TCP connection.
Each stream is known as a `Channel` and each `Channel` has a globally unique *byte id*.
Each `Channel` also has a relative priority that determines the quality of service
of the `Channel` compared to other `Channel`s.
The *byte id* and the relative priorities of each `Channel` are configured upon
initialization of the connection.

The `MConnection` supports three packet types:

- Ping
- Pong
- Msg

### Ping and Pong

The ping and pong messages consist of writing a single byte to the connection: 0x1 and 0x2, respectively.

When we haven't received any messages on an `MConnection` within `pingTimeout`, we send a ping message.
When a ping is received on the `MConnection`, a pong is sent in response only if there are no other messages
to send and the peer has not sent us too many pings (how many is too many?).

If a pong or message is not received in sufficient time after a ping, we disconnect from the peer.
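The ping/pong rule above can be sketched in Go; the packet-type bytes come from the text, while `maxPings` and the function shape are illustrative assumptions, not the actual implementation:

```go
package main

import "fmt"

// Packet-type byte prefixes as described above.
const (
	packetTypePing = 0x01
	packetTypePong = 0x02
	packetTypeMsg  = 0x03
)

// respond sketches the receive-side rule: answer a ping with a pong only when
// nothing else is queued and the peer is not flooding us with pings.
// The flood threshold is an assumed value for illustration only.
func respond(pkt byte, haveQueuedMsgs bool, pingsFromPeer int) (reply byte, ok bool) {
	const maxPings = 3 // assumption, not a spec value
	if pkt == packetTypePing && !haveQueuedMsgs && pingsFromPeer <= maxPings {
		return packetTypePong, true
	}
	return 0, false
}

func main() {
	if r, ok := respond(packetTypePing, false, 1); ok {
		fmt.Printf("reply with 0x%x\n", r)
	}
}
```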

### Msg

Messages in channels are chopped into smaller `msgPacket`s for multiplexing.

```
type msgPacket struct {
	...
}
```

The `msgPacket` is serialized using [go-wire](https://github.com/tendermint/go-wire) and prefixed with 0x3.
The received `Bytes` of a sequential set of packets are appended together
until a packet with `EOF=1` is received, then the complete serialized message
is returned for processing by the `onReceive` function of the corresponding channel.
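A sketch of the reassembly rule, assuming the `msgPacket` fields referenced in the text (`EOF`, `Bytes`) plus an assumed channel-id field:

```go
package main

import "fmt"

// msgPacket mirrors the struct above; only the fields needed for
// reassembly are shown, and ChannelID is an assumed field name.
type msgPacket struct {
	ChannelID byte
	EOF       byte
	Bytes     []byte
}

// reassemble appends packet payloads until a packet with EOF=1 is seen,
// returning the complete serialized message, as described above.
func reassemble(packets []msgPacket) ([]byte, bool) {
	var buf []byte
	for _, p := range packets {
		buf = append(buf, p.Bytes...)
		if p.EOF == 1 {
			return buf, true
		}
	}
	return nil, false // incomplete: no EOF packet received yet
}

func main() {
	pkts := []msgPacket{
		{ChannelID: 0x20, EOF: 0, Bytes: []byte("hel")},
		{ChannelID: 0x20, EOF: 1, Bytes: []byte("lo")},
	}
	msg, ok := reassemble(pkts)
	fmt.Println(string(msg), ok)
}
```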

### Multiplexing

Messages are sent from a single `sendRoutine`, which loops over a select statement and results in the sending
of a ping, a pong, or a batch of data messages. The batch of data messages may include messages from multiple channels.
Message bytes are queued for sending in their respective channel, with each channel holding one unsent message at a time.
Messages are chosen for a batch one at a time from the channel with the lowest ratio of recently sent bytes to channel priority.
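The channel-selection rule can be sketched as follows; the struct fields and the cross-multiplication trick are illustrative, not the actual Tendermint implementation:

```go
package main

import "fmt"

// channel is a minimal sketch of an MConnection channel for illustration.
type channel struct {
	id           byte
	priority     int64
	recentlySent int64 // count of recently sent bytes on this channel
}

// pickChannel returns the index of the channel with the lowest ratio of
// recently sent bytes to priority, mirroring the selection rule above.
func pickChannel(chs []channel) int {
	best := -1
	for i, ch := range chs {
		// compare recentlySent/priority without floating point:
		// a/b < c/d  <=>  a*d < c*b for positive b, d
		if best == -1 || ch.recentlySent*chs[best].priority < chs[best].recentlySent*ch.priority {
			best = i
		}
	}
	return best
}

func main() {
	chs := []channel{
		{id: 0x20, priority: 5, recentlySent: 100},
		{id: 0x21, priority: 10, recentlySent: 100},
	}
	// at equal sent bytes, the higher-priority channel has the lower ratio
	fmt.Println(pickChannel(chs))
}
```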
# Tendermint Peer Discovery

A Tendermint P2P network has different kinds of nodes with different requirements for connectivity compared to other types of networks.
This document describes what kind of nodes Tendermint should enable and how they should work.

## Seeds

Seeds are the first point of contact for a new node.
They return a list of known active peers and disconnect.

Seeds should operate full nodes with the PEX reactor in a "crawler" mode
that continuously explores to validate the availability of peers.

A seed should only respond with some top percentile of the best peers it knows about.
See [reputation](TODO) for details on peer quality.

## New Full Node
dials those peers, and runs the Tendermint protocols with those it successfully connects to.

When the peer catches up to height H, it ensures the block hash matches HASH.
If not, Tendermint will exit, and the user must try again - either they are connected
to bad peers or their social consensus is invalid.
## Restarted Full Node
Validators that know and trust each other can accept incoming connections from one another.

Sentry nodes are guardians of a validator node and provide it access to the rest of the network.
They should be well connected to other full nodes on the network.
Sentry nodes may be dynamic, but should maintain persistent connections to some evolving random subset of each other.
They should always expect to have direct incoming connections from the validator node and its backup(s).
They do not report the validator node's address in the PEX and
they may be more strict about the quality of peers they keep.

Sentry nodes belonging to validators that trust each other may wish to maintain persistent connections via VPN with one another, but only report each other sparingly in the PEX.
|
||||
|
|
|
ie. `peer.PubKey.Address() == <ID>`.

The connection has now been authenticated. All traffic is encrypted.

Note: only the dialer can authenticate the identity of the peer,
but this is what we care about: when we join the network we wish to
ensure we have reached the intended peer (and are not being MITMed).
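A sketch of the dialer-side check; the address derivation here (a SHA-256 prefix) is a stand-in for the real Tendermint crypto address scheme, used purely for illustration:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// addressOf sketches deriving a peer address from a public key.
// The real scheme lives in the Tendermint crypto package; this SHA-256
// prefix is an assumption for illustration only.
func addressOf(pubKey []byte) string {
	h := sha256.Sum256(pubKey)
	return strings.ToUpper(hex.EncodeToString(h[:20]))
}

// authenticateDialed is the dialer-side check described above: the address
// derived from the peer's public key must match the <ID> we meant to dial.
func authenticateDialed(peerPubKey []byte, dialedID string) bool {
	return addressOf(peerPubKey) == strings.ToUpper(dialedID)
}

func main() {
	pk := []byte("example-public-key")
	fmt.Println(authenticateDialed(pk, addressOf(pk)))
}
```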

The Tendermint Version Handshake allows the peers to exchange their NodeInfo:

```golang
type NodeInfo struct {
	PubKey  crypto.PubKey
	Moniker string
	...
}
```

Note that each reactor may handle multiple channels.

Once a peer is added, incoming messages for a given reactor are handled through
that reactor's `Receive` method, and output messages are sent directly by the Reactors
on each peer. A typical reactor maintains per-peer go-routine(s) that handle this.
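The `Receive` flow can be sketched with simplified types; the real `Peer` and reactor interfaces carry more methods and state than shown here:

```go
package main

import "fmt"

// Peer is a simplified sketch of the peer interface described above.
type Peer interface {
	Send(chID byte, msg []byte) bool
}

// EchoReactor is a toy reactor whose Receive simply echoes the
// incoming message back on the same channel.
type EchoReactor struct{}

func (r *EchoReactor) Receive(chID byte, src Peer, msg []byte) {
	src.Send(chID, msg)
}

// fakePeer records sent messages so the flow can be observed.
type fakePeer struct{ sent [][]byte }

func (p *fakePeer) Send(chID byte, msg []byte) bool {
	p.sent = append(p.sent, msg)
	return true
}

func main() {
	p := &fakePeer{}
	r := &EchoReactor{}
	r.Receive(0x30, p, []byte("hello"))
	fmt.Println(len(p.sent))
}
```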
## Blockchain Reactor

* coordinates the pool for syncing
Tendermint full nodes run the Blockchain Reactor as a service to provide blocks
to new nodes. New nodes run the Blockchain Reactor in "fast_sync" mode,
where they actively make requests for more blocks until they sync up.
Once caught up, "fast_sync" mode is disabled and the node switches to
using (and turns on) the Consensus Reactor.

## Message Types
# Consensus Reactor

Consensus Reactor defines a reactor for the consensus service. It contains the ConsensusState service that
manages the state of the Tendermint consensus internal state machine.
When Consensus Reactor is started, it starts the Broadcast Routine, which starts the ConsensusState service.
Furthermore, for each peer that is added to the Consensus Reactor, it creates (and manages) the known peer state
(that is used extensively in gossip routines) and starts the following three routines for the peer p:
Gossip Data Routine, Gossip Votes Routine and QueryMaj23Routine. Finally, Consensus Reactor is responsible
for decoding messages received from a peer and for adequate processing of the message depending on its type and content.
RoundState defines the internal consensus state. It contains height, round, round step,
a proposal and proposal block for the current round, locked round and block (if some block is being locked), set of
received votes and last commit and last validators set.

```golang
type RoundState struct {
	Height int64
	Round  int
	...
}
```

Internally, consensus will run as a state machine with the following states:

- RoundStepNewHeight
- RoundStepNewRound
- RoundStepPropose
- RoundStepProposeWait
- RoundStepPrevote
- RoundStepPrevoteWait
- RoundStepPrecommit
- RoundStepPrecommitWait
- RoundStepCommit
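The steps above can be written as a Go enumeration; the numeric values and `String` helper are illustrative only, not the actual constants:

```go
package main

import "fmt"

// RoundStepType enumerates the consensus state-machine steps listed above.
// The numeric values here are illustrative, not a wire encoding.
type RoundStepType uint8

const (
	RoundStepNewHeight RoundStepType = iota + 1
	RoundStepNewRound
	RoundStepPropose
	RoundStepProposeWait
	RoundStepPrevote
	RoundStepPrevoteWait
	RoundStepPrecommit
	RoundStepPrecommitWait
	RoundStepCommit
)

// String renders the step name in the RoundStepXxx form used above.
func (s RoundStepType) String() string {
	names := [...]string{"NewHeight", "NewRound", "Propose", "ProposeWait",
		"Prevote", "PrevoteWait", "Precommit", "PrecommitWait", "Commit"}
	if s < 1 || int(s) > len(names) {
		return "Unknown"
	}
	return "RoundStep" + names[s-1]
}

func main() {
	fmt.Println(RoundStepPrevote) // RoundStepPrevote
}
```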
## Peer Round State

Peer round state contains the known state of a peer. It is being updated by the Receive routine of
Consensus Reactor and by the gossip routines upon sending a message to the peer.

```golang
type PeerRoundState struct {
	Height int64 // Height peer is at
	Round  int   // Round peer is at, -1 if unknown.
	...
}
```
The entry point of the Consensus reactor is a receive method. When a message is received from a peer p,
normally the peer round state is updated correspondingly, and some messages
are passed for further processing, for example to the ConsensusState service. We now specify the processing of messages
in the receive method of Consensus reactor for each message type. In the following message handlers, `rs` and `prs` denote
`RoundState` and `PeerRoundState`, respectively.

### NewRoundStepMessage handler
## Gossip Data Routine

It is used to send the following messages to the peer: `BlockPartMessage`, `ProposalMessage` and
`ProposalPOLMessage` on the DataChannel. The gossip data routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:

```
1a) if rs.ProposalBlockPartsHeader == prs.ProposalBlockPartsHeader and the peer does not have all the proposal parts then
    ...
```
## Gossip Votes Routine

It is used to send the following message: `VoteMessage` on the VoteChannel.
The gossip votes routine is based on the local RoundState (`rs`)
and the known PeerRoundState (`prs`). The routine repeats forever the logic shown below:

```
1a) if rs.Height == prs.Height then
    ...
```
## QueryMaj23Routine

It is used to send the following message: `VoteSetMaj23Message`. `VoteSetMaj23Message` is sent to indicate that a given
BlockID has seen +2/3 votes. This routine is based on the local RoundState (`rs`) and the known PeerRoundState
(`prs`). The routine repeats forever the logic shown below.

```
1a) if rs.Height == prs.Height then
    ...
```
## Broadcast routine

The Broadcast routine subscribes to an internal event bus to receive new round steps, vote messages and proposal
heartbeat messages, and broadcasts messages to peers upon receiving those events.
It broadcasts `NewRoundStepMessage` or `CommitStepMessage` upon a new round state event. Note that
broadcasting these messages does not depend on the PeerRoundState; they are sent on the StateChannel.
Upon receiving a VoteMessage it broadcasts a `HasVoteMessage` message to its peers on the StateChannel.
`ProposalHeartbeatMessage` is sent the same way on the StateChannel.
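The event-to-message mapping above can be sketched as follows; the event-bus and peer-set types are illustrative stand-ins, while the message names come from the text:

```go
package main

import "fmt"

// event is a toy stand-in for the internal event bus events described above.
type event struct {
	kind string // "newRoundStep", "vote", or "proposalHeartbeat"
}

// peerSet records broadcasts so the mapping can be observed.
type peerSet struct{ msgs []string }

func (ps *peerSet) broadcast(channel, msg string) {
	ps.msgs = append(ps.msgs, channel+":"+msg)
}

// broadcastRoutine maps each event type to the message broadcast on the
// StateChannel, following the rules above.
func broadcastRoutine(events []event, peers *peerSet) {
	for _, ev := range events {
		switch ev.kind {
		case "newRoundStep":
			peers.broadcast("StateChannel", "NewRoundStepMessage")
		case "vote":
			peers.broadcast("StateChannel", "HasVoteMessage")
		case "proposalHeartbeat":
			peers.broadcast("StateChannel", "ProposalHeartbeatMessage")
		}
	}
}

func main() {
	ps := &peerSet{}
	broadcastRoutine([]event{{"vote"}, {"newRoundStep"}}, ps)
	fmt.Println(ps.msgs)
}
```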