Merge pull request #1519 from tendermint/bucky/p2p-cleanup

Cleanup P2P
Ethan Buchman 2018-04-30 15:54:04 -04:00 committed by GitHub
commit e2e2127365
17 changed files with 347 additions and 151 deletions


@ -26,9 +26,22 @@ BUG FIXES:
## 0.19.2 (TBD)
FEATURES:
- [p2p] Allow peers with different Minor versions to connect
- [rpc] `/net_info` includes `n_peers`
IMPROVEMENTS:
- [p2p] Various code comments, cleanup, error types
- [p2p] Change some Error logs to Debug
BUG FIXES:
- [p2p] Fix reconnect to persistent peer when first dial fails
- [p2p] Validate NodeInfo.ListenAddr
- [p2p] Only allow (MaxNumPeers - MaxNumOutboundPeers) inbound peers
- [p2p/pex] Limit max msg size to 64kB
## 0.19.1 (April 27th, 2018)


@ -12,7 +12,7 @@ Seeds should operate full nodes with the PEX reactor in a "crawler" mode
that continuously explores to validate the availability of peers.
Seeds should only respond with some top percentile of the best peers it knows about.
See [the peer-exchange docs](/docs/specification/new-spec/reactors/pex/pex.md) for details on peer quality.
## New Full Node


@ -2,24 +2,23 @@
This document explains how Tendermint Peers are identified and how they connect to one another.
For details on peer discovery, see the [peer exchange (PEX) reactor doc](/docs/specification/new-spec/reactors/pex/pex.md).
## Peer Identity
Tendermint peers are expected to maintain long-term persistent identities in the form of a public key.
Each peer has an ID defined as `peer.ID == peer.PubKey.Address()`, where `Address` uses the scheme defined in go-crypto.
A single peer ID can have multiple IP addresses associated with it, but a node
will only ever connect to one at a time.
When attempting to connect to a peer, we use the PeerURL: `<ID>@<IP>:<PORT>`.
We will attempt to connect to the peer at IP:PORT, and verify,
via authenticated encryption, that it is in possession of the private key
corresponding to `<ID>`. This prevents man-in-the-middle attacks on the peer layer.
If `auth_enc = false`, peers can use an arbitrary ID, but they must always use
one. Authentication can then happen out-of-band of Tendermint, for instance via VPN.
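As a minimal, hypothetical sketch (not part of this diff), here is how such a peer URL could be parsed with the `p2p.NewNetAddressString` helper shown later in this commit; the ID and address below are made up:
```golang
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/p2p"
)

func main() {
	// Hypothetical PeerURL: 40 hex chars of ID (the PubKey.Address()), then IP:PORT.
	peerURL := "3b6a27bccebfb5e8f5b6c1e3d5f5e8f5b6c1e3d5@10.0.2.15:46656"

	// NewNetAddressString checks that the ID is present and well formed,
	// resolves the host if it is not an IP, and returns an ErrNetAddressXxx error otherwise.
	netAddr, err := p2p.NewNetAddressString(peerURL)
	if err != nil {
		fmt.Println("bad peer address:", err)
		return
	}
	fmt.Println(netAddr.ID, netAddr.IP, netAddr.Port)
}
```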
## Connections
@ -84,12 +83,13 @@ The Tendermint Version Handshake allows the peers to exchange their NodeInfo:
```golang
type NodeInfo struct {
  ID         p2p.ID
  ListenAddr string
  Network    string
  Version    string
  Channels   []int8
  Moniker    string
  Other      []string
}
```
@ -98,9 +98,10 @@ The connection is disconnected if:
- `peer.NodeInfo.ID` is not equal `peerConn.ID`
- `peer.NodeInfo.Version` is not formatted as `X.X.X` where X are integers known as Major, Minor, and Revision
- `peer.NodeInfo.Version` Major is not the same as ours
- `peer.NodeInfo.Network` is not the same as ours
- `peer.Channels` does not intersect with our known Channels.
- `peer.NodeInfo.ListenAddr` is malformed or is a DNS host that cannot be
resolved
At this point, if we have not disconnected, the peer is valid.
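For illustration only (not the actual `CompatibleWith` code), a minimal sketch of the version rule above, assuming plain `X.X.X` version strings:
```golang
package main

import (
	"fmt"
	"strings"
)

// compatibleVersions is a simplified stand-in for the check described above:
// both versions must be formatted as X.X.X and share the same Major; Minor may differ.
func compatibleVersions(ours, theirs string) error {
	ourParts := strings.Split(ours, ".")
	theirParts := strings.Split(theirs, ".")
	if len(ourParts) != 3 || len(theirParts) != 3 {
		return fmt.Errorf("version not formatted as X.X.X")
	}
	if ourParts[0] != theirParts[0] {
		return fmt.Errorf("peer is on a different major version: got %s, expected %s", theirParts[0], ourParts[0])
	}
	// Minor and Revision may differ since this change.
	return nil
}

func main() {
	fmt.Println(compatibleVersions("0.19.2", "0.19.1")) // <nil>: minor/revision differ, still compatible
	fmt.Println(compatibleVersions("0.19.2", "1.0.0"))  // error: different major version
}
```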


@ -7,7 +7,8 @@ to good peers and to gossip peers to others.
## Peer Types
Certain peers are special in that they are specified by the user as `persistent`,
which means we auto-redial them if the connection fails, or if we fail to dial
them.
Some peers can be marked as `private`, which means
we will not put them in the address book or gossip them to others.
@ -19,22 +20,37 @@ Peer discovery begins with a list of seeds.
When we have no peers, or have been unable to find enough peers from existing ones,
we dial a randomly selected seed to get a list of peers to dial.
On startup, we will also immediately dial the given list of `persistent_peers`,
and will attempt to maintain persistent connections with them. If the connections die, or we fail to dial,
we will redial every 5s for a few minutes, then switch to an exponential backoff schedule,
and after about a day of trying, stop dialing the peer.
So long as we have less than `MinNumOutboundPeers`, we periodically request additional peers
from each of our own. If sufficient time goes by and we still can't find enough peers,
we try the seeds again.
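A back-of-the-envelope sketch of this schedule, using the `reconnectBackOffAttempts = 10` and `reconnectBackOffBaseSeconds = 3` constants from the `switch.go` hunk below (3^10 seconds is roughly 16 hours); the fixed-interval figures are assumptions for illustration:
```golang
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// Fixed-interval phase: assumed values, only to show the shape of the schedule.
	const regularAttempts = 20
	const regularInterval = 5 * time.Second
	fmt.Println("fixed phase:", regularAttempts*regularInterval)

	// Exponential phase, matching the constants in p2p/switch.go:
	// sleep roughly baseSeconds^attempt before each retry.
	const backOffAttempts = 10
	const backOffBaseSeconds = 3
	var total time.Duration
	for i := 1; i <= backOffAttempts; i++ {
		total += time.Duration(math.Pow(backOffBaseSeconds, float64(i))) * time.Second
	}
	// Sums to roughly a day, matching "after about a day of trying, stop dialing".
	fmt.Println("exponential phase:", total)
}
```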
## Listening
Peers listen on a configurable ListenAddr that they self-report in their
NodeInfo during handshakes with other peers. Peers accept up to (MaxNumPeers -
MinNumOutboundPeers) incoming peers.
## Address Book
Peers are tracked via their ID (their PubKey.Address()).
For each ID, the address book keeps the most recent IP:PORT.
Peers are added to the address book from the PEX when they first connect to us or
when we hear about them from other peers.
The address book is arranged in sets of buckets, and distinguishes between
vetted (old) and unvetted (new) peers. It keeps different sets of buckets for vetted and
unvetted peers. Buckets provide randomization over peer selection. Peers are put
in buckets according to their IP groups.
A vetted peer can only be in one bucket. An unvetted peer can be in multiple buckets, and
each instance of the peer can have a different IP:PORT.
If we're trying to add a new peer but there's no space in its bucket, we'll
remove the worst peer from that bucket to make room.
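A deliberately simplified sketch of the bucketing idea (illustrative only; it does not reproduce the real `calcNewBucket`/`calcOldBucket` hashing in `addrbook.go`): group an address by an IP prefix and hash it together with the source's group, so a single source cannot dominate every bucket:
```golang
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"net"
)

// newBucketCount is an assumed value for illustration; the real counts live in params.go.
const newBucketCount = 256

// ipGroup is a toy stand-in for the "IP group" notion above: the /16 prefix for IPv4.
func ipGroup(ip net.IP) string {
	if v4 := ip.To4(); v4 != nil {
		return fmt.Sprintf("%d.%d", v4[0], v4[1])
	}
	return ip.String()
}

// newBucketIndex sketches how a new-bucket index can depend on both the address
// group and the source group, which is what spreads unvetted peers across buckets.
func newBucketIndex(addr, src net.IP) int {
	h := sha256.Sum256([]byte(ipGroup(addr) + "/" + ipGroup(src)))
	return int(binary.BigEndian.Uint64(h[:8]) % newBucketCount)
}

func main() {
	fmt.Println(newBucketIndex(net.ParseIP("10.0.2.15"), net.ParseIP("203.0.113.7")))
}
```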
## Vetting
@ -68,6 +84,8 @@ Connection attempts are made with exponential backoff (plus jitter). Because
the selection process happens every `ensurePeersPeriod`, we might not end up
dialing a peer for much longer than the backoff duration.
If we fail to connect to the peer after 16 tries (with exponential backoff), we remove it from the address book completely.
## Select Peers to Exchange
When we're asked for peers, we select them as follows:
@ -86,9 +104,9 @@ Note that the bad behaviour may be detected outside the PEX reactor itself
(for instance, in the mconnection, or another reactor), but it must be communicated to the PEX reactor
so it can remove and mark the peer.
In the PEX, if a peer sends us an unsolicited list of peers,
or if the peer sends a request too soon after another one,
we Disconnect and MarkBad.
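A minimal sketch of that rate-limit rule, mirroring the `minReceiveRequestInterval` (one third of `ensurePeersPeriod`, which defaults to 30s) used in the PEX reactor change below; the times here are made up:
```golang
package main

import (
	"fmt"
	"time"
)

const ensurePeersPeriod = 30 * time.Second

// tooSoon reports whether a new PEX request arrived before the minimum
// interval (ensurePeersPeriod / 3) has elapsed since the previous one.
func tooSoon(lastReceived, now time.Time) bool {
	minInterval := ensurePeersPeriod / 3
	return now.Sub(lastReceived) < minInterval
}

func main() {
	last := time.Now()
	fmt.Println(tooSoon(last, last.Add(5*time.Second)))  // true: disconnect and mark bad
	fmt.Println(tooSoon(last, last.Add(15*time.Second))) // false: answer the request
}
```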
## Trust Metric


@ -21,7 +21,6 @@ import (
mempl "github.com/tendermint/tendermint/mempool" mempl "github.com/tendermint/tendermint/mempool"
"github.com/tendermint/tendermint/p2p" "github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/p2p/pex" "github.com/tendermint/tendermint/p2p/pex"
"github.com/tendermint/tendermint/p2p/trust"
"github.com/tendermint/tendermint/proxy" "github.com/tendermint/tendermint/proxy"
rpccore "github.com/tendermint/tendermint/rpc/core" rpccore "github.com/tendermint/tendermint/rpc/core"
ctypes "github.com/tendermint/tendermint/rpc/core/types" ctypes "github.com/tendermint/tendermint/rpc/core/types"
@ -101,7 +100,6 @@ type Node struct {
// network
sw *p2p.Switch // p2p connections
addrBook pex.AddrBook // known peers
// services
eventBus *types.EventBus // pub/sub for services
@ -262,20 +260,25 @@ func NewNode(config *cfg.Config,
sw.AddReactor("EVIDENCE", evidenceReactor) sw.AddReactor("EVIDENCE", evidenceReactor)
// Optionally, start the pex reactor // Optionally, start the pex reactor
//
// TODO:
//
// We need to set Seeds and PersistentPeers on the switch,
// since it needs to be able to use these (and their DNS names)
// even if the PEX is off. We can include the DNS name in the NetAddress,
// but it would still be nice to have a clear list of the current "PersistentPeers"
// somewhere that we can return with net_info.
//
// Let's assume we always have IDs ... and we just dont authenticate them
// if auth_enc=false.
//
// If PEX is on, it should handle dialing the seeds. Otherwise the switch does it.
// Note we currently use the addrBook regardless at least for AddOurAddress
var addrBook pex.AddrBook
addrBook = pex.NewAddrBook(config.P2P.AddrBookFile(), config.P2P.AddrBookStrict)
addrBook.SetLogger(p2pLogger.With("book", config.P2P.AddrBookFile()))
if config.P2P.PexReactor {
// TODO persistent peers ? so we can have their DNS addrs saved
pexReactor := pex.NewPEXReactor(addrBook,
&pex.PEXReactorConfig{
Seeds: cmn.SplitAndTrim(config.P2P.Seeds, ",", " "),
@ -357,7 +360,6 @@ func NewNode(config *cfg.Config,
sw: sw,
addrBook: addrBook,
stateDB: stateDB,
blockStore: blockStore,


@ -18,3 +18,31 @@ type ErrSwitchAuthenticationFailure struct {
func (e ErrSwitchAuthenticationFailure) Error() string {
return fmt.Sprintf("Failed to authenticate peer. Dialed %v, but got peer with ID %s", e.Dialed, e.Got)
}
//-------------------------------------------------------------------
type ErrNetAddressNoID struct {
Addr string
}
func (e ErrNetAddressNoID) Error() string {
return fmt.Sprintf("Address (%s) does not contain ID", e.Addr)
}
type ErrNetAddressInvalid struct {
Addr string
Err error
}
func (e ErrNetAddressInvalid) Error() string {
return fmt.Sprintf("Invalid address (%s): %v", e.Addr, e.Err)
}
type ErrNetAddressLookup struct {
Addr string
Err error
}
func (e ErrNetAddressLookup) Error() string {
return fmt.Sprintf("Error looking up host (%s): %v", e.Addr, e.Err)
}


@ -19,9 +19,14 @@ import (
// NetAddress defines information about a peer on the network
// including its ID, IP address, and port.
type NetAddress struct {
ID ID `json:"id"`
IP net.IP `json:"ip"`
Port uint16 `json:"port"`
// TODO:
// Name string `json:"name"` // optional DNS name
// memoize .String()
str string
}
@ -56,10 +61,11 @@ func NewNetAddress(id ID, addr net.Addr) *NetAddress {
// NewNetAddressString returns a new NetAddress using the provided address in
// the form of "ID@IP:Port".
// Also resolves the host if host is not an IP.
// Errors are of type ErrNetAddressXxx where Xxx is in (NoID, Invalid, Lookup)
func NewNetAddressString(addr string) (*NetAddress, error) {
spl := strings.Split(addr, "@")
if len(spl) < 2 {
return nil, ErrNetAddressNoID{addr}
}
return NewNetAddressStringWithOptionalID(addr)
}
@ -76,11 +82,12 @@ func NewNetAddressStringWithOptionalID(addr string) (*NetAddress, error) {
idStr := spl[0]
idBytes, err := hex.DecodeString(idStr)
if err != nil {
return nil, ErrNetAddressInvalid{addrWithoutProtocol, err}
}
if len(idBytes) != IDByteLength {
return nil, ErrNetAddressInvalid{
addrWithoutProtocol,
fmt.Errorf("invalid hex length - got %d, expected %d", len(idBytes), IDByteLength)}
}
id, addrWithoutProtocol = ID(idStr), spl[1]
@ -88,7 +95,7 @@ func NewNetAddressStringWithOptionalID(addr string) (*NetAddress, error) {
host, portStr, err := net.SplitHostPort(addrWithoutProtocol)
if err != nil {
return nil, ErrNetAddressInvalid{addrWithoutProtocol, err}
}
ip := net.ParseIP(host)
@ -96,7 +103,7 @@ func NewNetAddressStringWithOptionalID(addr string) (*NetAddress, error) {
if len(host) > 0 {
ips, err := net.LookupIP(host)
if err != nil {
return nil, ErrNetAddressLookup{host, err}
}
ip = ips[0]
}
@ -104,7 +111,7 @@ func NewNetAddressStringWithOptionalID(addr string) (*NetAddress, error) {
port, err := strconv.ParseUint(portStr, 10, 16)
if err != nil {
return nil, ErrNetAddressInvalid{portStr, err}
}
na := NewNetAddressIPPort(ip, uint16(port))
@ -120,7 +127,7 @@ func NewNetAddressStrings(addrs []string) ([]*NetAddress, []error) {
for _, addr := range addrs {
netAddr, err := NewNetAddressString(addr)
if err != nil {
errs = append(errs, err)
} else {
netAddrs = append(netAddrs, netAddr)
}


@ -21,6 +21,7 @@ func MaxNodeInfoSize() int {
// between two peers during the Tendermint P2P handshake.
type NodeInfo struct {
// Authenticate
// TODO: replace with NetAddress
ID ID `json:"id"` // authenticated identifier
ListenAddr string `json:"listen_addr"` // accepting incoming
@ -37,7 +38,9 @@ type NodeInfo struct {
// Validate checks the self-reported NodeInfo is safe.
// It returns an error if there
// are too many Channels, if there are any duplicate Channels,
// if the ListenAddr is malformed, or if the ListenAddr is a host name
// that can not be resolved to some IP.
// TODO: constraints for Moniker/Other? Or is that for the UI ?
func (info NodeInfo) Validate() error {
if len(info.Channels) > maxNumChannels {
@ -52,11 +55,14 @@ func (info NodeInfo) Validate() error {
}
channels[ch] = struct{}{}
}
// ensure ListenAddr is good
_, err := NewNetAddressString(IDAddressString(info.ID, info.ListenAddr))
return err
}
// CompatibleWith checks if two NodeInfo are compatible with eachother.
// CONTRACT: two nodes are compatible if the major version matches and network match
// and they have at least one channel in common.
func (info NodeInfo) CompatibleWith(other NodeInfo) error {
iMajor, iMinor, _, iErr := splitVersion(info.Version)
@ -77,9 +83,9 @@ func (info NodeInfo) CompatibleWith(other NodeInfo) error {
return fmt.Errorf("Peer is on a different major version. Got %v, expected %v", oMajor, iMajor) return fmt.Errorf("Peer is on a different major version. Got %v, expected %v", oMajor, iMajor)
} }
// minor version must match // minor version can differ
if iMinor != oMinor { if iMinor != oMinor {
return fmt.Errorf("Peer is on a different minor version. Got %v, expected %v", oMinor, iMinor) // ok
} }
// nodes must be on the same network // nodes must be on the same network
@ -116,8 +122,15 @@ OUTER_LOOP:
func (info NodeInfo) NetAddress() *NetAddress {
netAddr, err := NewNetAddressString(IDAddressString(info.ID, info.ListenAddr))
if err != nil {
switch err.(type) {
case ErrNetAddressLookup:
// XXX If the peer provided a host name and the lookup fails here
// we're out of luck.
// TODO: use a NetAddress in NodeInfo
default:
panic(err) // everything should be well formed by now
}
}
return netAddr
}


@ -7,7 +7,6 @@ package pex
import (
"crypto/sha256"
"encoding/binary"
"math"
"net"
"sync"
@ -169,7 +168,9 @@ func (a *addrBook) OurAddress(addr *p2p.NetAddress) bool {
return ok
}
// AddAddress implements AddrBook
// Add address to a "new" bucket. If it's already in one, only add it probabilistically.
// Returns error if the addr is non-routable. Does not add self.
// NOTE: addr must not be nil
func (a *addrBook) AddAddress(addr *p2p.NetAddress, src *p2p.NetAddress) error {
a.mtx.Lock()
@ -220,7 +221,11 @@ func (a *addrBook) PickAddress(biasTowardsNewAddrs int) *p2p.NetAddress {
a.mtx.Lock()
defer a.mtx.Unlock()
bookSize := a.size()
if bookSize <= 0 {
if bookSize < 0 {
a.Logger.Error("Addrbook size less than 0", "nNew", a.nNew, "nOld", a.nOld)
}
return nil
}
if biasTowardsNewAddrs > 100 {
@ -294,29 +299,35 @@ func (a *addrBook) MarkBad(addr *p2p.NetAddress) {
// GetSelection implements AddrBook.
// It randomly selects some addresses (old & new). Suitable for peer-exchange protocols.
// Must never return a nil address.
func (a *addrBook) GetSelection() []*p2p.NetAddress {
a.mtx.Lock()
defer a.mtx.Unlock()
bookSize := a.size()
if bookSize <= 0 {
if bookSize < 0 {
a.Logger.Error("Addrbook size less than 0", "nNew", a.nNew, "nOld", a.nOld)
}
return nil
}
numAddresses := cmn.MaxInt(
cmn.MinInt(minGetSelection, bookSize),
bookSize*getSelectionPercent/100)
numAddresses = cmn.MinInt(maxGetSelection, numAddresses)
// XXX: instead of making a list of all addresses, shuffling, and slicing a random chunk,
// could we just select a random numAddresses of indexes?
allAddr := make([]*p2p.NetAddress, bookSize)
i := 0
for _, ka := range a.addrLookup {
allAddr[i] = ka.Addr
i++
}
// Fisher-Yates shuffle the array. We only need to do the first
// `numAddresses' since we are throwing the rest.
// XXX: What's the point of this if we already loop randomly through addrLookup ?
for i := 0; i < numAddresses; i++ {
// pick a number between current index and the end
j := cmn.RandIntn(len(allAddr)-i) + i
@ -329,6 +340,7 @@ func (a *addrBook) GetSelection() []*p2p.NetAddress {
// GetSelectionWithBias implements AddrBook.
// It randomly selects some addresses (old & new). Suitable for peer-exchange protocols.
// Must never return a nil address.
//
// Each address is picked randomly from an old or new bucket according to the
// biasTowardsNewAddrs argument, which must be between [0, 100] (or else is truncated to
@ -338,7 +350,11 @@ func (a *addrBook) GetSelectionWithBias(biasTowardsNewAddrs int) []*p2p.NetAddre
a.mtx.Lock()
defer a.mtx.Unlock()
bookSize := a.size()
if bookSize <= 0 {
if bookSize < 0 {
a.Logger.Error("Addrbook size less than 0", "nNew", a.nNew, "nOld", a.nOld)
}
return nil
}
@ -350,8 +366,8 @@ func (a *addrBook) GetSelectionWithBias(biasTowardsNewAddrs int) []*p2p.NetAddre
}
numAddresses := cmn.MaxInt(
cmn.MinInt(minGetSelection, bookSize),
bookSize*getSelectionPercent/100)
numAddresses = cmn.MinInt(maxGetSelection, numAddresses)
selection := make([]*p2p.NetAddress, numAddresses)
@ -487,11 +503,11 @@ func (a *addrBook) getBucket(bucketType byte, bucketIdx int) map[string]*knownAd
// Adds ka to new bucket. Returns false if it couldn't do it cuz buckets full.
// NOTE: currently it always returns true.
func (a *addrBook) addToNewBucket(ka *knownAddress, bucketIdx int) {
// Sanity check
if ka.isOld() {
a.Logger.Error("Failed Sanity Check! Cant add old address to new bucket", "ka", ka, "bucket", bucketIdx)
return
}
addrStr := ka.Addr.String()
@ -499,7 +515,7 @@ func (a *addrBook) addToNewBucket(ka *knownAddress, bucketIdx int) bool {
// Already exists?
if _, ok := bucket[addrStr]; ok {
return
}
// Enforce max addresses.
@ -517,8 +533,6 @@ func (a *addrBook) addToNewBucket(ka *knownAddress, bucketIdx int) bool {
// Add it to addrLookup
a.addrLookup[ka.ID()] = ka
}
// Adds ka to old bucket. Returns false if it couldn't do it cuz buckets full.
@ -605,19 +619,22 @@ func (a *addrBook) pickOldest(bucketType byte, bucketIdx int) *knownAddress {
// adds the address to a "new" bucket. if its already in one, // adds the address to a "new" bucket. if its already in one,
// it only adds it probabilistically // it only adds it probabilistically
func (a *addrBook) addAddress(addr, src *p2p.NetAddress) error { func (a *addrBook) addAddress(addr, src *p2p.NetAddress) error {
if a.routabilityStrict && !addr.Routable() { if addr == nil || src == nil {
return fmt.Errorf("Cannot add non-routable address %v", addr) return ErrAddrBookNilAddr{addr, src}
} }
if a.routabilityStrict && !addr.Routable() {
return ErrAddrBookNonRoutable{addr}
}
// TODO: we should track ourAddrs by ID and by IP:PORT and refuse both.
if _, ok := a.ourAddrs[addr.String()]; ok { if _, ok := a.ourAddrs[addr.String()]; ok {
// Ignore our own listener address. return ErrAddrBookSelf{addr}
return fmt.Errorf("Cannot add ourselves with address %v", addr)
} }
ka := a.addrLookup[addr.ID] ka := a.addrLookup[addr.ID]
if ka != nil { if ka != nil {
// Already old. // If its already old and the addr is the same, ignore it.
if ka.isOld() { if ka.isOld() && ka.Addr.Equals(addr) {
return nil return nil
} }
// Already in max new buckets. // Already in max new buckets.
@ -634,12 +651,7 @@ func (a *addrBook) addAddress(addr, src *p2p.NetAddress) error {
}
bucket := a.calcNewBucket(addr, src)
a.addToNewBucket(ka, bucket)
return nil
}
@ -674,8 +686,6 @@ func (a *addrBook) moveToOld(ka *knownAddress) {
return
}
// Remove from all (new) buckets.
a.removeFromAllBuckets(ka)
// It's officially old now.
@ -685,20 +695,13 @@ func (a *addrBook) moveToOld(ka *knownAddress) {
oldBucketIdx := a.calcOldBucket(ka.Addr)
added := a.addToOldBucket(ka, oldBucketIdx)
if !added {
// No room; move the oldest to a new bucket
oldest := a.pickOldest(bucketTypeOld, oldBucketIdx)
a.removeFromBucket(oldest, bucketTypeOld, oldBucketIdx)
newBucketIdx := a.calcNewBucket(oldest.Addr, oldest.Src)
a.addToNewBucket(oldest, newBucketIdx)
// Finally, add our ka to old bucket again.
added = a.addToOldBucket(ka, oldBucketIdx)
if !added {
a.Logger.Error(cmn.Fmt("Could not re-add ka %v to oldBucketIdx %v", ka, oldBucketIdx))

p2p/pex/errors.go (new file, 32 lines)

@ -0,0 +1,32 @@
package pex
import (
"fmt"
"github.com/tendermint/tendermint/p2p"
)
type ErrAddrBookNonRoutable struct {
Addr *p2p.NetAddress
}
func (err ErrAddrBookNonRoutable) Error() string {
return fmt.Sprintf("Cannot add non-routable address %v", err.Addr)
}
type ErrAddrBookSelf struct {
Addr *p2p.NetAddress
}
func (err ErrAddrBookSelf) Error() string {
return fmt.Sprintf("Cannot add ourselves with address %v", err.Addr)
}
type ErrAddrBookNilAddr struct {
Addr *p2p.NetAddress
Src *p2p.NetAddress
}
func (err ErrAddrBookNilAddr) Error() string {
return fmt.Sprintf("Cannot add a nil address. Got (addr, src) = (%v, %v)", err.Addr, err.Src)
}

View File

@ -106,7 +106,6 @@ func (ka *knownAddress) removeBucketRef(bucketIdx int) int {
All addresses that meet these criteria are assumed to be worthless and not
worth keeping hold of.
*/
func (ka *knownAddress) isBad() bool {
// Is Old --> good
@ -115,14 +114,15 @@ func (ka *knownAddress) isBad() bool {
}
// Has been attempted in the last minute --> good
if ka.LastAttempt.After(time.Now().Add(-1 * time.Minute)) {
return false
}
// TODO: From the future?
// Too old?
// TODO: should be a timestamp of last seen, not just last attempt
if ka.LastAttempt.Before(time.Now().Add(-1 * numMissingDays * time.Hour * 24)) {
return true
}
@ -132,7 +132,6 @@ func (ka *knownAddress) isBad() bool {
}
// Hasn't succeeded in too long?
if ka.LastSuccess.Before(time.Now().Add(-1*minBadDays*time.Hour*24)) &&
ka.Attempts >= maxFailures {
return true


@ -50,6 +50,6 @@ const (
minGetSelection = 32
// max addresses returned by GetSelection
// NOTE: this must match "maxMsgSize"
maxGetSelection = 250
)


@ -20,11 +20,18 @@ const (
// PexChannel is a channel for PEX messages
PexChannel = byte(0x00)
// over-estimate of max NetAddress size
// hexID (40) + IP (16) + Port (2) + Name (100) ...
// NOTE: dont use massive DNS name ..
maxAddressSize = 256
// NOTE: amplificaiton factor!
// small request results in up to maxMsgSize response
maxMsgSize = maxAddressSize * maxGetSelection
// ensure we have enough peers
defaultEnsurePeersPeriod = 30 * time.Second
defaultMinNumOutboundPeers = p2p.DefaultMinNumOutboundPeers
// Seed/Crawler constants
@ -61,7 +68,7 @@ type PEXReactor struct {
book AddrBook
config *PEXReactorConfig
ensurePeersPeriod time.Duration // TODO: should go in the config
// maps to prevent abuse
requestsSent *cmn.CMap // ID->struct{}: unanswered send requests
@ -70,6 +77,12 @@ type PEXReactor struct {
attemptsToDial sync.Map // address (string) -> {number of attempts (int), last time dialed (time.Time)}
}
func (pexR *PEXReactor) minReceiveRequestInterval() time.Duration {
// NOTE: must be less than ensurePeersPeriod, otherwise we'll request
// peers too quickly from others and they'll think we're bad!
return pexR.ensurePeersPeriod / 3
}
// PEXReactorConfig holds reactor specific configuration data.
type PEXReactorConfig struct {
// Seed/Crawler mode
@ -113,6 +126,7 @@ func (r *PEXReactor) OnStart() error {
}
// return err if user provided a bad seed address
// or a host name that we cant resolve
if err := r.checkSeeds(); err != nil {
return err
}
@ -155,16 +169,30 @@ func (r *PEXReactor) AddPeer(p Peer) {
r.RequestAddrs(p)
}
} else {
// inbound peer is its own source
addr := p.NodeInfo().NetAddress()
src := addr
// ignore private addrs
if isAddrPrivate(addr, r.config.PrivatePeerIDs) {
return
}
// add to book. dont RequestAddrs right away because
// we don't trust inbound as much - let ensurePeersRoutine handle it.
err := r.book.AddAddress(addr, src)
r.logErrAddrBook(err)
}
}
func (r *PEXReactor) logErrAddrBook(err error) {
if err != nil {
switch err.(type) {
case ErrAddrBookNilAddr:
r.Logger.Error("Failed to add new address", "err", err)
default:
// non-routable, self, full book, etc.
r.Logger.Debug("Failed to add new address", "err", err)
}
}
}
@ -195,6 +223,10 @@ func (r *PEXReactor) Receive(chID byte, src Peer, msgBytes []byte) {
}
// Seeds disconnect after sending a batch of addrs
// NOTE: this is a prime candidate for amplification attacks
// so it's important we
// 1) restrict how frequently peers can request
// 2) limit the output size
if r.config.SeedMode {
r.SendAddrs(src, r.book.GetSelectionWithBias(biasToSelectNewPeers))
r.Switch.StopPeerGracefully(src)
@ -213,6 +245,7 @@ func (r *PEXReactor) Receive(chID byte, src Peer, msgBytes []byte) {
}
}
// enforces a minimum amount of time between requests
func (r *PEXReactor) receiveRequest(src Peer) error {
id := string(src.ID())
v := r.lastReceivedRequests.Get(id)
@ -232,8 +265,14 @@ func (r *PEXReactor) receiveRequest(src Peer) error {
}
now := time.Now()
minInterval := r.minReceiveRequestInterval()
if now.Sub(lastReceived) < minInterval {
return fmt.Errorf("Peer (%v) send next PEX request too soon. lastReceived: %v, now: %v, minInterval: %v. Disconnecting",
src.ID(),
lastReceived,
now,
minInterval,
)
}
r.lastReceivedRequests.Set(id, now)
return nil
@ -254,22 +293,30 @@ func (r *PEXReactor) RequestAddrs(p Peer) {
// request for this peer and deletes the open request.
// If there's no open request for the src peer, it returns an error.
func (r *PEXReactor) ReceiveAddrs(addrs []*p2p.NetAddress, src Peer) error {
id := string(src.ID())
if !r.requestsSent.Has(id) {
return cmn.NewError("Received unsolicited pexAddrsMessage")
}
r.requestsSent.Delete(id)
srcAddr := src.NodeInfo().NetAddress()
for _, netAddr := range addrs {
// NOTE: GetSelection methods should never return nil addrs
if netAddr == nil {
return cmn.NewError("received nil addr")
}
// ignore private peers
// TODO: give private peers to AddrBook so it can enforce this on AddAddress.
// We'd then have to check for ErrPrivatePeer on AddAddress here, which is
// an error we just ignore (maybe peer is probing us for our private peers :P)
if isAddrPrivate(netAddr, r.config.PrivatePeerIDs) {
continue
}
err := r.book.AddAddress(netAddr, srcAddr)
r.logErrAddrBook(err)
}
return nil
}
@ -360,6 +407,9 @@ func (r *PEXReactor) ensurePeers() {
if connected := r.Switch.Peers().Has(try.ID); connected {
continue
}
// TODO: consider moving some checks from toDial into here
// so we don't even consider dialing peers that we want to wait
// before dialling again, or have dialed too many times already
r.Logger.Info("Will dial address", "addr", try)
toDial[try.ID] = try
}
@ -387,13 +437,17 @@ func (r *PEXReactor) ensurePeers() {
}
}
func (r *PEXReactor) dialAttemptsInfo(addr *p2p.NetAddress) (attempts int, lastDialed time.Time) {
_attempts, ok := r.attemptsToDial.Load(addr.DialString())
if !ok {
return
}
atd := _attempts.(_attemptsToDial)
return atd.number, atd.lastDialed
}
func (r *PEXReactor) dialPeer(addr *p2p.NetAddress) {
attempts, lastDialed := r.dialAttemptsInfo(addr)
if attempts > maxAttemptsToDial {
r.Logger.Error("Reached max attempts to dial", "addr", addr, "attempts", attempts)
@ -590,7 +644,7 @@ func (r *PEXReactor) attemptDisconnects() {
}
}
// isAddrPrivate returns true if addr.ID is a private ID.
func isAddrPrivate(addr *p2p.NetAddress, privatePeerIDs []string) bool {
for _, id := range privatePeerIDs {
if string(addr.ID) == id {


@ -26,6 +26,10 @@ const (
// ie. 3**10 = 16hrs
reconnectBackOffAttempts = 10
reconnectBackOffBaseSeconds = 3
// keep at least this many outbound peers
// TODO: move to config
DefaultMinNumOutboundPeers = 10
)
//-----------------------------------------------------------------------------
@ -260,6 +264,7 @@ func (sw *Switch) StopPeerForError(peer Peer, reason interface{}) {
sw.stopAndRemovePeer(peer, reason)
if peer.IsPersistent() {
// NOTE: this is the self-reported addr, not the original we dialed
go sw.reconnectToPeer(peer.NodeInfo().NetAddress())
}
}
@ -285,6 +290,8 @@ func (sw *Switch) stopAndRemovePeer(peer Peer, reason interface{}) {
// to the PEX/Addrbook to find the peer with the addr again
// NOTE: this will keep trying even if the handshake or auth fails.
// TODO: be more explicit with error types so we only retry on certain failures
// - ie. if we're getting ErrDuplicatePeer we can stop
// because the addrbook got us the peer back already
func (sw *Switch) reconnectToPeer(addr *NetAddress) {
if sw.reconnecting.Has(string(addr.ID)) {
return
@ -300,14 +307,14 @@ func (sw *Switch) reconnectToPeer(addr *NetAddress) {
}
err := sw.DialPeerWithAddress(addr, true)
if err == nil {
return // success
}
sw.Logger.Info("Error reconnecting to peer. Trying again", "tries", i, "err", err, "addr", addr)
// sleep a set amount
sw.randomSleep(reconnectInterval)
continue
}
sw.Logger.Error("Failed to reconnect to peer. Beginning exponential backoff",
@ -351,6 +358,8 @@ func (sw *Switch) IsDialing(id ID) bool {
}
// DialPeersAsync dials a list of peers asynchronously in random order (optionally, making them persistent).
// Used to dial peers from config on startup or from unsafe-RPC (trusted sources).
// TODO: remove addrBook arg since it's now set on the switch
func (sw *Switch) DialPeersAsync(addrBook AddrBook, peers []string, persistent bool) error {
netAddrs, errs := NewNetAddressStrings(peers)
// only log errors, dial correct addresses
@ -360,7 +369,10 @@ func (sw *Switch) DialPeersAsync(addrBook AddrBook, peers []string, persistent b
ourAddr := sw.nodeInfo.NetAddress()
// TODO: this code feels like it's in the wrong place.
// The integration tests depend on the addrBook being saved
// right away but maybe we can change that. Recall that
// the addrBook is only written to disk every 2min
if addrBook != nil {
// add peers to `addrBook`
for _, netAddr := range netAddrs {
@ -391,8 +403,13 @@ func (sw *Switch) DialPeersAsync(addrBook AddrBook, peers []string, persistent b
sw.randomSleep(0)
err := sw.DialPeerWithAddress(addr, persistent)
if err != nil {
switch err {
case ErrSwitchConnectToSelf, ErrSwitchDuplicatePeer:
sw.Logger.Debug("Error dialing peer", "err", err)
default:
sw.Logger.Error("Error dialing peer", "err", err)
}
}
}(i)
}
return nil
@ -452,7 +469,8 @@ func (sw *Switch) listenerRoutine(l Listener) {
}
// ignore connection if we already have enough
// leave room for MinNumOutboundPeers
maxPeers := sw.config.MaxNumPeers - DefaultMinNumOutboundPeers
if maxPeers <= sw.peers.Size() {
sw.Logger.Info("Ignoring inbound connection: already have enough peers", "address", inConn.RemoteAddr().String(), "numPeers", sw.peers.Size(), "max", maxPeers)
continue
@ -485,11 +503,12 @@ func (sw *Switch) addInboundPeerWithConfig(conn net.Conn, config *PeerConfig) er
// dial the peer; make secret connection; authenticate against the dialed ID;
// add the peer.
// if dialing fails, start the reconnect loop. If handhsake fails, its over.
// If peer is started succesffuly, reconnectLoop will start when StopPeerForError is called
func (sw *Switch) addOutboundPeerWithConfig(addr *NetAddress, config *PeerConfig, persistent bool) error {
sw.Logger.Info("Dialing peer", "address", addr)
peerConn, err := newOutboundPeerConn(addr, config, persistent, sw.nodeKey.PrivKey)
if err != nil {
if persistent {
go sw.reconnectToPeer(addr)
}
@ -497,7 +516,6 @@ func (sw *Switch) addOutboundPeerWithConfig(addr *NetAddress, config *PeerConfig
}
if err := sw.addPeer(peerConn); err != nil {
peerConn.CloseConn()
return err
}


@ -49,6 +49,8 @@ func CreateRoutableAddr() (addr string, netAddr *NetAddress) {
//------------------------------------------------------------------
// Connects switches via arbitrary net.Conn. Used for testing.
const TEST_HOST = "localhost"
// MakeConnectedSwitches returns n switches, connected according to the connect func.
// If connect==Connect2Switches, the switches will be fully connected.
// initSwitch defines how the i'th switch should be initialized (ie. with what reactors).
@ -56,7 +58,7 @@ func CreateRoutableAddr() (addr string, netAddr *NetAddress) {
func MakeConnectedSwitches(cfg *cfg.P2PConfig, n int, initSwitch func(int, *Switch) *Switch, connect func([]*Switch, int, int)) []*Switch {
switches := make([]*Switch, n)
for i := 0; i < n; i++ {
switches[i] = MakeSwitch(cfg, i, TEST_HOST, "123.123.123", initSwitch)
}
if err := StartSwitches(switches); err != nil {


@ -23,6 +23,7 @@ import (
// {
// "error": "",
// "result": {
// "n_peers": 0,
// "peers": [],
// "listeners": [
// "Listener(@10.0.2.15:46656)"
@ -47,9 +48,13 @@ func NetInfo() (*ctypes.ResultNetInfo, error) {
ConnectionStatus: peer.Status(),
})
}
// TODO: Should we include PersistentPeers and Seeds in here?
// PRO: useful info
// CON: privacy
return &ctypes.ResultNetInfo{
Listening: listening,
Listeners: listeners,
NPeers: len(peers),
Peers: peers,
}, nil
}


@ -100,6 +100,7 @@ func (s *ResultStatus) TxIndexEnabled() bool {
type ResultNetInfo struct {
Listening bool `json:"listening"`
Listeners []string `json:"listeners"`
NPeers int `json:"n_peers"`
Peers []Peer `json:"peers"`
}