Remove many uses of legacy term 'fullnode' (#6324)

This commit is contained in:
Greg Fitzgerald 2019-10-10 17:33:00 -06:00 committed by GitHub
parent 9cde67086f
commit c6e4641781
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
25 changed files with 93 additions and 94 deletions

View File

@ -1,6 +1,6 @@
# Blockstreamer
Solana supports a node type called an _blockstreamer_. This fullnode variation is intended for applications that need to observe the data plane without participating in transaction validation or ledger replication.
Solana supports a node type called a _blockstreamer_. This validator variation is intended for applications that need to observe the data plane without participating in transaction validation or ledger replication.
A blockstreamer runs without a vote signer, and can optionally stream ledger entries out to a Unix domain socket as they are processed. The JSON-RPC service still functions as on any other node.
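If an application wants to consume that entry stream, a reader might look roughly like the sketch below. The socket path and the assumption that each entry arrives as one newline-delimited JSON record are illustrative, not something this document specifies.

```rust
use std::io::{BufRead, BufReader};
use std::os::unix::net::UnixStream;

fn main() -> std::io::Result<()> {
    // Hypothetical socket path; the operator chooses it when starting the blockstreamer.
    let stream = UnixStream::connect("/tmp/solana-entry-stream.sock")?;
    let reader = BufReader::new(stream);
    // Assumes each ledger entry is framed as one newline-delimited JSON record.
    for entry in reader.lines() {
        println!("entry: {}", entry?);
    }
    Ok(())
}
```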

View File

@ -17,7 +17,7 @@ height of the block it is voting on. The account stores the 32 highest heights.
* Only the validator knows how to find its own votes directly.
Other components, such as the one that calculates confirmation time, need to
be baked into the fullnode code. The fullnode code queries the bank for all
be baked into the validator code. The validator code queries the bank for all
accounts owned by the vote program.
* Voting ballots do not contain a PoH hash. The validator is only voting that

View File

@ -1,10 +1,10 @@
# A Solana Cluster
A Solana cluster is a set of fullnodes working together to serve client transactions and maintain the integrity of the ledger. Many clusters may coexist. When two clusters share a common genesis block, they attempt to converge. Otherwise, they simply ignore the existence of the other. Transactions sent to the wrong one are quietly rejected. In this chapter, we'll discuss how a cluster is created, how nodes join the cluster, how they share the ledger, how they ensure the ledger is replicated, and how they cope with buggy and malicious nodes.
A Solana cluster is a set of validators working together to serve client transactions and maintain the integrity of the ledger. Many clusters may coexist. When two clusters share a common genesis block, they attempt to converge. Otherwise, they simply ignore the existence of the other. Transactions sent to the wrong one are quietly rejected. In this chapter, we'll discuss how a cluster is created, how nodes join the cluster, how they share the ledger, how they ensure the ledger is replicated, and how they cope with buggy and malicious nodes.
## Creating a Cluster
Before starting any fullnodes, one first needs to create a _genesis block_. The block contains entries referencing two public keys, a _mint_ and a _bootstrap leader_. The fullnode holding the bootstrap leader's private key is responsible for appending the first entries to the ledger. It initializes its internal state with the mint's account. That account will hold the number of native tokens defined by the genesis block. The second fullnode then contacts the bootstrap leader to register as a _validator_ or _replicator_. Additional fullnodes then register with any registered member of the cluster.
Before starting any validators, one first needs to create a _genesis block_. The block contains entries referencing two public keys, a _mint_ and a _bootstrap leader_. The validator holding the bootstrap leader's private key is responsible for appending the first entries to the ledger. It initializes its internal state with the mint's account. That account will hold the number of native tokens defined by the genesis block. The second validator then contacts the bootstrap leader to register as a _validator_ or _replicator_. Additional validators then register with any registered member of the cluster.
A validator receives all entries from the leader and submits votes confirming those entries are valid. After voting, the validator is expected to store those entries until replicator nodes submit proofs that they have stored copies of them. Once the validator observes a sufficient number of copies exist, it deletes its copy.
@ -14,7 +14,7 @@ Validators and replicators enter the cluster via registration messages sent to i
## Sending Transactions to a Cluster
Clients send transactions to any fullnode's Transaction Processing Unit \(TPU\) port. If the node is in the validator role, it forwards the transaction to the designated leader. If in the leader role, the node bundles incoming transactions, timestamps them creating an _entry_, and pushes them onto the cluster's _data plane_. Once on the data plane, the transactions are validated by validator nodes and replicated by replicator nodes, effectively appending them to the ledger.
Clients send transactions to any validator's Transaction Processing Unit \(TPU\) port. If the node is in the validator role, it forwards the transaction to the designated leader. If in the leader role, the node bundles incoming transactions, timestamps them creating an _entry_, and pushes them onto the cluster's _data plane_. Once on the data plane, the transactions are validated by validator nodes and replicated by replicator nodes, effectively appending them to the ledger.
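As a rough, non-authoritative sketch assembled from the client-side APIs touched later in this commit, a client reaches a validator's client-facing ports like this; the `check_balance` helper and its arguments are illustrative.

```rust
use solana_client::thin_client::create_client;
use solana_core::{cluster_info::VALIDATOR_PORT_RANGE, contact_info::ContactInfo};
use solana_sdk::pubkey::Pubkey;

// `node` is any validator discovered via gossip; `payer` is a funded account's pubkey.
fn check_balance(node: &ContactInfo, payer: &Pubkey) -> u64 {
    // The client binds local sockets in the validator port range and talks to the
    // node's client-facing RPC and TPU addresses, so the same handle can query
    // state and submit transactions.
    let client = create_client(node.client_facing_addr(), VALIDATOR_PORT_RANGE);
    client.poll_get_balance(payer).expect("balance")
}
```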
## Confirming Transactions

View File

@ -1,6 +1,6 @@
# Leader Rotation
At any given moment, a cluster expects only one fullnode to produce ledger entries. By having only one leader at a time, all validators are able to replay identical copies of the ledger. The drawback of only one leader at a time, however, is that a malicious leader is capable of censoring votes and transactions. Since censoring cannot be distinguished from the network dropping packets, the cluster cannot simply elect a single node to hold the leader role indefinitely. Instead, the cluster minimizes the influence of a malicious leader by rotating which node takes the lead.
At any given moment, a cluster expects only one validator to produce ledger entries. By having only one leader at a time, all validators are able to replay identical copies of the ledger. The drawback of only one leader at a time, however, is that a malicious leader is capable of censoring votes and transactions. Since censoring cannot be distinguished from the network dropping packets, the cluster cannot simply elect a single node to hold the leader role indefinitely. Instead, the cluster minimizes the influence of a malicious leader by rotating which node takes the lead.
Each validator selects the expected leader using the same algorithm, described below. When the validator receives a new signed ledger entry, it can be certain that entry was produced by the expected leader. The order in which leaders are assigned slots is called a _leader schedule_.
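A minimal sketch of the lookup each validator performs, assuming the stake-weighted schedule has already been computed (that derivation is out of scope here):

```rust
use solana_sdk::pubkey::Pubkey;

/// Given a precomputed leader schedule (one pubkey per slot in the epoch),
/// every validator can independently look up the expected leader for a slot.
fn expected_leader(schedule: &[Pubkey], slot: u64) -> Pubkey {
    schedule[(slot as usize) % schedule.len()]
}
```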

View File

@ -1,8 +1,8 @@
# Managing Forks
The ledger is permitted to fork at slot boundaries. The resulting data structure forms a tree called a _blocktree_. When the fullnode interprets the blocktree, it must maintain state for each fork in the chain. We call each instance an _active fork_. It is the responsibility of a fullnode to weigh those forks, such that it may eventually select a fork.
The ledger is permitted to fork at slot boundaries. The resulting data structure forms a tree called a _blocktree_. When the validator interprets the blocktree, it must maintain state for each fork in the chain. We call each instance an _active fork_. It is the responsibility of a validator to weigh those forks, such that it may eventually select a fork.
A fullnode selects a fork by submiting a vote to a slot leader on that fork. The vote commits the fullnode for a duration of time called a _lockout period_. The fullnode is not permitted to vote on a different fork until that lockout period expires. Each subsequent vote on the same fork doubles the length of the lockout period. After some cluster-configured number of votes \(currently 32\), the length of the lockout period reaches what's called _max lockout_. Until the max lockout is reached, the fullnode has the option to wait until the lockout period is over and then vote on another fork. When it votes on another fork, it performs a operation called _rollback_, whereby the state rolls back in time to a shared checkpoint and then jumps forward to the tip of the fork that it just voted on. The maximum distance that a fork may roll back is called the _rollback depth_. Rollback depth is the number of votes required to achieve max lockout. Whenever a fullnode votes, any checkpoints beyond the rollback depth become unreachable. That is, there is no scenario in which the fullnode will need to roll back beyond rollback depth. It therefore may safely _prune_ unreachable forks and _squash_ all checkpoints beyond rollback depth into the root checkpoint.
A validator selects a fork by submitting a vote to a slot leader on that fork. The vote commits the validator for a duration of time called a _lockout period_. The validator is not permitted to vote on a different fork until that lockout period expires. Each subsequent vote on the same fork doubles the length of the lockout period. After some cluster-configured number of votes \(currently 32\), the length of the lockout period reaches what's called _max lockout_. Until the max lockout is reached, the validator has the option to wait until the lockout period is over and then vote on another fork. When it votes on another fork, it performs an operation called _rollback_, whereby the state rolls back in time to a shared checkpoint and then jumps forward to the tip of the fork that it just voted on. The maximum distance that a fork may roll back is called the _rollback depth_. Rollback depth is the number of votes required to achieve max lockout. Whenever a validator votes, any checkpoints beyond the rollback depth become unreachable. That is, there is no scenario in which the validator will need to roll back beyond rollback depth. It therefore may safely _prune_ unreachable forks and _squash_ all checkpoints beyond rollback depth into the root checkpoint.
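The doubling rule can be stated as a one-liner. The sketch below is illustrative only, with the base and units simplified; the real lockout is measured in slots and stops growing once 32 consecutive votes are reached.

```rust
/// Each consecutive vote on the same fork doubles the lockout, up to max lockout.
const MAX_LOCKOUT_HISTORY: u32 = 32;

fn lockout_after_votes(consecutive_votes: u32) -> u64 {
    let n = consecutive_votes.min(MAX_LOCKOUT_HISTORY);
    2u64.pow(n) // 2, 4, 8, ... capped at 2^32 once max lockout is reached
}
```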
## Active Forks
@ -19,7 +19,7 @@ The following sequences are _active forks_:
## Pruning and Squashing
A fullnode may vote on any checkpoint in the tree. In the diagram above, that's every node except the leaves of the tree. After voting, the fullnode prunes nodes that fork from a distance farther than the rollback depth and then takes the opportunity to minimize its memory usage by squashing any nodes it can into the root.
A validator may vote on any checkpoint in the tree. In the diagram above, that's every node except the leaves of the tree. After voting, the validator prunes nodes that fork from a distance farther than the rollback depth and then takes the opportunity to minimize its memory usage by squashing any nodes it can into the root.
Starting from the example above, with a rollback depth of 2, consider a vote on 5 versus a vote on 6. First, a vote on 5:

View File

@ -1,10 +1,10 @@
# Synchronization
Fast, reliable synchronization is the biggest reason Solana is able to achieve such high throughput. Traditional blockchains synchronize on large chunks of transactions called blocks. By synchronizing on blocks, a transaction cannot be processed until a duration called "block time" has passed. In Proof of Work consensus, these block times need to be very large \(~10 minutes\) to minimize the odds of multiple fullnodes producing a new valid block at the same time. There's no such constraint in Proof of Stake consensus, but without reliable timestamps, a fullnode cannot determine the order of incoming blocks. The popular workaround is to tag each block with a [wallclock timestamp](https://en.bitcoin.it/wiki/Block_timestamp). Because of clock drift and variance in network latencies, the timestamp is only accurate within an hour or two. To workaround the workaround, these systems lengthen block times to provide reasonable certainty that the median timestamp on each block is always increasing.
Fast, reliable synchronization is the biggest reason Solana is able to achieve such high throughput. Traditional blockchains synchronize on large chunks of transactions called blocks. By synchronizing on blocks, a transaction cannot be processed until a duration called "block time" has passed. In Proof of Work consensus, these block times need to be very large \(~10 minutes\) to minimize the odds of multiple validators producing a new valid block at the same time. There's no such constraint in Proof of Stake consensus, but without reliable timestamps, a validator cannot determine the order of incoming blocks. The popular workaround is to tag each block with a [wallclock timestamp](https://en.bitcoin.it/wiki/Block_timestamp). Because of clock drift and variance in network latencies, the timestamp is only accurate within an hour or two. To work around the workaround, these systems lengthen block times to provide reasonable certainty that the median timestamp on each block is always increasing.
Solana takes a very different approach, which it calls _Proof of History_ or _PoH_. Leader nodes "timestamp" blocks with cryptographic proofs that some duration of time has passed since the last proof. All data hashed into the proof most certainly have occurred before the proof was generated. The node then shares the new block with validator nodes, which are able to verify those proofs. The blocks can arrive at validators in any order or even could be replayed years later. With such reliable synchronization guarantees, Solana is able to break blocks into smaller batches of transactions called _entries_. Entries are streamed to validators in realtime, before any notion of block consensus.
Solana technically never sends a _block_, but uses the term to describe the sequence of entries that fullnodes vote on to achieve _confirmation_. In that way, Solana's confirmation times can be compared apples to apples to block-based systems. The current implementation sets block time to 800ms.
Solana technically never sends a _block_, but uses the term to describe the sequence of entries that validators vote on to achieve _confirmation_. In that way, Solana's confirmation times can be compared apples to apples to block-based systems. The current implementation sets block time to 800ms.
What's happening under the hood is that entries are streamed to validators as quickly as a leader node can batch a set of valid transactions into an entry. Validators process those entries long before it is time to vote on their validity. By processing the transactions optimistically, there is effectively no delay between the time the last entry is received and the time when the node can vote. In the event consensus is **not** achieved, a node simply rolls back its state. This optimistic processing technique was introduced in 1981 and called [Optimistic Concurrency Control](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.65.4735). It can be applied to blockchain architecture where a cluster votes on a hash that represents the full ledger up to some _block height_. In Solana, it is implemented trivially using the last entry's PoH hash.
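For intuition, the PoH chain itself is just repeated hashing with optional data mixed in. The sketch below uses the `solana_sdk` hash helpers and omits tick counting and entry layout.

```rust
use solana_sdk::hash::{hashv, Hash};

/// Each output proves work (and therefore time) since the previous hash, and
/// any data mixed in must have existed before the proof was generated.
fn next_poh(prev: &Hash, mixin: Option<&Hash>) -> Hash {
    match mixin {
        Some(data) => hashv(&[prev.as_ref(), data.as_ref()]),
        None => hashv(&[prev.as_ref()]),
    }
}
```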

View File

@ -1,6 +1,6 @@
# Secure Vote Signing
A validator fullnode receives entries from the current leader and submits votes confirming those entries are valid. This vote submission presents a security challenge, because forged votes that violate consensus rules could be used to slash the validator's stake.
A validator receives entries from the current leader and submits votes confirming those entries are valid. This vote submission presents a security challenge, because forged votes that violate consensus rules could be used to slash the validator's stake.
The validator votes on its chosen fork by submitting a transaction that uses an asymmetric key to sign the result of its validation work. Other entities can verify this signature using the validator's public key. If the validator's key is used to sign incorrect data \(e.g. votes on multiple forks of the ledger\), the node's stake or its resources could be compromised.
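A minimal sketch of that sign-and-verify flow, assuming the `solana_sdk` signature helpers of this era; this is not the actual vote transaction format.

```rust
use solana_sdk::signature::{Keypair, KeypairUtil};

fn main() {
    let vote_keypair = Keypair::new();
    // Placeholder payload standing in for the validator's validation result.
    let vote_data = b"validated entries up to some PoH hash";
    let sig = vote_keypair.sign_message(vote_data);
    // Anyone holding the validator's public key can verify the signature.
    assert!(sig.verify(vote_keypair.pubkey().as_ref(), vote_data));
}
```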

View File

@ -41,7 +41,7 @@ $ ./multinode-demo/setup.sh
### Drone
In order for the fullnodes and clients to work, we'll need to spin up a drone to give out some test tokens. The drone delivers Milton Friedman-style "air drops" \(free tokens to requesting clients\) to be used in test transactions.
In order for the validators and clients to work, we'll need to spin up a drone to give out some test tokens. The drone delivers Milton Friedman-style "air drops" \(free tokens to requesting clients\) to be used in test transactions.
Start the drone with:

View File

@ -1,6 +1,6 @@
# Leader-to-Validator Transition
A fullnode typically operates as a validator. If, however, a staker delegates its stake to a fullnode, it will occasionally be selected as a _slot leader_. As a slot leader, the fullnode is responsible for producing blocks during an assigned _slot_. A slot has a duration of some number of preconfigured _ticks_. The duration of those ticks are estimated with a _PoH Recorder_ described later in this document.
A validator typically spends its time validating blocks. If, however, a staker delegates its stake to a validator, it will occasionally be selected as a _slot leader_. As a slot leader, the validator is responsible for producing blocks during an assigned _slot_. A slot has a duration of some number of preconfigured _ticks_. The duration of those ticks is estimated with a _PoH Recorder_, described later in this document.
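The slot/tick arithmetic is simple; the numbers in the sketch below are placeholders, not cluster defaults.

```rust
use std::time::Duration;

/// A slot lasts a preconfigured number of ticks; the PoH Recorder estimates the
/// wall-clock duration of each tick.
fn slot_duration(ticks_per_slot: u32, tick_duration: Duration) -> Duration {
    tick_duration * ticks_per_slot
}

fn main() {
    // e.g. 8 ticks per slot at ~50ms per tick gives ~400ms slots (hypothetical values).
    println!("{:?}", slot_duration(8, Duration::from_millis(50)));
}
```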
## BankFork

View File

@ -2,7 +2,7 @@
## Persistent Account Storage
The set of Accounts represent the current computed state of all the transactions that have been processed by a fullnode. Each fullnode needs to maintain this entire set. Each block that is proposed by the network represents a change to this set, and since each block is a potential rollback point the changes need to be reversible.
The set of Accounts represents the current computed state of all the transactions that have been processed by a validator. Each validator needs to maintain this entire set. Each block that is proposed by the network represents a change to this set, and since each block is a potential rollback point, the changes need to be reversible.
Persistent storage such as NVMe is 20 to 40 times cheaper than DDR. The problem with persistent storage is that write and read performance is much slower than DDR, so care must be taken in how data is read and written. Both reads and writes can be split between multiple storage drives and accessed in parallel. This design proposes a data structure that allows for concurrent reads and concurrent writes of storage. Writes are optimized by using an AppendVec data structure, which allows a single writer to append while allowing access to many concurrent readers. The accounts index maintains, for every fork, a pointer to the spot where the account was appended, thus removing the need for explicit checkpointing of state.
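As a toy model of the AppendVec idea: the real structure is a memory-mapped file with lock-free reads, and an `RwLock` is used here only to keep the sketch short.

```rust
use std::sync::RwLock;

/// A single writer appends and receives the record's offset; readers fetch by
/// offset. The offset is what the accounts index records per fork.
struct AppendLog<T> {
    items: RwLock<Vec<T>>,
}

impl<T: Clone> AppendLog<T> {
    fn new() -> Self {
        Self { items: RwLock::new(Vec::new()) }
    }

    fn append(&self, item: T) -> usize {
        let mut items = self.items.write().unwrap();
        items.push(item);
        items.len() - 1
    }

    fn get(&self, offset: usize) -> Option<T> {
        self.items.read().unwrap().get(offset).cloned()
    }
}
```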

View File

@ -12,7 +12,7 @@ A program may be written in any programming language that can target the Berkeley
## Storing State between Transactions
If the program needs to store state between transactions, it does so using _accounts_. Accounts are similar to files in operating systems such as Linux. Like a file, an account may hold arbitrary data and that data persists beyond the lifetime of a program. Also like a file, an account includes metadata that tells the runtime who is allowed to access the data and how. Unlike a file, the account includes metadata for the lifetime of the file. That lifetime is expressed in "tokens", which is a number of fractional native tokens, called _lamports_. Accounts are held in validator memory and pay "rent" to stay there. Each fullnode periodically scan all accounts and collects rent. Any account that drops to zero lamports is purged.
If the program needs to store state between transactions, it does so using _accounts_. Accounts are similar to files in operating systems such as Linux. Like a file, an account may hold arbitrary data and that data persists beyond the lifetime of a program. Also like a file, an account includes metadata that tells the runtime who is allowed to access the data and how. Unlike a file, the account includes metadata for the lifetime of the file. That lifetime is expressed in "tokens", which is a number of fractional native tokens, called _lamports_. Accounts are held in validator memory and pay "rent" to stay there. Each validator periodically scans all accounts and collects rent. Any account that drops to zero lamports is purged.
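The rent sweep amounts to "charge every account, purge anything at zero"; the sketch below is illustrative and ignores the real parameters such as rent rate and rent exemption.

```rust
struct Account {
    lamports: u64,
}

/// Deduct rent from every account, then purge accounts that hit zero lamports.
fn collect_rent(accounts: &mut Vec<Account>, rent_per_period: u64) {
    for account in accounts.iter_mut() {
        account.lamports = account.lamports.saturating_sub(rent_per_period);
    }
    accounts.retain(|account| account.lamports > 0);
}
```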
If an account is marked "executable", it will only be used by a _loader_ to run programs. For example, a BPF-compiled program is marked executable and loaded by the BPF loader. No program is allowed to modify the contents of an executable account.

View File

@ -22,10 +22,10 @@ Creator of a new on-chain token \(ERC-20 interface\), may wish to do a worldwide
The drone may prefer that its airdrops only target a particular Solana cluster. To do that, it listens to the cluster for new entry IDs and ensures any requests reference a recent one.
Note: to listen for new entry IDs assumes the drone is either a fullnode or a _light_ client. At the time of this writing, light clients have not been implemented and no proposal describes them. This document assumes one of the following approaches be taken:
Note: listening for new entry IDs assumes the drone is either a validator or a _light_ client. At the time of this writing, light clients have not been implemented and no proposal describes them. This document assumes one of the following approaches is taken:
1. Define and implement a light client
2. Embed a fullnode
2. Embed a validator
3. Query the jsonrpc API for the latest last id at a rate slightly faster than
ticks are produced \(a sketch of this approach follows the list\).
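A sketch of option 3, assuming the `RpcClient` helper and a local RPC URL; the poll interval and the shape of the `get_recent_blockhash` result are assumptions for illustration.

```rust
use solana_client::rpc_client::RpcClient;
use std::{thread::sleep, time::Duration};

fn main() {
    let rpc = RpcClient::new("http://127.0.0.1:8899".to_string());
    loop {
        // Poll slightly faster than ticks are produced; the drone would check
        // airdrop requests against the most recent value it has seen.
        if let Ok((recent_blockhash, _fee_calculator)) = rpc.get_recent_blockhash() {
            println!("recent blockhash: {}", recent_blockhash);
        }
        sleep(Duration::from_millis(200));
    }
}
```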

View File

@ -12,9 +12,9 @@ Tests should verify a single bug or scenario, and should be written with the lea
Tests are provided an entry point, which is a `contact_info::ContactInfo` structure, and a keypair that has already been funded.
Each node in the cluster is configured with a `fullnode::ValidatorConfig` at boot time. At boot time this configuration specifies any extra cluster configuration required for the test. The cluster should boot with the configuration when it is run in-process or in a data center.
Each node in the cluster is configured with a `validator::ValidatorConfig` at boot time. This configuration specifies any extra cluster configuration required for the test. The cluster should boot with the configuration when it is run in-process or in a data center.
Once booted, the test will discover the cluster through a gossip entry point and configure any runtime behaviors via fullnode RPC.
Once booted, the test will discover the cluster through a gossip entry point and configure any runtime behaviors via validator RPC.
## Test Interface
@ -43,13 +43,13 @@ let cluster_nodes = discover_nodes(&entry_point_info, num_nodes);
## Cluster Configuration
To enable specific scenarios, the cluster needs to be booted with special configurations. These configurations can be captured in `fullnode::ValidatorConfig`.
To enable specific scenarios, the cluster needs to be booted with special configurations. These configurations can be captured in `validator::ValidatorConfig`.
For example:
```text
let mut validator_config = ValidatorConfig::default();
validator_config.rpc_config.enable_fullnode_exit = true;
validator_config.rpc_config.enable_validator_exit = true;
let local = LocalCluster::new_with_config(
num_nodes,
10_000,
@ -81,7 +81,7 @@ pub fn test_large_invalid_gossip_nodes(
let cluster = discover_nodes(&entry_point_info, num_nodes);
// Poison the cluster.
let client = create_client(entry_point_info.client_facing_addr(), FULLNODE_PORT_RANGE);
let client = create_client(entry_point_info.client_facing_addr(), VALIDATOR_PORT_RANGE);
for _ in 0..(num_nodes * 100) {
client.gossip_push(
cluster_info::invalid_contact_info()
@ -91,7 +91,7 @@ pub fn test_large_invalid_gossip_nodes(
// Force refresh of the active set.
for node in &cluster {
let client = create_client(node.client_facing_addr(), FULLNODE_PORT_RANGE);
let client = create_client(node.client_facing_addr(), VALIDATOR_PORT_RANGE);
client.gossip_refresh_active_set();
}

View File

@ -4,17 +4,17 @@ It is often useful to allow low resourced clients to participate in a Solana clu
## A Naive Approach
Validators store the signatures of recently confirmed transactions for a short period of time to ensure that they are not processed more than once. Validators provide a JSON RPC endpoint, which clients can use to query the cluster if a transaction has been recently processed. Validators also provide a PubSub notification, whereby a client registers to be notified when a given signature is observed by the validator. While these two mechanisms allow a client to verify a payment, they are not a proof and rely on completely trusting a fullnode.
Validators store the signatures of recently confirmed transactions for a short period of time to ensure that they are not processed more than once. Validators provide a JSON RPC endpoint, which clients can use to query whether a transaction has been recently processed. Validators also provide a PubSub notification, whereby a client registers to be notified when a given signature is observed by the validator. While these two mechanisms allow a client to verify a payment, they are not a proof and rely on completely trusting a validator.
We will describe a way to minimize this trust using Merkle Proofs to anchor the fullnode's response in the ledger, allowing the client to confirm on their own that a sufficient number of their preferred validators have confirmed a transaction. Requiring multiple validator attestations further reduces trust in the fullnode, as it increases both the technical and economic difficulty of compromising several other network participants.
We will describe a way to minimize this trust using Merkle Proofs to anchor the validator's response in the ledger, allowing the client to confirm on their own that a sufficient number of their preferred validators have confirmed a transaction. Requiring multiple validator attestations further reduces trust in the validator, as it increases both the technical and economic difficulty of compromising several other network participants.
## Light Clients
A 'light client' is a cluster participant that does not itself run a fullnode. This light client would provide a level of security greater than trusting a remote fullnode, without requiring the light client to spend a lot of resources verifying the ledger.
A 'light client' is a cluster participant that does not itself run a validator. This light client would provide a level of security greater than trusting a remote validator, without requiring the light client to spend a lot of resources verifying the ledger.
Rather than providing transaction signatures directly to a light client, the fullnode instead generates a Merkle Proof from the transaction of interest to the root of a Merkle Tree of all transactions in the including block. This Merkle Root is stored in a ledger entry which is voted on by validators, providing it consensus legitimacy. The additional level of security for a light client depends on an initial canonical set of validators the light client considers to be the stakeholders of the cluster. As that set is changed, the client can update its internal set of known validators with [receipts](simple-payment-and-state-verification.md#receipts). This may become challenging with a large number of delegated stakes.
Rather than providing transaction signatures directly to a light client, the validator instead generates a Merkle Proof from the transaction of interest to the root of a Merkle Tree of all transactions in the block that includes it. This Merkle Root is stored in a ledger entry which is voted on by validators, providing it consensus legitimacy. The additional level of security for a light client depends on an initial canonical set of validators the light client considers to be the stakeholders of the cluster. As that set is changed, the client can update its internal set of known validators with [receipts](simple-payment-and-state-verification.md#receipts). This may become challenging with a large number of delegated stakes.
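What the light client checks is an ordinary Merkle path verification. A minimal sketch using the `solana_sdk` hash helpers, with the real merkle-tree ordering and domain-separation details omitted:

```rust
use solana_sdk::hash::{hashv, Hash};

/// Fold the transaction's hash with each sibling along the path and compare the
/// result against the Merkle root recorded in the voted-on ledger entry.
fn verify_merkle_path(leaf: Hash, path: &[(Hash, bool)], root: &Hash) -> bool {
    let mut node = leaf;
    for (sibling, sibling_is_left) in path {
        node = if *sibling_is_left {
            hashv(&[sibling.as_ref(), node.as_ref()])
        } else {
            hashv(&[node.as_ref(), sibling.as_ref()])
        };
    }
    node == *root
}
```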
Fullnodes themselves may want to use light client APIs for performance reasons. For example, during the initial launch of a fullnode, the fullnode may use a cluster provided checkpoint of the state and verify it with a receipt.
Validators themselves may want to use light client APIs for performance reasons. For example, during the initial launch of a validator, the validator may use a cluster-provided checkpoint of the state and verify it with a receipt.
## Receipts

View File

@ -20,7 +20,7 @@ While many of the details of the specific implementation are currently under con
Solana's ledger validation design is based on a rotating, stake-weighted selected leader broadcasting transactions in a PoH data structure to validating nodes. These nodes, upon receiving the leader's broadcast, have the opportunity to vote on the current state and PoH height by signing a transaction into the PoH stream.
To become a Solana validator, a fullnode must deposit/lock-up some amount of SOL in a contract. This SOL will not be accessible for a specific time period. The precise duration of the staking lockup period has not been determined. However we can consider three phases of this time for which specific parameters will be necessary:
To become a Solana validator, one must deposit/lock-up some amount of SOL in a contract. This SOL will not be accessible for a specific time period. The precise duration of the staking lockup period has not been determined. However, we can consider three phases of this time for which specific parameters will be necessary:
* _Warm-up period_: during which SOL is deposited and inaccessible to the node,

View File

@ -10,6 +10,10 @@ A persistent file addressed by [public key](terminology.md#public-key) and with
A front-end application that interacts with a Solana cluster.
## bank state
The result of interpreting all programs on the ledger at a given [tick height](terminology.md#tick-height). It includes at least the set of all [accounts](terminology.md#account) holding nonzero [native tokens](terminology.md#native-tokens).
## block
A contiguous set of [entries](terminology.md#entry) on the ledger covered by a [vote](terminology.md#ledger-vote). A [leader](terminology.md#leader) produces at most one block per [slot](terminology.md#slot).
@ -24,7 +28,7 @@ The [entry id](terminology.md#entry-id) of the last entry in a [block](terminolo
## bootstrap leader
The first [fullnode](terminology.md#fullnode) to take the [leader](terminology.md#leader) role.
The first [validator](terminology.md#validator) to produce a [block](terminology.md#block).
## CBC block
@ -36,7 +40,7 @@ A [node](terminology.md#node) that utilizes the [cluster](terminology.md#cluster
## cluster
A set of [fullnodes](terminology.md#fullnode) maintaining a single [ledger](terminology.md#ledger).
A set of [validators](terminology.md#validator) maintaining a single [ledger](terminology.md#ledger).
## confirmation
@ -90,14 +94,6 @@ When nodes representing 2/3rd of the stake have a common [root](terminology.md#r
A [ledger](terminology.md#ledger) derived from common entries but then diverged.
## fullnode
A full participant in the [cluster](terminology.md#cluster) either a [leader](terminology.md#leader) or [validator](terminology.md#validator) node.
## fullnode state
The result of interpreting all programs on the ledger at a given [tick height](terminology.md#tick-height). It includes at least the set of all [accounts](terminology.md#account) holding nonzero [native tokens](terminology.md#native-tokens).
## genesis block
The configuration file that prepares the [ledger](terminology.md#ledger) for the first [block](terminology.md#block).
@ -124,11 +120,11 @@ A [program](terminology.md#program) with the ability to interpret the binary enc
## leader
The role of a [fullnode](terminology.md#fullnode) when it is appending [entries](terminology.md#entry) to the [ledger](terminology.md#ledger).
The role of a [validator](terminology.md#validator) when it is appending [entries](terminology.md#entry) to the [ledger](terminology.md#ledger).
## leader schedule
A sequence of [fullnode](terminology.md#fullnode) [public keys](terminology.md#public-key). The cluster uses the leader schedule to determine which fullnode is the [leader](terminology.md#leader) at any moment in time.
A sequence of [validator](terminology.md#validator) [public keys](terminology.md#public-key). The cluster uses the leader schedule to determine which validator is the [leader](terminology.md#leader) at any moment in time.
## ledger
@ -140,15 +136,15 @@ Portion of the ledger which is downloaded by the replicator where storage proof
## ledger vote
A [hash](terminology.md#hash) of the [fullnode's state](terminology.md#fullnode-state) at a given [tick height](terminology.md#tick-height). It comprises a validator's affirmation that a [block](terminology.md#block) it has received has been verified, as well as a promise not to vote for a conflicting [block](terminology.md#block) \(i.e. [fork](terminology.md#fork)\) for a specific amount of time, the [lockout](terminology.md#lockout) period.
A [hash](terminology.md#hash) of the [validator's state](terminology.md#bank-state) at a given [tick height](terminology.md#tick-height). It comprises a validator's affirmation that a [block](terminology.md#block) it has received has been verified, as well as a promise not to vote for a conflicting [block](terminology.md#block) \(i.e. [fork](terminology.md#fork)\) for a specific amount of time, the [lockout](terminology.md#lockout) period.
## light client
A type of [client](terminology.md#client) that can verify it's pointing to a valid [cluster](terminology.md#cluster). It performs more ledger verification than a [thin client](terminology.md#thin-client) and less than a [fullnode](terminology.md#fullnode).
A type of [client](terminology.md#client) that can verify it's pointing to a valid [cluster](terminology.md#cluster). It performs more ledger verification than a [thin client](terminology.md#thin-client) and less than a [validator](terminology.md#validator).
## lockout
The duration of time for which a [fullnode](terminology.md#fullnode) is unable to [vote](terminology.md#ledger-vote) on another [fork](terminology.md#fork).
The duration of time for which a [validator](terminology.md#validator) is unable to [vote](terminology.md#ledger-vote) on another [fork](terminology.md#fork).
## native token
@ -160,7 +156,7 @@ A computer participating in a [cluster](terminology.md#cluster).
## node count
The number of [fullnodes](terminology.md#fullnode) participating in a [cluster](terminology.md#cluster).
The number of [validators](terminology.md#validator) participating in a [cluster](terminology.md#cluster).
## PoH
@ -200,11 +196,11 @@ A [block](terminology.md#block) or [slot](terminology.md#slot) that has reached
## runtime
The component of a [fullnode](terminology.md#fullnode) responsible for [program](terminology.md#program) execution.
The component of a [validator](terminology.md#validator) responsible for [program](terminology.md#program) execution.
## shred
A fraction of a [block](terminology.md#block); the smallest unit sent between [fullnodes](terminology.md#fullnode).
A fraction of a [block](terminology.md#block); the smallest unit sent between [validators](terminology.md#validator).
## slot
@ -220,7 +216,7 @@ The [native token](terminology.md#native-token) tracked by a [cluster](terminolo
## stake
Tokens forfeit to the [cluster](terminology.md#cluster) if malicious [fullnode](terminology.md#fullnode) behavior can be proven.
Tokens forfeit to the [cluster](terminology.md#cluster) if malicious [validator](terminology.md#validator) behavior can be proven.
## storage proof
@ -276,7 +272,7 @@ A set of [transactions](terminology.md#transaction) that may be executed in para
## validator
The role of a [fullnode](terminology.md#fullnode) when it is validating the [leader's](terminology.md#leader) latest [entries](terminology.md#entry).
A full participant in the [cluster](terminology.md#cluster) responsible for validating the [ledger](terminology.md#ledger) and producing new [blocks](terminology.md#block).
## VDF

View File

@ -52,7 +52,7 @@ use std::sync::{Arc, RwLock};
use std::thread::{sleep, Builder, JoinHandle};
use std::time::{Duration, Instant};
pub const FULLNODE_PORT_RANGE: PortRange = (8000, 10_000);
pub const VALIDATOR_PORT_RANGE: PortRange = (8000, 10_000);
/// The Data plane fanout size, also used as the neighborhood size
pub const DATA_PLANE_FANOUT: usize = 200;
@ -1483,7 +1483,7 @@ impl ClusterInfo {
/// An alternative to Spy Node that has a valid gossip address and fully participate in Gossip.
pub fn gossip_node(id: &Pubkey, gossip_addr: &SocketAddr) -> (ContactInfo, UdpSocket) {
let (port, (gossip_socket, _)) = Node::get_gossip_port(gossip_addr, FULLNODE_PORT_RANGE);
let (port, (gossip_socket, _)) = Node::get_gossip_port(gossip_addr, VALIDATOR_PORT_RANGE);
let daddr = socketaddr_any!();
let node = ContactInfo::new(
@ -1503,7 +1503,7 @@ impl ClusterInfo {
/// A Node with invalid ports to spy on gossip via pull requests
pub fn spy_node(id: &Pubkey) -> (ContactInfo, UdpSocket) {
let (_, gossip_socket) = bind_in_range(FULLNODE_PORT_RANGE).unwrap();
let (_, gossip_socket) = bind_in_range(VALIDATOR_PORT_RANGE).unwrap();
let daddr = socketaddr_any!();
let node = ContactInfo::new(
@ -2089,27 +2089,27 @@ mod tests {
let node = Node::new_with_external_ip(
&Pubkey::new_rand(),
&socketaddr!(ip, 0),
FULLNODE_PORT_RANGE,
VALIDATOR_PORT_RANGE,
);
check_node_sockets(&node, IpAddr::V4(ip), FULLNODE_PORT_RANGE);
check_node_sockets(&node, IpAddr::V4(ip), VALIDATOR_PORT_RANGE);
}
#[test]
fn new_with_external_ip_test_gossip() {
let ip = IpAddr::V4(Ipv4Addr::from(0));
let port = {
bind_in_range(FULLNODE_PORT_RANGE)
bind_in_range(VALIDATOR_PORT_RANGE)
.expect("Failed to bind")
.0
};
let node = Node::new_with_external_ip(
&Pubkey::new_rand(),
&socketaddr!(0, port),
FULLNODE_PORT_RANGE,
VALIDATOR_PORT_RANGE,
);
check_node_sockets(&node, ip, FULLNODE_PORT_RANGE);
check_node_sockets(&node, ip, VALIDATOR_PORT_RANGE);
assert_eq!(node.sockets.gossip.local_addr().unwrap().port(), port);
}
@ -2120,15 +2120,15 @@ mod tests {
let node = Node::new_replicator_with_external_ip(
&Pubkey::new_rand(),
&socketaddr!(ip, 0),
FULLNODE_PORT_RANGE,
VALIDATOR_PORT_RANGE,
);
let ip = IpAddr::V4(ip);
check_socket(&node.sockets.storage.unwrap(), ip, FULLNODE_PORT_RANGE);
check_socket(&node.sockets.gossip, ip, FULLNODE_PORT_RANGE);
check_socket(&node.sockets.repair, ip, FULLNODE_PORT_RANGE);
check_socket(&node.sockets.storage.unwrap(), ip, VALIDATOR_PORT_RANGE);
check_socket(&node.sockets.gossip, ip, VALIDATOR_PORT_RANGE);
check_socket(&node.sockets.repair, ip, VALIDATOR_PORT_RANGE);
check_sockets(&node.sockets.tvu, ip, FULLNODE_PORT_RANGE);
check_sockets(&node.sockets.tvu, ip, VALIDATOR_PORT_RANGE);
}
//test that all cluster_info objects only generate signed messages

View File

@ -2,7 +2,7 @@
use crate::bank_forks::BankForks;
use crate::blocktree::Blocktree;
use crate::cluster_info::{ClusterInfo, FULLNODE_PORT_RANGE};
use crate::cluster_info::{ClusterInfo, VALIDATOR_PORT_RANGE};
use crate::contact_info::ContactInfo;
use crate::service::Service;
use crate::streamer;
@ -119,7 +119,7 @@ pub fn get_clients(nodes: &[ContactInfo]) -> Vec<ThinClient> {
nodes
.iter()
.filter_map(ContactInfo::valid_client_facing_addr)
.map(|addrs| create_client(addrs, FULLNODE_PORT_RANGE))
.map(|addrs| create_client(addrs, VALIDATOR_PORT_RANGE))
.collect()
}
@ -130,7 +130,7 @@ pub fn get_client(nodes: &[ContactInfo]) -> ThinClient {
.filter_map(ContactInfo::valid_client_facing_addr)
.collect();
let select = thread_rng().gen_range(0, nodes.len());
create_client(nodes[select], FULLNODE_PORT_RANGE)
create_client(nodes[select], VALIDATOR_PORT_RANGE)
}
pub fn get_multi_client(nodes: &[ContactInfo]) -> (ThinClient, usize) {
@ -141,7 +141,7 @@ pub fn get_multi_client(nodes: &[ContactInfo]) -> (ThinClient, usize) {
.collect();
let rpc_addrs: Vec<_> = addrs.iter().map(|addr| addr.0).collect();
let tpu_addrs: Vec<_> = addrs.iter().map(|addr| addr.1).collect();
let (_, transactions_socket) = solana_netutil::bind_in_range(FULLNODE_PORT_RANGE).unwrap();
let (_, transactions_socket) = solana_netutil::bind_in_range(VALIDATOR_PORT_RANGE).unwrap();
let num_nodes = tpu_addrs.len();
(
ThinClient::new_from_addrs(rpc_addrs, tpu_addrs, transactions_socket),

View File

@ -1,6 +1,6 @@
use crate::blocktree::Blocktree;
use crate::chacha::{chacha_cbc_encrypt_ledger, CHACHA_BLOCK_SIZE};
use crate::cluster_info::{ClusterInfo, Node, FULLNODE_PORT_RANGE};
use crate::cluster_info::{ClusterInfo, Node, VALIDATOR_PORT_RANGE};
use crate::contact_info::ContactInfo;
use crate::gossip_service::GossipService;
use crate::leader_schedule_cache::LeaderScheduleCache;
@ -805,7 +805,7 @@ impl Replicator {
let exit = Arc::new(AtomicBool::new(false));
let (s_reader, r_reader) = channel();
let repair_socket = Arc::new(bind_in_range(FULLNODE_PORT_RANGE).unwrap().1);
let repair_socket = Arc::new(bind_in_range(VALIDATOR_PORT_RANGE).unwrap().1);
let t_receiver = receiver(
repair_socket.clone(),
&exit,
@ -907,7 +907,7 @@ impl Replicator {
}
fn get_replicator_segment_slot(to: SocketAddr) -> u64 {
let (_port, socket) = bind_in_range(FULLNODE_PORT_RANGE).unwrap();
let (_port, socket) = bind_in_range(VALIDATOR_PORT_RANGE).unwrap();
socket
.set_read_timeout(Some(Duration::from_secs(5)))
.unwrap();

View File

@ -6,7 +6,7 @@ use solana_client::thin_client::create_client;
/// discover the rest of the network.
use solana_core::{
blocktree::Blocktree,
cluster_info::FULLNODE_PORT_RANGE,
cluster_info::VALIDATOR_PORT_RANGE,
consensus::VOTE_THRESHOLD_DEPTH,
contact_info::ContactInfo,
entry::{Entry, EntrySlice},
@ -47,7 +47,7 @@ pub fn spend_and_verify_all_nodes<S: ::std::hash::BuildHasher>(
continue;
}
let random_keypair = Keypair::new();
let client = create_client(ingress_node.client_facing_addr(), FULLNODE_PORT_RANGE);
let client = create_client(ingress_node.client_facing_addr(), VALIDATOR_PORT_RANGE);
let bal = client
.poll_get_balance(&funding_keypair.pubkey())
.expect("balance in source");
@ -63,7 +63,7 @@ pub fn spend_and_verify_all_nodes<S: ::std::hash::BuildHasher>(
if ignore_nodes.contains(&validator.id) {
continue;
}
let client = create_client(validator.client_facing_addr(), FULLNODE_PORT_RANGE);
let client = create_client(validator.client_facing_addr(), VALIDATOR_PORT_RANGE);
client.poll_for_signature_confirmation(&sig, confs).unwrap();
}
}
@ -73,7 +73,7 @@ pub fn verify_balances<S: ::std::hash::BuildHasher>(
expected_balances: HashMap<Pubkey, u64, S>,
node: &ContactInfo,
) {
let client = create_client(node.client_facing_addr(), FULLNODE_PORT_RANGE);
let client = create_client(node.client_facing_addr(), VALIDATOR_PORT_RANGE);
for (pk, b) in expected_balances {
let bal = client.poll_get_balance(&pk).expect("balance in source");
assert_eq!(bal, b);
@ -86,7 +86,7 @@ pub fn send_many_transactions(
max_tokens_per_transfer: u64,
num_txs: u64,
) -> HashMap<Pubkey, u64> {
let client = create_client(node.client_facing_addr(), FULLNODE_PORT_RANGE);
let client = create_client(node.client_facing_addr(), VALIDATOR_PORT_RANGE);
let mut expected_balances = HashMap::new();
for _ in 0..num_txs {
let random_keypair = Keypair::new();
@ -118,12 +118,12 @@ pub fn fullnode_exit(entry_point_info: &ContactInfo, nodes: usize) {
let (cluster_nodes, _) = discover_cluster(&entry_point_info.gossip, nodes).unwrap();
assert!(cluster_nodes.len() >= nodes);
for node in &cluster_nodes {
let client = create_client(node.client_facing_addr(), FULLNODE_PORT_RANGE);
let client = create_client(node.client_facing_addr(), VALIDATOR_PORT_RANGE);
assert!(client.fullnode_exit().unwrap());
}
sleep(Duration::from_millis(DEFAULT_SLOT_MILLIS));
for node in &cluster_nodes {
let client = create_client(node.client_facing_addr(), FULLNODE_PORT_RANGE);
let client = create_client(node.client_facing_addr(), VALIDATOR_PORT_RANGE);
assert!(client.fullnode_exit().is_err());
}
}
@ -183,7 +183,7 @@ pub fn kill_entry_and_spend_and_verify_rest(
solana_logger::setup();
let (cluster_nodes, _) = discover_cluster(&entry_point_info.gossip, nodes).unwrap();
assert!(cluster_nodes.len() >= nodes);
let client = create_client(entry_point_info.client_facing_addr(), FULLNODE_PORT_RANGE);
let client = create_client(entry_point_info.client_facing_addr(), VALIDATOR_PORT_RANGE);
let first_two_epoch_slots = MINIMUM_SLOTS_PER_EPOCH * 3;
for ingress_node in &cluster_nodes {
@ -210,7 +210,7 @@ pub fn kill_entry_and_spend_and_verify_rest(
continue;
}
let client = create_client(ingress_node.client_facing_addr(), FULLNODE_PORT_RANGE);
let client = create_client(ingress_node.client_facing_addr(), VALIDATOR_PORT_RANGE);
let balance = client
.poll_get_balance(&funding_keypair.pubkey())
.expect("balance in source");
@ -275,7 +275,7 @@ fn poll_all_nodes_for_signature(
if validator.id == entry_point_info.id {
continue;
}
let client = create_client(validator.client_facing_addr(), FULLNODE_PORT_RANGE);
let client = create_client(validator.client_facing_addr(), VALIDATOR_PORT_RANGE);
client.poll_for_signature_confirmation(&sig, confs)?;
}

View File

@ -2,7 +2,7 @@ use crate::cluster::{Cluster, ClusterValidatorInfo, ValidatorInfo};
use solana_client::thin_client::{create_client, ThinClient};
use solana_core::{
blocktree::create_new_tmp_ledger,
cluster_info::{Node, FULLNODE_PORT_RANGE},
cluster_info::{Node, VALIDATOR_PORT_RANGE},
contact_info::ContactInfo,
genesis_utils::{create_genesis_block_with_leader, GenesisBlockInfo},
gossip_service::discover_cluster,
@ -269,7 +269,7 @@ impl LocalCluster {
pub fn add_validator(&mut self, validator_config: &ValidatorConfig, stake: u64) {
let client = create_client(
self.entry_point_info.client_facing_addr(),
FULLNODE_PORT_RANGE,
VALIDATOR_PORT_RANGE,
);
// Must have enough tokens to fund vote account and set delegate
@ -350,7 +350,7 @@ impl LocalCluster {
let storage_pubkey = storage_keypair.pubkey();
let client = create_client(
self.entry_point_info.client_facing_addr(),
FULLNODE_PORT_RANGE,
VALIDATOR_PORT_RANGE,
);
// Give the replicator some lamports to setup its storage accounts
@ -397,7 +397,7 @@ impl LocalCluster {
pub fn transfer(&self, source_keypair: &Keypair, dest_pubkey: &Pubkey, lamports: u64) -> u64 {
let client = create_client(
self.entry_point_info.client_facing_addr(),
FULLNODE_PORT_RANGE,
VALIDATOR_PORT_RANGE,
);
Self::transfer_with_client(&client, source_keypair, dest_pubkey, lamports)
}
@ -574,7 +574,7 @@ impl Cluster for LocalCluster {
self.fullnode_infos.get(pubkey).map(|f| {
create_client(
f.info.contact_info.client_facing_addr(),
FULLNODE_PORT_RANGE,
VALIDATOR_PORT_RANGE,
)
})
}

View File

@ -3,7 +3,7 @@ use serial_test_derive::serial;
use solana_bench_tps::bench::{do_bench_tps, generate_and_fund_keypairs};
use solana_bench_tps::cli::Config;
use solana_client::thin_client::create_client;
use solana_core::cluster_info::FULLNODE_PORT_RANGE;
use solana_core::cluster_info::VALIDATOR_PORT_RANGE;
use solana_core::validator::ValidatorConfig;
use solana_drone::drone::run_local_drone;
#[cfg(feature = "move")]
@ -38,7 +38,7 @@ fn test_bench_tps_local_cluster(config: Config) {
let client = create_client(
(cluster.entry_point_info.rpc, cluster.entry_point_info.tpu),
FULLNODE_PORT_RANGE,
VALIDATOR_PORT_RANGE,
);
let (addr_sender, addr_receiver) = channel();

View File

@ -2,7 +2,7 @@ use crate::local_cluster::{ClusterConfig, LocalCluster};
use serial_test_derive::serial;
use solana_client::thin_client::create_client;
use solana_core::blocktree::{create_new_tmp_ledger, get_tmp_ledger_path, Blocktree};
use solana_core::cluster_info::{ClusterInfo, Node, FULLNODE_PORT_RANGE};
use solana_core::cluster_info::{ClusterInfo, Node, VALIDATOR_PORT_RANGE};
use solana_core::contact_info::ContactInfo;
use solana_core::gossip_service::discover_cluster;
use solana_core::replicator::Replicator;
@ -171,7 +171,7 @@ fn test_account_setup() {
// now check that the cluster actually has accounts for the replicator.
let client = create_client(
cluster.entry_point_info.client_facing_addr(),
FULLNODE_PORT_RANGE,
VALIDATOR_PORT_RANGE,
);
cluster.replicator_infos.iter().for_each(|(_, value)| {
assert_eq!(

View File

@ -1,6 +1,6 @@
use clap::{crate_description, crate_name, crate_version, App, Arg};
use console::style;
use solana_core::cluster_info::{Node, FULLNODE_PORT_RANGE};
use solana_core::cluster_info::{Node, VALIDATOR_PORT_RANGE};
use solana_core::contact_info::ContactInfo;
use solana_core::replicator::Replicator;
use solana_sdk::signature::{read_keypair_file, Keypair, KeypairUtil};
@ -94,8 +94,11 @@ fn main() {
addr.set_ip(solana_netutil::get_public_ip_addr(&entrypoint_addr).unwrap());
addr
};
let node =
Node::new_replicator_with_external_ip(&keypair.pubkey(), &gossip_addr, FULLNODE_PORT_RANGE);
let node = Node::new_replicator_with_external_ip(
&keypair.pubkey(),
&gossip_addr,
VALIDATOR_PORT_RANGE,
);
println!(
"{} version {} (branch={}, commit={})",

View File

@ -5,7 +5,7 @@ use indicatif::{ProgressBar, ProgressStyle};
use log::*;
use solana_client::rpc_client::RpcClient;
use solana_core::bank_forks::SnapshotConfig;
use solana_core::cluster_info::{Node, FULLNODE_PORT_RANGE};
use solana_core::cluster_info::{Node, VALIDATOR_PORT_RANGE};
use solana_core::contact_info::ContactInfo;
use solana_core::gossip_service::discover;
use solana_core::ledger_cleanup_service::DEFAULT_MAX_LEDGER_SLOTS;
@ -229,7 +229,7 @@ pub fn main() {
solana_metrics::set_panic_hook("validator");
let default_dynamic_port_range =
&format!("{}-{}", FULLNODE_PORT_RANGE.0, FULLNODE_PORT_RANGE.1);
&format!("{}-{}", VALIDATOR_PORT_RANGE.0, VALIDATOR_PORT_RANGE.1);
let matches = App::new(crate_name!()).about(crate_description!())
.version(crate_version!())