solana/net
Latest commit 447fe48d2a by Michael Vines (2020-01-04 16:42:12 -07:00):
Revert "Add a stand-alone gossip node on the blocksteamer instance"

This reverts commit a217920561. That commit is causing trouble when the TdS cluster is reset and validators running an older genesis config are still present. Occasionally an RPC URL from an older validator will be selected, causing a new node to fail to boot.
| File | Last commit | Date |
|------|-------------|------|
| datacenter-node-install | Colo - Node install scripts missing latest user requests (#7540) | 2019-12-17 19:00:12 -05:00 |
| remote | Revert "Add a stand-alone gossip node on the blocksteamer instance" | 2020-01-04 16:42:12 -07:00 |
| scripts | Add pubkey from new buildkite agent instance | 2019-12-17 18:00:15 -05:00 |
| .gitignore | Clean up net logs (#6813) | 2019-11-08 10:25:17 -05:00 |
| README.md | Remove CUDA feature (#6094) | 2019-09-26 13:36:51 -07:00 |
| azure.sh | Add support for Azure instances in testnet creation (#3905) | 2019-04-23 16:41:45 -06:00 |
| colo.sh | Add script for managing colo resourse ala gce.sh (#5854) | 2019-09-19 14:08:22 -07:00 |
| common.sh | Fix ssh connection error due to too many authentication failures (#7229) | 2019-12-03 15:53:12 -08:00 |
| ec2.sh | | |
| gce.sh | Fix gce.sh info (#7054) | 2019-11-19 17:49:25 -08:00 |
| init-metrics.sh | Add ability to manually create a db (#6151) | 2019-09-27 12:03:20 -07:00 |
| net.sh | Revert "Add a stand-alone gossip node on the blocksteamer instance" | 2020-01-04 16:42:12 -07:00 |
| scp.sh | Add net/scp.sh for easier file transfer to/from network nodes | 2018-11-12 11:48:53 -08:00 |
| ssh.sh | Rename remaining uses of fullnode to validator (#6476) | 2019-10-21 20:21:21 -07:00 |

README.md

Network Management

This directory contains scripts useful for working with a test network. They are intended to be friendly to both local development and CD workflows.

User Account Prerequisites

GCP and AWS are supported.

GCP

First authenticate with

$ gcloud auth login
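If your gcloud account has access to more than one GCP project, you may also need to select a default project before creating instances. The project ID below is a placeholder:

$ gcloud config set project my-testnet-project   #<-- placeholder project ID; substitute your own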

AWS

Obtain your credentials from the AWS IAM Console and configure the AWS CLI with

$ aws configure
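A typical aws configure session looks roughly like the following; all values are placeholders and the region is only an example:

$ aws configure
AWS Access Key ID [None]: <your access key id>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: us-east-1
Default output format [None]: json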

More information on AWS CLI configuration can be found in the AWS CLI documentation.

Metrics configuration (Optional)

Ensure that $(whoami) is the name of an InfluxDB user account with enough access to create a new InfluxDB database. Ask mvines@ for help if needed.
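Here $(whoami) expands to your local username, so confirm that the value it produces matches your InfluxDB account name:

$ whoami   #<-- this must match the name of your InfluxDB user account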

Quick Start

NOTE: This example uses GCE. If you are using AWS EC2, replace ./gce.sh with ./ec2.sh in the commands.

$ cd net/
$ ./gce.sh create -n 5 -c 1     #<-- Create a GCE testnet with 5 additional nodes (beyond the bootstrap node) and 1 client (billing starts here)
$ ./init-metrics.sh $(whoami)   #<-- Configure a metrics database for the testnet
$ ./net.sh start                #<-- Deploy the network from the local workspace and start all clients with bench-tps
$ ./ssh.sh                      #<-- Details on how to ssh into any testnet node to access logs/etc
$ ./gce.sh delete               #<-- Dispose of the network (billing stops here)
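net.sh and gce.sh support additional subcommands beyond the ones shown above; the exact set may vary by version, so consult the usage text in each script. A few that are commonly useful:

$ ./gce.sh info                 #<-- Show information about the created instances
$ ./net.sh sanity               #<-- Run a basic sanity check against the running testnet
$ ./net.sh stop                 #<-- Stop the network software without deleting the instances
$ ./net.sh logs                 #<-- Fetch logs from the testnet nodes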

Tips

Running the network over public IP addresses

By default, private IP addresses are used, with all instances in the same availability zone, to avoid GCE network egress charges. However, to run the network over public IP addresses:

$ ./gce.sh create -P ...

or

$ ./ec2.sh create -P ...

Deploying a tarball-based network

To deploy the latest pre-built edge channel tarball (i.e., the latest from the master branch), run the following once the testnet has been created:

$ ./net.sh start -t edge
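Assuming the standard edge/beta/stable channel scheme, other channels can presumably be deployed the same way, for example:

$ ./net.sh start -t beta        #<-- Deploy the latest pre-built beta channel tarball instead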

Enabling CUDA

First ensure the network instances are created with GPUs enabled:

$ ./gce.sh create -g ...

or

$ ./ec2.sh create -g ...

If deploying a tarball-based network, nothing further is required: GPU presence is detected at runtime and the CUDA build is selected automatically.

How to interact with a CD testnet deployed by ci/testnet-deploy.sh

AWS-Specific Extra Setup: Follow the steps in scripts/solana-user-authorized_keys.sh, then redeploy the testnet before continuing in this section.

Taking master-testnet-solana-com as an example, configure your workspace for the testnet using:

$ ./gce.sh config -p master-testnet-solana-com

or

$ ./ec2.sh config -p master-testnet-solana-com

Then run the following for details on how to ssh into any testnet node to access logs or otherwise inspect the node:

$ ./ssh.sh