# Network Management

This directory contains scripts useful for working with a test network. It's intended to be both dev and CD friendly.

## User Account Prerequisites

Log in to GCP with:

```bash
$ gcloud auth login
```

Also ensure that `$(whoami)` is the name of an InfluxDB user account with enough access to create a new database.

## Quick Start

```bash
$ cd net/
$ ./gce.sh create -n 5 -c 1   # <-- Create a GCE testnet with 5 validators, 1 client (billing starts here)
$ ./init-metrics.sh $(whoami) # <-- Configure a metrics database for the testnet
$ ./net.sh start              # <-- Deploy the network from the local workspace
$ ./ssh.sh                    # <-- Details on how to ssh into any testnet node
$ ./gce.sh delete             # <-- Dispose of the network (billing stops here)
```
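Since billing runs from `create` to `delete`, a throwaway session can be wrapped so that cleanup always happens, even if a step fails partway through. A minimal sketch of that pattern, with the real commands stubbed out as functions (the wrapper and its function names are hypothetical, not part of this directory):

```shell
#!/usr/bin/env bash
# Hypothetical wrapper around the Quick Start commands: an EXIT trap
# ensures the delete step runs even if an earlier step fails, so a
# failed deploy doesn't leave billable instances running.
set -e

create_testnet() { echo "create"; }   # stands in for ./gce.sh create -n 5 -c 1
deploy_testnet() { echo "deploy"; }   # stands in for ./net.sh start
delete_testnet() { echo "delete"; }   # stands in for ./gce.sh delete

trap delete_testnet EXIT   # billing stops here, success or failure

create_testnet
deploy_testnet
```

With the stubs replaced by the real `./gce.sh` and `./net.sh` invocations, aborting anywhere after `create` still triggers the delete.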

## Tips

### Running the network over public IP addresses

By default, private IP addresses are used, with all instances in the same availability zone, to avoid GCE network egress charges. However, to run the network over public IP addresses:

```bash
$ ./gce.sh create -P ...
```
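For reference, a boolean flag like `-P` is typically wired up with the shell's `getopts` builtin; a hypothetical sketch of that pattern (not `gce.sh`'s actual option handling):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of how a -P (use public IPs) flag toggles behavior;
# gce.sh's real argument parsing may differ.
ip_mode() {
  local publicNetwork=false opt OPTIND=1
  while getopts "P" opt "$@"; do
    case $opt in
      P) publicNetwork=true ;;
    esac
  done
  echo "$publicNetwork"
}

ip_mode      # default: private IPs ("false")
ip_mode -P   # public IPs ("true")
```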

### Deploying a Snap-based network

To deploy the latest pre-built edge channel Snap (i.e., the latest from the master branch), run the following once the testnet has been created:

```bash
$ ./net.sh start -s edge
```

### Enabling CUDA

First ensure the network instances are created with GPU enabled:

```bash
$ ./gce.sh create -g ...
```

If deploying a Snap-based network, nothing further is required: GPU presence is detected at runtime and the CUDA build is selected automatically.
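That runtime check can be sketched roughly as follows; the device path and binary names here are assumptions for illustration, not the Snap's actual logic:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of runtime GPU detection; the Snap's real check
# may differ. The device path is a parameter so the logic is testable.
select_fullnode() {
  local gpu_dev=${1:-/dev/nvidia0}
  if [[ -e $gpu_dev ]]; then
    echo "solana-fullnode-cuda"   # GPU visible: use the CUDA build
  else
    echo "solana-fullnode"        # no GPU: use the plain build
  fi
}

select_fullnode   # picks a build based on whether /dev/nvidia0 exists
```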

If deploying a locally-built network, first run `./fetch-perf-libs.sh`, then ensure the `cuda` feature is specified at network start:

```bash
$ ./net.sh start -f "cuda,erasure"
```

### How to interact with a CD testnet deployed by `ci/testnet-deploy.sh`

Taking `master-testnet-solana-com` as an example, configure your workspace for the testnet using:

```bash
$ ./gce.sh config -p master-testnet-solana-com
$ ./ssh.sh   # <-- Details on how to ssh into any testnet node
```