# Network Management

This directory contains scripts useful for working with a test network. It's intended to be both dev and CD friendly.

## User Account Prerequisites

GCP, AWS, and colo are supported.

### GCP

First authenticate with:

```bash
$ gcloud auth login
```

### AWS

Obtain your credentials from the AWS IAM Console and configure the AWS CLI with:

```bash
$ aws configure
```

More information on AWS CLI configuration is available in the AWS CLI documentation.

### Metrics configuration (Optional)

Ensure that `$(whoami)` is the name of an InfluxDB user account with enough access to create a new InfluxDB database. Ask mvines@ for help if needed.

## Quick Start

NOTE: This example uses GCE. If you are using AWS EC2, replace `./gce.sh` with `./ec2.sh` in the commands.

```bash
$ cd net/
$ ./gce.sh create -n 5 -c 1     # <-- Create a GCE testnet with 5 additional nodes (beyond the bootstrap node) and 1 client (billing starts here)
$ ./init-metrics.sh $(whoami)   # <-- Recreate a metrics database for the testnet and configure credentials
$ ./net.sh start                # <-- Deploy the network from the local workspace and start processes on all nodes, including bench-tps on the client node
$ ./ssh.sh                      # <-- Show help for ssh-ing into any testnet node to access logs/etc.
$ ./net.sh stop                 # <-- Stop running processes on all nodes
$ ./gce.sh delete               # <-- Dispose of the network (billing stops here)
```

## Tips

### Running the network over public IP addresses

By default, private IP addresses are used, with all instances in the same availability zone, to avoid GCE network egress charges. However, to run the network over public IP addresses:

```bash
$ ./gce.sh create -P ...
```

or

```bash
$ ./ec2.sh create -P ...
```

### Deploying a tarball-based network

To deploy the latest pre-built edge channel tarball (i.e., the latest from the master branch), once the testnet has been created run:

```bash
$ ./net.sh start -t edge
```

### Enabling CUDA

First ensure the network instances are created with GPU enabled:

```bash
$ ./gce.sh create -g ...
```

or

```bash
$ ./ec2.sh create -g ...
```

If deploying a tarball-based network, nothing further is required: GPU presence is detected at runtime and the CUDA build is automatically selected.

### Partition testing

To induce a partition:

```bash
$ net.sh netem --config-file <config file path>
```

To remove the partition:

```bash
$ net.sh netem --config-file <config file path> --netem-cmd cleanup
```

The partition is also removed if you run `net.sh stop` or `restart`.

An example config that produces 3 almost equal partitions:

```json
{
  "partitions": [
    34,
    33,
    33
  ],
  "interconnects": [
    {
      "a": 0,
      "b": 1,
      "config": "loss 15% delay 25ms"
    },
    {
      "a": 1,
      "b": 0,
      "config": "loss 15% delay 25ms"
    },
    {
      "a": 0,
      "b": 2,
      "config": "loss 10% delay 15ms"
    },
    {
      "a": 2,
      "b": 0,
      "config": "loss 10% delay 15ms"
    },
    {
      "a": 2,
      "b": 1,
      "config": "loss 5% delay 5ms"
    },
    {
      "a": 1,
      "b": 2,
      "config": "loss 5% delay 5ms"
    }
  ]
}
```
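A config like this can be sanity-checked before use. The sketch below is a hypothetical helper (not part of `net.sh`), assuming the partition sizes are percentages that should sum to 100 and that netem shaping is applied per direction, so every `a`→`b` interconnect should have a matching `b`→`a` entry:

```python
import json

# Hypothetical sanity check for a netem partition config; not part of net.sh.
config = json.loads("""
{
  "partitions": [34, 33, 33],
  "interconnects": [
    {"a": 0, "b": 1, "config": "loss 15% delay 25ms"},
    {"a": 1, "b": 0, "config": "loss 15% delay 25ms"},
    {"a": 0, "b": 2, "config": "loss 10% delay 15ms"},
    {"a": 2, "b": 0, "config": "loss 10% delay 15ms"},
    {"a": 2, "b": 1, "config": "loss 5% delay 5ms"},
    {"a": 1, "b": 2, "config": "loss 5% delay 5ms"}
  ]
}
""")

# Partition percentages should cover the whole cluster (assumption: they sum to 100).
assert sum(config["partitions"]) == 100, "partitions must sum to 100"

# Every directed link should have a reverse counterpart, since traffic
# shaping is specified independently for each direction.
links = {(i["a"], i["b"]): i["config"] for i in config["interconnects"]}
for a, b in links:
    assert (b, a) in links, f"missing reverse link {b}->{a}"

print(f"{len(config['partitions'])} partitions, {len(links)} directed links: OK")
```

Running this against the example above prints `3 partitions, 6 directed links: OK`.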