From 3f55cc343047edc504cb88585c3ec07c0dd664ef Mon Sep 17 00:00:00 2001 From: Zach Ramsay Date: Sat, 16 Sep 2017 14:13:09 -0400 Subject: [PATCH] docs: use README.rst to be pulled from tendermint --- ansible/README.rst | 288 +++++++++++++++++++++++++++++ docker/README.rst | 120 +++++++++++++ mintnet-kubernetes/README.rst | 289 ++++++++++++++++++++++++++++++ terraform-digitalocean/README.rst | 111 ++++++++++++ tm-bench/README.rst | 144 +++++++++++++++ 5 files changed, 952 insertions(+) create mode 100644 ansible/README.rst create mode 100644 docker/README.rst create mode 100644 mintnet-kubernetes/README.rst create mode 100644 terraform-digitalocean/README.rst create mode 100644 tm-bench/README.rst diff --git a/ansible/README.rst b/ansible/README.rst new file mode 100644 index 00000000..33e31c3b --- /dev/null +++ b/ansible/README.rst @@ -0,0 +1,288 @@ +Using Ansible +============= + +.. figure:: assets/a_plus_t.png + :alt: Ansible plus Tendermint + + Ansible plus Tendermint + +The playbooks in `our ansible directory `__ +run ansible `roles `__ which: + +- install and configure basecoin or ethermint +- start/stop basecoin or ethermint and reset their configuration + +Prerequisites +------------- + +- Ansible 2.0 or higher +- SSH key to the servers + +Optional for DigitalOcean droplets: \* DigitalOcean API Token \* python +dopy package + +For a description on how to get a DigitalOcean API Token, see the explanation +in the `using terraform tutorial `__. + +Optional for Amazon AWS instances: \* Amazon AWS API access key ID and +secret access key. + +The cloud inventory scripts come from the ansible team at their +`GitHub `__ page. You can get the +latest version from the ``contrib/inventory`` folder. + +Setup +----- + +Ansible requires a "command machine" or "local machine" or "orchestrator +machine" to run on. This can be your laptop or any machine that can run +ansible. (It does not have to be part of the cloud network that hosts +your servers.) + +Use the official `Ansible installation +guide `__ to +install Ansible. Here are a few examples on basic installation commands: + +Ubuntu/Debian: + +:: + + sudo apt-get install ansible + +CentOS/RedHat: + +:: + + sudo yum install epel-release + sudo yum install ansible + +Mac OSX: If you have `Homebrew `__ installed, then it's: + +:: + + brew install ansible + +If not, you can install it using ``pip``: + +:: + + sudo easy_install pip + sudo pip install ansible + +To make life easier, you can start an SSH Agent and load your SSH +key(s). This way ansible will have an uninterrupted way of connecting to +your servers. + +:: + + ssh-agent > ~/.ssh/ssh.env + source ~/.ssh/ssh.env + + ssh-add private.key + +Subsequently, as long as the agent is running, you can use +``source ~/.ssh/ssh.env`` to load the keys to the current session. Note: +On Mac OSX, you can add the ``-K`` option to ssh-add to store the +passphrase in your keychain. The security of this feature is debated but +it is convenient. + +Optional cloud dependencies +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If you are using a cloud provider to host your servers, you need the +below dependencies installed on your local machine. 
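+
+Most of the inventory dependencies below are Python packages installed
+with ``pip``, so it is worth confirming first that a Python interpreter
+and ``pip`` are available (a quick check only; the exact commands may
+differ on your distribution):
+
+::
+
+    python --version
+    pip --version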
+ +DigitalOcean inventory dependencies: +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Ubuntu/Debian: + +:: + + sudo apt-get install python-pip + sudo pip install dopy + +CentOS/RedHat: + +:: + + sudo yum install python-pip + sudo pip install dopy + +Mac OSX: + +:: + + sudo pip install dopy + +Amazon AWS inventory dependencies: +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Ubuntu/Debian: + +:: + + sudo apt-get install python-boto + +CentOS/RedHat: + +:: + + sudo yum install python-boto + +Mac OSX: + +:: + + sudo pip install boto + +Refreshing the DigitalOcean inventory +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +If you just finished creating droplets, the local DigitalOcean inventory +cache is not up-to-date. To refresh it, run: + +:: + + DO_API_TOKEN="" + python -u inventory/digital_ocean.py --refresh-cache 1> /dev/null + +Refreshing the Amazon AWS inventory +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +If you just finished creating Amazon AWS EC2 instances, the local AWS +inventory cache is not up-to-date. To refresh it, run: + +:: + + AWS_ACCESS_KEY_ID='' + AWS_SECRET_ACCESS_KEY='' + python -u inventory/ec2.py --refresh-cache 1> /dev/null + +Note: you don't need the access key and secret key set, if you are +running ansible on an Amazon AMI instance with the proper IAM +permissions set. + +Running the playbooks +--------------------- + +The playbooks are locked down to only run if the environment variable +``TF_VAR_TESTNET_NAME`` is populated. This is a precaution so you don't +accidentally run the playbook on all your servers. + +The variable ``TF_VAR_TESTNET_NAME`` contains the testnet name which +ansible translates into an ansible group. If you used Terraform to +create the servers, it was the testnet name used there. + +If the playbook cannot connect to the servers because of public key +denial, your SSH Agent is not set up properly. Alternatively you can add +the SSH key to ansible using the ``--private-key`` option. + +If you need to connect to the nodes as root but your local username is +different, use the ansible option ``-u root`` to tell ansible to connect +to the servers and authenticate as the root user. + +If you secured your server and you need to ``sudo`` for root access, use +the the ``-b`` or ``--become`` option to tell ansible to sudo to root +after connecting to the server. In the Terraform-DigitalOcean example, +if you created the ec2-user by adding the ``noroot=true`` option (or if +you are simply on Amazon AWS), you need to add the options +``-u ec2-user -b`` to ansible to tell it to connect as the ec2-user and +then sudo to root to run the playbook. + +DigitalOcean +~~~~~~~~~~~~ + +:: + + DO_API_TOKEN="" + TF_VAR_TESTNET_NAME="testnet-servers" + ansible-playbook -i inventory/digital_ocean.py install.yml -e service=basecoin + +Amazon AWS +~~~~~~~~~~ + +:: + + AWS_ACCESS_KEY_ID='' + AWS_SECRET_ACCESS_KEY='' + TF_VAR_TESTNET_NAME="testnet-servers" + ansible-playbook -i inventory/ec2.py install.yml -e service=basecoin + +Installing custom versions +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +By default ansible installs the tendermint, basecoin or ethermint binary +versions from the latest release in the repository. If you build your +own version of the binaries, you can tell ansible to install that +instead. 
+
+::
+
+    GOPATH=""
+    go get -u github.com/tendermint/basecoin/cmd/basecoin
+
+    DO_API_TOKEN=""
+    TF_VAR_TESTNET_NAME="testnet-servers"
+    ansible-playbook -i inventory/digital_ocean.py install.yml -e service=basecoin -e release_install=false
+
+Alternatively you can change the variable settings in
+``group_vars/all``.
+
+Other commands and roles
+------------------------
+
+There are a few extra playbooks to make managing your servers easier.
+
+- install.yml - Install basecoin or ethermint applications. (Tendermint
+  gets installed automatically.) Use the ``service`` parameter to
+  define which application to install. Defaults to ``basecoin``.
+- reset.yml - Stop the application, reset the configuration and data,
+  then start the application again. You need to pass
+  ``-e service=``, like ``-e service=basecoin``. It will
+  restart the underlying tendermint application too.
+- restart.yml - Restart a service on all nodes. You need to pass
+  ``-e service=``, like ``-e service=basecoin``. It will
+  restart the underlying tendermint application too.
+- stop.yml - Stop the application. You need to pass
+  ``-e service=``.
+- status.yml - Check the service status and print it. You need to pass
+  ``-e service=``.
+- start.yml - Start the application. You need to pass
+  ``-e service=``.
+- ubuntu16-patch.yml - Ubuntu 16.04 does not have the minimum required
+  python package installed to be able to run ansible. If you are using
+  Ubuntu, run this playbook first on the target machines. It will
+  install the python package that is required for ansible to work
+  correctly on the remote nodes.
+- upgrade.yml - Upgrade the ``service`` on your testnet. It will stop
+  the service and restart it at the end. It will only work if the
+  upgraded version is backward compatible with the installed version.
+- upgrade-reset.yml - Upgrade the ``service`` on your testnet and reset
+  the database. It will stop the service and restart it at the end. It
+  will work for upgrades where the new version is not
+  backward-compatible with the installed version; however, it will
+  reset the testnet to its default state.
+
+The roles are self-sufficient under the ``roles/`` folder.
+
+- install - install the application defined in the ``service``
+  parameter. It can install release packages and update them with
+  custom-compiled binaries.
+- unsafe\_reset - delete the database for a service, including the
+  tendermint database.
+- config - configure the application defined in ``service``. It also
+  configures the underlying tendermint service. Check
+  ``group_vars/all`` for options.
+- stop - stop an application. Requires the ``service`` parameter set.
+- status - check the status of an application. Requires the ``service``
+  parameter set.
+- start - start an application. Requires the ``service`` parameter set.
+
+Default variables
+-----------------
+
+Default variables are documented under ``group_vars/all``. You can use
+the parameters there to deploy a previously created genesis.json file
+(instead of dynamically creating it) or to deploy custom-built binaries
+instead of a released version.
diff --git a/docker/README.rst b/docker/README.rst
new file mode 100644
index 00000000..b135c007
--- /dev/null
+++ b/docker/README.rst
@@ -0,0 +1,120 @@
+Using Docker
+============
+
+`This folder `__ contains Docker container descriptions. Using this
+folder you can build your own Docker images with the tendermint
+application.
+
+It is assumed that you have already set up Docker.
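+
+Before building anything, you can confirm that the Docker daemon is
+installed and reachable (just a quick sanity check, nothing specific to
+tendermint):
+
+::
+
+    docker version
+    docker info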
+ +If you don't want to build the images yourself, you should be able to +download them from Docker Hub. + +Tendermint +---------- + +Build the container: Copy the ``tendermint`` binary to the +``tendermint`` folder. + +:: + + docker build -t tendermint tendermint + +The application configuration will be stored at ``/tendermint`` in the +container. The ports 46656 and 46657 will be open for ABCI applications +to connect. + +Initialize tendermint configuration and keep it after the container is +finished in a docker volume called ``data``: + +:: + + docker run --rm -v data:/tendermint tendermint init + +If you want the docker volume to be a physical directory on your +filesystem, you have to give an absolute path to docker and make sure +the permissions allow the application to write it. + +Get the public key of tendermint: + +:: + + docker run --rm -v data:/tendermint tendermint show_validator + +Run the docker tendermint application with: + +:: + + docker run --rm -d -v data:/tendermint tendermint node + +Basecoin +-------- + +Build the container: Copy the ``basecoin`` binary to the ``basecoin`` +folder. + +:: + + docker build -t basecoin basecoin + +The application configuration will be stored at ``/basecoin``. + +Initialize basecoin configuration and keep it after the container is +finished: + +:: + + docker run --rm -v basecoindata:/basecoin basecoin init deadbeef + +Use your own basecoin account instead of ``deadbeef`` in the ``init`` +command. + +Get the public key of basecoin: We use a trick here: since the basecoin +and the tendermint configuration folders are similar, the ``tendermint`` +command can extract the public key for us if we feed the basecoin +configuration folder to tendermint. + +:: + + docker run --rm -v basecoindata:/tendermint tendermint show_validator + +Run the docker tendermint application with: This is a two-step process: +\* Run the basecoin container. \* Run the tendermint container and +expose the ports that allow clients to connect. The --proxy\_app should +contain the basecoin application's IP address and port. + +:: + + docker run --rm -d -v basecoindata:/basecoin basecoin start --without-tendermint + docker run --rm -d -v data:/tendermint -p 46656-46657:46656-46657 tendermint node --proxy_app tcp://172.17.0.2:46658 + +Ethermint +--------- + +Build the container: Copy the ``ethermint`` binary and the setup folder +to the ``ethermint`` folder. + +:: + + docker build -t ethermint ethermint + +The application configuration will be stored at ``/ethermint``. The +files required for initializing ethermint (the files in the source +``setup`` folder) are under ``/setup``. + +Initialize ethermint configuration: + +:: + + docker run --rm -v ethermintdata:/ethermint ethermint init /setup/genesis.json + +Start ethermint as a validator node: This is a two-step process: \* Run +the ethermint container. You will have to define where tendermint runs +as the ethermint binary connects to it explicitly. \* Run the tendermint +container and expose the ports that allow clients to connect. The +--proxy\_app should contain the ethermint application's IP address and +port. 
+ +:: + + docker run --rm -d -v ethermintdata:/ethermint ethermint --tendermint_addr tcp://172.17.0.3:46657 + docker run --rm -d -v data:/tendermint -p 46656-46657:46656-46657 tendermint node --proxy_app tcp://172.17.0.2:46658 diff --git a/mintnet-kubernetes/README.rst b/mintnet-kubernetes/README.rst new file mode 100644 index 00000000..627cadd7 --- /dev/null +++ b/mintnet-kubernetes/README.rst @@ -0,0 +1,289 @@ +Using Kubernetes +================ + +.. figure:: assets/t_plus_k.png + :alt: Tendermint plus Kubernetes + + Tendermint plus Kubernetes + +This should primarily be used for testing purposes or for +tightly-defined chains operated by a single stakeholder (see `the +security precautions <#security>`__). If your desire is to launch an +application with many stakeholders, consider using our set of Ansible +scripts. + +Quick Start +----------- + +For either platform, see the `requirements `__ + +MacOS +^^^^^ + +:: + + curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl + curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/ + minikube start + + git clone https://github.com/tendermint/tools.git && cd tools/mintnet-kubernetes/examples/basecoin && make create + +Linux +^^^^^ + +:: + + curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl + curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/ + minikube start + + git clone https://github.com/tendermint/tools.git && cd tools/mintnet-kubernetes/examples/basecoin && make create + +Verify it worked +~~~~~~~~~~~~~~~~ + +**Using a shell:** + +First wait until all the pods are ``Running``: + +``kubectl get pods -w -o wide -L tm`` + +then query the Tendermint app logs from the first pod: + +``kubectl logs -c tm -f tm-0`` + +finally, use `Rest API `__ to fetch the status of the second pod's Tendermint app. +Note we are using ``kubectl exec`` because pods are not exposed (and should not be) to the +outer network: + +``kubectl exec -c tm tm-0 -- curl -s http://tm-1.basecoin:46657/status | json_pp`` + +**Using the dashboard:** + +:: + + minikube dashboard + +Clean up +~~~~~~~~ + +:: + + make destroy + +Usage +----- + +Setup a Kubernetes cluster +^^^^^^^^^^^^^^^^^^^^^^^^^^ + +- locally using `Minikube `__ +- on GCE with a single click in the web UI +- on AWS using `Kubernetes + Operations `__ +- on Linux machines (Digital Ocean) using + `kubeadm `__ +- on AWS, Azure, GCE or bare metal using `Kargo + (Ansible) `__ + +Please refer to `the official +documentation `__ +for overview and comparison of different options. + +Kubernetes on Digital Ocean +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Available options: + +- `kubeadm (alpha) `__ +- `kargo `__ +- `rancher `__ +- `terraform `__ + +As you can see, there is no single tool for creating a cluster on DO. +Therefore, choose the one you know and comfortable working with. If you know +and used `terraform `__ before, then choose it. If you +know Ansible, then pick kargo. If none of these seem familiar to you, go with +``kubeadm``. 
Rancher is a beautiful UI for deploying and managing containers in +production. + +Kubernetes on Google Cloud Engine +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Review the `Official Documentation `__ for Kubernetes on Google Compute +Engine. + +**Create a cluster** + +The recommended way is to use `Google Container +Engine `__. You should be able +to create a fully fledged cluster with just a few clicks. + +**Connect to it** + +Install ``gcloud`` as a part of `Google Cloud SDK `__. + +Make sure you have credentials for GCloud by running ``gcloud auth login``. + +In order to make API calls against GCE, you must also run ``gcloud auth +application-default login``. + +Press ``Connect``: + +.. figure:: assets/gce1.png + +and execute the first command in your shell. Then start a proxy by +executing ``kubectl` proxy``. + +.. figure:: assets/gce2.png + +Now you should be able to run ``kubectl`` command to create resources, get +resource info, logs, etc. + +**Make sure you have Kubernetes >= 1.5, because you will be using +StatefulSets, which is a beta feature in 1.5.** + +Create a configuration file +^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Download a template: + +:: + + curl -Lo app.yaml https://github.com/tendermint/tools/raw/master/mintnet-kubernetes/app.template.yaml + +Open ``app.yaml`` in your favorite editor and configure your app +container (navigate to ``- name: app``). Kubernetes DSL (Domain Specific +Language) is very simple, so it should be easy. You will need to set +Docker image, command and/or run arguments. Replace variables prefixed +with ``YOUR_APP`` with corresponding values. Set genesis time to now and +preferable chain ID in ConfigMap. + +Please note if you are changing ``replicas`` number, do not forget to +update ``validators`` set in ConfigMap. You will be able to scale the +cluster up or down later, but new pods (nodes) won't become validators +automatically. + +Deploy your application +^^^^^^^^^^^^^^^^^^^^^^^ + +:: + + kubectl create -f ./app.yaml + +Observe your cluster +^^^^^^^^^^^^^^^^^^^^ + +`web UI `__ + +The easiest way to access Dashboard is to use ``kubectl``. Run the following +command in your desktop environment: + +:: + + kubectl proxy + +``kubectl`` will handle authentication with apiserver and make Dashboard +available at http://localhost:8001/ui + +**shell** + +List all the pods: + +:: + + kubectl get pods -o wide -L tm + +StatefulSet details: + +:: + + kubectl describe statefulsets tm + +First pod details: + +:: + + kubectl describe pod tm-0 + +Tendermint app logs from the first pod: + +:: + + kubectl logs tm-0 -c tm -f + +App logs from the first pod: + +:: + + kubectl logs tm-0 -c app -f + +Status of the second pod's Tendermint app: + +:: + + kubectl exec -c tm tm-0 -- curl -s http://tm-1.:46657/status | json_pp + +Security +-------- + +Due to the nature of Kubernetes, where you typically have a single +master, the master could be a SPOF (Single Point Of Failure). Therefore, +you need to make sure only authorized people can access it. And these +people themselves had taken basic measures in order not to get hacked. 
+ +These are the best practices: + +- all access to the master is over TLS +- access to the API Server is X.509 certificate or token based +- etcd is not exposed directly to the cluster +- ensure that images are free of vulnerabilities + (`1 `__) +- ensure that only authorized images are used in your environment +- disable direct access to Kubernetes nodes (no SSH) +- define resource quota + +Resources: + +- https://kubernetes.io/docs/admin/accessing-the-api/ +- http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html +- https://blog.openshift.com/securing-kubernetes/ + +Fault tolerance +--------------- + +Having a single master (API server) is a bad thing also because if +something happens to it, you risk being left without an access to the +application. + +To avoid that you can `run Kubernetes in multiple +zones `__, each zone +running an `API +server `__ and load +balance requests between them. Do not forget to make sure only one +instance of scheduler and controller-manager are running at once. + +Running in multiple zones is a lightweight version of a broader `Cluster +Federation feature `__. +Federated deployments could span across multiple regions (not zones). We +haven't tried this feature yet, so any feedback is highly appreciated! +Especially, related to additional latency and cost of exchanging data +between the regions. + +Resources: + +- https://kubernetes.io/docs/admin/high-availability/ + +Starting process +---------------- + +.. figure:: assets/statefulset.png + :alt: StatefulSet + + StatefulSet + +Init containers (``tm-gen-validator``) are run before all other +containers, creating public-private key pair for each pod. Every ``tm`` +container then asks other pods for their public keys, which are served +with nginx (``pub-key`` container). When ``tm`` container have all the +keys, it forms a genesis file and starts the Tendermint process. diff --git a/terraform-digitalocean/README.rst b/terraform-digitalocean/README.rst new file mode 100644 index 00000000..95af507f --- /dev/null +++ b/terraform-digitalocean/README.rst @@ -0,0 +1,111 @@ +Using Terraform +=============== + +This is a generic `Terraform `__ +configuration that sets up DigitalOcean droplets. See the +`terraform-digitalocean `__ +for the required files. + +Prerequisites +------------- + +- Install `HashiCorp Terraform `__ on a linux + machine. +- Create a `DigitalOcean API + token `__ with + read and write capability. +- Create a private/public key pair for SSH. This is needed to log onto + your droplets as well as by Ansible to connect for configuration + changes. +- Set up the public SSH key at the `DigitalOcean security + page `__. + `Here `__'s + a tutorial. +- Find out your SSH key ID at DigitalOcean by querying the below + command on your linux box: + +:: + + DO_API_TOKEN="" + curl -X GET -H "Content-Type: application/json" -H "Authorization: Bearer $DO_API_TOKEN" "https://api.digitalocean.com/v2/account/keys" + +Initialization +-------------- + +If this is your first time using terraform, you have to initialize it by +running the below command. (Note: initialization can be run multiple +times) + +:: + + terraform init + +After initialization it's good measure to create a new Terraform +environment for the droplets so they are always managed together. + +:: + + TESTNET_NAME="testnet-servers" + terraform env new "$TESTNET_NAME" + +Note this ``terraform env`` command is only available in terraform +``v0.9`` and up. 
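+
+To double-check that your terraform binary is recent enough and that the
+new environment is selected, you can run (a quick sanity check, nothing
+more):
+
+::
+
+    terraform version
+    terraform env list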
+
+Execution
+---------
+
+The below command will create 4 nodes in DigitalOcean. They will be
+named ``testnet-servers-node0`` to ``testnet-servers-node3`` and they
+will be tagged as ``testnet-servers``.
+
+::
+
+    DO_API_TOKEN=""
+    SSH_IDS="[ \"\" ]"
+    terraform apply -var TESTNET_NAME="testnet-servers" -var servers=4 -var DO_API_TOKEN="$DO_API_TOKEN" -var ssh_keys="$SSH_IDS"
+
+Note: ``ssh_keys`` is a list of strings. You can add multiple keys. For
+example: ``["1234567","9876543"]``.
+
+Alternatively you can use the default settings. The number of default
+servers is 4 and the testnet name is ``tf-testnet1``. Variables can also
+be defined as environment variables instead of on the command line.
+Environment variables that start with ``TF_VAR_`` will be translated
+into the Terraform configuration. For example, the number of servers can
+be overridden by setting the ``TF_VAR_servers`` variable.
+
+::
+
+    TF_VAR_DO_API_TOKEN=""
+    TF_VAR_TESTNET_NAME="testnet-servers"
+    terraform apply
+
+Security
+--------
+
+DigitalOcean uses the root user by default on its droplets. This is fine
+as long as SSH keys are used. However, some people prefer to disable
+root and use an alternative user to connect to the droplets, then
+``sudo`` from there. Terraform can do this, but it requires an SSH agent
+running on the machine where terraform is run, with one of the SSH keys
+of the droplets added to the agent. (This will be needed for ansible
+too, so it's worth setting it up here. Check out the
+`ansible `__
+page for more information.) After setting up the SSH key, run
+``terraform apply`` with ``-var noroot=true`` to create your droplets.
+Terraform will create a user called ``ec2-user`` and move the SSH keys
+over, disabling SSH login for root in the process. It also adds
+``ec2-user`` to the sudoers file, so after logging in as ec2-user you
+can ``sudo`` to ``root``.
+
+DigitalOcean has announced firewalls, but the current version of
+Terraform (0.9.8 as of this writing) does not support them yet.
+Fortunately it is quite easy to set one up through the web interface
+(and not that bad through the `RESTful API `__ either). When adding
+droplets to a firewall rule, you can add tags. All droplets in a testnet
+are tagged with the testnet name, so it's enough to define the testnet
+name in the firewall rule; it is not necessary to add the nodes
+one-by-one. Also, the firewall rule "remembers" the testnet name tag, so
+if you change the servers but keep the name, the firewall rules will
+still apply.
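+
+As an illustration only - the request body below is a sketch, and both
+the firewall name and the exact field names should be checked against
+the DigitalOcean firewall API documentation before use - a tag-based
+rule opening the tendermint ports could be created roughly like this:
+
+::
+
+    DO_API_TOKEN=""
+    curl -X POST -H "Content-Type: application/json" \
+         -H "Authorization: Bearer $DO_API_TOKEN" \
+         -d '{"name":"testnet-servers-firewall","tags":["testnet-servers"],"inbound_rules":[{"protocol":"tcp","ports":"46656-46657","sources":{"addresses":["0.0.0.0/0","::/0"]}}]}' \
+         "https://api.digitalocean.com/v2/firewalls"
+
+Keep in mind that a firewall with no outbound rules also blocks outgoing
+traffic from the droplets, so for real deployments add outbound rules as
+well (or create the rule in the web interface).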
diff --git a/tm-bench/README.rst b/tm-bench/README.rst
new file mode 100644
index 00000000..676f2bbf
--- /dev/null
+++ b/tm-bench/README.rst
@@ -0,0 +1,144 @@
+Benchmarking and Monitoring
+===========================
+
+tm-bench
+--------
+
+Tendermint blockchain benchmarking tool: https://github.com/tendermint/tools/tree/master/tm-bench
+
+For example, the following:
+
+::
+
+    tm-bench -T 10 -r 1000 localhost:46657
+
+will output:
+
+::
+
+    Stats          Avg        Stdev      Max
+    Block latency  6.18ms     3.19ms     14ms
+    Blocks/sec     0.828      0.378      1
+    Txs/sec        963        493        1811
+
+Quick Start
+^^^^^^^^^^^
+
+Docker
+~~~~~~
+
+::
+
+    docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint init
+    docker run -it --rm -v "/tmp:/tendermint" -p "46657:46657" --name=tm tendermint/tendermint
+
+    docker run -it --rm --link=tm tendermint/bench tm:46657
+
+Binaries
+~~~~~~~~
+
+If you are on **Linux**, start with:
+
+::
+
+    curl -LO https://s3-us-west-2.amazonaws.com/tendermint/0.10.4/tendermint_linux_amd64.zip && sudo unzip -d /usr/local/bin tendermint_linux_amd64.zip && sudo chmod +x /usr/local/bin/tendermint
+
+If you are on **Mac OS**, start with:
+
+::
+
+    curl -LO https://s3-us-west-2.amazonaws.com/tendermint/0.10.4/tendermint_darwin_amd64.zip && sudo unzip -d /usr/local/bin tendermint_darwin_amd64.zip && sudo chmod +x /usr/local/bin/tendermint
+
+then run:
+
+::
+
+    tendermint init
+    tendermint node --proxy_app=dummy
+
+    tm-bench localhost:46657
+
+with the last command run in a separate window.
+
+Usage
+^^^^^
+
+::
+
+    tm-bench [-c 1] [-T 10] [-r 1000] [endpoints]
+
+    Examples:
+        tm-bench localhost:46657
+
+    Flags:
+      -T int
+            Exit after the specified amount of time in seconds (default 10)
+      -c int
+            Connections to keep open per endpoint (default 1)
+      -r int
+            Txs per second to send in a connection (default 1000)
+      -v    Verbose output
+
+tm-monitor
+----------
+
+Tendermint blockchain monitoring tool; watches over one or more nodes,
+collecting and providing various statistics to the user:
+https://github.com/tendermint/tools/tree/master/tm-monitor
+
+Quick Start
+^^^^^^^^^^^
+
+Docker
+~~~~~~
+
+::
+
+    docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint init
+    docker run -it --rm -v "/tmp:/tendermint" -p "46657:46657" --name=tm tendermint/tendermint
+
+    docker run -it --rm --link=tm tendermint/monitor tm:46657
+
+Binaries
+~~~~~~~~
+
+The setup is the same as for ``tm-bench`` above, except that the last
+command should be:
+
+::
+
+    tm-monitor localhost:46657
+
+Usage
+^^^^^
+
+::
+
+    tm-monitor [-v] [-no-ton] [-listen-addr="tcp://0.0.0.0:46670"] [endpoints]
+
+    Examples:
+        # monitor single instance
+        tm-monitor localhost:46657
+
+        # monitor a few instances by providing comma-separated list of RPC endpoints
+        tm-monitor host1:46657,host2:46657
+
+    Flags:
+      -listen-addr string
+            HTTP and Websocket server listen address (default "tcp://0.0.0.0:46670")
+      -no-ton
+            Do not show ton (table of nodes)
+      -v    verbose logging
+
+RPC UI
+^^^^^^
+
+Run ``tm-monitor`` and visit http://localhost:46670. You should see the
+list of available RPC endpoints:
+
+::
+
+    http://localhost:46670/status
+    http://localhost:46670/status/network
+    http://localhost:46670/monitor?endpoint=_
+    http://localhost:46670/status/node?name=_
+    http://localhost:46670/unmonitor?endpoint=_
+
+The API is available as GET requests with URI encoded parameters, or as
+JSONRPC POST requests. The JSONRPC methods are also exposed over
+websocket.
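+
+For example, you can poke at the HTTP interface with ``curl`` (a minimal
+sketch, assuming ``tm-monitor`` is running locally on the default listen
+address):
+
+::
+
+    curl -s http://localhost:46670/status/network | json_pp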