cleanup _attic

This commit is contained in:
parent 12a180786a
commit 822ebdb501

@@ -1,289 +0,0 @@
Basecoin Basics
===============

Here we explain how to get started with a basic Basecoin blockchain, how
to send transactions between accounts using the ``basecoin`` tool, and
what is happening under the hood.

Install
-------

With go, it's one command:

::

    go get -u github.com/cosmos/cosmos-sdk

If you have trouble, see the `installation guide <./install.html>`__.

TODO: update all the below

Generate some keys
~~~~~~~~~~~~~~~~~~

Let's generate two keys, one to receive an initial allocation of coins,
and one to send some coins to later:

::

    basecli keys new cool
    basecli keys new friend

You'll need to enter passwords. You can view your key names and
addresses with ``basecli keys list``, or see a particular key's address
with ``basecli keys get <NAME>``.

Initialize Basecoin
-------------------

To initialize a new Basecoin blockchain, run:

::

    basecoin init <ADDRESS>

If you prefer not to copy-paste, you can provide the address
programmatically:

::

    basecoin init $(basecli keys get cool | awk '{print $2}')

This will create the necessary files for a Basecoin blockchain with one
validator and one account (corresponding to your key) in
``~/.basecoin``. For more options on setup, see the `guide to using the
Basecoin tool </docs/guide/basecoin-tool.md>`__.

If you like, you can manually add more accounts to the blockchain by
generating keys and editing ``~/.basecoin/genesis.json``.

Start Basecoin
~~~~~~~~~~~~~~

Now we can start Basecoin:

::

    basecoin start

You should see blocks start streaming in!

Initialize Light-Client
-----------------------

Now that Basecoin is running we can initialize ``basecli``, the
light-client utility. Basecli is used for sending transactions and
querying the state. Leave Basecoin running, open a new terminal
window, and run:

::

    basecli init --node=tcp://localhost:26657 --genesis=$HOME/.basecoin/genesis.json

If you provide the genesis file to basecli, it can calculate the proper
chainID and validator hash. Basecli needs to get this information from
some trusted source, so that all queries done with ``basecli`` can be
cryptographically proven correct according to a known validator set.

Note that ``--genesis`` only works if there have been no validator set
changes since genesis. If there have been validator set changes, you
need to find the current set through some other method.

Send transactions
~~~~~~~~~~~~~~~~~

Now we are ready to send some transactions. First, let's check the
balance of the two accounts we set up earlier:

::

    ME=$(basecli keys get cool | awk '{print $2}')
    YOU=$(basecli keys get friend | awk '{print $2}')
    basecli query account $ME
    basecli query account $YOU

The first account is flush with cash, while the second account doesn't
exist yet. Let's send funds from the first account to the second:

::

    basecli tx send --name=cool --amount=1000mycoin --to=$YOU --sequence=1

Now if we check the second account, it should have ``1000`` 'mycoin'
coins!

::

    basecli query account $YOU

We can send some of these coins back like so:

::

    basecli tx send --name=friend --amount=500mycoin --to=$ME --sequence=1

Note how we use the ``--name`` flag to select a different account to
send from.

If we try to send too much, we'll get an error:

::

    basecli tx send --name=friend --amount=500000mycoin --to=$ME --sequence=2

Let's send another transaction:

::

    basecli tx send --name=cool --amount=2345mycoin --to=$YOU --sequence=2

Note the ``hash`` value in the response - this is the hash of the
transaction. We can query for the transaction by this hash:

::

    basecli query tx <HASH>

See ``basecli tx send --help`` for additional details.

Proof
-----

Even if you don't see it in the UI, the result of every query comes with
a proof. This is a Merkle proof that the result of the query is actually
contained in the state, and the state's Merkle root is contained in a
recent block header. Behind the scenes, ``basecli`` will not only
verify that this state matches the header, but also that the header is
properly signed by the known validator set. It will even update the
validator set as needed, so long as there have not been major changes
and it is secure to do so. So, if you wonder why the query may take a
second... there is a lot of work going on in the background to make sure
even a lying full node can't trick your client.

Accounts and Transactions
-------------------------

For a better understanding of how to further use the tools, it helps to
understand the underlying data structures.

Accounts
~~~~~~~~

The Basecoin state consists entirely of a set of accounts. Each account
contains a public key, a balance in many different coin denominations,
and a strictly increasing sequence number for replay protection. This
type of account was directly inspired by accounts in Ethereum, and is
unlike Bitcoin's use of Unspent Transaction Outputs (UTXOs). Note that
Basecoin is a multi-asset cryptocurrency, so each account can have many
different kinds of tokens.

::

    type Account struct {
        PubKey   crypto.PubKey `json:"pub_key"` // May be nil, if not known.
        Sequence int           `json:"sequence"`
        Balance  Coins         `json:"coins"`
    }

    type Coins []Coin

    type Coin struct {
        Denom  string `json:"denom"`
        Amount int64  `json:"amount"`
    }

If you want to add more coins to a blockchain, you can do so manually in
``~/.basecoin/genesis.json`` before you start the blockchain for the
first time.

Accounts are serialized and stored in a Merkle tree under the key
``base/a/<address>``, where ``<address>`` is the address of the account.
Typically, the address of the account is the 20-byte ``RIPEMD160`` hash
of the public key, but other formats are acceptable as well, as defined
in the `Tendermint crypto
library <https://github.com/tendermint/go-crypto>`__. The Merkle tree
used in Basecoin is a balanced binary search tree, which we call an
`IAVL tree <https://github.com/tendermint/iavl>`__.

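Since each account can hold many denominations, balance arithmetic is
per-denomination. The idea can be sketched in standalone Go (the types
below are illustrative stand-ins, not the actual SDK implementation):

```go
package main

import "fmt"

// Coin and Coins mirror the shapes above (illustrative stand-ins).
type Coin struct {
	Denom  string
	Amount int64
}

type Coins []Coin

// Sum totals a list of coins per denomination, as one might when
// computing an account's balance in a multi-asset currency.
func Sum(cs Coins) map[string]int64 {
	totals := make(map[string]int64)
	for _, c := range cs {
		totals[c.Denom] += c.Amount
	}
	return totals
}

func main() {
	balance := Coins{{"mycoin", 1000}, {"btc", 5}, {"mycoin", 2345}}
	fmt.Println(Sum(balance)["mycoin"]) // prints 3345
}
```

A single account can thus hold ``mycoin``, ``btc``, and any other denom
side by side, each tracked independently.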
Transactions
~~~~~~~~~~~~

Basecoin defines a transaction type, the ``SendTx``, which allows tokens
to be sent to other accounts. The ``SendTx`` takes a list of inputs and
a list of outputs, and transfers all the tokens listed in the inputs
from their corresponding accounts to the accounts listed in the outputs.
The ``SendTx`` is structured as follows:

::

    type SendTx struct {
        Gas     int64      `json:"gas"`
        Fee     Coin       `json:"fee"`
        Inputs  []TxInput  `json:"inputs"`
        Outputs []TxOutput `json:"outputs"`
    }

    type TxInput struct {
        Address   []byte           `json:"address"`   // Hash of the PubKey
        Coins     Coins            `json:"coins"`
        Sequence  int              `json:"sequence"`  // Must be 1 greater than the last committed TxInput
        Signature crypto.Signature `json:"signature"` // Depends on the PubKey type and the whole Tx
        PubKey    crypto.PubKey    `json:"pub_key"`   // Is present iff Sequence == 0
    }

    type TxOutput struct {
        Address []byte `json:"address"` // Hash of the PubKey
        Coins   Coins  `json:"coins"`
    }

Note the ``SendTx`` includes a field for ``Gas`` and ``Fee``. The
``Gas`` limits the total amount of computation that can be done by the
transaction, while the ``Fee`` refers to the total amount paid in fees.
This is slightly different from Ethereum's concept of ``Gas`` and
``GasPrice``, where ``Fee = Gas x GasPrice``. In Basecoin, the ``Gas``
and ``Fee`` are independent, and the ``GasPrice`` is implicit.

In Basecoin, the ``Fee`` is meant to be used by the validators to inform
the ordering of transactions, as in Bitcoin, while the ``Gas`` is meant
to be used by the application plugin to control its execution. There is
currently no means to pass ``Fee`` information to the Tendermint
validators, but it will come soon...

Note also that the ``PubKey`` only needs to be sent for
``Sequence == 0``. After that, it is stored under the account in the
Merkle tree and subsequent transactions can exclude it, using only the
``Address`` to refer to the sender. Ethereum does not require public
keys to be sent in transactions, as it uses a different elliptic curve
scheme which enables the public key to be derived from the signature
itself.

Finally, note that the use of multiple inputs and multiple outputs
allows us to send many different types of tokens between many different
accounts at once in an atomic transaction. Thus, the ``SendTx`` can
serve as a basic unit of decentralized exchange. When using multiple
inputs and outputs, you must make sure that the sum of coins of the
inputs equals the sum of coins of the outputs (no creating money), and
that all accounts that provide inputs have signed the transaction.

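The "no creating money" rule amounts to a per-denomination balance
check. A minimal standalone Go sketch (with stand-in types, not the
actual Basecoin validation code):

```go
package main

import "fmt"

// Minimal stand-ins for the SendTx component types above.
type Coin struct {
	Denom  string
	Amount int64
}
type Coins []Coin
type TxInput struct{ Coins Coins }
type TxOutput struct{ Coins Coins }

// balancedTx checks that, for every denomination, the total input
// coins equal the total output coins - i.e. the tx creates no money.
func balancedTx(ins []TxInput, outs []TxOutput) bool {
	totals := make(map[string]int64)
	for _, in := range ins {
		for _, c := range in.Coins {
			totals[c.Denom] += c.Amount
		}
	}
	for _, out := range outs {
		for _, c := range out.Coins {
			totals[c.Denom] -= c.Amount
		}
	}
	for _, v := range totals {
		if v != 0 {
			return false
		}
	}
	return true
}

func main() {
	ins := []TxInput{{Coins{{"mycoin", 1000}}}}
	outs := []TxOutput{{Coins{{"mycoin", 1000}}}}
	fmt.Println(balancedTx(ins, outs)) // prints true
}
```

Signature verification for each input account would be checked
alongside this balance rule before the transfer is applied.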
Clean Up
--------

**WARNING:** Running these commands will wipe out any existing
information in both the ``~/.basecli`` and ``~/.basecoin`` directories,
including private keys.

To remove all the files created and refresh your environment (e.g., if
starting this tutorial again or trying something new), run the
following commands:

::

    basecli reset_all
    rm -rf ~/.basecoin

In this guide, we introduced the ``basecoin`` and ``basecli`` tools,
demonstrated how to start a new basecoin blockchain and how to send
tokens between accounts, and discussed the underlying data types for
accounts and transactions, specifically the ``Account`` and the
``SendTx``.

@@ -1,215 +0,0 @@
Basecoin Extensions
===================

TODO: re-write for extensions

In the `previous guide <basecoin-basics.md>`__, we saw how to use the
``basecoin`` tool to start a blockchain and the ``basecli`` tool to
send transactions. We also learned about ``Account`` and ``SendTx``, the
basic data types giving us a multi-asset cryptocurrency. Here, we will
demonstrate how to extend the tools to use another transaction type, the
``AppTx``, so we can send data to a custom plugin. In this example we
explore a simple plugin named ``counter``.

Example Plugin
--------------

The design of the ``basecoin`` tool makes it easy to extend for custom
functionality. The Counter plugin is bundled with basecoin, so if you
have already `installed basecoin <install.md>`__ and run
``make install``, then you should be able to run a full node with
``counter`` and a light-client ``countercli`` from the terminal. The
Counter plugin is just like the ``basecoin`` tool: they both use the
same library of commands, including one for signing and broadcasting
``SendTx``.

Counter transactions take two custom inputs: a boolean argument named
``valid``, and a coin amount named ``countfee``. The transaction is only
accepted if ``valid`` is set to true and the transaction input coins
are greater than the ``countfee`` that the user provides.

A new blockchain can be initialized and started just like in the
`previous guide <basecoin-basics.md>`__:

::

    # WARNING: this wipes out data - but counter is only for demos...
    rm -rf ~/.counter
    countercli reset_all

    countercli keys new cool
    countercli keys new friend

    counter init $(countercli keys get cool | awk '{print $2}')

    counter start

The default files are stored in ``~/.counter``. In another window we can
initialize the light-client and send a transaction:

::

    countercli init --node=tcp://localhost:26657 --genesis=$HOME/.counter/genesis.json

    YOU=$(countercli keys get friend | awk '{print $2}')
    countercli tx send --name=cool --amount=1000mycoin --to=$YOU --sequence=1

But the Counter has an additional command, ``countercli tx counter``,
which crafts an ``AppTx`` specifically for this plugin:

::

    countercli tx counter --name cool
    countercli tx counter --name cool --valid

The first transaction is rejected by the plugin because it was not
marked as valid, while the second transaction passes. We can build
plugins that take many arguments of different types, and easily extend
the tool to accommodate them. Of course, we can also expose queries on
our plugin:

::

    countercli query counter

Tada! We can now see that our custom counter plugin transactions went
through. You should see a Counter value of 1, representing the number of
valid transactions. If we send another transaction, and then query
again, we will see the value increment. Note that we need the sequence
number here to send the coins (it didn't increment when we just pinged
the counter):

::

    countercli tx counter --name cool --countfee=2mycoin --sequence=2 --valid
    countercli query counter

The Counter value should be 2, because we sent a second valid
transaction. And this time, since we sent a countfee (which must be less
than or equal to the total amount sent with the tx), it stores the
``TotalFees`` on the counter as well.

Keep in mind that, just like with ``basecli``, the ``countercli``
verifies a proof that the query response is correct and up-to-date.

Now, before we implement our own plugin and tooling, it helps to
understand the ``AppTx`` and the design of the plugin system.

AppTx
-----

The ``AppTx`` is similar to the ``SendTx``, but instead of sending coins
from inputs to outputs, it sends coins from one input to a plugin, and
can also send some data.

::

    type AppTx struct {
        Gas   int64   `json:"gas"`
        Fee   Coin    `json:"fee"`
        Input TxInput `json:"input"`
        Name  string  `json:"type"` // Name of the plugin
        Data  []byte  `json:"data"` // Data for the plugin to process
    }

The ``AppTx`` enables Basecoin to be extended with arbitrary additional
functionality through the use of plugins. The ``Name`` field in the
``AppTx`` refers to the particular plugin which should process the
transaction, and the ``Data`` field of the ``AppTx`` is the data to be
forwarded to the plugin for processing.

Note the ``AppTx`` also has a ``Gas`` and ``Fee``, with the same meaning
as for the ``SendTx``. It also includes a single ``TxInput``, which
specifies the sender of the transaction, and some coins that can be
forwarded to the plugin as well.

Plugins
-------

A plugin is simply a Go package that implements the ``Plugin``
interface:

::

    type Plugin interface {

        // Name of this plugin, should be short.
        Name() string

        // Run a transaction from ABCI DeliverTx
        RunTx(store KVStore, ctx CallContext, txBytes []byte) (res abci.Result)

        // Other ABCI message handlers
        SetOption(store KVStore, key string, value string) (log string)
        InitChain(store KVStore, vals []*abci.Validator)
        BeginBlock(store KVStore, hash []byte, header *abci.Header)
        EndBlock(store KVStore, height uint64) (res abci.ResponseEndBlock)
    }

    type CallContext struct {
        CallerAddress []byte   // Caller's Address (hash of PubKey)
        CallerAccount *Account // Caller's Account, w/ fee & TxInputs deducted
        Coins         Coins    // The coins that the caller wishes to spend, excluding fees
    }

The workhorse of the plugin is ``RunTx``, which is called when an
``AppTx`` is processed. The ``Data`` from the ``AppTx`` is passed in as
the ``txBytes``, while the ``Input`` from the ``AppTx`` is used to
populate the ``CallContext``.

Note that ``RunTx`` also takes a ``KVStore`` - this is an abstraction
for the underlying Merkle tree which stores the account data. By passing
this to the plugin, we enable plugins to update accounts in the Basecoin
state directly, and also to store arbitrary other information in the
state. In this way, the functionality and state of a Basecoin-derived
cryptocurrency can be greatly extended. One could imagine going so far
as to implement the Ethereum Virtual Machine as a plugin!

For details on how to initialize the state using ``SetOption``, see the
`guide to using the basecoin tool <basecoin-tool.md#genesis>`__.

Implement your own
------------------

To implement your own plugin and tooling, make a copy of
``docs/guide/counter``, and modify the code accordingly. Here, we will
briefly describe the design and the changes to be made, but see the code
for more details.

First is ``cmd/counter/main.go``, which drives the program. It can
be left alone, but you should change any occurrences of ``counter`` to
whatever your plugin tool is going to be called. You must also register
your plugin(s) with the basecoin app with ``RegisterStartPlugin``.

The light-client is located in ``cmd/countercli/main.go`` and allows for
transaction and query commands. This file can also be left mostly alone,
besides replacing the application name and adding references to new
plugin commands.

Next are the custom commands in ``cmd/countercli/commands/``. These files
are where we extend the tool with any new commands and flags we need to
send transactions or queries to our plugin. You define custom ``tx`` and
``query`` subcommands, which are registered in ``main.go`` (avoiding
``init()`` auto-registration, for less magic and more control in the
main executable).

Finally is ``plugins/counter/counter.go``, where we provide an
implementation of the ``Plugin`` interface. The most important part of
the implementation is the ``RunTx`` method, which determines the meaning
of the data sent along in the ``AppTx``. In our example, we define a new
transaction type, the ``CounterTx``, which we expect to be encoded in
``AppTx.Data``, and thus to be decoded in the ``RunTx`` method and
used to update the plugin state.

For more examples and inspiration, see our `repository of example
plugins <https://github.com/tendermint/basecoin-examples>`__.

Conclusion
----------

In this guide, we demonstrated how to create a new plugin and how to
extend the ``basecoin`` tool to start a blockchain with the plugin
enabled and send transactions to it. In the next guide, we introduce a
`plugin for Inter-Blockchain Communication <ibc.md>`__, which allows us
to publish proofs of the state of one blockchain to another, and thus to
transfer tokens and data between them.

@@ -1,230 +0,0 @@
Glossary
========

This glossary defines many terms used throughout the documentation of
Quark. If there is ever a concept that seems unclear, check here. This
is mainly to provide a background and general understanding of the
different words and concepts that are used. Other documents will explain
in more detail how to combine these concepts to build a particular
application.

Transaction
-----------

A transaction is a packet of binary data that contains all the
information needed to validate and perform an action on the blockchain.
The only other data that it interacts with is the current state of the
chain (the key-value store), and its action must be deterministic. The
transaction is the main piece of one request.

We currently make heavy use of
`go-amino <https://github.com/tendermint/go-amino>`__ to provide binary
and JSON encodings and decodings for ``struct`` or ``interface``
objects. Here, encoding and decoding operations are designed to operate
with interfaces nested any number of times (like an onion!). There is
one public ``TxMapper`` in the basecoin root package, and all modules
can register their own transaction types there. This allows us to
deserialize the entire transaction in one location (even with types
defined in other repos), to easily embed an arbitrary transaction inside
another without specifying the type, and to provide an automatic JSON
representation allowing users (or apps) to inspect the chain.

Note how we can wrap any other transaction, add a fee level, and not
worry about the encoding in our module any more:

::

    type Fee struct {
        Fee   coin.Coin      `json:"fee"`
        Payer basecoin.Actor `json:"payer"` // the address who pays the fee
        Tx    basecoin.Tx    `json:"tx"`
    }

Context (ctx)
-------------

As a request passes through the system, it may pick up information such
as the block height at which the request runs. In order to carry this
information between modules, it is saved to the context. Further, all
information must be deterministic from the context in which the request
runs (based on the transaction and the block it was included in) and can
be used to validate the transaction.

Data Store
----------

In order to provide proofs to Tendermint, we keep all data in one
key-value (kv) store which is indexed with a Merkle tree. This allows
for the easy generation of a root hash and proofs for queries without
requiring complex logic inside each module. Standardization of this
process also allows for powerful light-client tooling, as any store data
may be verified on the fly.

The largest limitation of the current implementation of the kv-store is
that the interface the application must use can only ``Get`` and
``Set`` single data points. That said, there are some data structures
like queues and range queries that are available in the ``state``
package. These provide higher-level functionality in a standard format,
but have not yet been integrated into the kv-store interface.

Isolation
---------

One of the main arguments for blockchains is security. So while we
encourage the use of third-party modules, all developers must be
vigilant against security holes. If you use the
`stack <https://github.com/cosmos/cosmos-sdk/tree/master/stack>`__
package, it will provide two different types of compartmentalization
security.

The first is to limit the working kv-store space of each module. When
``DeliverTx`` is called for a module, it is never given the entire data
store, but rather only its own prefixed subset of the store. This is
achieved by transparently prefixing all keys with
``<module name> + 0x0``, using the null byte as a separator. Since the
module name must be a string, no malicious naming scheme can ever lead
to a collision. Inside a module, we can write using any key value we
desire without the possibility that we have modified data belonging to
a separate module.

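The prefixing scheme can be sketched in a few lines of Go. This is an
illustration of the idea only, not how the stack package actually wraps
its store:

```go
package main

import "fmt"

// Simplified store interface for the sketch.
type KVStore interface {
	Get(key []byte) []byte
	Set(key, value []byte)
}

type memStore map[string][]byte

func (m memStore) Get(k []byte) []byte { return m[string(k)] }
func (m memStore) Set(k, v []byte)     { m[string(k)] = v }

// prefixStore transparently prepends "<module name>" + 0x0 to every
// key, so two modules can never touch each other's data.
type prefixStore struct {
	prefix []byte
	parent KVStore
}

func PrefixStore(module string, parent KVStore) prefixStore {
	return prefixStore{append([]byte(module), 0x0), parent}
}

func (p prefixStore) key(k []byte) []byte {
	out := make([]byte, 0, len(p.prefix)+len(k))
	out = append(out, p.prefix...)
	return append(out, k...)
}

func (p prefixStore) Get(k []byte) []byte { return p.parent.Get(p.key(k)) }
func (p prefixStore) Set(k, v []byte)     { p.parent.Set(p.key(k), v) }

func main() {
	root := memStore{}
	coin := PrefixStore("coin", root)
	roles := PrefixStore("roles", root)
	coin.Set([]byte("balance"), []byte("100"))
	roles.Set([]byte("balance"), []byte("admin"))
	// Same key, different modules: no collision in the root store.
	fmt.Println(string(coin.Get([]byte("balance"))))  // prints 100
	fmt.Println(string(roles.Get([]byte("balance")))) // prints admin
}
```

Because the null byte can never appear inside a Go module-name string,
no choice of module name can produce overlapping prefixes.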
The second is to add permissions to the transaction context. The
transaction context can specify that the tx has been signed by one or
multiple specific actors.

A transaction will only be executed if the permission requirements have
been fulfilled. For example, the sender of funds must have signed, or 2
out of 3 multi-signature actors must have signed a joint account. To
prevent the forgery of account signatures from unintended modules, each
permission is associated with the module that granted it (in this case
`auth <https://github.com/cosmos/cosmos-sdk/tree/master/x/auth>`__),
and if a module tries to add a permission for another module, it will
panic. There is also protection if a module creates a brand new fake
context to trick the downstream modules. Each context enforces the rules
on how to make child contexts, and the stack builder enforces that the
context passed from one level to the next is a valid child of the
original one.

These security measures ensure that modules can confidently write to
their local section of the database and trust the permissions associated
with the context, without concern of interference from other modules.
(Okay, if you see a bunch of C code in the module traversing through all
the memory space of the application, then get worried....)

Handler
-------

The ABCI interface is handled by ``app``, which translates these data
structures into an internal format that is more convenient, but unable
to travel over the wire. The basic interface for any code that modifies
state is the ``Handler`` interface, which provides four methods:

::

    Name() string
    CheckTx(ctx Context, store state.KVStore, tx Tx) (Result, error)
    DeliverTx(ctx Context, store state.KVStore, tx Tx) (Result, error)
    SetOption(l log.Logger, store state.KVStore, module, key, value string) (string, error)

Note the ``Context``, ``KVStore``, and ``Tx`` as the principal carriers
of information. Note also that ``Result`` is always a success, and we
have a second error return for errors (which is much more standard Go
than ``res.IsErr()``).

The ``Handler`` interface is designed to be the basis for all modules
that execute transactions, and this can provide a large degree of code
interoperability, much like ``http.Handler`` does in Go web
development.

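The check/deliver split can be sketched with a toy handler. Everything
below (the ``Context``, ``KVStore``, ``Tx``, and ``Result`` stand-ins,
and the ``echoHandler`` itself) is hypothetical scaffolding for
illustration, not the sdk's actual types:

```go
package main

import "fmt"

// Hypothetical stand-ins for the framework types in the signatures above.
type Context struct{ BlockHeight uint64 }
type KVStore interface {
	Get(key []byte) []byte
	Set(key, value []byte)
}
type Tx struct{ Name string }
type Result struct{ Log string }

type memStore map[string][]byte

func (m memStore) Get(k []byte) []byte { return m[string(k)] }
func (m memStore) Set(k, v []byte)     { m[string(k)] = v }

// echoHandler sketches the Handler pattern: CheckTx validates cheaply,
// DeliverTx mutates state, and both return (Result, error).
type echoHandler struct{}

func (h echoHandler) Name() string { return "echo" }

func (h echoHandler) CheckTx(ctx Context, store KVStore, tx Tx) (Result, error) {
	if tx.Name == "" {
		return Result{}, fmt.Errorf("empty tx name")
	}
	return Result{Log: "ok"}, nil
}

func (h echoHandler) DeliverTx(ctx Context, store KVStore, tx Tx) (Result, error) {
	if _, err := h.CheckTx(ctx, store, tx); err != nil {
		return Result{}, err
	}
	store.Set([]byte("echo/last"), []byte(tx.Name))
	return Result{Log: "stored"}, nil
}

func main() {
	store := memStore{}
	res, err := echoHandler{}.DeliverTx(Context{BlockHeight: 1}, store, Tx{Name: "echo/hello"})
	fmt.Println(res.Log, err, string(store.Get([]byte("echo/last"))))
}
```

As with ``http.Handler``, anything satisfying this small interface can
be composed, wrapped, and dispatched uniformly.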
Modules
-------

TODO: update (s/Modules/handlers+mappers+stores/g) & add Msg + Tx (a signed message)

A module is a set of functionality which should typically be designed
as self-sufficient. Common elements of a module are:

- transaction types (either end transactions, or transaction wrappers)
- custom error codes
- data models (to persist in the kv-store)
- handler (to handle any end transactions)

Dispatcher
|
|
||||||
----------
|
|
||||||
|
|
||||||
We usually will want to have multiple modules working together, and need
|
|
||||||
to make sure the correct transactions get to the correct module. So we
|
|
||||||
have ``coin`` sending money, ``roles`` to create multi-sig accounts, and
|
|
||||||
``ibc`` for following other chains all working together without
|
|
||||||
interference.
|
|
||||||
|
|
||||||
We can then register a ``Dispatcher``, which also implements the
``Handler`` interface. We register a list of modules with the
dispatcher. Every module has a unique ``Name()``, which is used for
isolating its state space. We use this same name for routing
transactions. Each transaction implementation must be registered with
go-amino via ``TxMapper``, so we just look at the registered name of the
transaction, which should be of the form ``<module name>/xxx``. The
dispatcher grabs the appropriate module name from the tx name and routes
it if the module is present.

This all seems like a bit of magic, but really we're just making use of
go-amino magic that we are already using, rather than adding another
layer. For all the transactions to be properly routed, the only thing
you need to remember is to use the following pattern:

::

    const (
      NameCoin = "coin"
      TypeSend = NameCoin + "/send"
    )

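The routing rule can be sketched with hypothetical simplified types (the real SDK resolves the name via go-amino registration, not a struct field): the dispatcher takes the prefix before the ``/`` in the registered tx name and looks up the module whose ``Name()`` matches.

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical simplified tx: Name is the registered name, e.g. "coin/send".
type Tx struct{ Name string }

type Module interface {
	Name() string
	DeliverTx(tx Tx) (string, error)
}

// Dispatcher routes a tx to the module whose Name() matches the
// "<module name>" prefix of the tx's registered name.
type Dispatcher struct{ modules map[string]Module }

func NewDispatcher(mods ...Module) *Dispatcher {
	d := &Dispatcher{modules: map[string]Module{}}
	for _, m := range mods {
		d.modules[m.Name()] = m // the same name also isolates state space
	}
	return d
}

func (d *Dispatcher) DeliverTx(tx Tx) (string, error) {
	prefix := strings.SplitN(tx.Name, "/", 2)[0]
	m, ok := d.modules[prefix]
	if !ok {
		return "", fmt.Errorf("no module registered for %q", tx.Name)
	}
	return m.DeliverTx(tx)
}

// coinModule is a toy module following the Name/Type pattern above.
type coinModule struct{}

func (coinModule) Name() string { return "coin" }
func (coinModule) DeliverTx(tx Tx) (string, error) {
	return "coin module handled " + tx.Name, nil
}

func main() {
	d := NewDispatcher(coinModule{})
	res, _ := d.DeliverTx(Tx{Name: "coin/send"})
	fmt.Println(res)
}
```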
Permissions
-----------

TODO: replace perms with object capabilities/object capability keys
- get rid of IPC

IPC requires a more complex permissioning system to allow the modules to
have limited access to each other, and also to allow more types of
permissions than simple public key signatures. Rather than just using an
address to identify who is performing an action, we can use a more
complex structure:

::

    type Actor struct {
      ChainID string     `json:"chain"` // this is empty unless it comes from a different chain
      App     string     `json:"app"`   // the app that the actor belongs to
      Address data.Bytes `json:"addr"`  // arbitrary app-specific unique id
    }

Here, the ``Actor`` abstracts any address that can authorize actions,
hold funds, or initiate any sort of transaction. It doesn't have to be
a pubkey on this chain; it could stem from another app (such as a
multi-sig account), or even another chain (via IBC).

``ChainID`` is for IBC, discussed below. Let's focus on ``App`` and
``Address``. For a signature, the App is ``auth``, and any module can
check whether a specific public key address signed like this:
``ctx.HasPermission(auth.SigPerm(addr))``. However, we can also
authorize a tx with ``roles``, which handles multi-sig accounts. It
checks if there were enough signatures by checking as above, then it
can add the role permission like
``ctx = ctx.WithPermissions(NewPerm(assume.Role))``.

In addition to the permissions schema, Actors are addresses just like
public key addresses. So one can create a multi-sig role, then send
coins there, which can only be moved upon meeting the authorization
requirements of that module. ``coin`` doesn't even know of the
existence of ``roles``, and one could build any other sort of module to
provide permissions (like binding the outcome of an election to move
coins, or to modify the accounts on a role).

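A minimal sketch of how such permission checks could compose. The helper names mirror those used above (``SigPerm``, ``HasPermission``, ``WithPermissions``) but the implementations here are hypothetical simplifications, with ``data.Bytes`` reduced to ``[]byte``:

```go
package main

import (
	"bytes"
	"fmt"
)

// Actor as described in the text (data.Bytes simplified to []byte).
type Actor struct {
	ChainID string
	App     string
	Address []byte
}

func (a Actor) Equals(b Actor) bool {
	return a.ChainID == b.ChainID && a.App == b.App && bytes.Equal(a.Address, b.Address)
}

// SigPerm builds the permission granted by a pubkey signature (App "auth").
func SigPerm(addr []byte) Actor { return Actor{App: "auth", Address: addr} }

// Context carries the permissions granted so far; WithPermissions
// returns a copy with extra grants, as a roles-style module might add.
type Context struct{ perms []Actor }

func (c Context) WithPermissions(perms ...Actor) Context {
	return Context{perms: append(append([]Actor{}, c.perms...), perms...)}
}

func (c Context) HasPermission(p Actor) bool {
	for _, perm := range c.perms {
		if perm.Equals(p) {
			return true
		}
	}
	return false
}

func main() {
	addr := []byte{0x01, 0x02}
	ctx := Context{}.WithPermissions(SigPerm(addr))
	fmt.Println(ctx.HasPermission(SigPerm(addr)))       // granted by signature
	fmt.Println(ctx.HasPermission(Actor{App: "roles"})) // never granted
}
```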
One idea - not yet implemented - is to provide scopes on the
permissions. Currently, if I sign a transaction to one module, it can
pass it on to any other module over IPC with the same permissions. It
could move coins, vote in an election, or anything else. Ideally, when
signing, one could also specify the scope(s) that this signature
authorizes. The `oauth
protocol <https://api.slack.com/docs/oauth-scopes>`__ also has to deal
with a similar problem, and maybe could provide some inspiration.

@ -1,424 +0,0 @@

IBC
===

TODO: update in light of latest SDK (this document is currently out of date)

One of the most exciting elements of the Cosmos Network is the
Inter-Blockchain Communication (IBC) protocol, which enables
interoperability across different blockchains. We implemented IBC as a
basecoin plugin, and we'll show you how to use it to send tokens across
blockchains!

Please note: this tutorial assumes familiarity with the Cosmos SDK.

The IBC plugin defines a new set of transactions as subtypes of the
``AppTx``. The plugin's functionality is accessed by setting the
``AppTx.Name`` field to ``"IBC"``, and setting the ``Data`` field to
the serialized IBC transaction type.

We'll demonstrate exactly how this works below.

Inter-Blockchain Communication
------------------------------

Let's review the IBC protocol. The purpose of IBC is to enable one
blockchain to function as a light client of another. Since we are using
a classical Byzantine Fault Tolerant consensus algorithm, light-client
verification is cheap and easy: all we have to do is check validator
signatures on the latest block, and verify a Merkle proof of the state.

In Tendermint, validators agree on a block before processing it. This
means that the signatures and state root for that block aren't included
until the next block. Thus, each block contains a field called
``LastCommit``, which contains the votes responsible for committing the
previous block, and a field in the block header called ``AppHash``,
which refers to the Merkle root hash of the application after processing
the transactions from the previous block. So, if we want to verify the
``AppHash`` from height H, we need the signatures from ``LastCommit`` at
height H+1. (And remember that this ``AppHash`` only contains the
results of transactions up to and including block H-1.)

Unlike Proof-of-Work, the light-client protocol does not need to
download and check all the headers in the blockchain - the client can
always jump straight to the latest header available, so long as the
validator set has not changed much. If the validator set is changing,
the client needs to track these changes, which requires downloading
headers for each block in which there is a significant change. Here, we
will assume the validator set is constant, and postpone handling
validator set changes for another time.

Now we can describe exactly how IBC works. Suppose we have two
blockchains, ``chain1`` and ``chain2``, and we want to send some data
from ``chain1`` to ``chain2``. We need to do the following:

1. Register the details (ie. chain ID and genesis configuration) of
   ``chain1`` on ``chain2``
2. Within ``chain1``, broadcast a transaction that creates an outgoing
   IBC packet destined for ``chain2``
3. Broadcast a transaction to ``chain2`` informing it of the latest
   state (ie. header and commit signatures) of ``chain1``
4. Post the outgoing packet from ``chain1`` to ``chain2``, including
   the proof that it was indeed committed on ``chain1``. Note that
   ``chain2`` can only verify this proof because it has a recent header
   and commit.

Each of these steps involves a separate IBC transaction type. Let's
take them up in turn.

IBCRegisterChainTx
~~~~~~~~~~~~~~~~~~

The ``IBCRegisterChainTx`` is used to register one chain on another. It
contains the chain ID and genesis configuration of the chain to
register:

::

    type IBCRegisterChainTx struct {
      BlockchainGenesis
    }

    type BlockchainGenesis struct {
      ChainID string
      Genesis string
    }

This transaction should only be sent once for a given chain ID;
successive sends will return an error.

IBCUpdateChainTx
~~~~~~~~~~~~~~~~

The ``IBCUpdateChainTx`` is used to update the state of one chain on
another. It contains the header and commit signatures for some block in
the chain:

::

    type IBCUpdateChainTx struct {
      Header tm.Header
      Commit tm.Commit
    }

In the future, this will need to be updated to include changes to the
validator set as well. Anyone can relay an ``IBCUpdateChainTx``, and
they only need to do so as frequently as packets are being sent or the
validator set is changing.

IBCPacketCreateTx
|
|
||||||
~~~~~~~~~~~~~~~~~
|
|
||||||
|
|
||||||
The ``IBCPacketCreateTx`` is used to create an outgoing packet on one
|
|
||||||
chain. The packet itself contains the source and destination chain IDs,
|
|
||||||
a sequence number (i.e. an integer that increments with every message
|
|
||||||
sent between this pair of chains), a packet type (e.g. coin, data,
|
|
||||||
etc.), and a payload.
|
|
||||||
|
|
||||||
::

    type IBCPacketCreateTx struct {
      Packet
    }

    type Packet struct {
      SrcChainID string
      DstChainID string
      Sequence   uint64
      Type       string
      Payload    []byte
    }

We have yet to define the format for the payload, so, for now, it's
just arbitrary bytes.

One way to think about this is that ``chain2`` has an account on
``chain1``. With an ``IBCPacketCreateTx`` on ``chain1``, we send funds
to that account. Then we can prove to ``chain2`` that there are funds
locked up for it in its account on ``chain1``. Those funds can only be
unlocked with corresponding IBC messages back from ``chain2`` to
``chain1``, sending the locked funds to another account on ``chain1``.

IBCPacketPostTx
~~~~~~~~~~~~~~~

The ``IBCPacketPostTx`` is used to post an outgoing packet from one
chain to another. It contains the packet and a proof that the packet
was committed into the state of the sending chain:

::

    type IBCPacketPostTx struct {
      FromChainID     string // The immediate source of the packet, not always Packet.SrcChainID
      FromChainHeight uint64 // The block height in which Packet was committed, to check Proof
      Packet
      Proof *merkle.IAVLProof
    }

The proof is a Merkle proof in an IAVL tree, our implementation of a
balanced, Merklized binary search tree. It contains a list of nodes in
the tree, which can be hashed together to get the Merkle root hash.
This hash must match the ``AppHash`` contained in the header at
``FromChainHeight + 1``

- note the ``+ 1`` is necessary since ``FromChainHeight`` is the height
  at which the packet was committed, and the resulting state root is
  not included until the next block.

IBC State
~~~~~~~~~

Now that we've seen all the transaction types, let's talk about the
state. Each chain stores some IBC state in its Merkle tree. For each
chain being tracked by our chain, we store:

- Genesis configuration
- Latest state
- Headers for recent heights

We also store all incoming (ingress) and outgoing (egress) packets.

The state of a chain is updated every time an ``IBCUpdateChainTx`` is
committed. New packets are added to the egress state upon
``IBCPacketCreateTx``. New packets are added to the ingress state upon
``IBCPacketPostTx``, assuming the proof checks out.

Merkle Queries
--------------

The Basecoin application uses a single Merkle tree that is shared
across all its state, including the built-in accounts state and all
plugin state. For this reason, it's important to use explicit key names
and/or hashes to ensure there are no collisions.

We can query the Merkle tree using the ABCI Query method. If we pass in
the correct key, it will return the corresponding value, as well as a
proof that the key and value are contained in the Merkle tree.

The results of a query can thus be used as proof in an
``IBCPacketPostTx``.

Relay
-----

While we need all these packet types internally to keep track of all
the proofs on both chains in a secure manner, for the normal workflow
we can run a relay node that handles the cross-chain interaction.

In this case, there are only two steps. First, ``basecoin relay init``,
which must be run once to register each chain with the other one and
make sure they are ready to send and receive. Then ``basecoin relay
start``, a long-running process that polls the queue on each side and
relays all new messages to the other chain.

This requires that the relay has access to accounts with some funds on
both chains, to pay for all the IBC packets it will be forwarding.

Try it out
----------

Now that we have all the background knowledge, let's actually walk
through the tutorial.

Make sure you have installed `basecoin and
basecli </docs/guide/install.md>`__.

Basecoin is a framework for creating new cryptocurrency applications.
It comes with an ``IBC`` plugin enabled by default.

You will also want to install
`jq <https://stedolan.github.io/jq/>`__ for handling JSON at the
command line.

If you have any trouble with this, you can also look at the `test
scripts </tests/cli/ibc.sh>`__ or just run ``make test_cli`` in the
basecoin repo. Otherwise, open up 5 (yes, 5!) terminal tabs....

Preliminaries
~~~~~~~~~~~~~

::

    # first, clean up any old garbage for a fresh slate...
    rm -rf ~/.ibcdemo/

Let's start by setting up some environment variables and aliases:

::

    export BCHOME1_CLIENT=~/.ibcdemo/chain1/client
    export BCHOME1_SERVER=~/.ibcdemo/chain1/server
    export BCHOME2_CLIENT=~/.ibcdemo/chain2/client
    export BCHOME2_SERVER=~/.ibcdemo/chain2/server
    alias basecli1="basecli --home $BCHOME1_CLIENT"
    alias basecli2="basecli --home $BCHOME2_CLIENT"
    alias basecoin1="basecoin --home $BCHOME1_SERVER"
    alias basecoin2="basecoin --home $BCHOME2_SERVER"

This gives us some new commands to use instead of raw ``basecli`` and
``basecoin``, to ensure we're using the right configuration for the
chain we want to talk to.

We also want to set some chain IDs:

::

    export CHAINID1="test-chain-1"
    export CHAINID2="test-chain-2"

And since we will run two different chains on one machine, we need to
maintain different sets of ports:

::

    export PORT_PREFIX1=1234
    export PORT_PREFIX2=2345
    export RPC_PORT1=${PORT_PREFIX1}7
    export RPC_PORT2=${PORT_PREFIX2}7

Setup Chain 1
~~~~~~~~~~~~~

Now, let's create some keys that we can use for accounts on
test-chain-1:

::

    basecli1 keys new money
    basecli1 keys new gotnone
    export MONEY=$(basecli1 keys get money | awk '{print $2}')
    export GOTNONE=$(basecli1 keys get gotnone | awk '{print $2}')

and create an initial configuration giving lots of coins to the $MONEY
key:

::

    basecoin1 init --chain-id $CHAINID1 $MONEY

Now start basecoin:

::

    sed -ie "s/4665/$PORT_PREFIX1/" $BCHOME1_SERVER/config.toml

    basecoin1 start &> basecoin1.log &

Note the ``sed`` command to replace the ports in the config file. You
can follow the logs with ``tail -f basecoin1.log``.

Now we can attach the client to the chain and verify the state. The
first account should have money, the second none:

::

    basecli1 init --node=tcp://localhost:${RPC_PORT1} --genesis=${BCHOME1_SERVER}/genesis.json
    basecli1 query account $MONEY
    basecli1 query account $GOTNONE

Setup Chain 2
~~~~~~~~~~~~~

This is the same as above, except with ``basecli2``, ``basecoin2``, and
``$CHAINID2``. We will also need to change the ports, since we're
running another chain on the same local machine.

Let's create new keys for test-chain-2:

::

    basecli2 keys new moremoney
    basecli2 keys new broke
    export MOREMONEY=$(basecli2 keys get moremoney | awk '{print $2}')
    export BROKE=$(basecli2 keys get broke | awk '{print $2}')

Then prepare the genesis block and start the server:

::

    basecoin2 init --chain-id $CHAINID2 $MOREMONEY

    sed -ie "s/4665/$PORT_PREFIX2/" $BCHOME2_SERVER/config.toml

    basecoin2 start &> basecoin2.log &

Now attach the client to the chain and verify the state. The first
account should have money, the second none:

::

    basecli2 init --node=tcp://localhost:${RPC_PORT2} --genesis=${BCHOME2_SERVER}/genesis.json
    basecli2 query account $MOREMONEY
    basecli2 query account $BROKE

Connect these chains
~~~~~~~~~~~~~~~~~~~~

OK! So we have two chains running on your local machine, with different
keys on each. Let's hook them up by starting a relay process to forward
messages from one chain to the other.

The relay account needs some money in it to pay for the IBC messages,
so for now we have to transfer some cash from the rich accounts before
we start the actual relay.

::

    # note that this key.json file is a hardcoded demo for all chains;
    # this will be updated in a future release
    RELAY_KEY=$BCHOME1_SERVER/key.json
    RELAY_ADDR=$(cat $RELAY_KEY | jq -r .address)

    basecli1 tx send --amount=100000mycoin --sequence=1 --to=$RELAY_ADDR --name=money
    basecli1 query account $RELAY_ADDR

    basecli2 tx send --amount=100000mycoin --sequence=1 --to=$RELAY_ADDR --name=moremoney
    basecli2 query account $RELAY_ADDR

Now we can start the relay process.

::

    basecoin relay init --chain1-id=$CHAINID1 --chain2-id=$CHAINID2 \
      --chain1-addr=tcp://localhost:${RPC_PORT1} --chain2-addr=tcp://localhost:${RPC_PORT2} \
      --genesis1=${BCHOME1_SERVER}/genesis.json --genesis2=${BCHOME2_SERVER}/genesis.json \
      --from=$RELAY_KEY

    basecoin relay start --chain1-id=$CHAINID1 --chain2-id=$CHAINID2 \
      --chain1-addr=tcp://localhost:${RPC_PORT1} --chain2-addr=tcp://localhost:${RPC_PORT2} \
      --from=$RELAY_KEY &> relay.log &

This should start up the relay, and assuming no error messages came
out, the two chains are now fully connected over IBC. Let's use this to
send our first tx across the chains...

Sending cross-chain payments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The hard part is over. We set up two blockchains, a few private keys,
and a secure relay between them. Now we can enjoy the fruits of our
labor...

::

    # Here's an empty account on test-chain-2
    basecli2 query account $BROKE

::

    # Let's send some funds from test-chain-1
    basecli1 tx send --amount=12345mycoin --sequence=2 --to=test-chain-2/$BROKE --name=money

::

    # give it time to arrive...
    sleep 2
    # now you should see 12345 coins!
    basecli2 query account $BROKE

You're no longer broke! Cool, huh? Now have fun exploring and sending
coins across the chains, and making as many accounts as you want.

Conclusion
----------

In this tutorial we explained how IBC works, and demonstrated how to
use it to communicate between two chains. We did the simplest
communication possible: a one-way transfer of data from chain1 to
chain2. The most important part was that we updated chain2 with the
latest state (i.e. header and commit) of chain1, and then were able to
post a proof to chain2 that a packet was committed to the outgoing
state of chain1.

In a future tutorial, we will demonstrate how to use IBC to actually
transfer tokens between two blockchains, but we'll do it with real
testnets deployed across multiple nodes on the network. Stay tuned!

@ -1,119 +0,0 @@

# Keys CLI

**WARNING: out-of-date and parts are wrong.... please update**

This is as much an example of how to expose cobra/viper as it is a CLI
in itself (I think this code is overkill for what go-keys needs). But
please look at the commands, and give feedback and changes.

`RootCmd` calls some initialization functions (`cobra.OnInitialize` and `RootCmd.PersistentPreRunE`) which serve to connect environment variables and cobra flags, as well as load the config file. It also validates the flags registered on root and creates the crypto manager, which will be used by all subcommands.

## Help info

```
# keys help

Keys allows you to manage your local keystore for tendermint.

These keys may be in any format supported by go-crypto and can be
used by light-clients, full nodes, or any other application that
needs to sign with a private key.

Usage:
  keys [command]

Available Commands:
  get         Get details of one key
  list        List all keys
  new         Create a new public/private key pair
  serve       Run the key manager as an http server
  update      Change the password for a private key

Flags:
      --keydir string   Directory to store private keys (subdir of root) (default "keys")
  -o, --output string   Output format (text|json) (default "text")
  -r, --root string     root directory for config and data (default "/Users/ethan/.tlc")

Use "keys [command] --help" for more information about a command.
```

## Getting the config file

The first step is to load in root, by checking the following in order:

* -r, --root command line flag
* TM_ROOT environment variable
* default ($HOME/.tlc evaluated at runtime)

Once the `rootDir` is established, the script looks for a config file named `keys.{json,toml,yaml,hcl}` in that directory and parses it. These values provide defaults for flags of the same name.

There is an example config file for testing out locally, which writes keys to `./.mykeys`.

## Getting/Setting variables

When we want to get the value of a user-defined variable (eg. `output`), we can call `viper.GetString("output")`, which will do the following checks, until it finds a match:

* Is the `--output` command line flag present?
* Is the `TM_OUTPUT` environment variable set?
* Was a config file found and does it have an `output` variable?
* Is there a default set on the command line flag?

If no variable is set and there was no default, we get back "".

This setup allows us to have powerful command line flags, but use env variables or config files (local or 12-factor style) to avoid passing these arguments every time.

## Nesting structures

Sometimes we don't just need key-value pairs, but actually a multi-level config file, like

```
[mail]
from = "no-reply@example.com"
server = "mail.example.com"
port = 567
password = "XXXXXX"
```

This CLI is too simple to warrant such a structure, but I think eg. tendermint could benefit from such an approach. Here are some pointers:

* [Accessing nested keys from config files](https://github.com/spf13/viper#accessing-nested-keys)
* [Overriding nested values with envvars](https://www.netlify.com/blog/2016/09/06/creating-a-microservice-boilerplate-in-go/#nested-config-values) - the mentioned outstanding PR is already merged into master!
* Overriding nested values with cli flags? (use `--log_config.level=info` ??)

I'd love to see an example of this fully worked out in a more complex CLI.

## Have your cake and eat it too

It's easy to render data in different ways: some better for viewing, some better for importing into other programs. You can just add some global (persistent) flags to control the output formatting, and everyone gets what they want.

```
# keys list -e hex
All keys:
betty   d0789984492b1674e276b590d56b7ae077f81adc
john    b77f4720b220d1411a649b6c7f1151eb6b1c226a

# keys list -e btc
All keys:
betty   3uTF4r29CbtnzsNHZoPSYsE4BDwH
john    3ZGp2Md35iw4XVtRvZDUaAEkCUZP

# keys list -e b64 -o json
[
  {
    "name": "betty",
    "address": "0HiZhEkrFnTidrWQ1Wt64Hf4Gtw=",
    "pubkey": {
      "type": "secp256k1",
      "data": "F83WvhT0KwttSoqQqd_0_r2ztUUaQix5EXdO8AZyREoV31Og780NW59HsqTAb2O4hZ-w-j0Z-4b2IjfdqqfhVQ=="
    }
  },
  {
    "name": "john",
    "address": "t39HILIg0UEaZJtsfxFR62scImo=",
    "pubkey": {
      "type": "ed25519",
      "data": "t1LFmbg_8UTwj-n1wkqmnTp6NfaOivokEhlYySlGYCY="
    }
  }
]
```

@ -1,38 +0,0 @@

Replay Protection
-----------------

In order to prevent `replay
attacks <https://en.wikipedia.org/wiki/Replay_attack>`__, a
multi-account nonce system has been constructed as a module, which can
be found in ``modules/nonce``. By adding the nonce module to the stack,
each transaction is verified for authenticity against replay attacks.
This is achieved by requiring a newly signed copy of the sequence
number, which must be exactly 1 greater than the sequence number of the
previous transaction. A distinct sequence number is assigned per
chain-id, application, and group of signers. Each sequence number is
tracked as a nonce-store entry, where the key is the marshaled list of
actors after having been sorted by chain, app, and address.

.. code:: golang

    // Tx - Nonce transaction structure, contains list of signers and current sequence number
    type Tx struct {
      Sequence uint32           `json:"sequence"`
      Signers  []basecoin.Actor `json:"signers"`
      Tx       basecoin.Tx      `json:"tx"`
    }

By distinguishing sequence numbers across groups of signers,
multi-signature Actors need not lock up use of their Address while
waiting for all the members of a multi-sig transaction to sign.
Instead, only the multi-sig account will be locked, while other
accounts belonging to those signers can be used and signed with other
sequence numbers.

By abstracting out the nonce module in the stack, an entire series of
transactions can occur without needing to verify the nonce for each
member of the series. A common example is a stack that sends coins and
charges a fee. Within the SDK this can be achieved using separate
modules in a stack, one to send the coins and the other to charge the
fee; neither module needs to check the nonce, as that can occur in a
separate module earlier in the stack.

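The core replay check is simple to sketch. This is a hypothetical simplification: the real module keys the store by the sorted, marshaled actor list, whereas here the key is just a string:

```go
package main

import "fmt"

// nonceStore maps a chain/app/signer-group key to its last accepted
// sequence number (hypothetical simplification of the nonce-store).
type nonceStore map[string]uint32

// CheckNonce accepts a tx only if its sequence is exactly one greater
// than the last accepted sequence for this key, then records it.
func (s nonceStore) CheckNonce(key string, seq uint32) error {
	if seq != s[key]+1 {
		return fmt.Errorf("invalid sequence %d, expected %d", seq, s[key]+1)
	}
	s[key] = seq
	return nil
}

func main() {
	store := nonceStore{}
	fmt.Println(store.CheckNonce("test-chain/auth/alice", 1)) // first tx
	fmt.Println(store.CheckNonce("test-chain/auth/alice", 1)) // replay: rejected
}
```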
@ -0,0 +1,402 @@

Using The Staking Module
========================

This project is a demonstration of the Cosmos Hub staking
functionality; it is designed to get validators acquainted with staking
concepts and procedures.

Potential validators will be declaring their candidacy, after which
users can delegate and, if they so wish, unbond. This can be practiced
using a local or public testnet.

This example covers the initial setup of a two-node testnet between a
server in the cloud and a local machine. Begin this tutorial from a
cloud machine that you've ``ssh``'d into.

Install
-------

The ``gaiad`` and ``gaiacli`` binaries:

::

    go get github.com/cosmos/cosmos-sdk
    cd $GOPATH/src/github.com/cosmos/cosmos-sdk
    make get_vendor_deps
    make install

Let's jump right into it. First, we initialize some default files:

::

    gaiad init

which will output:

::

    I[03-30|11:20:13.365] Found private validator   module=main path=/root/.gaiad/config/priv_validator.json
    I[03-30|11:20:13.365] Found genesis file        module=main path=/root/.gaiad/config/genesis.json
    Secret phrase to access coins:
    citizen hungry tennis noise park hire glory exercise link glow dolphin labor design grit apple abandon

This tells us we have a ``priv_validator.json`` and ``genesis.json`` in
the ``~/.gaiad/config`` directory. A ``config.toml`` was also created
in the same directory. It is a good idea to get familiar with those
files. Write down the seed.

The next thing we'll need to do is add the key from
``priv_validator.json`` to the ``gaiacli`` key manager. For this we
need a seed and a password:

::
|
||||||
|
|
||||||
|
gaiacli keys add alice --recover
|
||||||
|
|
||||||
|
which will give you three prompts:
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
Enter a passphrase for your key:
|
||||||
|
Repeat the passphrase:
|
||||||
|
Enter your recovery seed phrase:
|
||||||
|
|
||||||
|
create a password and copy in your seed phrase. The name and address of the key will be output:
|
||||||
|
|
||||||
|
::
|
||||||
|
NAME: ADDRESS: PUBKEY:
|
||||||
|
alice 67997DD03D527EB439B7193F2B813B05B219CC02 1624DE6220BB89786C1D597050438C728202436552C6226AB67453CDB2A4D2703402FB52B6
|
||||||
|
|
||||||
|
You can see all available keys with:
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
|
gaiacli keys list
|
||||||
|
|
||||||
|
Setup Testnet
-------------

Next, we start the daemon (do this in another window):

::

    gaiad start

and you'll see blocks start streaming through.

For this example, we're doing the above on a cloud machine. The next steps should be done on your local machine or another server in the cloud, which will join the running testnet then bond/unbond.

Accounts
--------

We have:

- ``alice`` the initial validator (in the cloud)
- ``bob`` receives tokens from ``alice`` then declares candidacy (from local machine)
- ``charlie`` will bond and unbond to ``bob`` (from local machine)

Remember that ``alice`` was already created. On your second machine, install the binaries and create two new keys:

::

    gaiacli keys add bob
    gaiacli keys add charlie

both of which will prompt you for a password. Now we need to copy the ``genesis.json`` and ``config.toml`` from the first machine (with ``alice``) to the second machine. This is a good time to look at both these files.

The ``genesis.json`` should look something like:

::

    {
      "app_state": {
        "accounts": [
          {
            "address": "1D9B2356CAADF46D3EE3488E3CCE3028B4283DEE",
            "coins": [
              {
                "denom": "steak",
                "amount": 100000
              }
            ]
          }
        ],
        "stake": {
          "pool": {
            "total_supply": 0,
            "bonded_shares": {
              "num": 0,
              "denom": 1
            },
            "unbonded_shares": {
              "num": 0,
              "denom": 1
            },
            "bonded_pool": 0,
            "unbonded_pool": 0,
            "inflation_last_time": 0,
            "inflation": {
              "num": 7,
              "denom": 100
            }
          },
          "params": {
            "inflation_rate_change": {
              "num": 13,
              "denom": 100
            },
            "inflation_max": {
              "num": 20,
              "denom": 100
            },
            "inflation_min": {
              "num": 7,
              "denom": 100
            },
            "goal_bonded": {
              "num": 67,
              "denom": 100
            },
            "max_validators": 100,
            "bond_denom": "steak"
          }
        }
      },
      "validators": [
        {
          "pub_key": {
            "type": "AC26791624DE60",
            "value": "rgpc/ctVld6RpSfwN5yxGBF17R1PwMTdhQ9gKVUZp5g="
          },
          "power": 10,
          "name": ""
        }
      ],
      "app_hash": "",
      "genesis_time": "0001-01-01T00:00:00Z",
      "chain_id": "test-chain-Uv1EVU"
    }

Note that the ``accounts`` field contains an address with a large allocation of the bond denomination, ``steak``. This is ``alice``'s address. Under ``validators`` we see the ``pub_key`` field, which will match the validator key in the ``priv_validator.json`` file.
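These fields can also be read programmatically. A minimal sketch, assuming the genesis layout shown above (with a trimmed-down fragment standing in for the full file):

```python
import json

# Trimmed genesis fragment matching the layout shown above.
genesis = json.loads("""
{
  "app_state": {
    "accounts": [
      {"address": "1D9B2356CAADF46D3EE3488E3CCE3028B4283DEE",
       "coins": [{"denom": "steak", "amount": 100000}]}
    ]
  },
  "chain_id": "test-chain-Uv1EVU"
}
""")

# Pull out the pre-funded account and the chain id.
account = genesis["app_state"]["accounts"][0]
print(account["address"])            # alice's address
print(account["coins"][0]["denom"])  # the bond denomination
print(genesis["chain_id"])
```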
The ``config.toml`` is long, so let's focus on one field:

::

    # Comma separated list of seed nodes to connect to
    seeds = ""

On the ``alice`` cloud machine, we don't need to do anything here. Instead, we need its IP address. After copying this file (and the ``genesis.json``) to your local machine, you'll want to put that IP in the ``seeds`` field, e.g. ``seeds = "138.197.161.74"`` (a made-up IP for this example). For joining testnets with many nodes, you can add more comma-separated IPs to the list.
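That edit can be scripted. A small sketch (the file path and IP are placeholders; it simply rewrites the ``seeds`` line in place):

```python
import os
import re
import tempfile
from pathlib import Path

def set_seeds(config_path, seed_addrs):
    """Rewrite the `seeds = "..."` line in a Tendermint config.toml."""
    path = Path(config_path)
    text = re.sub(r'^seeds\s*=\s*".*"$', 'seeds = "%s"' % seed_addrs,
                  path.read_text(), flags=re.M)
    path.write_text(text)

# Demonstration on a throwaway file; 138.197.161.74 is the made-up IP from above.
cfg = os.path.join(tempfile.mkdtemp(), "config.toml")
Path(cfg).write_text('moniker = "anonymous"\nseeds = ""\n')
set_seeds(cfg, "138.197.161.74")
print(Path(cfg).read_text())
```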
Now that your files are all set up, it's time to join the network. On your local machine, run:

::

    gaiad start

and your new node will connect to the running validator (``alice``).

Sending Tokens
--------------

We'll have ``alice`` send some ``mycoin`` to ``bob``, who has now joined the network:

::

    gaiacli send --amount=1000mycoin --sequence=0 --name=alice --to=5A35E4CC7B7DC0A5CB49CEA91763213A9AE92AD6 --chain-id=test-chain-Uv1EVU

where the ``--sequence`` flag is incremented for each transaction sent from an account, the ``--name`` flag is the sender (``alice``), and the ``--to`` flag takes ``bob``'s address. You'll see something like:

::

    Please enter passphrase for alice:
    {
      "check_tx": {
        "gas": 30
      },
      "deliver_tx": {
        "tags": [
          {
            "key": "height",
            "value_type": 1,
            "value_int": 2963
          },
          {
            "key": "coin.sender",
            "value_string": "5D93A6059B6592833CBC8FA3DA90EE0382198985"
          },
          {
            "key": "coin.receiver",
            "value_string": "5A35E4CC7B7DC0A5CB49CEA91763213A9AE92AD6"
          }
        ]
      },
      "hash": "423BD7EA3C4B36AF8AFCCA381C0771F8A698BA77",
      "height": 2963
    }

TODO: check the above with current actual output.
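The ``deliver_tx.tags`` list in the response can be collapsed into a simple mapping. A sketch assuming the response shape shown above (field names may differ across releases):

```python
import json

# Response fragment mirroring the output above.
resp = json.loads("""
{"deliver_tx": {"tags": [
  {"key": "height", "value_type": 1, "value_int": 2963},
  {"key": "coin.sender", "value_string": "5D93A6059B6592833CBC8FA3DA90EE0382198985"},
  {"key": "coin.receiver", "value_string": "5A35E4CC7B7DC0A5CB49CEA91763213A9AE92AD6"}
]}}
""")

# Collapse the tag list into a dict, preferring whichever value field is set.
tags = {t["key"]: t.get("value_string", t.get("value_int"))
        for t in resp["deliver_tx"]["tags"]}
print(tags["coin.sender"], "->", tags["coin.receiver"])
```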
Check out ``bob``'s account, which should now have 1000 ``mycoin``:

::

    gaiacli account 5A35E4CC7B7DC0A5CB49CEA91763213A9AE92AD6

Adding a Second Validator
-------------------------

**This section is wrong/needs to be updated**

Next, let's add the second node as a validator.

First, we need the ``pub_key`` data:

** need to make bob a priv_Val above?

::

    cat $HOME/.gaia2/priv_validator.json

The first part will look like:

::

    {"address":"7B78527942C831E16907F10C3263D5ED933F7E99","pub_key":{"type":"ed25519","data":"96864CE7085B2E342B0F96F2E92B54B18C6CC700186238810D5AA7DFDAFDD3B2"},

and you want the ``pub_key`` ``data`` that starts with ``96864CE``.
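Rather than eyeballing the JSON, the field can be extracted directly. A sketch assuming the ``priv_validator.json`` shape shown above:

```python
import json

# priv_validator.json fragment matching the shape shown above.
priv_validator = json.loads("""
{"address": "7B78527942C831E16907F10C3263D5ED933F7E99",
 "pub_key": {"type": "ed25519",
             "data": "96864CE7085B2E342B0F96F2E92B54B18C6CC700186238810D5AA7DFDAFDD3B2"}}
""")

pub_key_data = priv_validator["pub_key"]["data"]
print(pub_key_data)  # the value to pass as the validator pubkey
```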
Now ``bob`` can create a validator with that pubkey:

::

    gaiacli stake create-validator --amount=10mycoin --name=bob --address-validator=<address> --pub-key=<pubkey> --moniker=bobby

with an output like:

::

    Please enter passphrase for bob:
    {
      "check_tx": {
        "gas": 30
      },
      "deliver_tx": {},
      "hash": "2A2A61FFBA1D7A59138E0068C82CC830E5103799",
      "height": 4075
    }

We should see ``bob``'s account balance decrease by 10 ``mycoin``:

::

    gaiacli account 5D93A6059B6592833CBC8FA3DA90EE0382198985

To confirm that the new validator is active, ask the Tendermint node:

::

    curl localhost:26657/validators

If you now kill either node, blocks will stop streaming in, because
there aren't enough validators online. Turn it back on and they will
start streaming again.

Now that ``bob`` has declared candidacy, which essentially bonded 10 ``mycoin`` and made him a validator, we're going to get ``charlie`` to delegate some coins to ``bob``.

Delegating
----------

First let's have ``alice`` send some coins to ``charlie``:

::

    gaiacli send --amount=1000mycoin --sequence=2 --name=alice --to=48F74F48281C89E5E4BE9092F735EA519768E8EF

Then ``charlie`` will delegate some ``mycoin`` to ``bob``:

::

    gaiacli stake delegate --amount=10mycoin --address-delegator=<charlie's address> --address-validator=<bob's address> --name=charlie

You'll see output like:

::

    Please enter passphrase for charlie:
    {
      "check_tx": {
        "gas": 30
      },
      "deliver_tx": {},
      "hash": "C3443BA30FCCC1F6E3A3D6AAAEE885244F8554F0",
      "height": 51585
    }

And that's it. You can query ``charlie``'s account to see the decrease in ``mycoin``.

To get more information about the candidate, try:

::

    gaiacli stake validator <address>

and you'll see output similar to:

::

    {
      "height": 51899,
      "data": {
        "pub_key": {
          "type": "ed25519",
          "data": "52D6FCD8C92A97F7CCB01205ADF310A18411EA8FDCC10E65BF2FCDB05AD1689B"
        },
        "owner": {
          "chain": "",
          "app": "sigs",
          "addr": "5A35E4CC7B7DC0A5CB49CEA91763213A9AE92AD6"
        },
        "shares": 20,
        "voting_power": 20,
        "description": {
          "moniker": "bobby",
          "identity": "",
          "website": "",
          "details": ""
        }
      }
    }

It's also possible to query the delegator's bond like so:

::

    gaiacli stake delegation --address-delegator=<address> --address-validator=<address>

with an output similar to:

::

    {
      "height": 325782,
      "data": {
        "PubKey": {
          "type": "ed25519",
          "data": "52D6FCD8C92A97F7CCB01205ADF310A18411EA8FDCC10E65BF2FCDB05AD1689B"
        },
        "Shares": 20
      }
    }

where the ``--address-delegator`` is ``charlie``'s address and the ``--address-validator`` is ``bob``'s address.

Unbonding
---------

Finally, to relinquish your voting power, unbond some coins. You should see
your voting power reduce and your account balance increase.

::

    gaiacli stake unbond --amount=5mycoin --name=charlie --address-delegator=<address> --address-validator=<address>
    gaiacli account 48F74F48281C89E5E4BE9092F735EA519768E8EF

See the bond decrease with ``gaiacli stake delegation`` as above.

Key Management
==============

Here we explain a bit about how to work with your keys, using the
``gaia client keys`` subcommand.

**Note:** This keys tooling is not considered production-ready and is
for dev only.

We'll look at what you can do using the six sub-commands of
``gaia client keys``:

::

    new
    list
    get
    delete
    recover
    update

Create keys
-----------

``gaia client keys new`` has two inputs (name, password) and two outputs
(address, seed).

First, we name our key:

::

    gaia client keys new alice

This will prompt for a password (10 character minimum) which must be
re-typed. You'll see:

::

    Enter a passphrase:
    Repeat the passphrase:
    alice           A159C96AE911F68913E715ED889D211C02EC7D70
    **Important** write this seed phrase in a safe place.
    It is the only way to recover your account if you ever forget your password.

    pelican amateur empower assist awkward claim brave process cliff save album pigeon intact asset

which shows the address of your key named ``alice``, and its recovery
seed. We'll use these shortly.

Adding the ``--output json`` flag to the above command would give this
output:

::

    Enter a passphrase:
    Repeat the passphrase:
    {
      "key": {
        "name": "alice",
        "address": "A159C96AE911F68913E715ED889D211C02EC7D70",
        "pubkey": {
          "type": "ed25519",
          "data": "4BF22554B0F0BF2181187E5E5456E3BF3D96DB4C416A91F07F03A9C36F712B77"
        }
      },
      "seed": "pelican amateur empower assist awkward claim brave process cliff save album pigeon intact asset"
    }

To avoid the prompt, it's possible to pipe the password into the
command, e.g.:

::

    echo 1234567890 | gaia client keys new fred --output json

After trying each of the three ways to create a key, use:

::

    gaia client keys list

to list all the keys:

::

    All keys:
    alice       6FEA9C99E2565B44FCC3C539A293A1378CDA7609
    bob         A159C96AE911F68913E715ED889D211C02EC7D70
    charlie     784D623E0C15DE79043C126FA6449B68311339E5

Again, we can use the ``--output json`` flag:

::

    [
      {
        "name": "alice",
        "address": "6FEA9C99E2565B44FCC3C539A293A1378CDA7609",
        "pubkey": {
          "type": "ed25519",
          "data": "878B297F1E863CC30CAD71E04A8B3C23DB71C18F449F39E35B954EDB2276D32D"
        }
      },
      {
        "name": "bob",
        "address": "A159C96AE911F68913E715ED889D211C02EC7D70",
        "pubkey": {
          "type": "ed25519",
          "data": "2127CAAB96C08E3042C5B33C8B5A820079AAE8DD50642DCFCC1E8B74821B2BB9"
        }
      },
      {
        "name": "charlie",
        "address": "784D623E0C15DE79043C126FA6449B68311339E5",
        "pubkey": {
          "type": "ed25519",
          "data": "4BF22554B0F0BF2181187E5E5456E3BF3D96DB4C416A91F07F03A9C36F712B77"
        }
      }
    ]

to get machine-readable output.
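That machine-readable output is easy to filter. A sketch that picks one entry by name, mirroring what ``gaia client keys get charlie`` returns (addresses abbreviated from the list above):

```python
import json

# Key list mirroring the JSON output above (pubkeys omitted for brevity).
keys = json.loads("""
[
  {"name": "alice",   "address": "6FEA9C99E2565B44FCC3C539A293A1378CDA7609"},
  {"name": "bob",     "address": "A159C96AE911F68913E715ED889D211C02EC7D70"},
  {"name": "charlie", "address": "784D623E0C15DE79043C126FA6449B68311339E5"}
]
""")

# Pick a single key entry by name.
charlie = next(k for k in keys if k["name"] == "charlie")
print(charlie["address"])
```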
If we want information about one specific key, then:

::

    gaia client keys get charlie --output json

will, for example, return the info for only the "charlie" key returned
from the previous ``gaia client keys list`` command.

The keys tooling can support different types of keys with a flag:

::

    gaia client keys new bit --type secp256k1

and you'll see the difference in the ``type`` field from ``gaia client keys get``.

Before moving on, let's set an environment variable to make
``--output json`` the default.

Either run or put in your ``~/.bash_profile`` the following line:

::

    export BC_OUTPUT=json

Recover a key
-------------

Let's say, for whatever reason, you lose a key or forget the password.
On creation, you were given a seed. We'll use it to recover a lost key.

First, let's simulate the loss by deleting a key:

::

    gaia client keys delete alice

which prompts for your current password, now rendered obsolete, and
gives a warning message. The only way you can recover your key now is
using the seed phrase given on initial creation of the key. Let's try
it:

::

    gaia client keys recover alice-again

which prompts for a new password then the seed:

::

    Enter the new passphrase:
    Enter your recovery seed phrase:
    strike alien praise vendor term left market practice junior better deputy divert front calm
    alice-again         CBF5D9CE6DDCC32806162979495D07B851C53451

and voila! You've recovered your key. Note that the seed can be typed
out, pasted in, or piped into the command alongside the password.

To change the password of a key, we can:

::

    gaia client keys update alice-again

and follow the prompts.

That covers most features of the ``keys`` sub-command.

.. raw:: html

   <!-- use later in a test script, or more advanced tutorial?
   SEED=$(echo 1234567890 | gaia client keys new fred -o json | jq .seed | tr -d \")
   echo $SEED
   (echo qwertyuiop; echo $SEED stamp) | gaia client keys recover oops
   (echo qwertyuiop; echo $SEED) | gaia client keys recover derf
   gaia client keys get fred -o json
   gaia client keys get derf -o json
   -->

Local Testnet
=============

This tutorial demonstrates the basics of setting up a gaia
testnet locally.

If you haven't already made a key, make one now:

::

    gaia client keys new alice

otherwise, use an existing key.

Initialize The Chain
--------------------

Now initialize a gaia chain, using ``alice``'s address:

::

    gaia node init 5D93A6059B6592833CBC8FA3DA90EE0382198985 --home=$HOME/.gaia1 --chain-id=gaia-test

This will create all the files necessary to run a single-node chain in
``$HOME/.gaia1``: a ``priv_validator.json`` file with the validator's
private key, and a ``genesis.json`` file with the list of validators and
accounts.

We'll add a second node on our local machine by initiating a node in a
new directory, with the same address, and copying in the genesis:

::

    gaia node init 5D93A6059B6592833CBC8FA3DA90EE0382198985 --home=$HOME/.gaia2 --chain-id=gaia-test
    cp $HOME/.gaia1/genesis.json $HOME/.gaia2/genesis.json

We also need to modify ``$HOME/.gaia2/config.toml`` to set new seeds
and ports. It should look like:

::

    proxy_app = "tcp://127.0.0.1:26668"
    moniker = "anonymous"
    fast_sync = true
    db_backend = "leveldb"
    log_level = "state:info,*:error"

    [rpc]
    laddr = "tcp://0.0.0.0:26667"

    [p2p]
    laddr = "tcp://0.0.0.0:26666"
    seeds = "0.0.0.0:26656"

Start Nodes
-----------

Now that we've initialized the chains, we can start both nodes:

NOTE: each command below must be started in a separate terminal window. Alternatively, to run this testnet across multiple machines, you'd replace the ``seeds = "0.0.0.0"`` in ``~/.gaia2/config.toml`` with the IP of the first node, and could skip the modifications we made to the config file above because port conflicts would be avoided.

::

    gaia node start --home=$HOME/.gaia1
    gaia node start --home=$HOME/.gaia2

Now we can initialize a client for the first node, and look up our
account:

::

    gaia client init --chain-id=gaia-test --node=tcp://localhost:26657
    gaia client query account 5D93A6059B6592833CBC8FA3DA90EE0382198985

To see what Tendermint considers the validator set to be, use:

::

    curl localhost:26657/validators

and compare the information in this file: ``~/.gaia1/priv_validator.json``. The ``address`` and ``pub_key`` fields should match.
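That comparison can be sketched as follows. The two dictionaries below are hypothetical excerpts (the real values come from the RPC response and your local ``priv_validator.json``):

```python
# Hypothetical excerpts: one entry from the /validators RPC response and the
# matching fields of a local priv_validator.json (shapes assumed for illustration).
rpc_validator = {
    "address": "7B78527942C831E16907F10C3263D5ED933F7E99",
    "pub_key": {"type": "ed25519",
                "data": "96864CE7085B2E342B0F96F2E92B54B18C6CC700186238810D5AA7DFDAFDD3B2"},
}
priv_validator = {
    "address": "7B78527942C831E16907F10C3263D5ED933F7E99",
    "pub_key": {"type": "ed25519",
                "data": "96864CE7085B2E342B0F96F2E92B54B18C6CC700186238810D5AA7DFDAFDD3B2"},
}

# The local node is in the validator set iff both fields line up.
matches = (rpc_validator["address"] == priv_validator["address"]
           and rpc_validator["pub_key"] == priv_validator["pub_key"])
print("validator keys match:", matches)
```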
To add a second validator on your testnet, you'll need to bond some tokens by declaring candidacy.

//TODO update .rst

# Staking Module

## Overview

The Cosmos Hub is a Tendermint-based Delegated Proof of Stake (DPoS) blockchain
system that serves as the backbone of the Cosmos ecosystem. It is operated and
secured by an open and globally decentralized set of validators. Tendermint is
a Byzantine fault-tolerant distributed protocol for consensus among distrusting
parties, in this case the group of validators which produce the blocks for the
Cosmos Hub. To avoid the nothing-at-stake problem, a validator in Tendermint
needs to lock up coins in a bond deposit. Each bond's atoms are illiquid: they
cannot be transferred. In order to become liquid, they must be unbonded, a
process which will take 3 weeks by default at Cosmos Hub launch. Tendermint
protocol messages are signed by the validator's private key and are therefore
attributable. Validators acting outside protocol specifications can be held
accountable through slashing (burning) their bonded Atoms. On the
other hand, validators are rewarded for their service of securing the
network by inflationary provisions and transaction fees. This incentivizes
correct behavior of the validators and provides the economic security of the
network.

The native token of the Cosmos Hub is called the Atom; becoming a validator of the
Cosmos Hub requires holding Atoms. However, not all Atom holders are validators
of the Cosmos Hub. More precisely, there is a selection process that determines
the validator set as a subset of all validators (Atom holders that
want to become a validator). The other option for Atom holders is to delegate
their atoms to validators, i.e., to be a delegator. A delegator is an Atom
holder that has put its Atoms at stake by delegating them to a validator. By bonding
Atoms to secure the network (and taking the risk of being slashed in case of
misbehaviour), a user is rewarded with inflationary provisions and transaction
fees proportional to the amount of its bonded Atoms. The Cosmos Hub is
designed to efficiently facilitate a small number of validators (hundreds)
and a large number of delegators (tens of thousands). More precisely, it is the
role of the Staking module of the Cosmos Hub to support various staking
functionality including validator set selection; delegating, bonding and
withdrawing Atoms; and the distribution of inflationary provisions and
transaction fees.

## Basic Terms and Definitions

* Cosmos Hub - a Tendermint-based Delegated Proof of Stake (DPoS)
  blockchain system
* Atom - native token of the Cosmos Hub
* Atom holder - an entity that holds some amount of Atoms
* Pool - global object within the Cosmos Hub which accounts for global state,
  including the total amount of bonded, unbonding, and unbonded atoms
* Validator Share - share which a validator holds to represent its portion of
  bonded, unbonding or unbonded atoms in the pool
* Delegation Share - shares which a delegation bond holds to represent its
  portion of bonded, unbonding or unbonded shares in a validator
* Bond Atoms - the process of locking Atoms in a delegation share which holds them
  under protocol control
* Slash Atoms - the process of burning atoms in the pool and the associated
  validator shares of a misbehaving validator (one not behaving according to the
  protocol specification). This process devalues the worth of delegation shares
  of the given validator
* Unbond Shares - the process of retrieving atoms from shares. If the shares are
  bonded, they must first remain in an in-between unbonding state for the
  duration of the unbonding period
* Redelegate Shares - the process of redelegating atoms from one validator to
  another. This process is instantaneous, but the redelegated atoms are
  retrospectively slashable if the old validator is found to have misbehaved for any
  blocks before the redelegation. These atoms are simultaneously slashable
  for any new blocks in which the new validator misbehaves
* Validator - an entity with atoms which is either actively validating the Tendermint
  protocol (bonded validator) or vying to validate
* Bonded Validator - a validator whose atoms are currently bonded and liable to
  be slashed. These validators are able to sign protocol messages for
  Tendermint consensus. At Cosmos Hub genesis there is a maximum of 100
  bonded validator positions. Only bonded validators receive atom provisions
  and fee rewards
* Delegator - an Atom holder that has bonded Atoms to a validator
* Unbonding period - the time required in the unbonding state when unbonding
  shares; the time slashable to an old validator after a redelegation; the time for
  which validators can be slashed after an infraction. To provide the requisite
  cryptoeconomic security guarantees, all of these must be equal
* Atom provisions - the process of increasing the Atom supply. Atoms are
  periodically created on the Cosmos Hub and issued to bonded Atom holders.
  The goal of inflation is to incentivize most of the Atoms in existence to be
  bonded. Atoms are distributed unbonded, using the fee distribution mechanism
* Transaction fees - fees included in Cosmos Hub
  transactions. The fees are collected by the current validator set and
  distributed among validators and delegators in proportion to their bonded
  Atom share
* Commission fee - a fee taken from the transaction fees by a validator for
  their service

## The pool and the share

At the core of the Staking module is the concept of a pool which denotes a
collection of Atoms contributed by different Atom holders. There are three
pools in the Staking module: the bonded, unbonding, and unbonded pool. Bonded
Atoms are part of the global bonded pool. If a validator or delegator wants to
unbond its shares, these shares are moved to the unbonding pool for the
duration of the unbonding period. From there, Atoms will normally be moved
directly into the delegator's wallet; however, if an
entire validator gets unbonded, the Atoms of the delegations will remain with
the validator and be moved to the unbonded pool. For each pool, the total amount
of bonded, unbonding, or unbonded Atoms is tracked, as well as the current
amount of issued pool-shares; the specific holdings of these shares by
validators are tracked in protocol by the validator object.

A share is a unit of Atom distribution and the value of the share
(share-to-atom exchange rate) can change during system execution. The
share-to-atom exchange rate can be computed as:

`share-to-atom-exchange-rate = size of the pool / amount of issued shares`

Then for each validator (in a per-validator data structure) the protocol keeps
track of the amount of shares the validator owns in a pool. At any point in
time, the exact amount of Atoms a validator has in the pool can be computed as
the number of shares it owns multiplied by the current share-to-atom exchange
rate:

`validator-coins = validator.Shares * share-to-atom-exchange-rate`
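These two formulas can be sketched directly (illustrative numbers only, not protocol code):

```python
def share_to_atom_rate(pool_atoms, issued_shares):
    """share-to-atom-exchange-rate = size of the pool / amount of issued shares"""
    return pool_atoms / issued_shares

def validator_coins(validator_shares, pool_atoms, issued_shares):
    """validator-coins = validator.Shares * share-to-atom-exchange-rate"""
    return validator_shares * share_to_atom_rate(pool_atoms, issued_shares)

# A pool of 55 atoms backing 50 issued shares gives a rate of 1.1 atoms/share;
# a validator holding 25 shares then owns 27.5 atoms (up to float rounding).
print(share_to_atom_rate(55, 50))
print(validator_coins(25, 55, 50))
```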

The benefit of such accounting of the pool resources is the fact that a
modification to the pool from bonding/unbonding/slashing of Atoms affects only
global data (the size of the pool and the number of shares) and not the related
validator data structures, i.e., the data structures of other validators do not
need to be modified. This has the advantage that modifying global data is much
cheaper computationally than modifying the data of every validator. Let's explain
this further with several small examples:

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXX TODO make way less verbose lets use bullet points to describe the example
XXX Also need to update to not include bonded atom provisions all atoms are
XXX redistributed with the fee pool now

We consider initially 4 validators p1, p2, p3 and p4, and that each validator
|
||||||
|
has bonded 10 Atoms to the bonded pool. Furthermore, let's assume that we have
|
||||||
|
issued initially 40 shares (note that the initial distribution of the shares,
|
||||||
|
i.e., share-to-atom exchange rate can be set to any meaningful value), i.e.,
|
||||||
|
share-to-atom-ex-rate = 1 atom per share. Then at the global pool level we
|
||||||
|
have, the size of the pool is 40 Atoms, and the amount of issued shares is
|
||||||
|
equal to 40. And for each validator we store in their corresponding data
|
||||||
|
structure that each has 10 shares of the bonded pool. Now lets assume that the
|
||||||
|
validator p4 starts process of unbonding of 5 shares. Then the total size of
|
||||||
|
the pool is decreased and now it will be 35 shares and the amount of Atoms is
|
||||||
|
35 . Note that the only change in other data structures needed is reducing the
|
||||||
|
number of shares for a validator p4 from 10 to 5.
|
||||||
|
|
||||||
|
Let's consider now the case where validator p1 wants to bond 15 more Atoms to the
pool. The size of the pool becomes 50 Atoms, and as the exchange rate hasn't
changed (1 share is still worth 1 Atom), we mint 15 new shares, so the pool now
has 50 shares in total. Validators p2, p3 and p4 still have (correspondingly) 10,
10 and 5 shares, each worth 1 Atom, so we don't need to modify anything in their
data structures. But p1 now has 25 shares, so we update the amount of shares owned
by p1 in its data structure. Note that apart from the size of the pool, which is
denominated in Atoms, all other data structures refer only to shares.

Finally, let's consider what happens when new Atoms are created and added to the
pool due to inflation. Assume the inflation rate is 10 percent and that it is
applied to the current state of the pool: 5 Atoms are created and added to the
pool, and each validator's Atom count increases proportionally. Let's analyse how
this change is reflected in the data structures. First, the size of the pool
increases and is now 55 Atoms. As each validator's share of the pool hasn't
changed, the total number of shares stays the same (50), as does the number of
shares of each validator (correspondingly 25, 10, 10, 5). But the exchange rate
has changed: each share is now worth 55/50 Atoms, so each validator has
effectively gained Atoms. The validators now hold (correspondingly) 27.5, 11, 11
and 5.5 Atoms.

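The three operations in the examples above can be sketched in a few lines of Python. This is only an illustration of the accounting; the `Pool` class and its method names are hypothetical, not the SDK's actual types:

```python
class Pool:
    """Illustrative model of the bonded pool (hypothetical names)."""

    def __init__(self, atoms, shares):
        self.atoms = atoms    # total Atoms in the pool
        self.shares = shares  # total shares issued against the pool

    @property
    def atoms_per_share(self):
        # the share-to-atom exchange rate
        return self.atoms / self.shares

    def bond(self, atoms):
        """Add Atoms; mint new shares at the current exchange rate."""
        new_shares = atoms / self.atoms_per_share
        self.atoms += atoms
        self.shares += new_shares
        return new_shares

    def unbond(self, shares):
        """Burn shares; remove the corresponding Atoms."""
        atoms_out = shares * self.atoms_per_share
        self.shares -= shares
        self.atoms -= atoms_out
        return atoms_out

    def inflate(self, rate):
        """Mint new Atoms into the pool; shares are untouched, so the rate rises."""
        self.atoms += self.atoms * rate


# Walk through the example: 4 validators with 10 Atoms (10 shares) each.
pool = Pool(atoms=40, shares=40)
validators = {"p1": 10.0, "p2": 10.0, "p3": 10.0, "p4": 10.0}

validators["p4"] -= 5
pool.unbond(5)                        # pool: 35 Atoms, 35 shares

validators["p1"] += pool.bond(15)     # pool: 50 Atoms, 50 shares; p1 holds 25

pool.inflate(0.10)                    # pool: 55 Atoms, still 50 shares
# Only the exchange rate changed (55/50); validators' share counts are untouched.
p1_atoms = validators["p1"] * pool.atoms_per_share   # 25 * 55/50 = 27.5
```

Note that the inflation step touches only the global pool object; none of the validator entries are rewritten, which is the cheapness argument made above.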
The concept of the pool and its shares is at the core of the accounting in the
Staking module. It is used for managing the global pools (such as the bonded and
unbonding pools), but also for the distribution of Atoms between a validator and
its delegators (we will explain this in section X).

#### Delegator shares

A validator is, depending on its status, contributing Atoms to the bonded,
unbonding or unbonded pool, and in turn holds some amount of pool shares. Not all
of a validator's Atoms (and respective shares) are necessarily owned by the
validator; some may be owned by delegators to that validator. The mechanism for
distributing Atoms (and shares) between a validator and its delegators is based
on the notion of delegator shares. More precisely, every validator issues (local)
delegator shares (`Validator.IssuedDelegatorShares`) that represent some portion
of the global shares managed by the validator (`Validator.GlobalStakeShares`).
The principle behind managing delegator shares is the same as described in
[the pool and the share](#the-pool-and-the-share). We now illustrate it with an
example.

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXX TODO make way less verbose lets use bullet points to describe the example
XXX Also need to update to not include bonded atom provisions all atoms are
XXX redistributed with the fee pool now

Let's consider 4 validators p1, p2, p3 and p4, and assume that each validator has
bonded 10 Atoms to the bonded pool. Furthermore, let's assume that we have
initially issued 40 global shares, i.e., that
`share-to-atom-exchange-rate = 1 atom per share`. So we set
`GlobalState.BondedPool = 40` and `GlobalState.BondedShares = 40`, and in the
Validator data structure of each validator `Validator.GlobalStakeShares = 10`.
Furthermore, each validator issued 10 delegator shares which are initially owned
by itself, i.e., `Validator.IssuedDelegatorShares = 10`, where
`delegator-share-to-global-share-ex-rate = 1 global share per delegator share`.
Now let's assume that a delegator d1 delegates 5 Atoms to validator p1, and
consider what updates we need to make to the data structures. First,
`GlobalState.BondedPool = 45` and `GlobalState.BondedShares = 45`. Then, for
validator p1 we have `Validator.GlobalStakeShares = 15`, but we also need to
issue additional delegator shares, i.e., `Validator.IssuedDelegatorShares = 15`,
as delegator d1 now owns 5 delegator shares of validator p1, where each delegator
share is worth 1 global share, i.e., 1 Atom. Let's see now what happens after 5
new Atoms are created due to inflation. In that case we only need to update
`GlobalState.BondedPool`, which is now equal to 50 Atoms, as created Atoms are
added to the bonded pool. Note that the numbers of global and delegator shares
stay the same, but they are now worth more, as the share-to-atom exchange rate is
now 50/45 Atoms per share. Therefore, delegator d1 now owns:

`delegatorCoins = 5 (delegator shares) * 1 (delegator-share-to-global-share-ex-rate) * 50/45 (share-to-atom-ex-rate) ≈ 5.56 Atoms`

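The same two-level accounting can be traced with plain arithmetic. A minimal sketch follows, using variable names that mirror the fields quoted in the text; this is an illustration, not the SDK's actual code:

```python
# Global bonded pool: 4 validators with 10 Atoms each, 40 global shares.
global_bonded_pool = 40.0     # GlobalState.BondedPool (Atoms)
global_bonded_shares = 40.0   # GlobalState.BondedShares

# Validator p1: global shares it holds, and delegator shares it has issued.
p1_global_stake_shares = 10.0       # Validator.GlobalStakeShares
p1_issued_delegator_shares = 10.0   # Validator.IssuedDelegatorShares

# d1 delegates 5 Atoms to p1 (share-to-atom rate is still 1 Atom/share,
# so 5 Atoms mint 5 global shares and 5 delegator shares).
global_bonded_pool += 5
global_bonded_shares += 5
p1_global_stake_shares += 5
p1_issued_delegator_shares += 5
d1_delegator_shares = 5.0

# Inflation mints 5 Atoms into the bonded pool; no share count changes.
global_bonded_pool += 5

# d1's Atom value, following the formula in the text:
share_to_atom = global_bonded_pool / global_bonded_shares            # 50/45
del_to_global = p1_global_stake_shares / p1_issued_delegator_shares  # 1
d1_coins = d1_delegator_shares * del_to_global * share_to_atom
# d1_coins = 5 * 1 * 50/45 ≈ 5.56 Atoms, matching delegatorCoins above
```

The key observation the sketch makes concrete: inflation touches a single global field, while every delegator's Atom value updates implicitly through the exchange rates.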
Public Testnets
===============

Here we'll cover the basics of joining a public testnet. These testnets
come and go with various names as we release new versions of Tendermint
Core. This tutorial covers joining the ``gaia-1`` testnet. To join
other testnets, choose different initialization files, as described below.

Get Tokens
----------

If you haven't already `created a key <../key-management.html>`__,
do so now. Copy your key's address and enter it into
`this utility <http://www.cosmosvalidators.com/>`__ which will send you
some ``steak`` testnet tokens.

Get Files
---------

Now, to sync with the testnet, we need the genesis file and seeds. The
easiest way to get them is to clone and navigate to the Tendermint
testnets repo:

::

    git clone https://github.com/tendermint/testnets ~/testnets
    cd ~/testnets/gaia-1/gaia

NOTE: to join a different testnet, change the ``gaia-1/gaia`` filepath
to another directory with testnet initialization files *and* an
active testnet.

Start Node
----------

Now we can start a new node; it may take a while to sync with the
existing testnet:

::

    gaia node start --home=$HOME/testnets/gaia-1/gaia

Once blocks slow down to about one per second, you're all caught up.

The ``gaia node start`` command will automatically generate a validator
private key, found in ``~/testnets/gaia-1/gaia/priv_validator.json``.

Finally, let's initialize the gaia client to interact with the testnet:

::

    gaia client init --chain-id=gaia-1 --node=tcp://localhost:26657

and check our balance:

::

    gaia client query account $MYADDR

Where ``$MYADDR`` is the address originally generated by ``gaia keys new bob``.

You are now ready to declare candidacy or delegate some steaks. See the
`staking module overview <./staking-module.html>`__ for more information
on using the ``gaia client``.

# Testnet Setup

**Note:** This document is incomplete and may not be up-to-date with the
state of the code.

See the [installation guide](../sdk/install.html) for details on
installation.

Here is a quick example to get you off your feet.

First, generate a couple of genesis transactions to be incorporated into
the genesis file. This will create two keys with the password
`1234567890`:

```
gaiad init gen-tx --name=foo --home=$HOME/.gaiad1
gaiad init gen-tx --name=bar --home=$HOME/.gaiad2
gaiacli keys list
```

**Note:** If you've already run these tests, you may need to overwrite
keys using the `--owk` flag. When you list the keys you should see two
addresses; we'll need these later, so take note. Now let's actually
create the genesis files for both nodes:

```
cp -a ~/.gaiad2/config/gentx/. ~/.gaiad1/config/gentx/
cp -a ~/.gaiad1/config/gentx/. ~/.gaiad2/config/gentx/
gaiad init --gen-txs --home=$HOME/.gaiad1 --chain-id=test-chain
gaiad init --gen-txs --home=$HOME/.gaiad2 --chain-id=test-chain
```

**Note:** If you've already run these tests, you may need to overwrite
the genesis using the `-o` flag. What we just did is copy the genesis
transactions between the nodes so there is a common genesis transaction
set; then we created both genesis files independently from each home
directory. Importantly, both nodes have independently created their
`genesis.json` and `config.toml` files, which should be identical
between nodes.

Great, now that we've initialized the chains, we can start both nodes in
the background:

```
gaiad start --home=$HOME/.gaiad1 &> gaia1.log &
NODE1_PID=$!
gaiad start --home=$HOME/.gaiad2 &> gaia2.log &
NODE2_PID=$!
```

Note that we save the PIDs so we can later kill the processes. You can
peek at your logs with `tail gaia1.log`, or follow them for a bit with
`tail -f gaia1.log`.

Nice. We can also look up the validator set:

```
gaiacli validatorset
```

Then, we try to transfer some `steak` to another account:

```
gaiacli account <FOO-ADDR>
gaiacli account <BAR-ADDR>
gaiacli send --amount=10steak --to=<BAR-ADDR> --name=foo --chain-id=test-chain
```

**Note:** We need to be careful with the `chain-id` and `sequence` when
sending transactions.

Check the balance & sequence with:

```
gaiacli account <BAR-ADDR>
```

To confirm for certain that the new validator is active, check Tendermint:

```
curl localhost:46657/validators
```

Finally, to relinquish all your power, unbond some coins. You should see
your VotingPower reduce and your account balance increase.

```
gaiacli unbond --chain-id=<chain-id> --name=test
```

That's it!

**Note:** TODO:

- demonstrate edit-candidacy
- demonstrate delegation
- demonstrate unbond of delegation
- demonstrate unbond candidate