docs: rename all files from .md to .rst

Zach Ramsay 2017-09-01 22:04:16 -04:00
parent ae928d0de9
commit 085d0cb44e
20 changed files with 2468 additions and 2250 deletions

docs/basecoin-basics.rst Normal file

@ -0,0 +1,365 @@
.. raw:: html
<!--- shelldown script template, see github.com/rigelrozanski/shelldown
#!/bin/bash
testTutorial_BasecoinBasics() {
#shelldown[1][3] >/dev/null
#shelldown[1][4] >/dev/null
KEYPASS=qwertyuiop
RES=$((echo $KEYPASS; echo $KEYPASS) | #shelldown[1][6])
assertTrue "Line $LINENO: Expected to contain safe, got $RES" '[[ $RES == *safe* ]]'
RES=$((echo $KEYPASS; echo $KEYPASS) | #shelldown[1][7])
assertTrue "Line $LINENO: Expected to contain safe, got $RES" '[[ $RES == *safe* ]]'
#shelldown[3][-1]
assertTrue "Expected true for line $LINENO" $?
#shelldown[4][-1] >>/dev/null 2>&1 &
sleep 5
PID_SERVER=$!
disown
RES=$((echo y) | #shelldown[5][-1] $1)
assertTrue "Line $LINENO: Expected to contain validator, got $RES" '[[ $RES == *validator* ]]'
#shelldown[6][0]
#shelldown[6][1]
RES=$(#shelldown[6][2] | jq '.data.coins[0].denom' | tr -d '"')
assertTrue "Line $LINENO: Expected to have mycoins, got $RES" '[[ $RES == mycoin ]]'
RES="$(#shelldown[6][3] 2>&1)"
assertTrue "Line $LINENO: Expected to contain ERROR, got $RES" '[[ $RES == *ERROR* ]]'
RES=$((echo $KEYPASS) | #shelldown[7][-1] | jq '.deliver_tx.code')
assertTrue "Line $LINENO: Expected 0 code deliver_tx, got $RES" '[[ $RES == 0 ]]'
RES=$(#shelldown[8][-1] | jq '.data.coins[0].amount')
assertTrue "Line $LINENO: Expected to contain 1000 mycoin, got $RES" '[[ $RES == 1000 ]]'
RES=$((echo $KEYPASS) | #shelldown[9][-1] | jq '.deliver_tx.code')
assertTrue "Line $LINENO: Expected 0 code deliver_tx, got $RES" '[[ $RES == 0 ]]'
RES=$((echo $KEYPASS) | #shelldown[10][-1])
assertTrue "Line $LINENO: Expected to contain insufficient funds error, got $RES" \
'[[ $RES == *"Insufficient Funds"* ]]'
#perform a substitution within the final tests
HASH=$((echo $KEYPASS) | #shelldown[11][-1] | jq '.hash' | tr -d '"')
PRESUB="#shelldown[12][-1]"
RES=$(eval ${PRESUB/<HASH>/$HASH})
assertTrue "Line $LINENO: Expected to not contain Error, got $RES" '[[ $RES != *Error* ]]'
}
oneTimeTearDown() {
kill -9 $PID_SERVER >/dev/null 2>&1
sleep 1
}
# load and run these tests with shunit2!
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" #get this files directory
. $DIR/shunit2
-->
Basecoin Basics
===============
Here we explain how to get started with a basic Basecoin blockchain, how
to send transactions between accounts using the ``basecoin`` tool, and
what is happening under the hood.
Install
-------
With go, it's one command:
.. code:: shelldown[0]
go get -u github.com/tendermint/basecoin/cmd/...
If you have trouble, see the `installation guide <install.md>`__.
Note the above command installs two binaries: ``basecoin`` and
``basecli``. The former is the running node. The latter is a
command-line light-client. This tutorial assumes you have a 'fresh'
working environment. See `how to clean up, below <#clean-up>`__.
Generate some keys
------------------
Let's generate two keys, one to receive an initial allocation of coins,
and one to send some coins to later:
.. code:: shelldown[1]
basecli keys new cool
basecli keys new friend
You'll need to enter passwords. You can view your key names and
addresses with ``basecli keys list``, or see a particular key's address
with ``basecli keys get <NAME>``.
Initialize Basecoin
-------------------
To initialize a new Basecoin blockchain, run:
.. code:: shelldown[2]
basecoin init <ADDRESS>
If you prefer not to copy-paste, you can provide the address
programmatically:
.. code:: shelldown[3]
basecoin init $(basecli keys get cool | awk '{print $2}')
This will create the necessary files for a Basecoin blockchain with one
validator and one account (corresponding to your key) in
``~/.basecoin``. For more options on setup, see the `guide to using the
Basecoin tool </docs/guide/basecoin-tool.md>`__.
If you like, you can manually add some more accounts to the blockchain
by generating keys and editing the ``~/.basecoin/genesis.json``.
Start
-----
Now we can start Basecoin:
.. code:: shelldown[4]
basecoin start
You should see blocks start streaming in!
Initialize Light-Client
-----------------------
Now that Basecoin is running we can initialize ``basecli``, the
light-client utility. Basecli is used for sending transactions and
querying the state. Leave Basecoin running and open a new terminal
window. Here run:
.. code:: shelldown[5]
basecli init --node=tcp://localhost:46657 --genesis=$HOME/.basecoin/genesis.json
If you provide the genesis file to basecli, it can calculate the proper
chainID and validator hash. Basecli needs to get this information from
some trusted source, so all queries done with ``basecli`` can be
cryptographically proven to be correct according to a known validator
set.
Note that ``--genesis`` only works if there have been no validator set
changes since genesis. If there have been validator set changes, you
need to find the current set through some other method.
Send transactions
-----------------
Now we are ready to send some transactions. First, let's check the
balances of the two accounts we set up earlier:
.. code:: shelldown[6]
ME=$(basecli keys get cool | awk '{print $2}')
YOU=$(basecli keys get friend | awk '{print $2}')
basecli query account $ME
basecli query account $YOU
The first account is flush with cash, while the second account doesn't
exist. Let's send funds from the first account to the second:
.. code:: shelldown[7]
basecli tx send --name=cool --amount=1000mycoin --to=$YOU --sequence=1
Now if we check the second account, it should have ``1000`` 'mycoin'
coins!
.. code:: shelldown[8]
basecli query account $YOU
We can send some of these coins back like so:
.. code:: shelldown[9]
basecli tx send --name=friend --amount=500mycoin --to=$ME --sequence=1
Note how we use the ``--name`` flag to select a different account to
send from.
If we try to send too much, we'll get an error:
.. code:: shelldown[10]
basecli tx send --name=friend --amount=500000mycoin --to=$ME --sequence=2
Let's send another transaction:
.. code:: shelldown[11]
basecli tx send --name=cool --amount=2345mycoin --to=$YOU --sequence=2
Note the ``hash`` value in the response - this is the hash of the
transaction. We can query for the transaction by this hash:
.. code:: shelldown[12]
basecli query tx <HASH>
See ``basecli tx send --help`` for additional details.
Proof
-----
Even if you don't see it in the UI, the result of every query comes with
a proof. This is a Merkle proof that the result of the query is actually
contained in the state. And the state's Merkle root is contained in a
recent block header. Behind the scenes, ``basecli`` will not only
verify that this state matches the header, but also that the header is
properly signed by the known validator set. It will even update the
validator set as needed, so long as there have not been major changes
and it is secure to do so. So, if you wonder why the query may take a
second... there is a lot of work going on in the background to make sure
even a lying full node can't trick your client.
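To give a flavor of what verifying such a proof involves, here is a toy sketch using SHA-256 over a two-leaf tree. It is purely illustrative: the real client verifies IAVL proofs against the state root in a block header, and none of the helper names below come from the Basecoin code.
.. code:: golang
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// hashPair combines two child hashes into a parent hash.
func hashPair(left, right []byte) []byte {
	h := sha256.Sum256(append(append([]byte{}, left...), right...))
	return h[:]
}

// leafHash hashes a key/value pair into a leaf.
func leafHash(key, value []byte) []byte {
	h := sha256.Sum256(append(append([]byte{}, key...), value...))
	return h[:]
}

// verify recomputes the root from a leaf and its sibling hashes (each
// flagged by whether it sits on the left) and compares the result to a
// trusted root, e.g. one taken from a recent block header.
func verify(key, value []byte, siblings [][]byte, siblingIsLeft []bool, root []byte) bool {
	cur := leafHash(key, value)
	for i, sib := range siblings {
		if siblingIsLeft[i] {
			cur = hashPair(sib, cur)
		} else {
			cur = hashPair(cur, sib)
		}
	}
	return bytes.Equal(cur, root)
}

func main() {
	// Two-leaf toy tree: our account leaf plus one sibling leaf.
	leaf := leafHash([]byte("base/a/addr"), []byte("1000 mycoin"))
	sibling := leafHash([]byte("base/a/other"), []byte("5 gold"))
	root := hashPair(leaf, sibling)

	fmt.Println(verify([]byte("base/a/addr"), []byte("1000 mycoin"),
		[][]byte{sibling}, []bool{false}, root)) // true
}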
In a later `guide on Inter-Blockchain Communication <ibc.md>`__, we'll
use these proofs to post transactions to other chains.
Accounts and Transactions
-------------------------
For a better understanding of how to further use the tools, it helps to
understand the underlying data structures.
Accounts
~~~~~~~~
The Basecoin state consists entirely of a set of accounts. Each account
contains a public key, a balance in many different coin denominations,
and a strictly increasing sequence number for replay protection. This
type of account was directly inspired by accounts in Ethereum, and is
unlike Bitcoin's use of Unspent Transaction Outputs (UTXOs). Note
Basecoin is a multi-asset cryptocurrency, so each account can have many
different kinds of tokens.
.. code:: golang
type Account struct {
PubKey crypto.PubKey `json:"pub_key"` // May be nil, if not known.
Sequence int `json:"sequence"`
Balance Coins `json:"coins"`
}
type Coins []Coin
type Coin struct {
Denom string `json:"denom"`
Amount int64 `json:"amount"`
}
If you want to add more coins to a blockchain, you can do so manually in
the ``~/.basecoin/genesis.json`` before you start the blockchain for the
first time.
Accounts are serialized and stored in a Merkle tree under the key
``base/a/<address>``, where ``<address>`` is the address of the account.
Typically, the address of the account is the 20-byte ``RIPEMD160`` hash
of the public key, but other formats are acceptable as well, as defined
in the `Tendermint crypto
library <https://github.com/tendermint/go-crypto>`__. The Merkle tree
used in Basecoin is a balanced, binary search tree, which we call an
`IAVL tree <https://github.com/tendermint/go-merkle>`__.
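As a small illustration of this storage layout, the sketch below builds a ``base/a/<address>`` key from a 20-byte address. It is a hedged example: ``accountKey`` is a hypothetical helper, not part of Basecoin, and whether the address is appended as raw bytes or hex-encoded is an assumption of this sketch.
.. code:: golang
package main

import (
	"encoding/hex"
	"fmt"
)

// accountKey builds the Merkle-tree key for an account following the
// "base/a/<address>" layout described above. The address is assumed to
// already be the 20-byte hash of the account's public key.
func accountKey(address []byte) []byte {
	return append([]byte("base/a/"), address...)
}

func main() {
	// A made-up 20-byte address standing in for RIPEMD160(pubkey).
	address, _ := hex.DecodeString("404C5003A703C7DA888C96A2E901FCE65A6869D9")
	fmt.Printf("store key: %q\n", accountKey(address))
}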
Transactions
~~~~~~~~~~~~
Basecoin defines a transaction type, the ``SendTx``, which allows tokens
to be sent to other accounts. The ``SendTx`` takes a list of inputs and
a list of outputs, and transfers all the tokens listed in the inputs
from their corresponding accounts to the accounts listed in the output.
The ``SendTx`` is structured as follows:
.. code:: golang
type SendTx struct {
Gas int64 `json:"gas"`
Fee Coin `json:"fee"`
Inputs []TxInput `json:"inputs"`
Outputs []TxOutput `json:"outputs"`
}
type TxInput struct {
Address []byte `json:"address"` // Hash of the PubKey
Coins Coins `json:"coins"` //
Sequence int `json:"sequence"` // Must be 1 greater than the last committed TxInput
Signature crypto.Signature `json:"signature"` // Depends on the PubKey type and the whole Tx
PubKey crypto.PubKey `json:"pub_key"` // Is present iff Sequence == 0
}
type TxOutput struct {
Address []byte `json:"address"` // Hash of the PubKey
Coins Coins `json:"coins"` //
}
Note the ``SendTx`` includes fields for ``Gas`` and ``Fee``. The
``Gas`` limits the total amount of computation that can be done by the
transaction, while the ``Fee`` refers to the total amount paid in fees.
This is slightly different from Ethereum's concept of ``Gas`` and
``GasPrice``, where ``Fee = Gas x GasPrice``. In Basecoin, the ``Gas``
and ``Fee`` are independent, and the ``GasPrice`` is implicit.
In Basecoin, the ``Fee`` is meant to be used by the validators to inform
the ordering of transactions, like in Bitcoin. And the ``Gas`` is meant
to be used by the application plugin to control its execution. There is
currently no means to pass ``Fee`` information to the Tendermint
validators, but it will come soon...
Note also that the ``PubKey`` only needs to be sent for
``Sequence == 0``. After that, it is stored under the account in the
Merkle tree and subsequent transactions can exclude it, using only the
``Address`` to refer to the sender. Ethereum does not require public
keys to be sent in transactions as it uses a different elliptic curve
scheme which enables the public key to be derived from the signature
itself.
Finally, note that the use of multiple inputs and multiple outputs
allows us to send many different types of tokens between many different
accounts at once in an atomic transaction. Thus, the ``SendTx`` can
serve as a basic unit of decentralized exchange. When using multiple
inputs and outputs, you must make sure that the sum of coins of the
inputs equals the sum of coins of the outputs (no creating money), and
that all accounts that provide inputs have signed the transaction.
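To make that conservation rule concrete, here is a small sketch that checks whether a set of inputs and outputs balances per denomination. It uses simplified local copies of the ``Coin``/``Coins`` types from above and is not Basecoin's actual validation code.
.. code:: golang
package main

import "fmt"

// Simplified stand-ins for the Coin/Coins types shown earlier.
type Coin struct {
	Denom  string
	Amount int64
}

type Coins []Coin

// sumByDenom totals coin amounts per denomination across several lists.
func sumByDenom(lists ...Coins) map[string]int64 {
	totals := map[string]int64{}
	for _, cs := range lists {
		for _, c := range cs {
			totals[c.Denom] += c.Amount
		}
	}
	return totals
}

// balanced reports whether inputs and outputs carry the same total amount
// of every denomination, i.e. no coins are created or destroyed.
func balanced(inputs, outputs []Coins) bool {
	in := sumByDenom(inputs...)
	out := sumByDenom(outputs...)
	if len(in) != len(out) {
		return false
	}
	for denom, amount := range in {
		if out[denom] != amount {
			return false
		}
	}
	return true
}

func main() {
	inputs := []Coins{{{"mycoin", 1000}}, {{"gold", 5}}}
	outputs := []Coins{{{"mycoin", 600}}, {{"mycoin", 400}, {"gold", 5}}}
	fmt.Println(balanced(inputs, outputs)) // true
}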
Clean Up
--------
**WARNING:** Running these commands will wipe out any existing
information in both the ``~/.basecli`` and ``~/.basecoin`` directories,
including private keys.
To remove all the files created and refresh your environment (e.g., if
starting this tutorial again or trying something new), run the following
commands:
.. code:: shelldown[end-of-tutorials]
basecli reset_all
rm -rf ~/.basecoin
Conclusion
----------
In this guide, we introduced the ``basecoin`` and ``basecli`` tools,
demonstrated how to start a new basecoin blockchain and how to send
tokens between accounts, and discussed the underlying data types for
accounts and transactions, specifically the ``Account`` and the
``SendTx``. In the `next guide <basecoin-plugins.md>`__, we introduce
the Basecoin plugin system, which uses a new transaction type, the
``AppTx``, to extend the functionality of the Basecoin system with
arbitrary logic.

docs/basecoin-plugins.rst Normal file

@ -0,0 +1,276 @@
.. raw:: html
<!--- shelldown script template, see github.com/rigelrozanski/shelldown
#!/bin/bash
testTutorial_BasecoinPlugins() {
#Initialization
#shelldown[0][1]
#shelldown[0][2]
KEYPASS=qwertyuiop
#Making Keys
RES=$((echo $KEYPASS; echo $KEYPASS) | #shelldown[0][4])
assertTrue "Line $LINENO: Expected to contain safe, got $RES" '[[ $RES == *safe* ]]'
RES=$((echo $KEYPASS; echo $KEYPASS) | #shelldown[0][5])
assertTrue "Line $LINENO: Expected to contain safe, got $RES" '[[ $RES == *safe* ]]'
#shelldown[0][7] >/dev/null
assertTrue "Expected true for line $LINENO" $?
#shelldown[0][9] >>/dev/null 2>&1 &
sleep 5
PID_SERVER=$!
disown
RES=$((echo y) | #shelldown[1][0] $1)
assertTrue "Line $LINENO: Expected to contain validator, got $RES" '[[ $RES == *validator* ]]'
#shelldown[1][2]
assertTrue "Expected true for line $LINENO" $?
RES=$((echo $KEYPASS) | #shelldown[1][3] | jq '.deliver_tx.code')
assertTrue "Line $LINENO: Expected 0 code deliver_tx, got $RES" '[[ $RES == 0 ]]'
RES=$((echo $KEYPASS) | #shelldown[2][0])
assertTrue "Line $LINENO: Expected to contain Valid error, got $RES" \
'[[ $RES == *"Counter Tx marked invalid"* ]]'
RES=$((echo $KEYPASS) | #shelldown[2][1] | jq '.deliver_tx.code')
assertTrue "Line $LINENO: Expected 0 code deliver_tx, got $RES" '[[ $RES == 0 ]]'
RES=$(#shelldown[3][-1] | jq '.data.counter')
assertTrue "Line $LINENO: Expected Counter of 1, got $RES" '[[ $RES == 1 ]]'
RES=$((echo $KEYPASS) | #shelldown[4][0] | jq '.deliver_tx.code')
assertTrue "Line $LINENO: Expected 0 code deliver_tx, got $RES" '[[ $RES == 0 ]]'
RES=$(#shelldown[4][1])
RESCOUNT=$(printf "$RES" | jq '.data.counter')
RESFEE=$(printf "$RES" | jq '.data.total_fees[0].amount')
assertTrue "Line $LINENO: Expected Counter of 2, got $RES" '[[ $RESCOUNT == 2 ]]'
assertTrue "Line $LINENO: Expected TotalFees of 2, got $RES" '[[ $RESFEE == 2 ]]'
}
oneTimeTearDown() {
kill -9 $PID_SERVER >/dev/null 2>&1
sleep 1
}
# load and run these tests with shunit2!
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" #get this files directory
. $DIR/shunit2
-->
Basecoin Plugins
================
In the `previous guide <basecoin-basics.md>`__, we saw how to use the
``basecoin`` tool to start a blockchain and the ``basecli`` tools to
send transactions. We also learned about ``Account`` and ``SendTx``, the
basic data types giving us a multi-asset cryptocurrency. Here, we will
demonstrate how to extend the tools to use another transaction type, the
``AppTx``, so we can send data to a custom plugin. In this example we
explore a simple plugin named ``counter``.
Example Plugin
--------------
The design of the ``basecoin`` tool makes it easy to extend for custom
functionality. The Counter plugin is bundled with basecoin, so if you
have already `installed basecoin <install.md>`__ and run
``make install`` then you should be able to run a full node with
``counter`` and its light-client ``countercli`` from the terminal. The
Counter plugin is just like the ``basecoin`` tool. They both use the
same library of commands, including one for signing and broadcasting
``SendTx``.
Counter transactions take two custom inputs: a boolean argument named
``valid`` and a coin amount named ``countfee``. The transaction is only
accepted if both ``valid`` is set to true and the coins provided in the
transaction input are greater than the ``countfee`` the user specifies.
A new blockchain can be initialized and started just like in the
`previous guide <basecoin-basics.md>`__:
.. code:: shelldown[0]
# WARNING: this wipes out data - but counter is only for demos...
rm -rf ~/.counter
countercli reset_all
countercli keys new cool
countercli keys new friend
counter init $(countercli keys get cool | awk '{print $2}')
counter start
The default files are stored in ``~/.counter``. In another window we can
initialize the light-client and send a transaction:
.. code:: shelldown[1]
countercli init --node=tcp://localhost:46657 --genesis=$HOME/.counter/genesis.json
YOU=$(countercli keys get friend | awk '{print $2}')
countercli tx send --name=cool --amount=1000mycoin --to=$YOU --sequence=1
But the Counter has an additional command, ``countercli tx counter``,
which crafts an ``AppTx`` specifically for this plugin:
.. code:: shelldown[2]
countercli tx counter --name cool
countercli tx counter --name cool --valid
The first transaction is rejected by the plugin because it was not
marked as valid, while the second transaction passes. We can build
plugins that take many arguments of different types, and easily extend
the tool to accommodate them. Of course, we can also expose queries on
our plugin:
.. code:: shelldown[3]
countercli query counter
Tada! We can now see that our custom counter plugin transactions went
through. You should see a Counter value of 1 representing the number of
valid transactions. If we send another transaction, and then query
again, we will see the value increment. Note that we need the sequence
number here because we are also sending coins (it didn't increment when
we just pinged the counter):
.. code:: shelldown[4]
countercli tx counter --name cool --countfee=2mycoin --sequence=2 --valid
countercli query counter
The Counter value should be 2, because we sent a second valid
transaction. And this time, since we sent a countfee (which must be less
than or equal to the total amount sent with the tx), it stores the
``TotalFees`` on the counter as well.
Keep in mind that, just like with ``basecli``, the ``countercli``
verifies a proof that the query response is correct and up-to-date.
Now, before we implement our own plugin and tooling, it helps to
understand the ``AppTx`` and the design of the plugin system.
AppTx
-----
The ``AppTx`` is similar to the ``SendTx``, but instead of sending coins
from inputs to outputs, it sends coins from one input to a plugin, and
can also send some data.
.. code:: golang
type AppTx struct {
Gas int64 `json:"gas"`
Fee Coin `json:"fee"`
Input TxInput `json:"input"`
Name string `json:"type"` // Name of the plugin
Data []byte `json:"data"` // Data for the plugin to process
}
The ``AppTx`` enables Basecoin to be extended with arbitrary additional
functionality through the use of plugins. The ``Name`` field in the
``AppTx`` refers to the particular plugin which should process the
transaction, and the ``Data`` field of the ``AppTx`` is the data to be
forwarded to the plugin for processing.
Note the ``AppTx`` also has a ``Gas`` and ``Fee``, with the same meaning
as for the ``SendTx``. It also includes a single ``TxInput``, which
specifies the sender of the transaction, and some coins that can be
forwarded to the plugin as well.
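For intuition, the sketch below constructs an ``AppTx`` destined for the ``counter`` plugin. The ``CounterTx`` payload shape and the use of JSON encoding are assumptions made to keep the example self-contained; the real tools serialize with go-wire and sign the ``TxInput``.
.. code:: golang
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the types shown earlier (signatures omitted).
type Coin struct {
	Denom  string `json:"denom"`
	Amount int64  `json:"amount"`
}

type Coins []Coin

type TxInput struct {
	Address  []byte `json:"address"`
	Coins    Coins  `json:"coins"`
	Sequence int    `json:"sequence"`
}

type AppTx struct {
	Gas   int64   `json:"gas"`
	Fee   Coin    `json:"fee"`
	Input TxInput `json:"input"`
	Name  string  `json:"type"` // name of the plugin to call
	Data  []byte  `json:"data"` // opaque payload for that plugin
}

// CounterTx is a hypothetical payload for the counter plugin; the real
// tool serializes with go-wire, JSON is used here only for illustration.
type CounterTx struct {
	Valid bool `json:"valid"`
}

func main() {
	payload, _ := json.Marshal(CounterTx{Valid: true})
	tx := AppTx{
		Fee:   Coin{"mycoin", 1},
		Input: TxInput{Address: []byte("sender-address"), Coins: Coins{{"mycoin", 3}}, Sequence: 2},
		Name:  "counter", // routes Data to the counter plugin
		Data:  payload,
	}
	out, _ := json.MarshalIndent(tx, "", "  ")
	fmt.Println(string(out))
}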
Plugins
-------
A plugin is simply a Go package that implements the ``Plugin``
interface:
.. code:: golang
type Plugin interface {
// Name of this plugin, should be short.
Name() string
// Run a transaction from ABCI DeliverTx
RunTx(store KVStore, ctx CallContext, txBytes []byte) (res abci.Result)
// Other ABCI message handlers
SetOption(store KVStore, key string, value string) (log string)
InitChain(store KVStore, vals []*abci.Validator)
BeginBlock(store KVStore, hash []byte, header *abci.Header)
EndBlock(store KVStore, height uint64) (res abci.ResponseEndBlock)
}
type CallContext struct {
CallerAddress []byte // Caller's Address (hash of PubKey)
CallerAccount *Account // Caller's Account, w/ fee & TxInputs deducted
Coins Coins // The coins that the caller wishes to spend, excluding fees
}
The workhorse of the plugin is ``RunTx``, which is called when an
``AppTx`` is processed. The ``Data`` from the ``AppTx`` is passed in as
the ``txBytes``, while the ``Input`` from the ``AppTx`` is used to
populate the ``CallContext``.
Note that ``RunTx`` also takes a ``KVStore`` - this is an abstraction
for the underlying Merkle tree which stores the account data. By passing
this to the plugin, we enable plugins to update accounts in the Basecoin
state directly, and also to store arbitrary other information in the
state. In this way, the functionality and state of a Basecoin-derived
cryptocurrency can be greatly extended. One could imagine going so far
as to implement the Ethereum Virtual Machine as a plugin!
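As a rough sketch of what such a plugin can look like, here is a counter-like ``RunTx`` with simplified stand-ins for ``KVStore`` and ``CallContext``, using JSON instead of go-wire for decoding. It illustrates the shape of a plugin, not Basecoin's actual counter implementation.
.. code:: golang
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// Minimal stand-ins for the interfaces and types described above.
type KVStore interface {
	Get(key []byte) []byte
	Set(key, value []byte)
}

type CallContext struct {
	CallerAddress []byte
}

// CounterTx is a hypothetical payload carried in AppTx.Data.
type CounterTx struct {
	Valid bool `json:"valid"`
}

type CounterPlugin struct{}

func (p CounterPlugin) Name() string { return "counter" }

// RunTx decodes the AppTx.Data bytes, applies the plugin's rule, and
// updates the plugin's own slice of state under a plugin-specific key.
func (p CounterPlugin) RunTx(store KVStore, ctx CallContext, txBytes []byte) error {
	var tx CounterTx
	if err := json.Unmarshal(txBytes, &tx); err != nil {
		return err
	}
	if !tx.Valid {
		return errors.New("Counter Tx marked invalid")
	}
	key := []byte("counter/state")
	count := int64(0)
	if raw := store.Get(key); raw != nil {
		json.Unmarshal(raw, &count)
	}
	count++
	raw, _ := json.Marshal(count)
	store.Set(key, raw)
	return nil
}

// memStore is an in-memory KVStore for demonstration only.
type memStore map[string][]byte

func (m memStore) Get(key []byte) []byte { return m[string(key)] }
func (m memStore) Set(key, value []byte) { m[string(key)] = value }

func main() {
	store := memStore{}
	data, _ := json.Marshal(CounterTx{Valid: true})
	err := CounterPlugin{}.RunTx(store, CallContext{}, data)
	fmt.Println(err, string(store.Get([]byte("counter/state")))) // <nil> 1
}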
For details on how to initialize the state using ``SetOption``, see the
`guide to using the basecoin tool <basecoin-tool.md#genesis>`__.
Implement your own
------------------
To implement your own plugin and tooling, make a copy of
``docs/guide/counter``, and modify the code accordingly. Here, we will
briefly describe the design and the changes to be made, but see the code
for more details.
First is the ``cmd/counter/main.go``, which drives the program. It can
be left alone, but you should change any occurrences of ``counter`` to
whatever your plugin tool is going to be called. You must also register
your plugin(s) with the basecoin app with ``RegisterStartPlugin``.
The light-client is located in ``cmd/countercli/main.go`` and allows for
transaction and query commands. This file can also be left mostly alone
besides replacing the application name and adding references to new
plugin commands.
Next are the custom commands in ``cmd/countercli/commands/``. These files
are where we extend the tool with any new commands and flags we need to
send transactions or queries to our plugin. You define custom ``tx`` and
``query`` subcommands, which are registered in ``main.go`` (avoiding
``init()`` auto-registration, for less magic and more control in the
main executable).
Finally is ``plugins/counter/counter.go``, where we provide an
implementation of the ``Plugin`` interface. The most important part of
the implementation is the ``RunTx`` method, which determines the meaning
of the data sent along in the ``AppTx``. In our example, we define a new
transaction type, the ``CounterTx``, which we expect to be encoded in
the ``AppTx.Data``, and thus to be decoded in the ``RunTx`` method, and
used to update the plugin state.
For more examples and inspiration, see our `repository of example
plugins <https://github.com/tendermint/basecoin-examples>`__.
Conclusion
----------
In this guide, we demonstrated how to create a new plugin and how to
extend the ``basecoin`` tool to start a blockchain with the plugin
enabled and send transactions to it. In the next guide, we introduce a
`plugin for Inter Blockchain Communication <ibc.md>`__, which allows us
to publish proofs of the state of one blockchain to another, and thus to
transfer tokens and data between them.

docs/basecoin-tool.rst Normal file

@ -0,0 +1,260 @@
.. raw:: html
<!--- shelldown script template, see github.com/rigelrozanski/shelldown
#!/bin/bash
testTutorial_BasecoinTool() {
rm -rf ~/.basecoin
rm -rf ~/.basecli
rm -rf example-data
KEYPASS=qwertyuiop
(echo $KEYPASS; echo $KEYPASS) | #shelldown[0][0] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#shelldown[0][1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#shelldown[1][0] ; assertTrue "Expected true for line $LINENO" $?
#shelldown[1][1] ; assertTrue "Expected true for line $LINENO" $?
#shelldown[1][2] >>/dev/null 2>&1 &
sleep 5 ; PID_SERVER=$! ; disown ; assertTrue "Expected true for line $LINENO" $?
kill -9 $PID_SERVER >/dev/null 2>&1 ; sleep 1
#shelldown[2][0] ; assertTrue "Expected true for line $LINENO" $?
#shelldown[2][1] >>/dev/null 2>&1 &
sleep 5 ; PID_SERVER=$! ; disown ; assertTrue "Expected true for line $LINENO" $?
kill -9 $PID_SERVER >/dev/null 2>&1 ; sleep 1
#shelldown[3][-1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#shelldown[4][-1] >>/dev/null 2>&1 &
sleep 5 ; PID_SERVER=$! ; disown ; assertTrue "Expected true for line $LINENO" $?
#shelldown[5][-1] >>/dev/null 2>&1 &
sleep 5 ; PID_SERVER2=$! ; disown ; assertTrue "Expected true for line $LINENO" $?
kill -9 $PID_SERVER $PID_SERVER2 >/dev/null 2>&1 ; sleep 1
#shelldown[4][-1] >>/dev/null 2>&1 &
sleep 5 ; PID_SERVER=$! ; disown ; assertTrue "Expected true for line $LINENO" $?
#shelldown[6][0] ; assertTrue "Expected true for line $LINENO" $?
#shelldown[6][1] >>/dev/null 2>&1 &
sleep 5 ; PID_SERVER2=$! ; disown ; assertTrue "Expected true for line $LINENO" $?
kill -9 $PID_SERVER $PID_SERVER2 >/dev/null 2>&1 ; sleep 1
#shelldown[7][-1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#shelldown[8][-1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
(echo $KEYPASS; echo $KEYPASS) | #shelldown[9][-1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#shelldown[10][-1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#shelldown[11][-1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#cleanup
rm -rf example-data
}
# load and run these tests with shunit2!
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" #get this files directory
. $DIR/shunit2
-->
The Basecoin Tool
=================
In previous tutorials we learned the `basics of the Basecoin
CLI </docs/guide/basecoin-basics.md>`__ and `how to implement a
plugin </docs/guide/basecoin-plugins.md>`__. In this tutorial, we
provide more details on using the Basecoin tool.
Generate a Key
==============
Generate a key using the ``basecli`` tool:
.. code:: shelldown[0]
basecli keys new mykey
ME=$(basecli keys get mykey | awk '{print $2}')
Data Directory
==============
By default, ``basecoin`` works out of ``~/.basecoin``. To change this,
set the ``BCHOME`` environment variable:
.. code:: shelldown[1]
export BCHOME=~/.my_basecoin_data
basecoin init $ME
basecoin start
or
.. code:: shelldown[2]
BCHOME=~/.my_basecoin_data basecoin init $ME
BCHOME=~/.my_basecoin_data basecoin start
ABCI Server
===========
So far we have run Basecoin and Tendermint in a single process. However,
since we use ABCI, we can actually run them in different processes.
First, initialize them:
.. code:: shelldown[3]
basecoin init $ME
This will create a single ``genesis.json`` file in ``~/.basecoin`` with
the information for both Basecoin and Tendermint.
Now, in one window, run
.. code:: shelldown[4]
basecoin start --without-tendermint
and in another,
.. code:: shelldown[5]
TMROOT=~/.basecoin tendermint node
You should see Tendermint start making blocks!
Alternatively, you could ignore the Tendermint details in
``~/.basecoin/genesis.json`` and use a separate directory by running:
.. code:: shelldown[6]
tendermint init
tendermint node
For more details on using ``tendermint``, see `the
guide <https://tendermint.com/docs/guides/using-tendermint>`__.
Keys and Genesis
================
In previous tutorials we used ``basecoin init`` to initialize
``~/.basecoin`` with the default configuration. This command creates
files both for Tendermint and for Basecoin, and a single
``genesis.json`` file for both of them. For more information on these
files, see the `guide to using
Tendermint <https://tendermint.com/docs/guides/using-tendermint>`__.
Now let's make our own custom Basecoin data.
First, create a new directory:
.. code:: shelldown[7]
mkdir example-data
We can tell ``basecoin`` to use this directory by exporting the
``BCHOME`` environment variable:
.. code:: shelldown[8]
export BCHOME=$(pwd)/example-data
If you're going to be using multiple terminal windows, make sure to add
this variable to your shell startup scripts (e.g. ``~/.bashrc``).
Now, let's create a new key:
.. code:: shelldown[9]
basecli keys new foobar
The key's info can be retrieved with
.. code:: shelldown[10]
basecli keys get foobar -o=json
You should get output which looks similar to the following:
.. code:: json
{
"name": "foobar",
"address": "404C5003A703C7DA888C96A2E901FCE65A6869D9",
"pubkey": {
"type": "ed25519",
"data": "8786B7812AB3B27892D8E14505EEFDBB609699E936F6A4871B1983F210736EEA"
}
}
Yours will look different - each key is randomly derived. Now we can
make a ``genesis.json`` file and add an account with our public key:
.. code:: json
{
"app_hash": "",
"chain_id": "example-chain",
"genesis_time": "0001-01-01T00:00:00.000Z",
"validators": [
{
"amount": 10,
"name": "",
"pub_key": {
"type": "ed25519",
"data": "7B90EA87E7DC0C7145C8C48C08992BE271C7234134343E8A8E8008E617DE7B30"
}
}
],
"app_options": {
"accounts": [
{
"pub_key": {
"type": "ed25519",
"data": "8786B7812AB3B27892D8E14505EEFDBB609699E936F6A4871B1983F210736EEA"
},
"coins": [
{
"denom": "gold",
"amount": 1000000000
}
]
}
]
}
}
Here we've granted ourselves ``1000000000`` units of the ``gold`` token.
Note that we've also set the ``chain_id`` to ``example-chain``. All
transactions must therefore include the ``--chain-id example-chain``
flag to ensure they are valid for this chain. Previously, we didn't need
this flag because we were using the default chain ID
(``test_chain_id``). Now that we're using a custom chain, we need to
specify the chain explicitly on the command line.
Note we have also left out the details of the Tendermint genesis. These
are documented in the `Tendermint
guide <https://tendermint.com/docs/guides/using-tendermint>`__.
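If you want to inspect a genesis file programmatically, the sketch below parses the ``app_options.accounts`` section shown above. The struct fields mirror only the snippet in this guide and are not an official schema.
.. code:: golang
package main

import (
	"encoding/json"
	"fmt"
)

// Structs mirroring only the genesis.json fields used above.
type Genesis struct {
	ChainID    string     `json:"chain_id"`
	AppOptions AppOptions `json:"app_options"`
}

type AppOptions struct {
	Accounts []GenesisAccount `json:"accounts"`
}

type GenesisAccount struct {
	PubKey struct {
		Type string `json:"type"`
		Data string `json:"data"`
	} `json:"pub_key"`
	Coins []struct {
		Denom  string `json:"denom"`
		Amount int64  `json:"amount"`
	} `json:"coins"`
}

func main() {
	raw := []byte(`{
	  "chain_id": "example-chain",
	  "app_options": {
	    "accounts": [{
	      "pub_key": {"type": "ed25519", "data": "8786B781..."},
	      "coins": [{"denom": "gold", "amount": 1000000000}]
	    }]
	  }
	}`)

	var gen Genesis
	if err := json.Unmarshal(raw, &gen); err != nil {
		panic(err)
	}
	for _, acct := range gen.AppOptions.Accounts {
		fmt.Printf("chain %s: account %s... holds %d %s\n",
			gen.ChainID, acct.PubKey.Data[:8],
			acct.Coins[0].Amount, acct.Coins[0].Denom)
	}
}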
Reset
=====
You can reset all blockchain data by running:
.. code:: shelldown[11]
basecoin unsafe_reset_all
Similarly, you can reset client data by running:
.. code:: shelldown[12]
basecli reset_all
Genesis
=======
Any required plugin initialization should be constructed using
``SetOption`` on genesis. When starting a new chain for the first time,
``SetOption`` will be called for each item in the genesis file. Within
the genesis.json file, entries are made in the format
``"<plugin>/<key>", "<value>"``, where ``<plugin>`` is the plugin name,
and ``<key>`` and ``<value>`` are the strings passed into the plugin's
``SetOption`` function. This function is intended to set plugin-specific
information, such as the plugin state.
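A rough sketch of how such an option key might be split before being handed to a plugin's ``SetOption`` is shown below; the helper and the ``counter/init-state`` key are purely hypothetical.
.. code:: golang
package main

import (
	"fmt"
	"strings"
)

// splitOption splits a genesis option key of the form "<plugin>/<key>"
// into the plugin name and the plugin-specific key.
func splitOption(fullKey string) (plugin, key string, ok bool) {
	i := strings.Index(fullKey, "/")
	if i < 0 {
		return "", fullKey, false
	}
	return fullKey[:i], fullKey[i+1:], true
}

func main() {
	// "counter/init-state" is a made-up example key.
	plugin, key, ok := splitOption("counter/init-state")
	fmt.Println(plugin, key, ok) // counter init-state true
}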


@ -1,296 +0,0 @@
# Glossary
This glossary defines many terms used throughout the documentation of Quark. If
there is ever a concept that seems unclear, check here. This is mainly to
provide a background and general understanding of the different words and
concepts that are used. Other documents will explain in more detail how to
combine these concepts to build a particular application.
## Transaction
A transaction is a packet of binary data that contains all information to
validate and perform an action on the blockchain. The only other data that it
interacts with is the current state of the chain (key-value store), and
it must have a deterministic action. The transaction is the main piece of one
request.
We currently make heavy use of [go-wire](https://github.com/tendermint/go-wire)
and [data](https://github.com/tendermint/go-wire/tree/master/data) to provide
binary and json encodings and decodings for `struct` or `interface` objects.
Here, encoding and decoding operations are designed to operate with interfaces
nested any number of times (like an onion!). There is one public `TxMapper`
in the basecoin root package, and all modules can register their own transaction
types there. This allows us to deserialize the entire transaction in one location
(even with types defined in other repos), to easily embed an arbitrary transaction
inside another without specifying the type, and provide an automatic json
representation allowing for users (or apps) to inspect the chain.
Note how we can wrap any other transaction, add a fee level, and not worry
about the encoding in our module any more?
```golang
type Fee struct {
Fee coin.Coin `json:"fee"`
Payer basecoin.Actor `json:"payer"` // the address who pays the fee
Tx basecoin.Tx `json:"tx"`
}
```
## Context (ctx)
As a request passes through the system, it may pick up information such as the
authorization it has received from another middleware, or the block height the
request runs at. In order to carry this information between modules it is
saved to the context. Further, all information must be deterministic from
the context in which the request runs (based on the transaction and the block
it was included in) and can be used to validate the transaction.
## Data Store
In order to provide proofs to Tendermint, we keep all data in one key-value
(kv) store which is indexed with a merkle tree. This allows for the easy
generation of a root hash and proofs for queries without requiring complex
logic inside each module. Standardization of this process also allows powerful
light-client tooling as any store data may be verified on the fly.
The largest limitation of the current implementation of the kv-store is that
the interface that the application must use can only `Get` and `Set` single data
points. That said, there are some data structures like queues and range
queries that are available in the `state` package. These provide higher-level
functionality in a standard format, but have not yet been integrated into the
kv-store interface.
## Isolation
One of the main arguments for blockchain is security. So while we encourage
the use of third-party modules, all developers must be vigilant against
security holes. If you use the
[stack](https://github.com/cosmos/cosmos-sdk/tree/master/stack)
package, it will provide two different types of compartmentalization security.
The first is to limit the working kv-store space of each module. When
`DeliverTx` is called for a module, it is never given the entire data store,
but rather only its own prefixed subset of the store. This is achieved by
prefixing all keys transparently with `<module name> + 0x0`, using the null
byte as a separator. Since the module name must be a string, no malicious
naming scheme can ever lead to a collision. Inside a module, we can
write using any key value we desire without the possibility that we
have modified data belonging to a separate module.
The second is to add permissions to the transaction context. The transaction
context can specify that the tx has been signed by one or multiple specific
[actors](https://github.com/tendermint/basecoin/blob/unstable/context.go#L18).
A transaction will only be executed if the permission requirements have been
fulfilled. For example, the sender of funds must have signed, or 2 out of 3
multi-signature actors must have signed a joint account. To prevent the
forgery of account signatures from unintended modules, each permission
is associated with the module that granted it (in this case
[auth](https://github.com/cosmos/cosmos-sdk/tree/master/modules/auth)),
and if a module tries to add a permission for another module, it will
panic. There is also protection if a module creates a brand new fake
context to trick the downstream modules. Each context enforces
the rules on how to make child contexts, and the stack middleware builder
enforces that the context passed from one level to the next is a valid
child of the original one.
These security measures ensure that modules can confidently write to their
local section of the database and trust the permissions associated with the
context, without concern of interference from other modules. (Okay,
if you see a bunch of C-code in the module traversing through all the
memory space of the application, then get worried....)
## Handler
The ABCI interface is handled by `app`, which translates these data structures
into an internal format that is more convenient, but unable to travel over the
wire. The basic interface for any code that modifies state is the `Handler`
interface, which provides four methods:
```golang
Name() string
CheckTx(ctx Context, store state.KVStore, tx Tx) (Result, error)
DeliverTx(ctx Context, store state.KVStore, tx Tx) (Result, error)
SetOption(l log.Logger, store state.KVStore, module, key, value string) (string, error)
```
Note the `Context`, `KVStore`, and `Tx` as principal carriers of information.
Note also that `Result` always indicates success, and we have a second error
return for errors (which is much more standard Go than `res.IsErr()`).
The `Handler` interface is designed to be the basis for all modules that
execute transactions, and this can provide a large degree of code
interoperability, much like `http.Handler` does in golang web development.
## Middleware
Middleware is a series of processing steps that any request must travel through
before (and after) executing the registered `Handler`. Some examples are a
logger (that records the time before executing the transaction, then outputs
info - including duration - after the execution), or a signature checker (which
unwraps the transaction by one layer, verifies signatures, and adds the
permissions to the Context before passing the request along).
In keeping with the standardization of `http.Handler` and inspired by the
super minimal [negroni](https://github.com/urfave/negroni/blob/master/README.md)
package, we just provide one more `Middleware` interface, which has an extra
`next` parameter, and a `Stack` that can wire all the levels together (which
also gives us a place to perform isolation of each step).
```golang
Name() string
CheckTx(ctx Context, store state.KVStore, tx Tx, next Checker) (Result, error)
DeliverTx(ctx Context, store state.KVStore, tx Tx, next Deliver) (Result, error)
SetOption(l log.Logger, store state.KVStore, module, key, value string, next Optioner) (string, error)
```
## Modules
A module is a set of functionality which should be typically designed as
self-sufficient. Common elements of a module are:
* transaction types (either end transactions, or transaction wrappers)
* custom error codes
* data models (to persist in the kv-store)
* handler (to handle any end transactions)
* middleware (to handle any wrapper transactions)
To enable a module, you must add the appropriate middleware (if any) to the
stack in `main.go` for the client application (default:
`basecli/main.go`), as well as adding the handler (if any) to the dispatcher
(default: `app/app.go`). Once the stack is compiled into a `Handler`,
then each transaction is handled by the appropriate module.
## Dispatcher
We usually will want to have multiple modules working together, and need to
make sure the correct transactions get to the correct module. So we have
`coin` sending money, `roles` to create multi-sig accounts, and `ibc` for
following other chains all working together without interference.
After the chain of middleware, we can register a `Dispatcher`, which also
implements the `Handler` interface. We then register a list of modules with
the dispatcher. Every module has a unique `Name()`, which is used for
isolating its state space. We use this same name for routing transactions.
Each transaction implementation must be registered with go-wire via `TxMapper`,
so we just look at the registered name of this transaction, which should be
of the form `<module name>/xxx`. The dispatcher grabs the appropriate module
name from the tx name and routes it if the module is present.
This all seems like a bit of magic, but really we're just making use of go-wire
magic that we are already using, rather than add another layer. For all the
transactions to be properly routed, the only thing you need to remember is to
use the following pattern:
```golang
const (
NameCoin = "coin"
TypeSend = NameCoin + "/send"
)
```
## Inter-Plugin Communication (IPC)
But wait, there's more... since we have isolated all the modules from each
other, we need to allow some way for them to interact in a controlled fashion.
One example is the `fee` middleware, which wants to deduct coins from the
calling account and can be accomplished most easily with the `coin` module.
To make a call from the middleware, we use the `next` Handler, which will execute
the rest of the stack. It can create a new SendTx and pass it down the
stack. If it returns success, do the rest of the processing (and send the
original transaction down the stack), otherwise abort.
However, if one `Handler` inside the `Dispatcher` wants to do this, it becomes
more complex. The solution is that the `Dispatcher` accepts not a `Handler`,
but a `Dispatchable`, which looks like a middleware, except that the `next`
argument is a callback to the dispatcher to execute a sub-transaction. If a
module doesn't want to use this functionality, it can just implement `Handler`
and call `stack.WrapHandler(h)` to convert it to a `Dispatchable` that never
uses the callback.
One example of this is the counter app, which can optionally accept a payment.
If the transaction contains a payment, it must create a SendTx and pass this
to the dispatcher to deduct the amount from the proper account. Take a look at
[counter plugin](https://github.com/cosmos/cosmos-sdk/blob/master/docs/guide/counter/plugins/counter/counter.go) for a better idea.
## Permissions
IPC requires a more complex permissioning system to allow the modules to have
limited access to each other and also to allow more types of permissions than
simple public key signatures. Rather than just use an address to identify
who is performing an action, we can use a more complex structure:
```golang
type Actor struct {
ChainID string `json:"chain"` // this is empty unless it comes from a different chain
App string `json:"app"` // the app that the actor belongs to
Address data.Bytes `json:"addr"` // arbitrary app-specific unique id
}
```
Here, the `Actor` abstracts any address that can authorize actions, hold funds,
or initiate any sort of transaction. It doesn't just have to be a pubkey on
this chain; it could stem from another app (such as a multi-sig account), or even
another chain (via IBC).
`ChainID` is for IBC, discussed below. Let's focus on `App` and `Address`.
For a signature, the App is `auth`, and any modules can check to see if a
specific public key address signed like this `ctx.HasPermission(auth.SigPerm(addr))`.
However, we can also authorize a tx with `roles`, which handles multi-sig accounts,
it checks if there were enough signatures by checking as above, then it can add
the role permission like `ctx= ctx.WithPermissions(NewPerm(assume.Role))`
In addition to the permissions schema, the Actors are addresses just like public key
addresses. So one can create a multi-sig role, then send coins there, which can
only be moved upon meeting the authorization requirements from that module.
`coin` doesn't even know the existence of `roles` and one could build any other
sort of module to provide permissions (like bind the outcome of an election to
move coins or to modify the accounts on a role).
One idea - not yet implemented - is to provide scopes on the permissions.
Currently, if I sign a transaction to one module, it can pass it on to any other
module over IPC with the same permissions. It could move coins, vote in an election,
or anything else. Ideally, when signing, one could also specify the scope(s) that
this signature authorizes. The [oauth protocol](https://api.slack.com/docs/oauth-scopes)
also has to deal with a similar problem, and maybe could provide some inspiration.
## Replay Protection
In order to prevent [replay
attacks](https://en.wikipedia.org/wiki/Replay_attack) a multi account nonce system
has been constructed as a module, which can be found in
`modules/nonce`. By adding the nonce module to the stack, each
transaction is verified for authenticity against replay attacks. This is
achieved by requiring a newly signed copy of the sequence number, which must
be exactly 1 greater than the sequence number of the previous transaction. A
distinct sequence number is assigned per chain-id, application, and group of
signers. Each sequence number is tracked as a nonce-store entry where the key
is the marshaled list of actors after having been sorted by chain, app, and
address.
```golang
// Tx - Nonce transaction structure, contains list of signers and current sequence number
type Tx struct {
Sequence uint32 `json:"sequence"`
Signers []basecoin.Actor `json:"signers"`
Tx basecoin.Tx `json:"tx"`
}
```
By distinguishing sequence numbers across groups of Signers, multi-signature
Actors need not lock up use of their Address while waiting for all the members
of a multi-sig transaction to occur. Instead only the multi-sig account will
be locked, while other accounts belonging to that signer can be used and signed
with other sequence numbers.
By abstracting out the nonce module in the stack, an entire series of transactions
can occur without needing to verify the nonce for each member of the series. A
common example is a stack which sends coins and charges a fee. Within the SDK
this can be achieved using separate modules in a stack, one to send the coins
and the other to charge the fee; neither module needs to check the nonce
itself, as this can be done by a separate module earlier in the stack.
## IBC (Inter-Blockchain Communication)
Stay tuned!

docs/glossary.rst Normal file

@ -0,0 +1,334 @@
Glossary
========
This glossary defines many terms used throughout the documentation of
Quark. If there is ever a concept that seems unclear, check here. This is
mainly to provide a background and general understanding of the
different words and concepts that are used. Other documents will explain
in more detail how to combine these concepts to build a particular
application.
Transaction
-----------
A transaction is a packet of binary data that contains all information
to validate and perform an action on the blockchain. The only other data
that it interacts with is the current state of the chain (key-value
store), and it must have a deterministic action. The transaction is the
main piece of one request.
We currently make heavy use of
`go-wire <https://github.com/tendermint/go-wire>`__ and
`data <https://github.com/tendermint/go-wire/tree/master/data>`__ to
provide binary and json encodings and decodings for ``struct`` or
``interface`` objects. Here, encoding and decoding operations are
designed to operate with interfaces nested any number of times (like an
onion!). There is one public ``TxMapper`` in the basecoin root package,
and all modules can register their own transaction types there. This
allows us to deserialize the entire transaction in one location (even
with types defined in other repos), to easily embed an arbitrary
transaction inside another without specifying the type, and to provide
an automatic json representation allowing users (or apps) to inspect the
chain.
Note how we can wrap any other transaction, add a fee level, and not
worry about the encoding in our module any more?
.. code:: golang
type Fee struct {
Fee coin.Coin `json:"fee"`
Payer basecoin.Actor `json:"payer"` // the address who pays the fee
Tx basecoin.Tx `json:"tx"`
}
Context (ctx)
-------------
As a request passes through the system, it may pick up information such
as the authorization it has received from another middleware, or the
block height the request runs at. In order to carry this information
between modules it is saved to the context. Further, all information
must be deterministic from the context in which the request runs (based
on the transaction and the block it was included in) and can be used to
validate the transaction.
Data Store
----------
In order to provide proofs to Tendermint, we keep all data in one
key-value (kv) store which is indexed with a merkle tree. This allows
for the easy generation of a root hash and proofs for queries without
requiring complex logic inside each module. Standardization of this
process also allows powerful light-client tooling as any store data may
be verified on the fly.
The largest limitation of the current implementation of the kv-store is
that the interface that the application must use can only ``Get`` and
``Set`` single data points. That said, there are some data structures
like queues and range queries that are available in the ``state`` package.
These provide higher-level functionality in a standard format, but have
not yet been integrated into the kv-store interface.
Isolation
---------
One of the main arguments for blockchain is security. So while we
encourage the use of third-party modules, all developers must be
vigilant against security holes. If you use the
`stack <https://github.com/cosmos/cosmos-sdk/tree/master/stack>`__
package, it will provide two different types of compartmentalization
security.
The first is to limit the working kv-store space of each module. When
``DeliverTx`` is called for a module, it is never given the entire data
store, but rather only its own prefixed subset of the store. This is
achieved by prefixing all keys transparently with
``<module name> + 0x0``, using the null byte as a separator. Since the
module name must be a string, no malicious naming scheme can ever lead
to a collision. Inside a module, we can write using any key value we
desire without the possibility that we have modified data belonging to
a separate module.
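A minimal sketch of this first mechanism, assuming a simple ``Get``/``Set`` store interface (the real ``stack`` package types differ), might look like:
.. code:: golang
package main

import "fmt"

// KVStore is a simplified stand-in for the store interface.
type KVStore interface {
	Get(key []byte) []byte
	Set(key, value []byte)
}

// prefixStore gives a module a view of the parent store in which every
// key is transparently prefixed with "<module name>" plus a null byte.
type prefixStore struct {
	prefix []byte
	parent KVStore
}

func PrefixStore(moduleName string, parent KVStore) KVStore {
	return prefixStore{prefix: append([]byte(moduleName), 0x0), parent: parent}
}

func (s prefixStore) key(k []byte) []byte {
	return append(append([]byte{}, s.prefix...), k...)
}

func (s prefixStore) Get(key []byte) []byte { return s.parent.Get(s.key(key)) }
func (s prefixStore) Set(key, value []byte) { s.parent.Set(s.key(key), value) }

// memStore is an in-memory parent store for the demonstration.
type memStore map[string][]byte

func (m memStore) Get(key []byte) []byte { return m[string(key)] }
func (m memStore) Set(key, value []byte) { m[string(key)] = value }

func main() {
	parent := memStore{}
	coinStore := PrefixStore("coin", parent)
	coinStore.Set([]byte("balance"), []byte("100"))

	// The parent sees only the prefixed key, so another module's keys
	// can never collide with this one.
	for k, v := range parent {
		fmt.Printf("parent key %q holds %s\n", k, v)
	}
}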
The second is to add permissions to the transaction context. The
transaction context can specify that the tx has been signed by one or
multiple specific
`actors <https://github.com/tendermint/basecoin/blob/unstable/context.go#L18>`__.
A transaction will only be executed if the permission requirements have
been fulfilled. For example, the sender of funds must have signed, or 2
out of 3 multi-signature actors must have signed a joint account. To
prevent the forgery of account signatures from unintended modules, each
permission is associated with the module that granted it (in this case
`auth <https://github.com/cosmos/cosmos-sdk/tree/master/modules/auth>`__),
and if a module tries to add a permission for another module, it will
panic. There is also protection if a module creates a brand new fake
context to trick the downstream modules. Each context enforces the rules
on how to make child contexts, and the stack middleware builder enforces
that the context passed from one level to the next is a valid child of
the original one.
These security measures ensure that modules can confidently write to
their local section of the database and trust the permissions associated
with the context, without concern of interference from other modules.
(Okay, if you see a bunch of C-code in the module traversing through all
the memory space of the application, then get worried....)
Handler
-------
The ABCI interface is handled by ``app``, which translates these data
structures into an internal format that is more convenient, but unable
to travel over the wire. The basic interface for any code that modifies
state is the ``Handler`` interface, which provides four methods:
.. code:: golang
Name() string
CheckTx(ctx Context, store state.KVStore, tx Tx) (Result, error)
DeliverTx(ctx Context, store state.KVStore, tx Tx) (Result, error)
SetOption(l log.Logger, store state.KVStore, module, key, value string) (string, error)
Note the ``Context``, ``KVStore``, and ``Tx`` as principal carriers of
information. Note also that ``Result`` always indicates success, and we
have a second error return for errors (which is much more standard Go
than ``res.IsErr()``).
The ``Handler`` interface is designed to be the basis for all modules
that execute transactions, and this can provide a large degree of code
interoperability, much like ``http.Handler`` does in golang web
development.
Middleware
----------
Middleware is a series of processing steps that any request must travel
through before (and after) executing the registered ``Handler``. Some
examples are a logger (that records the time before executing the
transaction, then outputs info - including duration - after the
execution), or a signature checker (which unwraps the transaction by one
layer, verifies signatures, and adds the permissions to the Context
before passing the request along).
In keeping with the standardization of ``http.Handler`` and inspired by
the super minimal
`negroni <https://github.com/urfave/negroni/blob/master/README.md>`__
package, we just provide one more ``Middleware`` interface, which has an
extra ``next`` parameter, and a ``Stack`` that can wire all the levels
together (which also gives us a place to perform isolation of each
step).
.. code:: golang
Name() string
CheckTx(ctx Context, store state.KVStore, tx Tx, next Checker) (Result, error)
DeliverTx(ctx Context, store state.KVStore, tx Tx, next Deliver) (Result, error)
SetOption(l log.Logger, store state.KVStore, module, key, value string, next Optioner) (string, error)
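For intuition, here is a heavily simplified sketch of the logger middleware idea with local stand-in types rather than the real ``stack`` signatures; the point is only that the middleware does its work around a call to ``next``.
.. code:: golang
package main

import (
	"fmt"
	"time"
)

// Simplified stand-ins; the real interfaces also carry Context, KVStore, etc.
type Tx struct{ Name string }
type Result struct{ Log string }
type Handler func(tx Tx) (Result, error)

// Logger is middleware: it does its work around a call to next, timing
// each delivery and passing the transaction along unchanged.
func Logger(next Handler) Handler {
	return func(tx Tx) (Result, error) {
		start := time.Now()
		res, err := next(tx)
		fmt.Printf("delivered %s in %v (err=%v)\n", tx.Name, time.Since(start), err)
		return res, err
	}
}

func main() {
	// A trivial end Handler, wrapped by the logger middleware.
	handler := Logger(func(tx Tx) (Result, error) {
		return Result{Log: "ok"}, nil
	})
	res, _ := handler(Tx{Name: "coin/send"})
	fmt.Println(res.Log) // ok
}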
Modules
-------
A module is a set of functionality which should be typically designed as
self-sufficient. Common elements of a module are:
- transaction types (either end transactions, or transaction wrappers)
- custom error codes
- data models (to persist in the kv-store)
- handler (to handle any end transactions)
- middleware (to handle any wrapper transactions)
To enable a module, you must add the appropriate middleware (if any) to
the stack in ``main.go`` for the client application (default:
``basecli/main.go``), as well as adding the handler (if any) to the
dispatcher (default: ``app/app.go``). Once the stack is compiled into a
``Handler``, then each transaction is handled by the appropriate module.
Dispatcher
----------
We usually will want to have multiple modules working together, and need
to make sure the correct transactions get to the correct module. So we
have ``coin`` sending money, ``roles`` to create multi-sig accounts, and
``ibc`` for following other chains all working together without
interference.
After the chain of middleware, we can register a ``Dispatcher``, which
also implements the ``Handler`` interface. We then register a list of
modules with the dispatcher. Every module has a unique ``Name()``, which
is used for isolating its state space. We use this same name for routing
transactions. Each transaction implementation must be registered with
go-wire via ``TxMapper``, so we just look at the registered name of this
transaction, which should be of the form ``<module name>/xxx``. The
dispatcher grabs the appropriate module name from the tx name and routes
it if the module is present.
This all seems like a bit of magic, but really we're just making use of
go-wire magic that we are already using, rather than add another layer.
For all the transactions to be properly routed, the only thing you need
to remember is to use the following pattern:
.. code:: golang
const (
NameCoin = "coin"
TypeSend = NameCoin + "/send"
)
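As a rough sketch of that routing convention (with simplified stand-in types, not the real ``Dispatcher``), extracting the module name from the registered tx name and dispatching on it might look like:
.. code:: golang
package main

import (
	"errors"
	"fmt"
	"strings"
)

// Handler is a simplified stand-in for the Handler interface above.
type Handler interface {
	Name() string
	DeliverTx(txName string) error
}

// Dispatcher routes a transaction to the module whose Name() matches the
// prefix of the registered tx name, "<module name>/xxx".
type Dispatcher struct {
	modules map[string]Handler
}

func NewDispatcher(handlers ...Handler) *Dispatcher {
	d := &Dispatcher{modules: map[string]Handler{}}
	for _, h := range handlers {
		d.modules[h.Name()] = h
	}
	return d
}

func (d *Dispatcher) DeliverTx(txName string) error {
	module := strings.SplitN(txName, "/", 2)[0]
	h, ok := d.modules[module]
	if !ok {
		return errors.New("no module registered for " + txName)
	}
	return h.DeliverTx(txName)
}

// coinModule is a trivial module for the demonstration.
type coinModule struct{}

func (coinModule) Name() string { return "coin" }
func (coinModule) DeliverTx(txName string) error {
	fmt.Println("coin module handling", txName)
	return nil
}

func main() {
	d := NewDispatcher(coinModule{})
	fmt.Println(d.DeliverTx("coin/send"))    // handled by the coin module, <nil>
	fmt.Println(d.DeliverTx("roles/create")) // error: no module registered
}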
Inter-Plugin Communication (IPC)
--------------------------------
But wait, there's more... since we have isolated all the modules from
each other, we need to allow some way for them to interact in a
controlled fashion. One example is the ``fee`` middleware, which wants
to deduct coins from the calling account and can be accomplished most
easily with the ``coin`` module.
To make a call from the middleware, we use the ``next`` Handler, which will
execute the rest of the stack. It can create a new SendTx and pass it
down the stack. If it returns success, do the rest of the processing
(and send the original transaction down the stack), otherwise abort.
However, if one ``Handler`` inside the ``Dispatcher`` wants to do this,
it becomes more complex. The solution is that the ``Dispatcher`` accepts
not a ``Handler``, but a ``Dispatchable``, which looks like a
middleware, except that the ``next`` argument is a callback to the
dispatcher to execute a sub-transaction. If a module doesn't want to use
this functionality, it can just implement ``Handler`` and call
``stack.WrapHandler(h)`` to convert it to a ``Dispatchable`` that never
uses the callback.
One example of this is the counter app, which can optionally accept a
payment. If the transaction contains a payment, it must create a SendTx
and pass this to the dispatcher to deduct the amount from the proper
account. Take a look at `counter
plugin <https://github.com/cosmos/cosmos-sdk/blob/master/docs/guide/counter/plugins/counter/counter.go>`__ for
a better idea.
Permissions
-----------
IPC requires a more complex permissioning system to allow the modules to
have limited access to each other and also to allow more types of
permissions than simple public key signatures. Rather than just use an
address to identify who is performing an action, we can use a more
complex structure:
.. code:: golang
type Actor struct {
ChainID string `json:"chain"` // this is empty unless it comes from a different chain
App string `json:"app"` // the app that the actor belongs to
Address data.Bytes `json:"addr"` // arbitrary app-specific unique id
}
Here, the ``Actor`` abstracts any address that can authorize actions,
hold funds, or initiate any sort of transaction. It doesn't just have to
be a pubkey on this chain; it could stem from another app (such as a
multi-sig account), or even another chain (via IBC).
``ChainID`` is for IBC, discussed below. Let's focus on ``App`` and
``Address``. For a signature, the App is ``auth``, and any module can
check whether a specific public key address signed the transaction with
``ctx.HasPermission(auth.SigPerm(addr))``. However, we can also
authorize a tx with ``roles``, which handles multi-sig accounts: it
checks whether there were enough signatures (as above), and then adds
the role permission with
``ctx = ctx.WithPermissions(NewPerm(assume.Role))``.
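
As a rough illustration of how a module might consult these
permissions (the real ``Context`` API lives in the SDK; the types below
are simplified stand-ins):

.. code:: golang

    package example

    import "bytes"

    // Actor mirrors the structure shown above.
    type Actor struct {
        ChainID string
        App     string
        Address []byte
    }

    // Context is a simplified stand-in for the request context that
    // carries the permissions granted so far.
    type Context struct {
        perms []Actor
    }

    // HasPermission reports whether the given actor has already been
    // granted permission on this context (e.g. by the auth middleware).
    func (c Context) HasPermission(a Actor) bool {
        for _, p := range c.perms {
            if p.ChainID == a.ChainID && p.App == a.App && bytes.Equal(p.Address, a.Address) {
                return true
            }
        }
        return false
    }

    // WithPermissions returns a copy of the context with extra permissions,
    // as the roles module does once enough members have signed.
    func (c Context) WithPermissions(extra ...Actor) Context {
        perms := append(append([]Actor{}, c.perms...), extra...)
        return Context{perms: perms}
    }
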
In addition to fitting into the permissions schema, Actors are addresses
just like public key addresses. So one can create a multi-sig role, then
send coins there, which can only be moved once the authorization
requirements of that module are met. ``coin`` doesn't even need to know
about the existence of ``roles``, and one could build any other sort of
module to provide permissions (for example, binding the outcome of an
election to move coins, or to modify the members of a role).
One idea - not yet implemented - is to provide scopes on the
permissions. Currently, if I sign a transaction to one module, it can
pass it on to any other module over IPC with the same permissions. It
could move coins, vote in an election, or anything else. Ideally, when
signing, one could also specify the scope(s) that this signature
authorizes. The `oauth
protocol <https://api.slack.com/docs/oauth-scopes>`__ also has to deal
with a similar problem, and maybe could provide some inspiration.
Replay Protection
-----------------
In order to prevent `replay
attacks <https://en.wikipedia.org/wiki/Replay_attack>`__, a multi-account
nonce system has been constructed as a module, which can be found in
``modules/nonce``. By adding the nonce module to the stack, each
transaction is verified against replay attacks. This is achieved by
requiring a newly signed copy of the sequence number, which must be
exactly 1 greater than the sequence number of the previous transaction.
A distinct sequence number is assigned per chain-id, application, and
group of signers. Each sequence number is tracked as a nonce-store entry
where the key is the marshaled list of actors after having been sorted
by chain, app, and address.
.. code:: golang
// Tx - Nonce transaction structure, contains list of signers and current sequence number
type Tx struct {
Sequence uint32 `json:"sequence"`
Signers []basecoin.Actor `json:"signers"`
Tx basecoin.Tx `json:"tx"`
}
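
A simplified sketch of the increment-by-one rule follows; the real
module keys the nonce store by the marshaled, sorted list of actors,
whereas here a plain string key stands in for that:

.. code:: golang

    package example

    import (
        "fmt"
        "sort"
        "strings"
    )

    // nonceKey builds a deterministic key for a chain-id plus a group of
    // signer identifiers (sorted so that signer order does not matter).
    func nonceKey(chainID string, signerIDs []string) string {
        sorted := append([]string{}, signerIDs...)
        sort.Strings(sorted)
        return chainID + "|" + strings.Join(sorted, ",")
    }

    // checkAndIncrement enforces that the new sequence number is exactly
    // one greater than the last one recorded for this group of signers.
    func checkAndIncrement(store map[string]uint32, key string, seq uint32) error {
        if seq != store[key]+1 {
            return fmt.Errorf("invalid sequence: got %d, expected %d", seq, store[key]+1)
        }
        store[key] = seq
        return nil
    }
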
By distinguishing sequence numbers across groups of signers,
multi-signature Actors need not lock up use of their Address while
waiting for all the members of a multi-sig transaction to sign. Instead,
only the multi-sig account will be locked, while the other accounts
belonging to those signers can still be used, each with its own sequence
numbers.
By abstracting the nonce check out into its own module in the stack, an
entire series of modules can run without each one needing to verify the
nonce. A common example is a stack which will send coins and charge a
fee. Within the SDK this can be achieved using separate modules in a
stack, one to send the coins and the other to charge the fee, yet
neither module needs to check the nonce itself; that check can happen in
a separate module earlier in the stack.
IBC (Inter-Blockchain Communication)
------------------------------------
Stay tuned!

@ -1,333 +0,0 @@
<!--- shelldown script template, see github.com/rigelrozanski/shelldown
#!/bin/bash
testTutorial_BasecoinBasics() {
#shelldown[1][3] >/dev/null
#shelldown[1][4] >/dev/null
KEYPASS=qwertyuiop
RES=$((echo $KEYPASS; echo $KEYPASS) | #shelldown[1][6])
assertTrue "Line $LINENO: Expected to contain safe, got $RES" '[[ $RES == *safe* ]]'
RES=$((echo $KEYPASS; echo $KEYPASS) | #shelldown[1][7])
assertTrue "Line $LINENO: Expected to contain safe, got $RES" '[[ $RES == *safe* ]]'
#shelldown[3][-1]
assertTrue "Expected true for line $LINENO" $?
#shelldown[4][-1] >>/dev/null 2>&1 &
sleep 5
PID_SERVER=$!
disown
RES=$((echo y) | #shelldown[5][-1] $1)
assertTrue "Line $LINENO: Expected to contain validator, got $RES" '[[ $RES == *validator* ]]'
#shelldown[6][0]
#shelldown[6][1]
RES=$(#shelldown[6][2] | jq '.data.coins[0].denom' | tr -d '"')
assertTrue "Line $LINENO: Expected to have mycoins, got $RES" '[[ $RES == mycoin ]]'
RES="$(#shelldown[6][3] 2>&1)"
assertTrue "Line $LINENO: Expected to contain ERROR, got $RES" '[[ $RES == *ERROR* ]]'
RES=$((echo $KEYPASS) | #shelldown[7][-1] | jq '.deliver_tx.code')
assertTrue "Line $LINENO: Expected 0 code deliver_tx, got $RES" '[[ $RES == 0 ]]'
RES=$(#shelldown[8][-1] | jq '.data.coins[0].amount')
assertTrue "Line $LINENO: Expected to contain 1000 mycoin, got $RES" '[[ $RES == 1000 ]]'
RES=$((echo $KEYPASS) | #shelldown[9][-1] | jq '.deliver_tx.code')
assertTrue "Line $LINENO: Expected 0 code deliver_tx, got $RES" '[[ $RES == 0 ]]'
RES=$((echo $KEYPASS) | #shelldown[10][-1])
assertTrue "Line $LINENO: Expected to contain insufficient funds error, got $RES" \
'[[ $RES == *"Insufficient Funds"* ]]'
#perform a substitution within the final tests
HASH=$((echo $KEYPASS) | #shelldown[11][-1] | jq '.hash' | tr -d '"')
PRESUB="#shelldown[12][-1]"
RES=$(eval ${PRESUB/<HASH>/$HASH})
assertTrue "Line $LINENO: Expected to not contain Error, got $RES" '[[ $RES != *Error* ]]'
}
oneTimeTearDown() {
kill -9 $PID_SERVER >/dev/null 2>&1
sleep 1
}
# load and run these tests with shunit2!
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" #get this files directory
. $DIR/shunit2
-->
# Basecoin Basics
Here we explain how to get started with a basic Basecoin blockchain,
how to send transactions between accounts using the `basecoin` tool,
and what is happening under the hood.
## Install
With go, it's one command:
```shelldown[0]
go get -u github.com/tendermint/basecoin/cmd/...
```
If you have trouble, see the [installation guide](install.md).
Note the above command installs two binaries: `basecoin` and `basecli`.
The former is the running node. The latter is a command-line light-client.
This tutorial assumes you have a 'fresh' working environment. See [how to clean up, below](#clean-up).
## Generate some keys
Let's generate two keys, one to receive an initial allocation of coins,
and one to send some coins to later:
```shelldown[1]
basecli keys new cool
basecli keys new friend
```
You'll need to enter passwords. You can view your key names and addresses with
`basecli keys list`, or see a particular key's address with `basecli keys get
<NAME>`.
## Initialize Basecoin
To initialize a new Basecoin blockchain, run:
```shelldown[2]
basecoin init <ADDRESS>
```
If you prefer not to copy-paste, you can provide the address programmatically:
```shelldown[3]
basecoin init $(basecli keys get cool | awk '{print $2}')
```
This will create the necessary files for a Basecoin blockchain with one
validator and one account (corresponding to your key) in `~/.basecoin`. For
more options on setup, see the [guide to using the Basecoin
tool](/docs/guide/basecoin-tool.md).
If you like, you can manually add some more accounts to the blockchain by
generating keys and editing the `~/.basecoin/genesis.json`.
## Start
Now we can start Basecoin:
```shelldown[4]
basecoin start
```
You should see blocks start streaming in!
## Initialize Light-Client
Now that Basecoin is running we can initialize `basecli`, the light-client
utility. Basecli is used for sending transactions and querying the state.
Leave Basecoin running and open a new terminal window. Here run:
```shelldown[5]
basecli init --node=tcp://localhost:46657 --genesis=$HOME/.basecoin/genesis.json
```
If you provide the genesis file to basecli, it can calculate the proper chainID
and validator hash. Basecli needs to get this information from some trusted
source, so all queries done with `basecli` can be cryptographically proven to
be correct according to a known validator set.
Note that `--genesis` only works if there have been no validator set changes
since genesis. If there are validator set changes, you need to find the current
set through some other method.
## Send transactions
Now we are ready to send some transactions. First, let's check the balance of
the two accounts we set up earlier:
```shelldown[6]
ME=$(basecli keys get cool | awk '{print $2}')
YOU=$(basecli keys get friend | awk '{print $2}')
basecli query account $ME
basecli query account $YOU
```
The first account is flush with cash, while the second account doesn't exist.
Let's send funds from the first account to the second:
```shelldown[7]
basecli tx send --name=cool --amount=1000mycoin --to=$YOU --sequence=1
```
Now if we check the second account, it should have `1000` 'mycoin' coins!
```shelldown[8]
basecli query account $YOU
```
We can send some of these coins back like so:
```shelldown[9]
basecli tx send --name=friend --amount=500mycoin --to=$ME --sequence=1
```
Note how we use the `--name` flag to select a different account to send from.
If we try to send too much, we'll get an error:
```shelldown[10]
basecli tx send --name=friend --amount=500000mycoin --to=$ME --sequence=2
```
Let's send another transaction:
```shelldown[11]
basecli tx send --name=cool --amount=2345mycoin --to=$YOU --sequence=2
```
Note the `hash` value in the response - this is the hash of the transaction.
We can query for the transaction by this hash:
```shelldown[12]
basecli query tx <HASH>
```
See `basecli tx send --help` for additional details.
## Proof
Even if you don't see it in the UI, the result of every query comes with a
proof. This is a Merkle proof that the result of the query is actually
contained in the state. And the state's Merkle root is contained in a recent
block header. Behind the scenes, `basecli` will not only verify that this
state matches the header, but also that the header is properly signed by the
known validator set. It will even update the validator set as needed, so long
as there have not been major changes and it is secure to do so. So, if you
wonder why the query may take a second... there is a lot of work going on in
the background to make sure even a lying full node can't trick your client.
In a later [guide on InterBlockchain Communication](ibc.md), we'll use these
proofs to post transactions to other chains.
## Accounts and Transactions
For a better understanding of how to further use the tools, it helps to
understand the underlying data structures.
### Accounts
The Basecoin state consists entirely of a set of accounts. Each account
contains a public key, a balance in many different coin denominations, and a
strictly increasing sequence number for replay protection. This type of
account was directly inspired by accounts in Ethereum, and is unlike Bitcoin's
use of Unspent Transaction Outputs (UTXOs). Note Basecoin is a multi-asset
cryptocurrency, so each account can have many different kinds of tokens.
```golang
type Account struct {
PubKey crypto.PubKey `json:"pub_key"` // May be nil, if not known.
Sequence int `json:"sequence"`
Balance Coins `json:"coins"`
}
type Coins []Coin
type Coin struct {
Denom string `json:"denom"`
Amount int64 `json:"amount"`
}
```
If you want to add more coins to a blockchain, you can do so manually in the
`~/.basecoin/genesis.json` before you start the blockchain for the first time.
Accounts are serialized and stored in a Merkle tree under the key
`base/a/<address>`, where `<address>` is the address of the account.
Typically, the address of the account is the 20-byte `RIPEMD160` hash of the
public key, but other formats are acceptable as well, as defined in the
[Tendermint crypto library](https://github.com/tendermint/go-crypto). The
Merkle tree used in Basecoin is a balanced, binary search tree, which we call
an [IAVL tree](https://github.com/tendermint/go-merkle).
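
For illustration only (a sketch, not Basecoin's actual code), building such a
key is a simple concatenation; whether the address bytes are appended raw or
hex-encoded is an implementation detail of the store:

```golang
package example

// accountKey returns the Merkle-tree key for an account, namely the
// prefix "base/a/" followed by the account address bytes.
func accountKey(addr []byte) []byte {
    return append([]byte("base/a/"), addr...)
}
```
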
### Transactions
Basecoin defines a transaction type, the `SendTx`, which allows tokens
to be sent to other accounts. The `SendTx` takes a list of inputs and a list
of outputs, and transfers all the tokens listed in the inputs from their
corresponding accounts to the accounts listed in the output. The `SendTx` is
structured as follows:
```golang
type SendTx struct {
Gas int64 `json:"gas"`
Fee Coin `json:"fee"`
Inputs []TxInput `json:"inputs"`
Outputs []TxOutput `json:"outputs"`
}
type TxInput struct {
Address []byte `json:"address"` // Hash of the PubKey
Coins Coins `json:"coins"` //
Sequence int `json:"sequence"` // Must be 1 greater than the last committed TxInput
Signature crypto.Signature `json:"signature"` // Depends on the PubKey type and the whole Tx
PubKey crypto.PubKey `json:"pub_key"` // Is present iff Sequence == 0
}
type TxOutput struct {
Address []byte `json:"address"` // Hash of the PubKey
Coins Coins `json:"coins"` //
}
```
Note the `SendTx` includes a field for `Gas` and `Fee`. The `Gas` limits the
total amount of computation that can be done by the transaction, while the
`Fee` refers to the total amount paid in fees. This is slightly different from
Ethereum's concept of `Gas` and `GasPrice`, where `Fee = Gas x GasPrice`. In
Basecoin, the `Gas` and `Fee` are independent, and the `GasPrice` is implicit.
In Basecoin, the `Fee` is meant to be used by the validators to inform the
ordering of transactions, like in Bitcoin. And the `Gas` is meant to be used
by the application plugin to control its execution. There is currently no
means to pass `Fee` information to the Tendermint validators, but it will come
soon...
Note also that the `PubKey` only needs to be sent for `Sequence == 0`. After
that, it is stored under the account in the Merkle tree and subsequent
transactions can exclude it, using only the `Address` to refer to the sender.
Ethereum does not require public keys to be sent in transactions as it uses a
different elliptic curve scheme which enables the public key to be derived from
the signature itself.
Finally, note that the use of multiple inputs and multiple outputs allows us to
send many different types of tokens between many different accounts at once in
an atomic transaction. Thus, the `SendTx` can serve as a basic unit of
decentralized exchange. When using multiple inputs and outputs, you must make
sure that the sum of coins of the inputs equals the sum of coins of the outputs
(no creating money), and that all accounts that provide inputs have signed the
transaction.
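
As a sketch of that conservation rule (illustrative only, not Basecoin's actual
validation code), one can total the coins per denomination on each side and
compare:

```golang
package example

// Coin and Coins mirror the types shown earlier in this guide.
type Coin struct {
    Denom  string
    Amount int64
}
type Coins []Coin

// sumByDenom totals several coin lists per denomination.
func sumByDenom(lists ...Coins) map[string]int64 {
    totals := map[string]int64{}
    for _, cs := range lists {
        for _, c := range cs {
            totals[c.Denom] += c.Amount
        }
    }
    return totals
}

// balanced reports whether the inputs and outputs move exactly the same
// amount of every denomination, i.e. no coins are created or destroyed.
func balanced(inputs, outputs []Coins) bool {
    in := sumByDenom(inputs...)
    out := sumByDenom(outputs...)
    if len(in) != len(out) {
        return false
    }
    for denom, amt := range in {
        if out[denom] != amt {
            return false
        }
    }
    return true
}
```
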
## Clean Up
**WARNING:** Running these commands will wipe out any existing information in both the `~/.basecli` and `~/.basecoin` directories, including private keys.
To remove all the files created and refresh your environment (e.g., if starting this tutorial again or trying something new), run the following commands:
```shelldown[end-of-tutorials]
basecli reset_all
rm -rf ~/.basecoin
```
## Conclusion
In this guide, we introduced the `basecoin` and `basecli` tools, demonstrated
how to start a new basecoin blockchain and how to send tokens between accounts,
and discussed the underlying data types for accounts and transactions,
specifically the `Account` and the `SendTx`. In the [next
guide](basecoin-plugins.md), we introduce the Basecoin plugin system, which
uses a new transaction type, the `AppTx`, to extend the functionality of the
Basecoin system with arbitrary logic.

@ -1,258 +0,0 @@
<!--- shelldown script template, see github.com/rigelrozanski/shelldown
#!/bin/bash
testTutorial_BasecoinPlugins() {
#Initialization
#shelldown[0][1]
#shelldown[0][2]
KEYPASS=qwertyuiop
#Making Keys
RES=$((echo $KEYPASS; echo $KEYPASS) | #shelldown[0][4])
assertTrue "Line $LINENO: Expected to contain safe, got $RES" '[[ $RES == *safe* ]]'
RES=$((echo $KEYPASS; echo $KEYPASS) | #shelldown[0][5])
assertTrue "Line $LINENO: Expected to contain safe, got $RES" '[[ $RES == *safe* ]]'
#shelldown[0][7] >/dev/null
assertTrue "Expected true for line $LINENO" $?
#shelldown[0][9] >>/dev/null 2>&1 &
sleep 5
PID_SERVER=$!
disown
RES=$((echo y) | #shelldown[1][0] $1)
assertTrue "Line $LINENO: Expected to contain validator, got $RES" '[[ $RES == *validator* ]]'
#shelldown[1][2]
assertTrue "Expected true for line $LINENO" $?
RES=$((echo $KEYPASS) | #shelldown[1][3] | jq '.deliver_tx.code')
assertTrue "Line $LINENO: Expected 0 code deliver_tx, got $RES" '[[ $RES == 0 ]]'
RES=$((echo $KEYPASS) | #shelldown[2][0])
assertTrue "Line $LINENO: Expected to contain Valid error, got $RES" \
'[[ $RES == *"Counter Tx marked invalid"* ]]'
RES=$((echo $KEYPASS) | #shelldown[2][1] | jq '.deliver_tx.code')
assertTrue "Line $LINENO: Expected 0 code deliver_tx, got $RES" '[[ $RES == 0 ]]'
RES=$(#shelldown[3][-1] | jq '.data.counter')
assertTrue "Line $LINENO: Expected Counter of 1, got $RES" '[[ $RES == 1 ]]'
RES=$((echo $KEYPASS) | #shelldown[4][0] | jq '.deliver_tx.code')
assertTrue "Line $LINENO: Expected 0 code deliver_tx, got $RES" '[[ $RES == 0 ]]'
RES=$(#shelldown[4][1])
RESCOUNT=$(printf "$RES" | jq '.data.counter')
RESFEE=$(printf "$RES" | jq '.data.total_fees[0].amount')
assertTrue "Line $LINENO: Expected Counter of 2, got $RES" '[[ $RESCOUNT == 2 ]]'
assertTrue "Line $LINENO: Expected TotalFees of 2, got $RES" '[[ $RESFEE == 2 ]]'
}
oneTimeTearDown() {
kill -9 $PID_SERVER >/dev/null 2>&1
sleep 1
}
# load and run these tests with shunit2!
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" #get this files directory
. $DIR/shunit2
-->
# Basecoin Plugins
In the [previous guide](basecoin-basics.md), we saw how to use the `basecoin`
tool to start a blockchain and the `basecli` tools to send transactions. We
also learned about `Account` and `SendTx`, the basic data types giving us a
multi-asset cryptocurrency. Here, we will demonstrate how to extend the tools
to use another transaction type, the `AppTx`, so we can send data to a custom
plugin. In this example we explore a simple plugin named `counter`.
## Example Plugin
The design of the `basecoin` tool makes it easy to extend for custom
functionality. The Counter plugin is bundled with basecoin, so if you have
already [installed basecoin](install.md) and run `make install`, then you should
be able to run a full node with `counter` and the light-client `countercli`
from the terminal. The Counter plugin is just like the `basecoin` tool. They
both use the same library of commands, including one for signing and
broadcasting `SendTx`.
Counter transactions take two custom inputs: a boolean argument named `valid`
and a coin amount named `countfee`. The transaction is only accepted if
`valid` is set to true and the transaction's input coins are at least the
`countfee` that the user provides.
A new blockchain can be initialized and started just like in the [previous
guide](basecoin-basics.md):
```shelldown[0]
# WARNING: this wipes out data - but counter is only for demos...
rm -rf ~/.counter
countercli reset_all
countercli keys new cool
countercli keys new friend
counter init $(countercli keys get cool | awk '{print $2}')
counter start
```
The default files are stored in `~/.counter`. In another window we can
initialize the light-client and send a transaction:
```shelldown[1]
countercli init --node=tcp://localhost:46657 --genesis=$HOME/.counter/genesis.json
YOU=$(countercli keys get friend | awk '{print $2}')
countercli tx send --name=cool --amount=1000mycoin --to=$YOU --sequence=1
```
But the Counter has an additional command, `countercli tx counter`, which
crafts an `AppTx` specifically for this plugin:
```shelldown[2]
countercli tx counter --name cool
countercli tx counter --name cool --valid
```
The first transaction is rejected by the plugin because it was not marked as
valid, while the second transaction passes. We can build plugins that take
many arguments of different types, and easily extend the tool to accommodate
them. Of course, we can also expose queries on our plugin:
```shelldown[3]
countercli query counter
```
Tada! We can now see that our custom counter plugin transactions went through.
You should see a Counter value of 1 representing the number of valid
transactions. If we send another transaction, and then query again, we will
see the value increment. Note that we need the sequence number here to send the
coins (it didn't increment when we just pinged the counter):
```shelldown[4]
countercli tx counter --name cool --countfee=2mycoin --sequence=2 --valid
countercli query counter
```
The Counter value should be 2, because we sent a second valid transaction.
And this time, since we sent a countfee (which must be less than or equal to the
total amount sent with the tx), it stores the `TotalFees` on the counter as well.
Keep in mind that, just like with `basecli`, `countercli` verifies a proof
that the query response is correct and up-to-date.
Now, before we implement our own plugin and tooling, it helps to understand the
`AppTx` and the design of the plugin system.
## AppTx
The `AppTx` is similar to the `SendTx`, but instead of sending coins from
inputs to outputs, it sends coins from one input to a plugin, and can also send
some data.
```golang
type AppTx struct {
Gas int64 `json:"gas"`
Fee Coin `json:"fee"`
Input TxInput `json:"input"`
Name string `json:"type"` // Name of the plugin
Data []byte `json:"data"` // Data for the plugin to process
}
```
The `AppTx` enables Basecoin to be extended with arbitrary additional
functionality through the use of plugins. The `Name` field in the `AppTx`
refers to the particular plugin which should process the transaction, and the
`Data` field of the `AppTx` is the data to be forwarded to the plugin for
processing.
Note the `AppTx` also has a `Gas` and `Fee`, with the same meaning as for the
`SendTx`. It also includes a single `TxInput`, which specifies the sender of
the transaction, and some coins that can be forwarded to the plugin as well.
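
For example, a client building a transaction for a plugin named `"counter"`
might fill in an `AppTx` like the sketch below; the helper and its parameters
are hypothetical, and the types simply mirror the structures shown in this
guide:

```golang
package example

// These mirror the types shown earlier in this guide.
type Coin struct {
    Denom  string
    Amount int64
}
type Coins []Coin

type TxInput struct {
    Address  []byte
    Coins    Coins
    Sequence int
}

type AppTx struct {
    Gas   int64
    Fee   Coin
    Input TxInput
    Name  string
    Data  []byte
}

// newCounterAppTx packages plugin-specific bytes into an AppTx that the
// basecoin app will route to the plugin registered under "counter".
func newCounterAppTx(sender []byte, payment Coins, seq int, payload []byte) AppTx {
    return AppTx{
        Fee:   Coin{Denom: "mycoin", Amount: 1}, // example fee
        Input: TxInput{Address: sender, Coins: payment, Sequence: seq},
        Name:  "counter", // which plugin should process this tx
        Data:  payload,   // opaque bytes that the plugin decodes itself
    }
}
```
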
## Plugins
A plugin is simply a Go package that implements the `Plugin` interface:
```golang
type Plugin interface {
// Name of this plugin, should be short.
Name() string
// Run a transaction from ABCI DeliverTx
RunTx(store KVStore, ctx CallContext, txBytes []byte) (res abci.Result)
// Other ABCI message handlers
SetOption(store KVStore, key string, value string) (log string)
InitChain(store KVStore, vals []*abci.Validator)
BeginBlock(store KVStore, hash []byte, header *abci.Header)
EndBlock(store KVStore, height uint64) (res abci.ResponseEndBlock)
}
type CallContext struct {
CallerAddress []byte // Caller's Address (hash of PubKey)
CallerAccount *Account // Caller's Account, w/ fee & TxInputs deducted
Coins Coins // The coins that the caller wishes to spend, excluding fees
}
```
The workhorse of the plugin is `RunTx`, which is called when an `AppTx` is
processed. The `Data` from the `AppTx` is passed in as the `txBytes`, while
the `Input` from the `AppTx` is used to populate the `CallContext`.
Note that `RunTx` also takes a `KVStore` - this is an abstraction for the
underlying Merkle tree which stores the account data. By passing this to the
plugin, we enable plugins to update accounts in the Basecoin state directly,
and also to store arbitrary other information in the state. In this way, the
functionality and state of a Basecoin-derived cryptocurrency can be greatly
extended. One could imagine going so far as to implement the Ethereum Virtual
Machine as a plugin!
For details on how to initialize the state using `SetOption`, see the [guide to
using the basecoin tool](basecoin-tool.md#genesis).
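
To make the interface concrete, here is a heavily simplified plugin sketch. The
store, context, and result types below are local stand-ins for the real
basecoin/abci types shown above, and the encoding of `txBytes` is left to the
plugin (the real counter plugin decodes its own `CounterTx` from those bytes):

```golang
package example

import "encoding/binary"

// Local stand-ins for the basecoin/abci types used by the interface.
type KVStore interface {
    Get(key []byte) []byte
    Set(key, value []byte)
}
type CallContext struct {
    CallerAddress []byte // the real type also carries the caller account and coins
}
type Result struct {
    Code uint32
    Log  string
}

// CounterPlugin simply counts how many transactions it has processed.
type CounterPlugin struct{}

// Name isolates this plugin's state space and routes AppTxs to it.
func (p CounterPlugin) Name() string { return "counter" }

// RunTx is called for every AppTx whose Name matches this plugin; the
// AppTx.Data arrives as txBytes and the Input populates ctx.
func (p CounterPlugin) RunTx(store KVStore, ctx CallContext, txBytes []byte) Result {
    key := []byte(p.Name() + "/state")
    count := uint64(0)
    if raw := store.Get(key); len(raw) == 8 {
        count = binary.BigEndian.Uint64(raw)
    }
    count++
    buf := make([]byte, 8)
    binary.BigEndian.PutUint64(buf, count)
    store.Set(key, buf)
    return Result{Code: 0, Log: "ok"}
}
```
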
## Implement your own
To implement your own plugin and tooling, make a copy of
`docs/guide/counter`, and modify the code accordingly. Here, we will
briefly describe the design and the changes to be made, but see the code for
more details.
First is the `cmd/counter/main.go`, which drives the program. It can be left
alone, but you should change any occurrences of `counter` to whatever your
plugin tool is going to be called. You must also register your plugin(s) with
the basecoin app with `RegisterStartPlugin`.
The light-client is located in `cmd/countercli/main.go` and allows for
transaction and query commands. This file can also be left mostly alone, aside from replacing the application name and adding
references to new plugin commands.
Next are the custom commands in `cmd/countercli/commands/`. These files are
where we extend the tool with any new commands and flags we need to send
transactions or queries to our plugin. You define custom `tx` and `query`
subcommands, which are registered in `main.go` (avoiding `init()`
auto-registration, for less magic and more control in the main executable).
Finally, there is `plugins/counter/counter.go`, where we provide an implementation of
the `Plugin` interface. The most important part of the implementation is the
`RunTx` method, which determines the meaning of the data sent along in the
`AppTx`. In our example, we define a new transaction type, the `CounterTx`,
which we expect to be encoded in the `AppTx.Data`, and thus to be decoded in
the `RunTx` method, and used to update the plugin state.
For more examples and inspiration, see our [repository of example
plugins](https://github.com/tendermint/basecoin-examples).
## Conclusion
In this guide, we demonstrated how to create a new plugin and how to extend the
`basecoin` tool to start a blockchain with the plugin enabled and send
transactions to it. In the next guide, we introduce a [plugin for Inter
Blockchain Communication](ibc.md), which allows us to publish proofs of the
state of one blockchain to another, and thus to transfer tokens and data
between them.

@ -1,249 +0,0 @@
<!--- shelldown script template, see github.com/rigelrozanski/shelldown
#!/bin/bash
testTutorial_BasecoinTool() {
rm -rf ~/.basecoin
rm -rf ~/.basecli
rm -rf example-data
KEYPASS=qwertyuiop
(echo $KEYPASS; echo $KEYPASS) | #shelldown[0][0] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#shelldown[0][1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#shelldown[1][0] ; assertTrue "Expected true for line $LINENO" $?
#shelldown[1][1] ; assertTrue "Expected true for line $LINENO" $?
#shelldown[1][2] >>/dev/null 2>&1 &
sleep 5 ; PID_SERVER=$! ; disown ; assertTrue "Expected true for line $LINENO" $?
kill -9 $PID_SERVER >/dev/null 2>&1 ; sleep 1
#shelldown[2][0] ; assertTrue "Expected true for line $LINENO" $?
#shelldown[2][1] >>/dev/null 2>&1 &
sleep 5 ; PID_SERVER=$! ; disown ; assertTrue "Expected true for line $LINENO" $?
kill -9 $PID_SERVER >/dev/null 2>&1 ; sleep 1
#shelldown[3][-1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#shelldown[4][-1] >>/dev/null 2>&1 &
sleep 5 ; PID_SERVER=$! ; disown ; assertTrue "Expected true for line $LINENO" $?
#shelldown[5][-1] >>/dev/null 2>&1 &
sleep 5 ; PID_SERVER2=$! ; disown ; assertTrue "Expected true for line $LINENO" $?
kill -9 $PID_SERVER $PID_SERVER2 >/dev/null 2>&1 ; sleep 1
#shelldown[4][-1] >>/dev/null 2>&1 &
sleep 5 ; PID_SERVER=$! ; disown ; assertTrue "Expected true for line $LINENO" $?
#shelldown[6][0] ; assertTrue "Expected true for line $LINENO" $?
#shelldown[6][1] >>/dev/null 2>&1 &
sleep 5 ; PID_SERVER2=$! ; disown ; assertTrue "Expected true for line $LINENO" $?
kill -9 $PID_SERVER $PID_SERVER2 >/dev/null 2>&1 ; sleep 1
#shelldown[7][-1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#shelldown[8][-1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
(echo $KEYPASS; echo $KEYPASS) | #shelldown[9][-1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#shelldown[10][-1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#shelldown[11][-1] >/dev/null ; assertTrue "Expected true for line $LINENO" $?
#cleanup
rm -rf example-data
}
# load and run these tests with shunit2!
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" #get this files directory
. $DIR/shunit2
-->
# The Basecoin Tool
In previous tutorials we learned the [basics of the Basecoin
CLI](/docs/guide/basecoin-basics.md) and [how to implement a
plugin](/docs/guide/basecoin-plugins.md). In this tutorial, we provide more
details on using the Basecoin tool.
# Generate a Key
Generate a key using the `basecli` tool:
```shelldown[0]
basecli keys new mykey
ME=$(basecli keys get mykey | awk '{print $2}')
```
# Data Directory
By default, `basecoin` works out of `~/.basecoin`. To change this, set the
`BCHOME` environment variable:
```shelldown[1]
export BCHOME=~/.my_basecoin_data
basecoin init $ME
basecoin start
```
or
```shelldown[2]
BCHOME=~/.my_basecoin_data basecoin init $ME
BCHOME=~/.my_basecoin_data basecoin start
```
# ABCI Server
So far we have run Basecoin and Tendermint in a single process. However, since
we use ABCI, we can actually run them in different processes. First,
initialize them:
```shelldown[3]
basecoin init $ME
```
This will create a single `genesis.json` file in `~/.basecoin` with the
information for both Basecoin and Tendermint.
Now, in one window, run
```shelldown[4]
basecoin start --without-tendermint
```
and in another,
```shelldown[5]
TMROOT=~/.basecoin tendermint node
```
You should see Tendermint start making blocks!
Alternatively, you could ignore the Tendermint details in
`~/.basecoin/genesis.json` and use a separate directory by running:
```shelldown[6]
tendermint init
tendermint node
```
For more details on using `tendermint`, see [the guide](https://tendermint.com/docs/guides/using-tendermint).
# Keys and Genesis
In previous tutorials we used `basecoin init` to initialize `~/.basecoin` with
the default configuration. This command creates files both for Tendermint and
for Basecoin, and a single `genesis.json` file for both of them. For more
information on these files, see the [guide to using
Tendermint](https://tendermint.com/docs/guides/using-tendermint).
Now let's make our own custom Basecoin data.
First, create a new directory:
```shelldown[7]
mkdir example-data
```
We can tell `basecoin` to use this directory by exporting the `BCHOME`
environment variable:
```shelldown[8]
export BCHOME=$(pwd)/example-data
```
If you're going to be using multiple terminal windows, make sure to add this
variable to your shell startup scripts (eg. `~/.bashrc`).
Now, let's create a new key:
```shelldown[9]
basecli keys new foobar
```
The key's info can be retrieved with
```shelldown[10]
basecli keys get foobar -o=json
```
You should get output which looks similar to the following:
```json
{
"name": "foobar",
"address": "404C5003A703C7DA888C96A2E901FCE65A6869D9",
"pubkey": {
"type": "ed25519",
"data": "8786B7812AB3B27892D8E14505EEFDBB609699E936F6A4871B1983F210736EEA"
}
}
```
Yours will look different - each key is randomly derived. Now we can make a
`genesis.json` file and add an account with our public key:
```json
{
"app_hash": "",
"chain_id": "example-chain",
"genesis_time": "0001-01-01T00:00:00.000Z",
"validators": [
{
"amount": 10,
"name": "",
"pub_key": {
"type": "ed25519",
"data": "7B90EA87E7DC0C7145C8C48C08992BE271C7234134343E8A8E8008E617DE7B30"
}
}
],
"app_options": {
"accounts": [
{
"pub_key": {
"type": "ed25519",
"data": "8786B7812AB3B27892D8E14505EEFDBB609699E936F6A4871B1983F210736EEA"
},
"coins": [
{
"denom": "gold",
"amount": 1000000000
}
]
}
]
}
}
```
Here we've granted ourselves `1000000000` units of the `gold` token. Note that
we've also set the `chain-id` to be `example-chain`. All transactions must
therefore include the `--chain-id example-chain` in order to make sure they are
valid for this chain. Previously, we didn't need this flag because we were
using the default chain ID ("test_chain_id"). Now that we're using a custom
chain, we need to specify the chain explicitly on the command line.
Note we have also left out the details of the Tendermint genesis. These are
documented in the [Tendermint
guide](https://tendermint.com/docs/guides/using-tendermint).
# Reset
You can reset all blockchain data by running:
```shelldown[11]
basecoin unsafe_reset_all
```
Similarly, you can reset client data by running:
```shelldown[12]
basecli reset_all
```
# Genesis
Any required plugin initialization should be constructed using `SetOption` on
genesis. When starting a new chain for the first time, `SetOption` will be
called for each item in the genesis file. Within the genesis.json file, entries
are made in the format `"<plugin>/<key>", "<value>"`, where `<plugin>` is the
plugin name, and `<key>` and `<value>` are the strings passed into the plugin's
SetOption function. This function is intended to be used to set plugin-specific
information, such as the plugin state.
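
As a hedged illustration only: the nesting and the `plugin_options` key name
below are assumptions about the genesis layout (check the genesis handling in
your version), but each entry follows the flat `"<plugin>/<key>", "<value>"`
pattern described above:

```json
{
  "app_options": {
    "plugin_options": [
      "counter/total", "0",
      "myplugin/some-key", "some-value"
    ]
  }
}
```
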

@ -1,398 +0,0 @@
# InterBlockchain Communication with Basecoin
One of the most exciting elements of the Cosmos Network is the InterBlockchain
Communication (IBC) protocol, which enables interoperability across different
blockchains. We implemented IBC as a basecoin plugin, and we'll show you how to
use it to send tokens across blockchains!
Please note, this tutorial assumes you are familiar with [Basecoin
plugins](/docs/guide/basecoin-plugins.md), but we'll explain how IBC works. You
may also want to see [our repository of example
plugins](https://github.com/tendermint/basecoin-examples).
The IBC plugin defines a new set of transactions as subtypes of the `AppTx`.
The plugin's functionality is accessed by setting the `AppTx.Name` field to
`"IBC"`, and setting the `Data` field to the serialized IBC transaction type.
We'll demonstrate exactly how this works below.
## IBC
Let's review the IBC protocol. The purpose of IBC is to enable one blockchain
to function as a light-client of another. Since we are using a classical
Byzantine Fault Tolerant consensus algorithm, light-client verification is
cheap and easy: all we have to do is check validator signatures on the latest
block, and verify a Merkle proof of the state.
In Tendermint, validators agree on a block before processing it. This means
that the signatures and state root for that block aren't included until the
next block. Thus, each block contains a field called `LastCommit`, which
contains the votes responsible for committing the previous block, and a field
in the block header called `AppHash`, which refers to the Merkle root hash of
the application after processing the transactions from the previous block. So,
if we want to verify the `AppHash` from height H, we need the signatures from
`LastCommit` at height H+1. (And remember that this `AppHash` only contains the
results from all transactions up to and including block H-1)
Unlike Proof-of-Work, the light-client protocol does not need to download and
check all the headers in the blockchain - the client can always jump straight
to the latest header available, so long as the validator set has not changed
much. If the validator set is changing, the client needs to track these
changes, which requires downloading headers for each block in which there is a
significant change. Here, we will assume the validator set is constant, and
postpone handling validator set changes for another time.
Now we can describe exactly how IBC works. Suppose we have two blockchains,
`chain1` and `chain2`, and we want to send some data from `chain1` to `chain2`.
We need to do the following:
1. Register the details (ie. chain ID and genesis configuration) of `chain1`
on `chain2`
2. Within `chain1`, broadcast a transaction that creates an outgoing IBC
packet destined for `chain2`
3. Broadcast a transaction to `chain2` informing it of the latest state (ie.
header and commit signatures) of `chain1`
4. Post the outgoing packet from `chain1` to `chain2`, including the proof
that it was indeed committed on `chain1`. Note `chain2` can only verify
this proof because it has a recent header and commit.
Each of these steps involves a separate IBC transaction type. Let's take them
up in turn.
### IBCRegisterChainTx
The `IBCRegisterChainTx` is used to register one chain on another. It contains
the chain ID and genesis configuration of the chain to register:
```golang
type IBCRegisterChainTx struct {
    BlockchainGenesis
}

type BlockchainGenesis struct {
    ChainID string
    Genesis string
}
```
This transaction should only be sent once for a given chain ID, and successive
sends will return an error.
### IBCUpdateChainTx
The `IBCUpdateChainTx` is used to update the state of one chain on another. It
contains the header and commit signatures for some block in the chain:
```golang
type IBCUpdateChainTx struct {
Header tm.Header
Commit tm.Commit
}
```
In the future, it needs to be updated to include changes to the validator set
as well. Anyone can relay an `IBCUpdateChainTx`, and they only need to do so
as frequently as packets are being sent or the validator set is changing.
### IBCPacketCreateTx
The `IBCPacketCreateTx` is used to create an outgoing packet on one chain. The
packet itself contains the source and destination chain IDs, a sequence number
(i.e. an integer that increments with every message sent between this pair of
chains), a packet type (e.g. coin, data, etc.), and a payload.
```golang
type IBCPacketCreateTx struct {
Packet
}
type Packet struct {
SrcChainID string
DstChainID string
Sequence uint64
Type string
Payload []byte
}
```
We have yet to define the format for the payload, so, for now, it's just
arbitrary bytes.
One way to think about this is that `chain2` has an account on `chain1`. With
an `IBCPacketCreateTx` on `chain1`, we send funds to that account. Then we can
prove to `chain2` that there are funds locked up for it in its account on
`chain1`. Those funds can only be unlocked with corresponding IBC messages
back from `chain2` to `chain1` sending the locked funds to another account on
`chain1`.
### IBCPacketPostTx
The `IBCPacketPostTx` is used to post an outgoing packet from one chain to
another. It contains the packet and a proof that the packet was committed into
the state of the sending chain:
```golang
type IBCPacketPostTx struct {
    FromChainID     string // The immediate source of the packet, not always Packet.SrcChainID
    FromChainHeight uint64 // The block height in which Packet was committed, to check Proof
    Packet
    Proof *merkle.IAVLProof
}
```
The proof is a Merkle proof in an IAVL tree, our implementation of a balanced,
Merklized binary search tree. It contains a list of nodes in the tree, which
can be hashed together to get the Merkle root hash. This hash must match the
`AppHash` contained in the header at `FromChainHeight + 1`
- note the `+ 1` is necessary since `FromChainHeight` is the height in which
the packet was committed, and the resulting state root is not included until
the next block.
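
A tiny sketch of that off-by-one bookkeeping (purely illustrative types, not
the IBC plugin's actual code):

```golang
package example

import (
    "bytes"
    "errors"
)

// verifyPacketRoot checks a Merkle root computed from a packet proof,
// for a packet committed at height h, against the AppHash we trust for
// height h+1, since the state root after block h only appears in the
// next block's header.
func verifyPacketRoot(appHashByHeight map[uint64][]byte, fromChainHeight uint64, computedRoot []byte) error {
    appHash, ok := appHashByHeight[fromChainHeight+1]
    if !ok {
        return errors.New("no trusted header for height h+1 yet")
    }
    if !bytes.Equal(appHash, computedRoot) {
        return errors.New("proof root does not match AppHash")
    }
    return nil
}
```
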
### IBC State
Now that we've seen all the transaction types, let's talk about the state.
Each chain stores some IBC state in its Merkle tree. For each chain being
tracked by our chain, we store:
- Genesis configuration
- Latest state
- Headers for recent heights
We also store all incoming (ingress) and outgoing (egress) packets.
The state of a chain is updated every time an `IBCUpdateChainTx` is committed.
New packets are added to the egress state upon `IBCPacketCreateTx`. New
packets are added to the ingress state upon `IBCPacketPostTx`, assuming the
proof checks out.
## Merkle Queries
The Basecoin application uses a single Merkle tree that is shared across all
its state, including the built-in accounts state and all plugin state. For this
reason, it's important to use explicit key names and/or hashes to ensure there
are no collisions.
We can query the Merkle tree using the ABCI Query method. If we pass in the
correct key, it will return the corresponding value, as well as a proof that
the key and value are contained in the Merkle tree.
The results of a query can thus be used as proof in an `IBCPacketPostTx`.
## Relay
While we need all these packet types internally to keep track of all the proofs
on both chains in a secure manner, for the normal work-flow, we can run a relay
node that handles the cross-chain interaction.
In this case, there are only two steps. First `basecoin relay init`, which
must be run once to register each chain with the other one, and make sure they
are ready to send and receive. And then `basecoin relay start`, which is a
long-running process polling the queue on each side, and relaying all new
messages to the other chain.
This requires that the relay has access to accounts with some funds on both
chains to pay for all the IBC packets it will be forwarding.
## Try it out
Now that we have all the background knowledge, let's actually walk through the
tutorial.
Make sure you have installed [basecoin and basecli](/docs/guide/install.md).
Basecoin is a framework for creating new cryptocurrency applications. It comes
with an `IBC` plugin enabled by default.
You will also want to install the [jq](https://stedolan.github.io/jq/) for
handling JSON at the command line.
If you have any trouble with this, you can also look at the [test
scripts](/tests/cli/ibc.sh) or just run `make test_cli` in the basecoin repo.
Otherwise, open up 5 (yes 5!) terminal tabs....
### Preliminaries
```
# first, clean up any old garbage for a fresh slate...
rm -rf ~/.ibcdemo/
```
Let's start by setting up some environment variables and aliases:
```
export BCHOME1_CLIENT=~/.ibcdemo/chain1/client
export BCHOME1_SERVER=~/.ibcdemo/chain1/server
export BCHOME2_CLIENT=~/.ibcdemo/chain2/client
export BCHOME2_SERVER=~/.ibcdemo/chain2/server
alias basecli1="basecli --home $BCHOME1_CLIENT"
alias basecli2="basecli --home $BCHOME2_CLIENT"
alias basecoin1="basecoin --home $BCHOME1_SERVER"
alias basecoin2="basecoin --home $BCHOME2_SERVER"
```
This will give us some new commands to use instead of raw `basecli` and
`basecoin` to ensure we're using the right configuration for the chain we want
to talk to.
We also want to set some chain IDs:
```
export CHAINID1="test-chain-1"
export CHAINID2="test-chain-2"
```
And since we will run two different chains on one machine, we need to maintain
different sets of ports:
```
export PORT_PREFIX1=1234
export PORT_PREFIX2=2345
export RPC_PORT1=${PORT_PREFIX1}7
export RPC_PORT2=${PORT_PREFIX2}7
```
### Setup Chain 1
Now, let's create some keys that we can use for accounts on test-chain-1:
```
basecli1 keys new money
basecli1 keys new gotnone
export MONEY=$(basecli1 keys get money | awk '{print $2}')
export GOTNONE=$(basecli1 keys get gotnone | awk '{print $2}')
```
and create an initial configuration giving lots of coins to the $MONEY key:
```
basecoin1 init --chain-id $CHAINID1 $MONEY
```
Now start basecoin:
```
sed -ie "s/4665/$PORT_PREFIX1/" $BCHOME1_SERVER/config.toml
basecoin1 start &> basecoin1.log &
```
Note the `sed` command to replace the ports in the config file.
You can follow the logs with `tail -f basecoin1.log`
Now we can attach the client to the chain and verify the state.
The first account should have money, the second none:
```
basecli1 init --node=tcp://localhost:${RPC_PORT1} --genesis=${BCHOME1_SERVER}/genesis.json
basecli1 query account $MONEY
basecli1 query account $GOTNONE
```
### Setup Chain 2
This is the same as above, except with `basecli2`, `basecoin2`, and
`$CHAINID2`. We will also need to change the ports, since we're running
another chain on the same local machine.
Let's create new keys for test-chain-2:
```
basecli2 keys new moremoney
basecli2 keys new broke
MOREMONEY=$(basecli2 keys get moremoney | awk '{print $2}')
BROKE=$(basecli2 keys get broke | awk '{print $2}')
```
And prepare the genesis block, and start the server:
```
basecoin2 init --chain-id $CHAINID2 $(basecli2 keys get moremoney | awk '{print $2}')
sed -ie "s/4665/$PORT_PREFIX2/" $BCHOME2_SERVER/config.toml
basecoin2 start &> basecoin2.log &
```
Now attach the client to the chain and verify the state.
The first account should have money, the second none:
```
basecli2 init --node=tcp://localhost:${RPC_PORT2} --genesis=${BCHOME2_SERVER}/genesis.json
basecli2 query account $MOREMONEY
basecli2 query account $BROKE
```
### Connect these chains
OK! So we have two chains running on your local machine, with different keys on
each. Let's hook them up together by starting a relay process to forward
messages from one chain to the other.
The relay account needs some money in it to pay for the ibc messages, so for
now, we have to transfer some cash from the rich accounts before we start the
actual relay.
```
# note that this key.json file is a hardcoded demo for all chains, this will
# be updated in a future release
RELAY_KEY=$BCHOME1_SERVER/key.json
RELAY_ADDR=$(cat $RELAY_KEY | jq .address | tr -d \")
basecli1 tx send --amount=100000mycoin --sequence=1 --to=$RELAY_ADDR --name=money
basecli1 query account $RELAY_ADDR
basecli2 tx send --amount=100000mycoin --sequence=1 --to=$RELAY_ADDR --name=moremoney
basecli2 query account $RELAY_ADDR
```
Now we can start the relay process.
```
basecoin relay init --chain1-id=$CHAINID1 --chain2-id=$CHAINID2 \
--chain1-addr=tcp://localhost:${RPC_PORT1} --chain2-addr=tcp://localhost:${RPC_PORT2} \
--genesis1=${BCHOME1_SERVER}/genesis.json --genesis2=${BCHOME2_SERVER}/genesis.json \
--from=$RELAY_KEY
basecoin relay start --chain1-id=$CHAINID1 --chain2-id=$CHAINID2 \
--chain1-addr=tcp://localhost:${RPC_PORT1} --chain2-addr=tcp://localhost:${RPC_PORT2} \
--from=$RELAY_KEY &> relay.log &
```
This should start up the relay, and assuming no error messages came out,
the two chains are now fully connected over IBC. Let's use this to send
our first tx across the chains...
### Sending cross-chain payments
The hard part is over: we set up two blockchains, a few private keys, and
a secure relay between them. Now we can enjoy the fruits of our labor...
```
# Here's an empty account on test-chain-2
basecli2 query account $BROKE
```
```
# Let's send some funds from test-chain-1
basecli1 tx send --amount=12345mycoin --sequence=2 --to=test-chain-2/$BROKE --name=money
```
```
# give it time to arrive...
sleep 2
# now you should see 12345 coins!
basecli2 query account $BROKE
```
You're no longer broke! Cool, huh?
Now have fun exploring, sending coins across the chains,
and making more accounts as you like.
## Conclusion
In this tutorial we explained how IBC works, and demonstrated how to use it to
communicate between two chains. We did the simplest communication possible: a
one-way transfer of data from chain1 to chain2. The most important part was
that we updated chain2 with the latest state (i.e. header and commit) of
chain1, and then were able to post a proof to chain2 that a packet was
committed to the outgoing state of chain1.
In a future tutorial, we will demonstrate how to use IBC to actually transfer
tokens between two blockchains, but we'll do it with real testnets deployed
across multiple nodes on the network. Stay tuned!

@ -1,32 +0,0 @@
# Install
If you aren't used to compiling Go programs and just want the released
version of the code, please head to our [downloads](https://tendermint.com/download)
page to get a pre-compiled binary for your platform.
Usually, Cosmos SDK can be installed like a normal Go program:
```
go get -u github.com/cosmos/cosmos-sdk/cmd/...
```
If the dependencies have been updated with breaking changes,
or if another branch is required, `glide` is used for dependency management.
Thus, assuming you've already run `go get` or otherwise cloned the repo,
the correct way to install is:
```
cd $GOPATH/src/github.com/tendermint/basecoin
git pull origin master
make all
```
This will create the `basecoin` binary in `$GOPATH/bin`.
`make all` implies `make get_vendor_deps` and uses `glide` to install the
correct version of all dependencies. It also tests the code, including
some cli tests to make sure your binary behaves properly.
If you need another branch, make sure to run `git checkout <branch>`
before `make all`. And if you switch branches a lot, especially
touching other tendermint repos, you may need to `make fresh` sometimes
so glide doesn't get confused with all the branches and versions lying around.

@ -1,184 +0,0 @@
# Key Management
Here we explain a bit how to work with your keys, using the `basecli keys` subcommand.
**Note:** This keys tooling is not considered production ready and is for dev only.
We'll look at what you can do using the six sub-commands of `basecli keys`:
```
new
list
get
delete
recover
update
```
## Create keys
`basecli keys new` has two inputs (name, password) and two outputs (address, seed).
First, we name our key:
```shelldown
basecli keys new alice
```
This will prompt for a password (10 character minimum), which must be re-typed.
You'll see:
```
Enter a passphrase:
Repeat the passphrase:
alice A159C96AE911F68913E715ED889D211C02EC7D70
**Important** write this seed phrase in a safe place.
It is the only way to recover your account if you ever forget your password.
pelican amateur empower assist awkward claim brave process cliff save album pigeon intact asset
```
which shows the address of your key named `alice`, and its recovery seed. We'll use these shortly.
Adding the `--output json` flag to the above command would give this output:
```
Enter a passphrase:
Repeat the passphrase:
{
"key": {
"name": "alice",
"address": "A159C96AE911F68913E715ED889D211C02EC7D70",
"pubkey": {
"type": "ed25519",
"data": "4BF22554B0F0BF2181187E5E5456E3BF3D96DB4C416A91F07F03A9C36F712B77"
}
},
"seed": "pelican amateur empower assist awkward claim brave process cliff save album pigeon intact asset"
}
```
To avoid the prompt, it's possible to pipe the password into the command, e.g.:
```
echo 1234567890 | basecli keys new fred --output json
```
After trying each of the three ways to create a key, let's look at them. Use:
```
basecli keys list
```
to list all the keys:
```
All keys:
alice 6FEA9C99E2565B44FCC3C539A293A1378CDA7609
bob A159C96AE911F68913E715ED889D211C02EC7D70
charlie 784D623E0C15DE79043C126FA6449B68311339E5
```
Again, we can use the `--output json` flag:
```
[
{
"name": "alice",
"address": "6FEA9C99E2565B44FCC3C539A293A1378CDA7609",
"pubkey": {
"type": "ed25519",
"data": "878B297F1E863CC30CAD71E04A8B3C23DB71C18F449F39E35B954EDB2276D32D"
}
},
{
"name": "bob",
"address": "A159C96AE911F68913E715ED889D211C02EC7D70",
"pubkey": {
"type": "ed25519",
"data": "2127CAAB96C08E3042C5B33C8B5A820079AAE8DD50642DCFCC1E8B74821B2BB9"
}
},
{
"name": "charlie",
"address": "784D623E0C15DE79043C126FA6449B68311339E5",
"pubkey": {
"type": "ed25519",
"data": "4BF22554B0F0BF2181187E5E5456E3BF3D96DB4C416A91F07F03A9C36F712B77"
}
},
]
```
to get machine readable output.
If we want information about one specific key, then:
```
basecli keys get charlie --output json
```
will, for example, return the info for only the "charlie" key returned from the previous `basecli keys list` command.
The keys tooling can support different types of keys with a flag:
```
basecli keys new bit --type secp256k1
```
and you'll see the difference in the `"type"` field returned by `basecli keys get`.
Before moving on, let's set an environment variable to make `--output json` the default.
Either run or put in your `~/.bash_profile` the following line:
```
export BC_OUTPUT=json
```
## Recover a key
Let's say, for whatever reason, you lose a key or forget the password. On creation, you were given a seed. We'll use it to recover a lost key.
First, let's simulate the loss by deleting a key:
```
basecli keys delete alice
```
which prompts for your current password, now rendered obsolete, and gives a warning message. The only way you can recover your key now is using the 12 word seed given on initial creation of the key. Let's try it:
```
basecli keys recover alice-again
```
which prompts for a new password then the seed:
```
Enter the new passphrase:
Enter your recovery seed phrase:
strike alien praise vendor term left market practice junior better deputy divert front calm
alice-again CBF5D9CE6DDCC32806162979495D07B851C53451
```
and voila! You've recovered your key. Note that the seed can be typed out, pasted in, or piped into the command alongside the password.
To change the password of a key, we can:
```
basecli keys update alice-again
```
and follow the prompts.
That covers most features of the `keys` subcommand.
<!-- use later in a test script, or more advance tutorial?
SEED=$(echo 1234567890 | basecli keys new fred -o json | jq .seed | tr -d \")
echo $SEED
(echo qwertyuiop; echo $SEED stamp) | basecli keys recover oops
(echo qwertyuiop; echo $SEED) | basecli keys recover derf
basecli keys get fred -o json
basecli keys get derf -o json
```
-->

@ -1,269 +0,0 @@
This guide uses the roles functionality provided by `basecli` to create a multi-sig wallet. It builds upon the basecoin basics and key management guides. You should have `basecoin` started with blocks streaming in, and three accounts: `rich, poor, igor` where `rich` was the account used on `basecoin init`, _and_ run `basecli init` with the appropriate flags. Review the intro guides for more information.
In this example, `rich` will create the role and send it some coins (i.e., fill the multi-sig wallet). Then, `poor` will prepare a transaction to withdraw coins, which will be approved by `igor`. Let's look at our keys:
```
basecli keys list
```
```
All keys:
igor 5E4CB7A4E729BA0A8B18DE99E21409B6D706D0F1
poor 65D406E028319289A0706E294F3B764F44EBA3CF
rich CB76F4092D1B13475272B36585EBD15D22A2848D
```
Using the `basecli query account` command, you'll see that `rich` has plenty of coins:
```
{
"height": 81,
"data": {
"coins": [
{
"denom": "mycoin",
"amount": 9007199254740992
}
],
"credit": []
}
}
```
whereas `poor` and `igor` have no coins (in fact, the chain doesn't know about them yet):
```
ERROR: Account bytes are empty for address 65D406E028319289A0706E294F3B764F44EBA3CF
```
## Create Role
This first step defines the parameters of a new role, which will have control of any coins sent to it, and only release them if the correct conditions are met. In this example, we are going to make a 2/3 multi-sig wallet. Let's look at the command and dissect it below:
```
basecli tx create-role --role=10CAFE4E --min-sigs=2 --members=5E4CB7A4E729BA0A8B18DE99E21409B6D706D0F1,65D406E028319289A0706E294F3B764F44EBA3CF,CB76F4092D1B13475272B36585EBD15D22A2848D --sequence=1 --name=rich
```
In the first part we are sending a transaction that creates a role, rather than transferring coins. The `--role` flag is the name of the role (in hex only) and must be in double quotes. The `--min-sigs` and `--members` define your multi-sig parameters. Here, we require a minimum of 2 signatures out of 3 members, but we could easily say 3 of 5 or 9 of 10, or whatever your application requires. The `--members` flag requires a comma-separated list of addresses that will be signatories on the role. Then we set the `--sequence` number for the transaction, which will start at 1 and must be incremented by 1 for every transaction from an account. Finally, we use the name of the key/account that will be used to create the role, in this case the account `rich`.
Remember that `rich`'s address was used on `basecoin init` and is included in the `--members` list. The command above will prompt for a password (which can also be piped into the command if desired) then - if executed correctly - return some data:
```
{
"check_tx": {
"code": 0,
"data": "",
"log": ""
},
"deliver_tx": {
"code": 0,
"data": "",
"log": ""
},
"hash": "4849DA762E19CE599460B9882DD42C7F19655DC1",
"height": 321
}
```
showing the block height at which the transaction was committed and its hash. A quick review of what we did: 1) created a role, essentially an account, that requires a minimum of two (2) signatures from three (3) accounts (members). And since it was the account named `rich`'s first transaction, the sequence was set to 1.
Let's look at the balance of the role that we've created:
```
basecli query account role:10CAFE4E
```
and it should be empty:
```
ERROR: Account bytes are empty for address role:10CAFE4E
```
Next, we want to send coins _to_ that role. Notice that because this is the second transaction being sent by rich, we need to increase `--sequence` to `2`:
```
basecli tx send --fee=90mycoin --amount=10000mycoin --to=role:10CAFE4E --sequence=2 --name=rich
```
We need to pay a transaction fee to the validators, in this case 90 `mycoin` to send 10000 `mycoin`. Notice that for the `--to` flag, to specify that we are sending to a role instead of an account, the `role:` prefix is added before the role. Because it's `rich`'s second transaction, we've incremented the sequence. The output will be nearly identical to the output from `create-role` above.
Now the role has coins (think of it like a bank).
Double check with:
```
basecli query account role:10CAFE4E
```
and this time you'll see the coins in the role's account:
```
{
"height": 2453,
"data": {
"coins": [
{
"denom": "mycoin",
"amount": 10000
}
],
"credit": []
}
}
```
`Poor` decides to initiate a multi-sig transaction to himself from the role's account. First, it must be prepared like so:
```
basecli tx send --amount=6000mycoin --from=role:10CAFE4E --to=65D406E028319289A0706E294F3B764F44EBA3CF --sequence=1 --assume-role=10CAFE4E --name=poor --multi --prepare=tx.json
```
you'll be prompted for `poor`'s password and there won't be any `stdout` to the terminal. Note that the address in the `--to` flag matches the address of `poor`'s account from the beginning of the tutorial. The main output is the `tx.json` file that has just been created. In the above command, the `--assume-role` flag is used to evaluate account permissions on the transaction, while the `--multi` flag is used in combination with `--prepare`, to specify the file that is prepared for a multi-sig transaction.
The `tx.json` file will look like this:
```
{
"type": "sigs/multi",
"data": {
"tx": {
"type": "chain/tx",
"data": {
"chain_id": "test_chain_id",
"expires_at": 0,
"tx": {
"type": "nonce",
"data": {
"sequence": 1,
"signers": [
{
"chain": "",
"app": "sigs",
"addr": "65D406E028319289A0706E294F3B764F44EBA3CF"
}
],
"tx": {
"type": "role/assume",
"data": {
"role": "10CAFE4E",
"tx": {
"type": "coin/send",
"data": {
"inputs": [
{
"address": {
"chain": "",
"app": "role",
"addr": "10CAFE4E"
},
"coins": [
{
"denom": "mycoin",
"amount": 6000
}
]
}
],
"outputs": [
{
"address": {
"chain": "",
"app": "sigs",
"addr": "65D406E028319289A0706E294F3B764F44EBA3CF"
},
"coins": [
{
"denom": "mycoin",
"amount": 6000
}
]
}
]
}
}
}
}
}
}
}
},
"signatures": [
{
"Sig": {
"type": "ed25519",
"data": "A38F73BF2D109015E4B0B6782C84875292D5FAA75F0E3362C9BD29B16CB15D57FDF0553205E7A33C740319397A434B7C31CBB10BE7F8270C9984C5567D2DC002"
},
"Pubkey": {
"type": "ed25519",
"data": "6ED38C7453148DD90DFC41D9339CE45BEFA5EB505FD7E93D85E71DFFDAFD9B8F"
}
}
]
}
}
```
and it is loaded by the next command.
With the transaction prepared, but not sent, we'll have `igor` sign and send the prepared transaction:
```
basecli tx --in=tx.json --name=igor
```
which will give output similar to:
```
{
"check_tx": {
"code": 0,
"data": "",
"log": ""
},
"deliver_tx": {
"code": 0,
"data": "",
"log": ""
},
"hash": "E345BDDED9517EB2CAAF5E30AFF3AB38A1172833",
"height": 2673
}
```
and voila! That's the basics for creating roles and sending multi-sig transactions. For 3 of 3, you'd add an intermediate transaction like:
```
basecli tx --in=tx.json --name=igor --prepare=tx2.json
```
before having rich sign and send the transaction. The `--prepare` flag writes files to disk rather than sending the transaction and can be used to chain together multiple transactions.
We can check the balance of the role:
```
basecli query account role:10CAFE4E
```
and get the result:
```
{
"height": 2683,
"data": {
"coins": [
{
"denom": "mycoin",
"amount": 4000
}
],
"credit": []
}
}
```
and see that `poor` now has 6000 `mycoin`:
```
basecli query account 65D406E028319289A0706E294F3B764F44EBA3CF
```
to confirm that everything worked as expected.

425
docs/ibc.rst Normal file
@ -0,0 +1,425 @@
InterBlockchain Communication with Basecoin
===========================================
One of the most exciting elements of the Cosmos Network is the
InterBlockchain Communication (IBC) protocol, which enables
interoperability across different blockchains. We implemented IBC as a
basecoin plugin, and we'll show you how to use it to send tokens across
blockchains!
Please note, this tutorial assumes you are familiar with `Basecoin
plugins </docs/guide/basecoin-plugins.md>`__, but we'll explain how IBC
works. You may also want to see `our repository of example
plugins <https://github.com/tendermint/basecoin-examples>`__.
The IBC plugin defines a new set of transactions as subtypes of the
``AppTx``. The plugin's functionality is accessed by setting the
``AppTx.Name`` field to ``"IBC"``, and setting the ``Data`` field to the
serialized IBC transaction type.
We'll demonstrate exactly how this works below.
IBC
---
Let's review the IBC protocol. The purpose of IBC is to enable one
blockchain to function as a light-client of another. Since we are using
a classical Byzantine Fault Tolerant consensus algorithm, light-client
verification is cheap and easy: all we have to do is check validator
signatures on the latest block, and verify a Merkle proof of the state.
In Tendermint, validators agree on a block before processing it. This
means that the signatures and state root for that block aren't included
until the next block. Thus, each block contains a field called
``LastCommit``, which contains the votes responsible for committing the
previous block, and a field in the block header called ``AppHash``,
which refers to the Merkle root hash of the application after processing
the transactions from the previous block. So, if we want to verify the
``AppHash`` from height H, we need the signatures from ``LastCommit`` at
height H+1. (And remember that this ``AppHash`` only contains the
results from all transactions up to and including block H-1)
Unlike Proof-of-Work, the light-client protocol does not need to
download and check all the headers in the blockchain - the client can
always jump straight to the latest header available, so long as the
validator set has not changed much. If the validator set is changing,
the client needs to track these changes, which requires downloading
headers for each block in which there is a significant change. Here, we
will assume the validator set is constant, and postpone handling
validator set changes for another time.
Now we can describe exactly how IBC works. Suppose we have two
blockchains, ``chain1`` and ``chain2``, and we want to send some data
from ``chain1`` to ``chain2``. We need to do the following:

1. Register the details (ie. chain ID and genesis configuration) of ``chain1`` on ``chain2``
2. Within ``chain1``, broadcast a transaction that creates an outgoing IBC packet destined for ``chain2``
3. Broadcast a transaction to ``chain2`` informing it of the latest state (ie. header and commit signatures) of ``chain1``
4. Post the outgoing packet from ``chain1`` to ``chain2``, including the proof that it was indeed committed on ``chain1``. Note ``chain2`` can only verify this proof because it has a recent header and commit.
Each of these steps involves a separate IBC transaction type. Let's take
them up in turn.
IBCRegisterChainTx
~~~~~~~~~~~~~~~~~~
The ``IBCRegisterChainTx`` is used to register one chain on another. It
contains the chain ID and genesis configuration of the chain to
register:
.. code:: golang

    type IBCRegisterChainTx struct {
        BlockchainGenesis
    }

    type BlockchainGenesis struct {
        ChainID string
        Genesis string
    }
This transaction should only be sent once for a given chain ID, and
successive sends will return an error.
IBCUpdateChainTx
~~~~~~~~~~~~~~~~
The ``IBCUpdateChainTx`` is used to update the state of one chain on
another. It contains the header and commit signatures for some block in
the chain:
.. code:: golang

    type IBCUpdateChainTx struct {
        Header tm.Header
        Commit tm.Commit
    }
In the future, it needs to be updated to include changes to the
validator set as well. Anyone can relay an ``IBCUpdateChainTx``, and
they only need to do so as frequently as packets are being sent or the
validator set is changing.
IBCPacketCreateTx
~~~~~~~~~~~~~~~~~
The ``IBCPacketCreateTx`` is used to create an outgoing packet on one
chain. The packet itself contains the source and destination chain IDs,
a sequence number (i.e. an integer that increments with every message
sent between this pair of chains), a packet type (e.g. coin, data,
etc.), and a payload.
.. code:: golang

    type IBCPacketCreateTx struct {
        Packet
    }

    type Packet struct {
        SrcChainID string
        DstChainID string
        Sequence   uint64
        Type       string
        Payload    []byte
    }
We have yet to define the format for the payload, so, for now, it's just
arbitrary bytes.
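Since the payload is opaque to the IBC layer, any serialization will do.
As an illustration only (basecoin itself serializes its types with
go-wire, and the payload layout below is invented for the example), this
sketch fills a ``Packet`` with a JSON-encoded coin transfer:

.. code:: golang

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Packet mirrors the struct above.
    type Packet struct {
        SrcChainID string
        DstChainID string
        Sequence   uint64
        Type       string
        Payload    []byte
    }

    // coinPayload is a hypothetical layout; the real format is not yet defined.
    type coinPayload struct {
        To     string `json:"to"`
        Denom  string `json:"denom"`
        Amount int64  `json:"amount"`
    }

    func main() {
        payload, err := json.Marshal(coinPayload{To: "some-address", Denom: "mycoin", Amount: 6000})
        if err != nil {
            panic(err)
        }
        pkt := Packet{
            SrcChainID: "test-chain-1",
            DstChainID: "test-chain-2",
            Sequence:   1, // increments with every packet between this pair of chains
            Type:       "coin",
            Payload:    payload,
        }
        fmt.Printf("%+v\n", pkt)
    }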
One way to think about this is that ``chain2`` has an account on
``chain1``. With an ``IBCPacketCreateTx`` on ``chain1``, we send funds to
that account. Then we can prove to ``chain2`` that there are funds
locked up for it in its account on ``chain1``. Those funds can only be
unlocked with corresponding IBC messages back from ``chain2`` to
``chain1`` sending the locked funds to another account on ``chain1``.
IBCPacketPostTx
~~~~~~~~~~~~~~~
The ``IBCPacketPostTx`` is used to post an outgoing packet from one
chain to another. It contains the packet and a proof that the packet was
committed into the state of the sending chain:
.. code:: golang

    type IBCPacketPostTx struct {
        FromChainID     string // The immediate source of the packet, not always Packet.SrcChainID
        FromChainHeight uint64 // The block height in which Packet was committed, to check Proof
        Packet
        Proof *merkle.IAVLProof
    }
The proof is a Merkle proof in an IAVL tree, our implementation of a
balanced, Merklized binary search tree. It contains a list of nodes in
the tree, which can be hashed together to get the Merkle root hash. This
hash must match the ``AppHash`` contained in the header at
``FromChainHeight + 1``
- note the ``+ 1`` is necessary since ``FromChainHeight`` is the height
in which the packet was committed, and the resulting state root is
not included until the next block.
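To make the height bookkeeping concrete, here is a self-contained sketch
of the check. ``SimpleProof`` is a toy stand-in for ``merkle.IAVLProof``
and ``Header`` is stripped down to the one field we need; the point is
only that the proof's root must equal the ``AppHash`` recorded one block
after the packet was committed:

.. code:: golang

    package main

    import (
        "bytes"
        "crypto/sha256"
        "fmt"
    )

    // Header is a stripped-down stand-in for tm.Header.
    type Header struct {
        Height  uint64
        AppHash []byte
    }

    // SimpleProof is a toy stand-in for merkle.IAVLProof: a leaf plus the
    // sibling hashes needed to climb to the root. The real proof also encodes
    // key paths and left/right ordering.
    type SimpleProof struct {
        Leaf     []byte
        Siblings [][]byte
    }

    // Root hashes the leaf together with each sibling in turn.
    func (p SimpleProof) Root() []byte {
        h := sha256.Sum256(p.Leaf)
        cur := h[:]
        for _, sib := range p.Siblings {
            next := sha256.Sum256(append(cur, sib...))
            cur = next[:]
        }
        return cur
    }

    // VerifyPacketProof checks a packet proof from height H against the
    // AppHash in the trusted header at height H+1.
    func VerifyPacketProof(headers map[uint64]Header, fromChainHeight uint64, proof SimpleProof) error {
        hdr, ok := headers[fromChainHeight+1]
        if !ok {
            return fmt.Errorf("no trusted header at height %d", fromChainHeight+1)
        }
        if !bytes.Equal(proof.Root(), hdr.AppHash) {
            return fmt.Errorf("proof root does not match AppHash at height %d", fromChainHeight+1)
        }
        return nil
    }

    func main() {
        // Toy usage: a one-sibling proof whose root we pretend is the AppHash.
        proof := SimpleProof{Leaf: []byte("packet-bytes"), Siblings: [][]byte{[]byte("sibling")}}
        headers := map[uint64]Header{43: {Height: 43, AppHash: proof.Root()}}
        fmt.Println(VerifyPacketProof(headers, 42, proof)) // <nil>
    }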
IBC State
~~~~~~~~~
Now that we've seen all the transaction types, let's talk about the
state. Each chain stores some IBC state in its Merkle tree. For each
chain being tracked by our chain, we store:
- Genesis configuration
- Latest state
- Headers for recent heights
We also store all incoming (ingress) and outgoing (egress) packets.
The state of a chain is updated every time an ``IBCUpdateChainTx`` is
committed. New packets are added to the egress state upon
``IBCPacketCreateTx``. New packets are added to the ingress state upon
``IBCPacketPostTx``, assuming the proof checks out.
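As a rough mental model (not the actual plugin code), the sketch below
keeps that state in memory: registration is one-shot, updates record new
app hashes, and packets accumulate in egress and ingress queues, here as
raw bytes keyed by the counterparty chain ID:

.. code:: golang

    package main

    import "fmt"

    // chainState is a simplified stand-in for the per-chain IBC state:
    // genesis configuration, latest known height, and recent app hashes.
    type chainState struct {
        genesis   string
        latest    uint64
        appHashes map[uint64][]byte
    }

    type ibcState struct {
        chains  map[string]*chainState
        egress  map[string][][]byte // keyed by destination chain ID
        ingress map[string][][]byte // keyed by source chain ID
    }

    func newIBCState() *ibcState {
        return &ibcState{
            chains:  map[string]*chainState{},
            egress:  map[string][][]byte{},
            ingress: map[string][][]byte{},
        }
    }

    // registerChain errors on duplicates, mirroring IBCRegisterChainTx.
    func (s *ibcState) registerChain(chainID, genesis string) error {
        if _, ok := s.chains[chainID]; ok {
            return fmt.Errorf("chain %q already registered", chainID)
        }
        s.chains[chainID] = &chainState{genesis: genesis, appHashes: map[uint64][]byte{}}
        return nil
    }

    // updateChain records a newer header, mirroring IBCUpdateChainTx.
    func (s *ibcState) updateChain(chainID string, height uint64, appHash []byte) error {
        c, ok := s.chains[chainID]
        if !ok {
            return fmt.Errorf("unknown chain %q", chainID)
        }
        c.latest = height
        c.appHashes[height] = appHash
        return nil
    }

    // createPacket queues an outgoing packet (IBCPacketCreateTx); postPacket
    // accepts an incoming one once its proof has been checked (IBCPacketPostTx).
    func (s *ibcState) createPacket(dstChainID string, packet []byte) {
        s.egress[dstChainID] = append(s.egress[dstChainID], packet)
    }

    func (s *ibcState) postPacket(srcChainID string, packet []byte, proofOK bool) error {
        if !proofOK {
            return fmt.Errorf("bad proof for packet from %q", srcChainID)
        }
        s.ingress[srcChainID] = append(s.ingress[srcChainID], packet)
        return nil
    }

    func main() {
        s := newIBCState()
        fmt.Println(s.registerChain("test-chain-1", "genesis")) // <nil>
        fmt.Println(s.registerChain("test-chain-1", "genesis")) // already registered
        s.createPacket("test-chain-2", []byte("hello"))
        fmt.Println(len(s.egress["test-chain-2"])) // 1
    }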
Merkle Queries
--------------
The Basecoin application uses a single Merkle tree that is shared across
all its state, including the built-in accounts state and all plugin
state. For this reason, it's important to use explicit key names and/or
hashes to ensure there are no collisions.
We can query the Merkle tree using the ABCI Query method. If we pass in
the correct key, it will return the corresponding value, as well as a
proof that the key and value are contained in the Merkle tree.
The results of a query can thus be used as proof in an
``IBCPacketPostTx``.
Relay
-----
While we need all these packet types internally to keep track of all the
proofs on both chains in a secure manner, for the normal work-flow, we
can run a relay node that handles the cross-chain interaction.
In this case, there are only two steps. First ``basecoin relay init``,
which must be run once to register each chain with the other one, and
make sure they are ready to send and receive. And then
``basecoin relay start``, which is a long-running process polling the
queue on each side, and relaying all new messages to the other chain.
This requires that the relay has access to accounts with some funds on
both chains to pay for all the IBC packets it will be forwarding.
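Conceptually the relay is just a polling loop. In the sketch below the
``Chain`` interface and its methods are hypothetical stand-ins for the
RPC calls the real relay makes, but the shape of the work is the same:
watch each egress queue, refresh the counterparty's headers, then post
the packets (proof handling omitted):

.. code:: golang

    // Package relaysketch is a toy model of what `basecoin relay start` does.
    package relaysketch

    import (
        "fmt"
        "time"
    )

    // Chain is a hypothetical stand-in for the node RPC surface the relay needs.
    type Chain interface {
        ID() string
        LatestHeader() []byte
        EgressSince(dstChainID string, afterSeq uint64) [][]byte
        SubmitUpdate(header []byte) error                  // i.e. an IBCUpdateChainTx
        SubmitPost(srcChainID string, packet []byte) error // i.e. an IBCPacketPostTx
    }

    // relayOnce forwards all new packets from src to dst, refreshing dst's
    // view of src's headers first so the packet proofs can be verified.
    func relayOnce(src, dst Chain, nextSeq *uint64) error {
        packets := src.EgressSince(dst.ID(), *nextSeq)
        if len(packets) == 0 {
            return nil
        }
        if err := dst.SubmitUpdate(src.LatestHeader()); err != nil {
            return err
        }
        for _, p := range packets {
            if err := dst.SubmitPost(src.ID(), p); err != nil {
                return err
            }
            *nextSeq++
        }
        return nil
    }

    // Run polls both directions forever.
    func Run(chain1, chain2 Chain) {
        var seq1to2, seq2to1 uint64
        for {
            if err := relayOnce(chain1, chain2, &seq1to2); err != nil {
                fmt.Println("relay 1->2:", err)
            }
            if err := relayOnce(chain2, chain1, &seq2to1); err != nil {
                fmt.Println("relay 2->1:", err)
            }
            time.Sleep(time.Second)
        }
    }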
Try it out
----------
Now that we have all the background knowledge, let's actually walk
through the tutorial.
Make sure you have installed `basecoin and
basecli </docs/guide/install.md>`__.
Basecoin is a framework for creating new cryptocurrency applications. It
comes with an ``IBC`` plugin enabled by default.
You will also want to install
`jq <https://stedolan.github.io/jq/>`__ for handling JSON at the command
line.
If you have any trouble with this, you can also look at the `test
scripts </tests/cli/ibc.sh>`__ or just run ``make test_cli`` in the basecoin
repo. Otherwise, open up 5 (yes 5!) terminal tabs....
Preliminaries
~~~~~~~~~~~~~
::
# first, clean up any old garbage for a fresh slate...
rm -rf ~/.ibcdemo/
Let's start by setting up some environment variables and aliases:
::
export BCHOME1_CLIENT=~/.ibcdemo/chain1/client
export BCHOME1_SERVER=~/.ibcdemo/chain1/server
export BCHOME2_CLIENT=~/.ibcdemo/chain2/client
export BCHOME2_SERVER=~/.ibcdemo/chain2/server
alias basecli1="basecli --home $BCHOME1_CLIENT"
alias basecli2="basecli --home $BCHOME2_CLIENT"
alias basecoin1="basecoin --home $BCHOME1_SERVER"
alias basecoin2="basecoin --home $BCHOME2_SERVER"
This will give us some new commands to use instead of raw ``basecli``
and ``basecoin`` to ensure we're using the right configuration for the
chain we want to talk to.
We also want to set some chain IDs:
::
export CHAINID1="test-chain-1"
export CHAINID2="test-chain-2"
And since we will run two different chains on one machine, we need to
maintain different sets of ports:
::
export PORT_PREFIX1=1234
export PORT_PREFIX2=2345
export RPC_PORT1=${PORT_PREFIX1}7
export RPC_PORT2=${PORT_PREFIX2}7
Setup Chain 1
~~~~~~~~~~~~~
Now, let's create some keys that we can use for accounts on
test-chain-1:
::
basecli1 keys new money
basecli1 keys new gotnone
export MONEY=$(basecli1 keys get money | awk '{print $2}')
export GOTNONE=$(basecli1 keys get gotnone | awk '{print $2}')
and create an initial configuration giving lots of coins to the $MONEY
key:
::
basecoin1 init --chain-id $CHAINID1 $MONEY
Now start basecoin:
::
sed -ie "s/4665/$PORT_PREFIX1/" $BCHOME1_SERVER/config.toml
basecoin1 start &> basecoin1.log &
Note the ``sed`` command to replace the ports in the config file. You
can follow the logs with ``tail -f basecoin1.log``
Now we can attach the client to the chain and verify the state. The
first account should have money, the second none:
::
basecli1 init --node=tcp://localhost:${RPC_PORT1} --genesis=${BCHOME1_SERVER}/genesis.json
basecli1 query account $MONEY
basecli1 query account $GOTNONE
Setup Chain 2
~~~~~~~~~~~~~
This is the same as above, except with ``basecli2``, ``basecoin2``, and
``$CHAINID2``. We will also need to change the ports, since we're
running another chain on the same local machine.
Let's create new keys for test-chain-2:
::
basecli2 keys new moremoney
basecli2 keys new broke
MOREMONEY=$(basecli2 keys get moremoney | awk '{print $2}')
BROKE=$(basecli2 keys get broke | awk '{print $2}')
And prepare the genesis block, and start the server:
::
basecoin2 init --chain-id $CHAINID2 $(basecli2 keys get moremoney | awk '{print $2}')
sed -ie "s/4665/$PORT_PREFIX2/" $BCHOME2_SERVER/config.toml
basecoin2 start &> basecoin2.log &
Now attach the client to the chain and verify the state. The first
account should have money, the second none:
::
basecli2 init --node=tcp://localhost:${RPC_PORT2} --genesis=${BCHOME2_SERVER}/genesis.json
basecli2 query account $MOREMONEY
basecli2 query account $BROKE
Connect these chains
~~~~~~~~~~~~~~~~~~~~
OK! So we have two chains running on your local machine, with different
keys on each. Let's hook them up together by starting a relay process to
forward messages from one chain to the other.
The relay account needs some money in it to pay for the ibc messages, so
for now, we have to transfer some cash from the rich accounts before we
start the actual relay.
::
# note that this key.json file is a hardcoded demo for all chains, this will
# be updated in a future release
RELAY_KEY=$BCHOME1_SERVER/key.json
RELAY_ADDR=$(cat $RELAY_KEY | jq .address | tr -d \")
basecli1 tx send --amount=100000mycoin --sequence=1 --to=$RELAY_ADDR --name=money
basecli1 query account $RELAY_ADDR
basecli2 tx send --amount=100000mycoin --sequence=1 --to=$RELAY_ADDR --name=moremoney
basecli2 query account $RELAY_ADDR
Now we can start the relay process.
::
basecoin relay init --chain1-id=$CHAINID1 --chain2-id=$CHAINID2 \
--chain1-addr=tcp://localhost:${RPC_PORT1} --chain2-addr=tcp://localhost:${RPC_PORT2} \
--genesis1=${BCHOME1_SERVER}/genesis.json --genesis2=${BCHOME2_SERVER}/genesis.json \
--from=$RELAY_KEY
basecoin relay start --chain1-id=$CHAINID1 --chain2-id=$CHAINID2 \
--chain1-addr=tcp://localhost:${RPC_PORT1} --chain2-addr=tcp://localhost:${RPC_PORT2} \
--from=$RELAY_KEY &> relay.log &
This should start up the relay, and assuming no error messages came out,
the two chains are now fully connected over IBC. Let's use this to send
our first tx across the chains...
Sending cross-chain payments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The hard part is over: we've set up two blockchains, a few private keys,
and a secure relay between them. Now we can enjoy the fruits of our
labor...
::
# Here's an empty account on test-chain-2
basecli2 query account $BROKE
::
# Let's send some funds from test-chain-1
basecli1 tx send --amount=12345mycoin --sequence=2 --to=test-chain-2/$BROKE --name=money
::
# give it time to arrive...
sleep 2
# now you should see 12345 coins!
basecli2 query account $BROKE
You're no longer broke! Cool, huh? Now have fun exploring and sending
coins across the chains, and making as many accounts as you want.
Conclusion
----------
In this tutorial we explained how IBC works, and demonstrated how to use
it to communicate between two chains. We did the simplest communication
possible: a one way transfer of data from chain1 to chain2. The most
important part was that we updated chain2 with the latest state (i.e.
header and commit) of chain1, and then were able to post a proof to
chain2 that a packet was committed to the outgoing state of chain1.
In a future tutorial, we will demonstrate how to use IBC to actually
transfer tokens between two blockchains, but we'll do it with real
testnets deployed across multiple nodes on the network. Stay tuned!

35
docs/install.rst Normal file
@ -0,0 +1,35 @@
Install
=======
If you aren't used to compiling Go programs and just want the released
version of the code, please head to our
`downloads <https://tendermint.com/download>`__ page to get a
pre-compiled binary for your platform.
Usually, Cosmos SDK can be installed like a normal Go program:
::
go get -u github.com/cosmos/cosmos-sdk/cmd/...
If the dependencies have been updated with breaking changes, or if
another branch is required, ``glide`` is used for dependency management.
Thus, assuming you've already run ``go get`` or otherwise cloned the
repo, the correct way to install is:
::
cd $GOPATH/src/github.com/tendermint/basecoin
git pull origin master
make all
This will create the ``basecoin`` binary in ``$GOPATH/bin``.
``make all`` implies ``make get_vendor_deps`` and uses ``glide`` to
install the correct version of all dependencies. It also tests the code,
including some cli tests to make sure your binary behaves properly.
If you need another branch, make sure to run ``git checkout <branch>``
before ``make all``. And if you switch branches a lot, especially
touching other tendermint repos, you may need to ``make fresh``
sometimes so glide doesn't get confused with all the branches and
versions lying around.

204
docs/key-management.rst Normal file
@ -0,0 +1,204 @@
Key Management
==============
Here we explain a bit how to work with your keys, using the
``basecli keys`` subcommand.
**Note:** This keys tooling is not considered production ready and is
for dev only.
We'll look at what you can do using the six sub-commands of
``basecli keys``:
::
new
list
get
delete
recover
update
Create keys
-----------
``basecli keys new`` has two inputs (name, password) and two outputs
(address, seed).
First, we name our key:
.. code:: shelldown
basecli keys new alice
This will prompt for a password (10 character minimum), which must be
re-typed. You'll see:
::
Enter a passphrase:
Repeat the passphrase:
alice A159C96AE911F68913E715ED889D211C02EC7D70
**Important** write this seed phrase in a safe place.
It is the only way to recover your account if you ever forget your password.
pelican amateur empower assist awkward claim brave process cliff save album pigeon intact asset
which shows the address of your key named ``alice``, and its recovery
seed. We'll use these shortly.
Adding the ``--output json`` flag to the above command would give this
output:
::
Enter a passphrase:
Repeat the passphrase:
{
"key": {
"name": "alice",
"address": "A159C96AE911F68913E715ED889D211C02EC7D70",
"pubkey": {
"type": "ed25519",
"data": "4BF22554B0F0BF2181187E5E5456E3BF3D96DB4C416A91F07F03A9C36F712B77"
}
},
"seed": "pelican amateur empower assist awkward claim brave process cliff save album pigeon intact asset"
}
To avoid the prompt, it's possible to pipe the password into the
command, e.g.:
::
echo 1234567890 | basecli keys new fred --output json
After trying each of the three ways to create a key, let's look at them. Use:
::
basecli keys list
to list all the keys:
::
All keys:
alice 6FEA9C99E2565B44FCC3C539A293A1378CDA7609
bob A159C96AE911F68913E715ED889D211C02EC7D70
charlie 784D623E0C15DE79043C126FA6449B68311339E5
Again, we can use the ``--output json`` flag:
::
[
{
"name": "alice",
"address": "6FEA9C99E2565B44FCC3C539A293A1378CDA7609",
"pubkey": {
"type": "ed25519",
"data": "878B297F1E863CC30CAD71E04A8B3C23DB71C18F449F39E35B954EDB2276D32D"
}
},
{
"name": "bob",
"address": "A159C96AE911F68913E715ED889D211C02EC7D70",
"pubkey": {
"type": "ed25519",
"data": "2127CAAB96C08E3042C5B33C8B5A820079AAE8DD50642DCFCC1E8B74821B2BB9"
}
},
{
"name": "charlie",
"address": "784D623E0C15DE79043C126FA6449B68311339E5",
"pubkey": {
"type": "ed25519",
"data": "4BF22554B0F0BF2181187E5E5456E3BF3D96DB4C416A91F07F03A9C36F712B77"
}
},
]
to get machine readable output.
If we want information about one specific key, then:
::
basecli keys get charlie --output json
will, for example, return the info for only the "charlie" key returned
from the previous ``basecli keys list`` command.
The keys tooling can support different types of keys with a flag:
::
basecli keys new bit --type secp256k1
and you'll see the difference in the ``"type":`` field from
``basecli keys get``.
Before moving on, let's set an environment variable to make
``--output json`` the default.
Either run or put in your ``~/.bash_profile`` the following line:
::
export BC_OUTPUT=json
Recover a key
-------------
Let's say, for whatever reason, you lose a key or forget the password.
On creation, you were given a seed. We'll use it to recover a lost key.
First, let's simulate the loss by deleting a key:
::
basecli keys delete alice
which prompts for your current password, now rendered obsolete, and
gives a warning message. The only way you can recover your key now is
using the 12 word seed given on initial creation of the key. Let's try
it:
::
basecli keys recover alice-again
which prompts for a new password then the seed:
::
Enter the new passphrase:
Enter your recovery seed phrase:
strike alien praise vendor term left market practice junior better deputy divert front calm
alice-again CBF5D9CE6DDCC32806162979495D07B851C53451
and voila! You've recovered your key. Note that the seed can be typed
out, pasted in, or piped into the command alongside the password.
To change the password of a key, we can:
::
basecli keys update alice-again
and follow the prompts.
That covers most features of the keys sub command.
.. raw:: html
<!-- use later in a test script, or more advance tutorial?
SEED=$(echo 1234567890 | basecli keys new fred -o json | jq .seed | tr -d \")
echo $SEED
(echo qwertyuiop; echo $SEED stamp) | basecli keys recover oops
(echo qwertyuiop; echo $SEED) | basecli keys recover derf
basecli keys get fred -o json
basecli keys get derf -o json
```
-->

@ -1,103 +0,0 @@
# Quark Overview
The quark middleware design optimizes flexibility and security. The framework
is designed around a modular execution stack which allows applications to mix
and match modular elements as desired. Alongside this, all modules are permissioned
and sandboxed to isolate modules for greater application security.
For more explanation please see the [standard
library](stdlib.md)
and
[glossary](glossary.md)
documentation.
For more interconnected schematics see these
[framework](graphics/overview-framework.png)
and
[security](graphics/overview-security.png)
overviews.
## Framework Overview
### Transactions (tx)
Each transaction passes through the middleware stack which can be defined
uniquely by each application. From the multiple layers of transaction, each
middleware may strip off one level, like an onion. As such, the transaction
must be constructed to mirror the execution stack, and each middleware module
should allow an arbitrary transaction to be embedded for the next layer in
the stack.
<img src="graphics/tx.png" width=250>
### Execution Stack
Middleware components allow for code reusability and integrability. A standard
set of middleware are provided and can be mix-and-matched with custom
middleware. Some of the [standard library](stdlib.md)
middlewares provided in this package include:
- Logging
- Recovery
- Signatures
- Chain
- Nonce
- Fees
- Roles
- Inter-Blockchain-Communication (IBC)
As a part of stack execution the state space provided to each middleware is
isolated (see [Data Store](overview.md#data-store)). When executing the stack,
state-recovery checkpoints can be assigned for stack execution of `CheckTx`
or `DeliverTx`. This means that all state changes will be reverted to the
checkpoint state on failure when either being run as a part of `CheckTx`
or `DeliverTx`. Example usage of the checkpoints is when we may want to deduct
a fee even if the end business logic fails; under this situation we would add
the `DeliverTx` checkpoint after the fee middleware but before the business
logic. This diagram displays a typical process flow through an execution stack.
<img src="graphics/middleware.png" width=500>
### Dispatcher
The dispatcher handler aims to allow for reusable business logic. As a
transaction is passed to the end handler, the dispatcher routes the logic to
the correct module. To use the dispatcher tool, all transaction types must
first be registered with the dispatcher. Once registered the middleware stack
or any other handler can call the dispatcher to execute a transaction.
Similarly to the execution stack, when executing a transaction the dispatcher
isolates the state space available to the designated module (see [Data
Store](overview.md#data-store)).
<img src="graphics/dispatcher.png" width=600>
## Security Overview
### Permission
Each application is run in a sandbox to isolate security risks. When
interfacing between applications, if one of those applications is compromised
the entire network should still be secure. This is achieved through actor
permissioning whereby each chain, account, or application can provide a
designated permission for the transaction context to perform a specific action.
Context is passed through the middleware and dispatcher, allowing one to add
permissions on this app-space, and check current permissions.
<img src="graphics/permission.png" width=500>
### Data Store
The entire merkle tree can access all data. When we call a module (or
middleware), we give them access to a subtree corresponding to their app. This
is achieved through the use of a unique prefix assigned to each module. From the
module's perspective it is no different; the module need not have regard for
the prefix, as it is assigned outside of the module's scope. For example, if a
module named `foo` wanted to write to the store it could save records under the
key `bar`, however, the dispatcher would register that record in the persistent
state under `foo/bar`. Next time the `foo` app was called that record would be
accessible to it under the assigned key `bar`. This effectively makes app
prefixing invisible to each module while preventing each module from affecting
each other module. Under this model no two registered modules are permitted to
have the same namespace.
<img src="graphics/datastore.png" width=500>

97
docs/overview.rst Normal file
@ -0,0 +1,97 @@
Quark Overview
==============
The quark middleware design optimizes flexibility and security. The
framework is designed around a modular execution stack which allows
applications to mix and match modular elements as desired. Alongside this,
all modules are permissioned and sandboxed to isolate modules for
greater application security.
For more explanation please see the `standard library <stdlib.md>`__ and
`glossary <glossary.md>`__ documentation.
For more interconnected schematics see these
`framework <graphics/overview-framework.png>`__ and
`security <graphics/overview-security.png>`__ overviews.
Framework Overview
------------------
Transactions (tx)
~~~~~~~~~~~~~~~~~
Each transaction passes through the middleware stack which can be
defined uniquely by each application. From the multiple layers of
transaction, each middleware may strip off one level, like an onion. As
such, the transaction must be constructed to mirror the execution stack,
and each middleware module should allow an arbitrary transaction to be
embedded for the next layer in the stack.
Execution Stack
~~~~~~~~~~~~~~~
Middleware components allow for code reusability and integrability. A
standard set of middleware are provided and can be mix-and-matched with
custom middleware. Some of the `standard library <stdlib.md>`__
middlewares provided in this package include:

- Logging
- Recovery
- Signatures
- Chain
- Nonce
- Fees
- Roles
- Inter-Blockchain-Communication (IBC)
As a part of stack execution the state space provided to each middleware
is isolated (see `Data Store <overview.md#data-store>`__). When
executing the stack, state-recovery checkpoints can be assigned for
stack execution of ``CheckTx`` or ``DeliverTx``. This means that all
state changes will be reverted to the checkpoint state on failure when
either being run as a part of ``CheckTx`` or ``DeliverTx``. Example
usage of the checkpoints is when we may want to deduct a fee even if the
end business logic fails; under this situation we would add the
``DeliverTx`` checkpoint after the fee middleware but before the
business logic. This diagram displays a typical process flow through an
execution stack.
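To illustrate the checkpoint idea with the fee example (a self-contained
sketch, not the actual ``stack`` package API), the handler below deducts
the fee, takes a checkpoint, and rolls back only the business logic's
writes on failure:

.. code:: golang

    package main

    import (
        "errors"
        "fmt"
    )

    // store is a toy key-value state with checkpoint/rollback, standing in
    // for the state-recovery checkpoints described above.
    type store struct{ data map[string]int }

    func (s *store) checkpoint() map[string]int {
        cp := map[string]int{}
        for k, v := range s.data {
            cp[k] = v
        }
        return cp
    }

    func (s *store) rollback(cp map[string]int) { s.data = cp }

    type handler func(s *store) error

    // feeThenBusiness deducts the fee, takes a checkpoint, then runs the
    // business logic; on failure only the business logic's writes are
    // reverted, so the fee is still charged.
    func feeThenBusiness(fee int, payer string, business handler) handler {
        return func(s *store) error {
            if s.data[payer] < fee {
                return errors.New("insufficient funds for fee")
            }
            s.data[payer] -= fee
            cp := s.checkpoint()
            if err := business(s); err != nil {
                s.rollback(cp)
                return err
            }
            return nil
        }
    }

    func main() {
        s := &store{data: map[string]int{"rich": 100}}
        h := feeThenBusiness(10, "rich", func(s *store) error {
            s.data["rich"] -= 50
            return errors.New("business logic failed")
        })
        err := h(s)
        fmt.Println(err, s.data["rich"]) // business logic failed 90 (fee kept, transfer reverted)
    }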
Dispatcher
~~~~~~~~~~
The dispatcher handler aims to allow for reusable business logic. As a
transaction is passed to the end handler, the dispatcher routes the
logic to the correct module. To use the dispatcher tool, all transaction
types must first be registered with the dispatcher. Once registered the
middleware stack or any other handler can call the dispatcher to execute
a transaction. Similarly to the execution stack, when executing a
transaction the dispatcher isolates the state space available to the
designated module (see `Data Store <overview.md#data-store>`__).
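A minimal sketch of that routing (the real dispatcher also hands each
module an isolated, prefixed store, as sketched in the Data Store
section below):

.. code:: golang

    package main

    import (
        "errors"
        "fmt"
    )

    // Tx carries a type tag the dispatcher routes on; the payload is left abstract.
    type Tx struct {
        Type string
        Data interface{}
    }

    type Handler func(tx Tx) error

    // Dispatcher routes each transaction to the module registered for its type.
    type Dispatcher struct{ routes map[string]Handler }

    func NewDispatcher() *Dispatcher { return &Dispatcher{routes: map[string]Handler{}} }

    func (d *Dispatcher) Register(txType string, h Handler) { d.routes[txType] = h }

    func (d *Dispatcher) Dispatch(tx Tx) error {
        h, ok := d.routes[tx.Type]
        if !ok {
            return errors.New("unregistered tx type: " + tx.Type)
        }
        return h(tx)
    }

    func main() {
        d := NewDispatcher()
        d.Register("coin/send", func(tx Tx) error { fmt.Println("sending coins"); return nil })
        fmt.Println(d.Dispatch(Tx{Type: "coin/send"}))   // sending coins, then <nil>
        fmt.Println(d.Dispatch(Tx{Type: "role/create"})) // unregistered tx type: role/create
    }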
Security Overview
-----------------
Permission
~~~~~~~~~~
Each application is run in a sandbox to isolate security risks. When
interfacing between applications, if one of those applications is
compromised the entire network should still be secure. This is achieved
through actor permissioning whereby each chain, account, or application
can provide a designated permission for the transaction context to
perform a specific action.
Context is passed through the middleware and dispatcher, allowing one to
add permissions on this app-space, and check current permissions.
Data Store
~~~~~~~~~~
The entire merkle tree can access all data. When we call a module (or
middleware), we give them access to a subtree corresponding to their
app. This is achieved through the use of a unique prefix assigned to each
module. From the module's perspective it is no different; the module
need not have regard for the prefix, as it is assigned outside of the
module's scope. For example, if a module named ``foo`` wanted to write to
the store it could save records under the key ``bar``, however, the
dispatcher would register that record in the persistent state under
``foo/bar``. Next time the ``foo`` app was called that record would be
accessible to it under the assigned key ``bar``. This effectively makes
app prefixing invisible to each module while preventing each module from
affecting each other module. Under this model no two registered modules
are permitted to have the same namespace.
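A minimal sketch of that prefixing, with a plain map standing in for the
shared Merkle tree:

.. code:: golang

    package main

    import "fmt"

    // kvStore is the shared state; prefixStore gives a module a view of its
    // own subtree by silently prepending "<module>/" to every key.
    type kvStore map[string]string

    type prefixStore struct {
        prefix string
        store  kvStore
    }

    func (p prefixStore) Set(key, value string) { p.store[p.prefix+"/"+key] = value }
    func (p prefixStore) Get(key string) string { return p.store[p.prefix+"/"+key] }

    func main() {
        shared := kvStore{}

        foo := prefixStore{prefix: "foo", store: shared}
        bar := prefixStore{prefix: "bar", store: shared}

        foo.Set("bar", "42")           // module foo writes under its own key "bar"
        fmt.Println(foo.Get("bar"))    // 42
        fmt.Println(bar.Get("bar"))    // "" - module bar cannot see foo's record
        fmt.Println(shared["foo/bar"]) // 42 - persisted under "foo/bar"
    }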

@ -0,0 +1,322 @@
This guide uses the roles functionality provided by ``basecli`` to
create a multi-sig wallet. It builds upon the basecoin basics and key
management guides. You should have ``basecoin`` started with blocks
streaming in, and three accounts: ``rich, poor, igor`` where ``rich``
was the account used on ``basecoin init``, *and* run ``basecli init``
with the appropriate flags. Review the intro guides for more
information.
In this example, ``rich`` will create the role and send it some coins
(i.e., fill the multi-sig wallet). Then, ``poor`` will prepare a
transaction to withdraw coins, which will be approved by ``igor``. Let's
look at our keys:
::
basecli keys list
::
All keys:
igor 5E4CB7A4E729BA0A8B18DE99E21409B6D706D0F1
poor 65D406E028319289A0706E294F3B764F44EBA3CF
rich CB76F4092D1B13475272B36585EBD15D22A2848D
Using the ``basecli query account`` command, you'll see that ``rich``
has plenty of coins:
::
{
"height": 81,
"data": {
"coins": [
{
"denom": "mycoin",
"amount": 9007199254740992
}
],
"credit": []
}
}
whereas ``poor`` and ``igor`` have no coins (in fact, the chain doesn't
know about them yet):
::
ERROR: Account bytes are empty for address 65D406E028319289A0706E294F3B764F44EBA3CF
Create Role
-----------
This first step defines the parameters of a new role, which will have
control of any coins sent to it, and only release them if correct
conditions are met. In this example, we are going to make a 2/3
multi-sig wallet. Let's look at the command and dissect it below:
::
basecli tx create-role --role=10CAFE4E --min-sigs=2 --members=5E4CB7A4E729BA0A8B18DE99E21409B6D706D0F1,65D406E028319289A0706E294F3B764F44EBA3CF,CB76F4092D1B13475272B36585EBD15D22A2848D --sequence=1 --name=rich
In the first part we are sending a transaction that creates a role,
rather than transferring coins. The ``--role`` flag is the name of the
role (in hex only) and must be in double quotes. The ``--min-sigs`` and
``--members`` define your multi-sig parameters. Here, we require a
minimum of 2 signatures out of 3 members but we could easily say 3 of 5
or 9 of 10, or whatever your application requires. The ``--members``
flag requires a comma-separated list of addresses that will be
signatories on the role. Then we set the ``--sequence`` number for the
transaction, which will start at 1 and must be incremented by 1 for
every transaction from an account. Finally, we use the name of the
key/account that will be used to create the role, in this case the
account ``rich``.
Remember that ``rich``'s address was used on ``basecoin init`` and is
included in the ``--members`` list. The command above will prompt for a
password (which can also be piped into the command if desired) then - if
executed correctly - return some data:
::
{
"check_tx": {
"code": 0,
"data": "",
"log": ""
},
"deliver_tx": {
"code": 0,
"data": "",
"log": ""
},
"hash": "4849DA762E19CE599460B9882DD42C7F19655DC1",
"height": 321
}
showing the block height at which the transaction was committed and its
hash. A quick review of what we did: 1) created a role, essentially an
account, that requires a minimum of two (2) signatures from three (3)
accounts (members). And since it was the account named ``rich``'s first
transaction, the sequence was set to 1.
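Conceptually, the condition the role enforces before releasing its coins
is just a count of distinct member signatures compared against
``--min-sigs``. A toy sketch (not the actual module code, and using key
names instead of addresses for readability):

.. code:: golang

    package main

    import "fmt"

    // role mirrors the parameters passed to `basecli tx create-role`.
    type role struct {
        minSigs int
        members []string
    }

    // authorized reports whether enough distinct members have signed.
    func (r role) authorized(signers []string) bool {
        isMember := map[string]bool{}
        for _, m := range r.members {
            isMember[m] = true
        }
        seen := map[string]bool{}
        count := 0
        for _, s := range signers {
            if isMember[s] && !seen[s] {
                seen[s] = true
                count++
            }
        }
        return count >= r.minSigs
    }

    func main() {
        r := role{minSigs: 2, members: []string{"igor", "poor", "rich"}}
        fmt.Println(r.authorized([]string{"poor"}))         // false
        fmt.Println(r.authorized([]string{"poor", "igor"})) // true
    }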
Let's look at the balance of the role that we've created:
::
basecli query account role:10CAFE4E
and it should be empty:
::
ERROR: Account bytes are empty for address role:10CAFE4E
Next, we want to send coins *to* that role. Notice that because this is
the second transaction being sent by rich, we need to increase
``--sequence`` to ``2``:
::
basecli tx send --fee=90mycoin --amount=10000mycoin --to=role:10CAFE4E --sequence=2 --name=rich
We need to pay a transaction fee to the validators, in this case 90
``mycoin`` to send 10000 ``mycoin``. Notice that for the ``--to`` flag,
to specify that we are sending to a role instead of an account, the
``role:`` prefix is added before the role. Because it's ``rich``'s
second transaction, we've incremented the sequence. The output will be
nearly identical to the output from ``create-role`` above.
Now the role has coins (think of it like a bank).
Double check with:
::
basecli query account role:10CAFE4E
and this time you'll see the coins in the role's account:
::
{
"height": 2453,
"data": {
"coins": [
{
"denom": "mycoin",
"amount": 10000
}
],
"credit": []
}
}
``Poor`` decides to initiate a multi-sig transaction to himself from the
role's account. First, it must be prepared like so:
::
basecli tx send --amount=6000mycoin --from=role:10CAFE4E --to=65D406E028319289A0706E294F3B764F44EBA3CF --sequence=1 --assume-role=10CAFE4E --name=poor --multi --prepare=tx.json
you'll be prompted for ``poor``'s password and there won't be any
``stdout`` to the terminal. Note that the address in the ``--to`` flag
matches the address of ``poor``'s account from the beginning of the
tutorial. The main output is the ``tx.json`` file that has just been
created. In the above command, the ``--assume-role`` flag is used to
evaluate account permissions on the transaction, while the ``--multi``
flag is used in combination with ``--prepare``, to specify the file that
is prepared for a multi-sig transaction.
The ``tx.json`` file will look like this:
::
{
"type": "sigs/multi",
"data": {
"tx": {
"type": "chain/tx",
"data": {
"chain_id": "test_chain_id",
"expires_at": 0,
"tx": {
"type": "nonce",
"data": {
"sequence": 1,
"signers": [
{
"chain": "",
"app": "sigs",
"addr": "65D406E028319289A0706E294F3B764F44EBA3CF"
}
],
"tx": {
"type": "role/assume",
"data": {
"role": "10CAFE4E",
"tx": {
"type": "coin/send",
"data": {
"inputs": [
{
"address": {
"chain": "",
"app": "role",
"addr": "10CAFE4E"
},
"coins": [
{
"denom": "mycoin",
"amount": 6000
}
]
}
],
"outputs": [
{
"address": {
"chain": "",
"app": "sigs",
"addr": "65D406E028319289A0706E294F3B764F44EBA3CF"
},
"coins": [
{
"denom": "mycoin",
"amount": 6000
}
]
}
]
}
}
}
}
}
}
}
},
"signatures": [
{
"Sig": {
"type": "ed25519",
"data": "A38F73BF2D109015E4B0B6782C84875292D5FAA75F0E3362C9BD29B16CB15D57FDF0553205E7A33C740319397A434B7C31CBB10BE7F8270C9984C5567D2DC002"
},
"Pubkey": {
"type": "ed25519",
"data": "6ED38C7453148DD90DFC41D9339CE45BEFA5EB505FD7E93D85E71DFFDAFD9B8F"
}
}
]
}
}
and it is loaded by the next command.
With the transaction prepared, but not sent, we'll have ``igor`` sign
and send the prepared transaction:
::
basecli tx --in=tx.json --name=igor
which will give output similar to:
::
{
"check_tx": {
"code": 0,
"data": "",
"log": ""
},
"deliver_tx": {
"code": 0,
"data": "",
"log": ""
},
"hash": "E345BDDED9517EB2CAAF5E30AFF3AB38A1172833",
"height": 2673
}
and voila! That's the basics for creating roles and sending multi-sig
transactions. For 3 of 3, you'd add an intermediate transaction like:
::
basecli tx --in=tx.json --name=igor --prepare=tx2.json
before having rich sign and send the transaction. The ``--prepare`` flag
writes files to disk rather than sending the transaction and can be used
to chain together multiple transactions.
We can check the balance of the role:
::
basecli query account role:10CAFE4E
and get the result:
::
{
"height": 2683,
"data": {
"coins": [
{
"denom": "mycoin",
"amount": 4000
}
],
"credit": []
}
}
and see that ``poor`` now has 6000 ``mycoin``:
::
basecli query account 65D406E028319289A0706E294F3B764F44EBA3CF
to confirm that everything worked as expected.

@ -1,128 +0,0 @@
# Standard Library
The Cosmos-SDK comes bundled with a number of standard modules that
provide common functionality useful across a wide variety of applications.
See examples below. It is recommended to investigate if desired
functionality is already provided before developing new modules.
## Basic Middleware
### Logging
`modules.base.Logger` is a middleware that records basic info on `CheckTx`,
`DeliverTx`, and `SetOption`, along with timing in microseconds. It can be
installed standard at the top of all middleware stacks, or replaced with your
own middleware if you want to record custom information with each request.
### Recovery
To avoid accidental panics (e.g. bad go-wire decoding) killing the ABCI app,
wrap the stack with `stack.Recovery`, which catches all panics and returns
them as errors, so they can be handled normally.
### Signatures
The first layer of the transaction contains the signatures to authorize it.
This is then verified by `modules.auth.Signatures`. All transactions may
have one or multiple signatures which are then processed and verified by this
middleware and then passed down the stack.
### Chain
The next layer of a transaction (in the standard stack) binds the transaction
to a specific chain with a block height that has an optional expiration. This
keeps the transactions from being replayed on a fork or other such chain, as
well as a partially signed multi-sig being delayed months before being
committed to the chain. This functionality is provided in `modules.base.Chain`
### Nonce
To avoid replay attacks, a nonce can be associated with each actor. A separate
nonce is used for each distinct group of signers required for a transaction as
well as for each separate application and chain-id. This creates replay
protection cross-IBC and cross-plugins and also allows signing parties to not
be bound to waiting for a particular transaction to be completed before being
able to sign a separate transaction.
Rather than force each module to implement its own replay protection, a
transaction stack may contain a nonce wrap and the account it belongs to. The
nonce must contain a signed sequence number which is incremented one higher
than the last request or the request is rejected. This is implemented in
`modules.nonce.ReplayCheck`.
If you're interested, check out this [design
discussion](https://github.com/cosmos/cosmos-sdk/issues/160).
### Fees
An optional - but useful - feature on many chains, is charging transaction fees.
A simple implementation of this is provided in `modules.fee.SimpleFeeMiddleware`.
A fee currency and minimum amount are defined in the constructor (eg. in code).
If the minimum amount is 0, then the fee is optional. If it is above 0, then
every transaction with insufficient fee is rejected. This fee is deducted from the
payer's account before executing any other transaction.
This module is dependent on the `coin` module.
## Other Apps
### Coin
What would a crypto-currency be without tokens? The `SendTx` logic from earlier
implementations of basecoin was extracted into one module, which is now
optional, meaning most of the other functionality will also work in a system
with no built-in tokens, such as a private network that provides other access
control mechanisms.
`modules.coin.Handler` defines a Handler that maintains a number of accounts
along with a set of various tokens, supporting multiple token denominations.
The main access is `SendTx`, which can support any type of actor (other apps as
well as public key addresses) and is a building block for any other app that
requires some payment solution, like fees or trader.
### Roles
Roles encapsulate what are typically called N-of-M multi-signatures accounts
in the crypto world. However, I view this as a type of role or group, which can
be the basis for building a permission system. For example, a set of people
could be called registrars, which can authorize a new IBC chain, and need eg. 2
out of 7 signatures to approve it.
Currently, one can create a role with `modules.roles.Handler`, and assume one
of those roles by wrapping another transaction with `AssumeRoleTx`, which is
processed by `modules.roles.Middleware`. Updating the set of actors in
a role is planned in the near future.
### Inter-Blockchain Communication (IBC)
IBC is the cornerstone of The Cosmos Network, and is built into the Cosmos-SDK
framework as a basic primitive. To fully grasp these concepts requires
a much longer explanation, but in short, the chain works as a light-client to
another chain and maintains input and output queues to send packets with that
chain. This mechanism allows blockchains to prove the state of their respective
blockchains to each other and ultimately invoke inter-blockchain transactions.
Most functionality is implemented in `modules.ibc.Handler`. Registering a chain
is a seed of trust that requires verification of the proper seed (or genesis
block), and this generally requires approval of an authorized registrar (which
may be a multi-sig role). Updating a registered chain can be done by anyone,
as the new header can be completely verified by the existing knowledge of the
chain. Also, modules can initiate an outgoing IBC message to another chain
by calling `CreatePacketTx` over IPC (inter-plugin communication) with a
transaction that belongs to their module. (This must be explicitly authorized
by the same module, so only the eg. coin module can authorize a `SendTx` to
another chain).
`PostPacketTx` can post a transaction that was created on another chain along
with the merkle proof, which must match an already registered header. If this
chain can verify the authenticity, it will accept the packet, along with all
the permissions from the other chain, and execute it on this stack. This is the
only way to get permissions that belong to another chain.
These various pieces can be combined in a relay, which polls for new packets
on one chain, and then posts the packets along with the new headers on the
other chain.
## Example Apps
See the [Cosmos Academy](https://github.com/cosmos/cosmos-academy) for example applications.

150
docs/stdlib.rst Normal file
@ -0,0 +1,150 @@
Standard Library
================
The Cosmos-SDK comes bundled with a number of standard modules that
provide common functionality useful across a wide variety of
applications. See examples below. It is recommended to investigate if
desired functionality is already provided before developing new modules.
Basic Middleware
----------------
Logging
~~~~~~~
``modules.base.Logger`` is a middleware that records basic info on
``CheckTx``, ``DeliverTx``, and ``SetOption``, along with timing in
microseconds. It can be installed standard at the top of all middleware
stacks, or replaced with your own middleware if you want to record
custom information with each request.
Recovery
~~~~~~~~
To avoid accidental panics (e.g. bad go-wire decoding) killing the ABCI
app, wrap the stack with ``stack.Recovery``, which catches all panics
and returns them as errors, so they can be handled normally.
Signatures
~~~~~~~~~~
The first layer of the transaction contains the signatures to authorize
it. This is then verified by ``modules.auth.Signatures``. All
transactions may have one or multiple signatures which are then
processed and verified by this middleware and then passed down the
stack.
Chain
~~~~~
The next layer of a transaction (in the standard stack) binds the
transaction to a specific chain with a block height that has an optional
expiration. This keeps the transactions from being replayed on a fork or
other such chain, as well as a partially signed multi-sig being delayed
months before being committed to the chain. This functionality is
provided in ``modules.base.Chain``
Nonce
~~~~~
To avoid replay attacks, a nonce can be associated with each actor. A
separate nonce is used for each distinct group of signers required for a
transaction as well as for each separate application and chain-id. This
creates replay protection cross-IBC and cross-plugins and also allows
signing parties to not be bound to waiting for a particular transaction
to be completed before being able to sign a separate transaction.
Rather than force each module to implement its own replay protection, a
transaction stack may contain a nonce wrap and the account it belongs
to. The nonce must contain a signed sequence number which is incremented
one higher than the last request or the request is rejected. This is
implemented in ``modules.nonce.ReplayCheck``.
If you're interested, check out this `design
discussion <https://github.com/cosmos/cosmos-sdk/issues/160>`__.
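A minimal sketch of that rule (not the real ``modules.nonce`` code),
keyed only by the signer set; the real key also includes the app and
chain ID:

.. code:: golang

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // nonceKey identifies a distinct group of signers.
    func nonceKey(signers []string) string {
        sorted := append([]string(nil), signers...)
        sort.Strings(sorted)
        return strings.Join(sorted, ",")
    }

    type replayCheck struct{ sequences map[string]uint64 }

    // check accepts a tx only if its sequence is exactly one higher than the
    // last sequence seen for this signer group.
    func (r *replayCheck) check(signers []string, sequence uint64) error {
        key := nonceKey(signers)
        if sequence != r.sequences[key]+1 {
            return fmt.Errorf("expected sequence %d, got %d", r.sequences[key]+1, sequence)
        }
        r.sequences[key] = sequence
        return nil
    }

    func main() {
        r := &replayCheck{sequences: map[string]uint64{}}
        fmt.Println(r.check([]string{"rich"}, 1)) // <nil>
        fmt.Println(r.check([]string{"rich"}, 1)) // rejected: replay
        fmt.Println(r.check([]string{"rich"}, 2)) // <nil>
    }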
Fees
~~~~
An optional - but useful - feature on many chains, is charging
transaction fees. A simple implementation of this is provided in
``modules.fee.SimpleFeeMiddleware``. A fee currency and minimum amount
are defined in the constructor (eg. in code). If the minimum amount is
0, then the fee is optional. If it is above 0, then every transaction
with insufficient fee is rejected. This fee is deducted from the payer's
account before executing any other transaction.
This module is dependent on the ``coin`` module.
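As a sketch of that behaviour (the actual constructor and handler
signatures differ), a middleware holding a fee denomination and minimum
might look like:

.. code:: golang

    package main

    import (
        "errors"
        "fmt"
    )

    // simpleFee mirrors the description above: a zero minimum makes the fee
    // optional, a positive minimum rejects underpaying transactions, and the
    // fee is deducted from the payer before the rest of the stack runs.
    type simpleFee struct {
        denom string
        min   int64
    }

    func (f simpleFee) check(feeDenom string, feeAmount int64, balance *int64, next func() error) error {
        if f.min > 0 || feeAmount > 0 {
            if feeDenom != f.denom || feeAmount < f.min {
                return fmt.Errorf("fee must be at least %d %s", f.min, f.denom)
            }
            if *balance < feeAmount {
                return errors.New("insufficient funds for fee")
            }
            *balance -= feeAmount
        }
        return next()
    }

    func main() {
        balance := int64(100)
        mw := simpleFee{denom: "mycoin", min: 90}
        err := mw.check("mycoin", 90, &balance, func() error { return nil })
        fmt.Println(err, balance) // <nil> 10
    }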
Other Apps
----------
Coin
~~~~
What would a crypto-currency be without tokens? The ``SendTx`` logic
from earlier implementations of basecoin was extracted into one module,
which is now optional, meaning most of the other functionality will also
work in a system with no built-in tokens, such as a private network that
provides other access control mechanisms.
``modules.coin.Handler`` defines a Handler that maintains a number of
accounts along with a set of various tokens, supporting multiple token
denominations. The main access is ``SendTx``, which can support any type
of actor (other apps as well as public key addresses) and is a building
block for any other app that requires some payment solution, like fees
or trader.
Roles
~~~~~
Roles encapsulate what are typically called N-of-M multi-signatures
accounts in the crypto world. However, I view this as a type of role or
group, which can be the basis for building a permission system. For
example, a set of people could be called registrars, which can authorize
a new IBC chain, and need eg. 2 out of 7 signatures to approve it.
Currently, one can create a role with ``modules.roles.Handler``, and
assume one of those roles by wrapping another transaction with
``AssumeRoleTx``, which is processed by ``modules.roles.Middleware``.
Updating the set of actors in a role is planned in the near future.
Inter-Blockchain Communication (IBC)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
IBC is the cornerstone of The Cosmos Network, and is built into the
Cosmos-SDK framework as a basic primitive. To fully grasp these concepts
requires a much longer explanation, but in short, the chain works as a
light-client to another chain and maintains input and output queues to
send packets with that chain. This mechanism allows blockchains to prove
the state of their respective blockchains to each other and ultimately
invoke inter-blockchain transactions.
Most functionality is implemented in ``modules.ibc.Handler``.
Registering a chain is a seed of trust that requires verification of the
proper seed (or genesis block), and this generally requires approval of
an authorized registrar (which may be a multi-sig role). Updating a
registered chain can be done by anyone, as the new header can be
completely verified by the existing knowledge of the chain. Also,
modules can initiate an outgoing IBC message to another chain by calling
``CreatePacketTx`` over IPC (inter-plugin communication) with a
transaction that belongs to their module. (This must be explicitly
authorized by the same module, so only the eg. coin module can authorize
a ``SendTx`` to another chain).
``PostPacketTx`` can post a transaction that was created on another
chain along with the merkle proof, which must match an already
registered header. If this chain can verify the authenticity, it will
accept the packet, along with all the permissions from the other chain,
and execute it on this stack. This is the only way to get permissions
that belong to another chain.
These various pieces can be combined in a relay, which polls for new
packets on one chain, and then posts the packets along with the new
headers on the other chain.
Example Apps
------------
See the `Cosmos Academy <https://github.com/cosmos/cosmos-academy>`__
for example applications.