Move from gitbook to docusaurus, build docs in Travis CI (#10970)

* fix: ignore unknown fields in more RPC responses

* Remove mdbook infrastructure

* Delete gitattributes and other theme related items

Move all docs to /docs folder to support Docusaurus

* all docs need to be moved to /docs

* can be changed in the future

Add Docusaurus infrastructure

* initialize docusaurus repo

Remove trailing whitespace, add support for eslint

Change Docusaurus configuration to support `src`

* No need to rename the folder! Change a setting and we're all good to
go.

* Fixing rebase items

* Remove unnecessary markdown file, fix typo

* Some fonts are hard to read. Others, not so much. Rubik, you've been
sidelined. Roboto, into the limelight!

* As much as we all love tutorials, I think we all can navigate around a
markdown file. Say goodbye, `mdx.md`.

* Setup deployment infrastructure

* Move docs job from buildkite to travis

* Fix travis config

* Add vercel token to travis config

* Only deploy docs after merge

* Docker rust env

* Revert "Docker rust env"

This reverts commit f84bc208e807aab1c0d97c7588bbfada1fedfa7c.

* Build CLI usage from docker

* Pacify shellcheck

* Run job on PR and new commits for publication

* Update README

* Fix svg image building

* shellcheck

Co-authored-by: Michael Vines <mvines@gmail.com>
Co-authored-by: Ryan Shea <rmshea@users.noreply.github.com>
Co-authored-by: publish-docs.sh <maintainers@solana.com>
Dan Albert 2020-07-10 23:11:07 -06:00 committed by GitHub
parent 4046f87134
commit ffeac298a2
172 changed files with 2862 additions and 3429 deletions

View File

@ -106,3 +106,25 @@ jobs:
script:
- ../.travis/commitlint.sh
- source .travis/script.sh
# docs pull request
- name: "docs"
if: type IN (push, pull_request)
language: node_js
node_js:
- "node"
services:
- docker
cache:
directories:
- ~/.npm
before_install:
- .travis/affects.sh docs/ .travis || travis_terminate 0
- cd docs/
- source .travis/before_install.sh
script:
- source .travis/script.sh
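The job gates on `.travis/affects.sh`, whose contents are not part of this diff. A hypothetical sketch of such a gate (illustration only, not the repository's actual script) would exit zero only when the pushed commit range touches one of the path prefixes passed as arguments:

```bash
#!/usr/bin/env bash
# Hypothetical affects.sh sketch: succeed only when the commit range touches
# one of the path prefixes given as arguments; otherwise fail so the caller
# can run `travis_terminate 0` and skip the rest of the job.
set -e
[[ -n $TRAVIS_COMMIT_RANGE ]] || exit 0   # no range available; don't skip
for file in $(git diff --name-only "$TRAVIS_COMMIT_RANGE"); do
  for prefix in "$@"; do
    [[ $file == "$prefix"* ]] && exit 0
  done
done
exit 1
```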

View File

@ -211,12 +211,7 @@ pull_or_push_steps() {
all_test_steps
fi
# doc/ changes:
if affects ^docs/; then
command_step docs ". ci/rust-version.sh; ci/docker-run.sh \$\$rust_nightly_docker_image docs/build.sh" 5
fi
# web3.js and explorer changes run on Travis...
# web3.js, explorer and docs changes run on Travis...
}

21
docs/.eslintrc Normal file
View File

@ -0,0 +1,21 @@
{
"env": {
"browser": true,
"node": true
},
"parser": "babel-eslint",
"rules": {
"strict": 0,
"no-unused-vars": ["error", { "argsIgnorePattern": "^_" }],
"no-trailing-spaces": ["error", { "skipBlankLines": true }]
},
"settings": {
"react": {
"version": "detect", // React version. "detect" automatically picks the version you have installed.
}
},
"extends": [
"eslint:recommended",
"plugin:react/recommended"
]
}
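With this config in place, the lint runs through the npm scripts added in `docs/package.json` later in this diff:

```bash
$ npm run lint       # eslint over the docs tree
$ npm run lint:fix   # same, applying automatic fixes
```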

1
docs/.gitattributes vendored
View File

@ -1 +0,0 @@
theme/highlight.js binary

22
docs/.gitignore vendored Normal file
View File

@ -0,0 +1,22 @@
# Dependencies
/node_modules
# Production
/build
# Generated files
.docusaurus
.cache-loader
.vercel
/static/img/*.svg
# Misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*

View File

@ -0,0 +1,9 @@
# |source| this file
curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
sudo apt install -y nodejs
npm install --global docusaurus-init
docusaurus-init
npm install --global vercel

4
docs/.travis/script.sh Normal file
View File

@ -0,0 +1,4 @@
# |source| this file
set -ex
./build.sh

View File

@ -1,31 +1,38 @@
Building the Solana Docs
---
# Docs Readme
Install dependencies, build, and test the docs:
Solana's Docs are built using [Docusaurus 2](https://v2.docusaurus.io/) with `npm`.
Static content delivery is handled using `vercel`.
```bash
$ brew install coreutils
$ brew install mscgen
$ cargo install svgbob_cli
$ cargo install mdbook-linkcheck
$ cargo install mdbook
$ ./build.sh
### Installation
```
$ npm install
```
Run any Rust tests in the markdown:
### Local Development
```bash
$ make test
```
$ npm run start
```
Render markdown as HTML:
This command starts a local development server and opens a browser window. Most changes are reflected live without having to restart the server.
```bash
$ make build
### Build
#### Local Build Testing
```
$ npm run build
```
Render and view the docs:
This command generates static content into the `build` directory and can be
served using any static content hosting service.
```bash
$ make open
```
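For a quick look at the generated output without a hosting service, any static file server will do, for example (assuming `npx` is available):

```bash
$ npm run build
$ npx serve build   # serve the generated build/ directory locally
```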
#### CI Build Flow
The docs are built and published in Travis CI with the `docs/build.sh` script.
On each PR, the docs are built, but not published.
In each post-commit build, docs are built and published using `vercel` to their
respective domain depending on the build branch.
- Master branch docs are published to `edge.docs.solana.com`
- Beta branch docs are published to `beta.docs.solana.com`
- Latest release tag docs are published to `docs.solana.com`

3
docs/babel.config.js Normal file
View File

@ -0,0 +1,3 @@
module.exports = {
presets: [require.resolve("@docusaurus/core/lib/babel/preset")],
};

View File

@ -1,15 +0,0 @@
[book]
title = "Solana: Blockchain Rebuilt for Scale"
authors = ["The Solana Team"]
[build]
build-dir = "html"
create-missing = false
[output.html]
theme = "theme"
[output.linkcheck]
# Exclude some special links and `README.md` which causes false-positive errors
# Also, crates.io returns 404 for correct links accessed from curl and linkcheck
exclude = [ 'http://192\.168\.1\.88', 'http://localhost', 'LATEST_SOLANA_RELEASE_VERSION', 'README\.md', 'https://crates\.io' ]

View File

@ -3,6 +3,9 @@ set -e
cd "$(dirname "$0")"
# shellcheck source=ci/rust-version.sh
source ../ci/rust-version.sh stable
: "${rust_stable:=}" # Pacify shellcheck
usage=$(cargo +"$rust_stable" -q run -p solana-cli -- -C ~/.foo --help | sed -e 's|'"$HOME"'|~|g' -e 's/[[:space:]]\+$//')

View File

@ -1,5 +1,8 @@
#!/usr/bin/env bash
set -e
set -ex
# shellcheck source=ci/env.sh
source ../ci/env.sh
cd "$(dirname "$0")"
@ -12,6 +15,31 @@ find src -name '*.md' -a \! -name SUMMARY.md |
fi
done
mdbook --version
mdbook-linkcheck --version
make -j"$(nproc)" test
: "${rust_stable_docker_image:=}" # Pacify shellcheck
# shellcheck source=ci/rust-version.sh
source ../ci/rust-version.sh
../ci/docker-run.sh "$rust_stable_docker_image" docs/build-cli-usage.sh
../ci/docker-run.sh "$rust_stable_docker_image" docs/convert-ascii-to-svg.sh
./set-solana-release-tag.sh
# Build from /src into /build
npm run build
# Deploy the /build content using vercel
if [[ -d .vercel ]]; then
rm -r .vercel
fi
./set-vercel-project-name.sh
if [[ -n $CI ]]; then
if [[ -z $CI_PULL_REQUEST ]]; then
[[ -n $VERCEL_TOKEN ]] || {
echo "VERCEL_TOKEN is undefined. Needed for Vercel authentication."
exit 1
}
vercel deploy . --local-config=vercel.json --confirm --token "$VERCEL_TOKEN" --prod
fi
else
vercel deploy . --local-config=vercel.json
fi
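To exercise only the build portion of this script by hand, without touching Vercel, the individual steps can be run directly (a sketch; assumes a Rust toolchain plus `svgbob` and `mscgen` are installed locally instead of going through the docker image):

```bash
$ ./build-cli-usage.sh        # regenerate the CLI usage page
$ ./convert-ascii-to-svg.sh   # regenerate static/img/*.svg
$ ./set-solana-release-tag.sh
$ npm run build               # emit the static site into build/
```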

21
docs/convert-ascii-to-svg.sh Executable file
View File

@ -0,0 +1,21 @@
#!/usr/bin/env bash
# Convert .bob and .msc files in docs/art to .svg files located where the
# site build will find them.
set -e
cd "$(dirname "$0")"
output_dir=static/img
mkdir -p "$output_dir"
while read -r bob_file; do
svg_file=$(basename "${bob_file%.*}".svg)
svgbob "$bob_file" --output "$output_dir/$svg_file"
done < <(find art/*.bob)
while read -r msc_file; do
svg_file=$(basename "${msc_file%.*}".svg)
mscgen -T svg -o "$output_dir/$svg_file" -i "$msc_file"
done < <(find art/*.msc)
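When iterating on a single diagram it can be quicker to convert just that file, e.g. (assuming `svgbob_cli` is installed via cargo; `<name>` is a placeholder):

```bash
$ svgbob art/<name>.bob --output static/img/<name>.svg
```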

113
docs/docusaurus.config.js Normal file
View File

@ -0,0 +1,113 @@
module.exports = {
title: "Solana Docs",
tagline:
"Solana is an open source project implementing a new, high-performance, permissionless blockchain.",
url: "https://docs.solana.com",
baseUrl: "/",
favicon: "img/favicon.ico",
organizationName: "solana-labs", // Usually your GitHub org/user name.
projectName: "solana", // Usually your repo name.
themeConfig: {
navbar: {
logo: {
alt: "Solana Logo",
src: "img/logo-horizontal.svg",
srcDark: "img/logo-horizontal-dark.svg",
},
links: [
{
to: "docs/",
activeBasePath: "docs",
label: "Docs",
position: "left",
},
{
to: "docs/apps/README",
activeBasePath: "docs2",
label: "Developers",
position: "left",
},
{
to: "docs/running-validator/README",
activeBasePath: "docs2",
label: "Validators",
position: "left",
},
{
href: "https://discordapp.com/invite/pquxPsq",
label: "Chat",
position: "right",
},
{
href: "https://github.com/solana-labs/solana",
label: "GitHub",
position: "right",
},
],
},
footer: {
style: "dark",
links: [
{
title: "Docs",
items: [
{
label: "Introduction",
to: "docs/introduction",
},
{
label: "Tour de SOL",
to: "docs/tour-de-sol/README",
},
],
},
{
title: "Community",
items: [
{
label: "Discord",
href: "https://discordapp.com/invite/pquxPsq",
},
{
label: "Twitter",
href: "https://twitter.com/solana",
},
{
label: "Forums",
href: "https://forums.solana.com",
},
],
},
{
title: "More",
items: [
{
label: "GitHub",
href: "https://github.com/solana-labs/solana",
},
],
},
],
copyright: `Copyright © ${new Date().getFullYear()} Solana Foundation`,
},
},
presets: [
[
"@docusaurus/preset-classic",
{
docs: {
path: "src",
// It is recommended to set document id as docs home page (`docs/` path).
homePageId: "introduction",
sidebarPath: require.resolve("./sidebars.js"),
// Please change this to your repo.
editUrl: "https://github.com/solana-labs/solana/edit/master/docs/",
},
theme: {
customCss: require.resolve("./src/css/custom.css"),
},
},
],
],
};

View File

@ -1,51 +0,0 @@
BOB_SRCS=$(wildcard art/*.bob)
MSC_SRCS=$(wildcard art/*.msc)
MD_SRCS=$(wildcard src/*.md src/*/*.md) src/cli/usage.md
SVG_IMGS=$(BOB_SRCS:art/%.bob=src/.gitbook/assets/%.svg) $(MSC_SRCS:art/%.msc=src/.gitbook/assets/%.svg)
TARGET=html/index.html
TEST_STAMP=src/tests.ok
all: $(TARGET)
svg: $(SVG_IMGS)
test: $(TEST_STAMP)
open: $(TEST_STAMP)
mdbook build --open
./set-solana-release-tag.sh
watch: $(SVG_IMGS)
mdbook watch
src/.gitbook/assets/%.svg: art/%.bob
@mkdir -p $(@D)
svgbob < $< > $@
src/.gitbook/assets/%.svg: art/%.msc
@mkdir -p $(@D)
mscgen -T svg -i $< -o $@
../target/debug/solana:
cd ../cli && cargo build
src/cli/usage.md: build-cli-usage.sh ../target/debug/solana
./$<
src/%.md: %.md
@mkdir -p $(@D)
@cp $< $@
$(TEST_STAMP): $(TARGET)
mdbook test
touch $@
$(TARGET): $(SVG_IMGS) $(MD_SRCS)
mdbook build
./set-solana-release-tag.sh
clean:
rm -f $(SVG_IMGS) src/tests.ok
rm -rf html

39
docs/package.json Normal file
View File

@ -0,0 +1,39 @@
{
"name": "solana-docs",
"version": "0.0.0",
"private": true,
"scripts": {
"start": "docusaurus start",
"build": "docusaurus build",
"swizzle": "docusaurus swizzle",
"deploy": "docusaurus deploy",
"format": "prettier --check \"**/*.{js,jsx,json,md,scss}\"",
"format:fix": "prettier --write \"**/*.{js,jsx,json,md,scss}\"",
"lint": "set -ex; eslint .",
"lint:fix": "npm run lint -- --fix"
},
"dependencies": {
"@docusaurus/core": "^2.0.0-alpha.58",
"@docusaurus/preset-classic": "^2.0.0-alpha.58",
"@docusaurus/theme-search-algolia": "^2.0.0-alpha.32",
"babel-eslint": "^10.1.0",
"clsx": "^1.1.1",
"eslint": "^7.3.1",
"eslint-plugin-react": "^7.20.0",
"prettier": "^2.0.5",
"react": "^16.8.4",
"react-dom": "^16.8.4"
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}
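The prettier scripts work the same way as the lint ones:

```bash
$ npm run format       # check formatting of js/jsx/json/md/scss sources
$ npm run format:fix   # rewrite files in place
```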

View File

@ -1,6 +1,6 @@
#!/usr/bin/env bash
set -e
cd "$(dirname "$0")"
if [[ -n $CI_TAG ]]; then
@ -23,7 +23,6 @@ if [[ -z "$LATEST_SOLANA_RELEASE_VERSION" ]]; then
fi
set -x
find html/ -name \*.html -exec sed -i "s/LATEST_SOLANA_RELEASE_VERSION/$LATEST_SOLANA_RELEASE_VERSION/g" {} \;
if [[ -n $CI ]]; then
find src/ -name \*.md -exec sed -i "s/LATEST_SOLANA_RELEASE_VERSION/$LATEST_SOLANA_RELEASE_VERSION/g" {} \;
fi

25
docs/set-vercel-project-name.sh Executable file
View File

@ -0,0 +1,25 @@
#!/usr/bin/env bash
# Replaces the PROJECT_NAME value in vercel.json based on the channel or tag
# so we push the updated docs to the right domain
set -e
if [[ -n $CI_TAG ]]; then
NAME=docs-solana-com
else
eval "$(../ci/channel-info.sh)"
case $CHANNEL in
edge)
NAME=edge-docs-solana-com
;;
beta)
NAME=beta-docs-solana-com
;;
*)
NAME=docs
;;
esac
fi
sed -i s/PROJECT_NAME/$NAME/g vercel.json
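A quick local sanity check of the substitution (assumes `vercel.json` in this directory carries the `PROJECT_NAME` placeholder; the tag value below is arbitrary):

```bash
$ CI_TAG=v1.2.3 ./set-vercel-project-name.sh
$ grep -c docs-solana-com vercel.json   # expect at least 1 match after substitution
```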

173
docs/sidebars.js Normal file
View File

@ -0,0 +1,173 @@
module.exports = {
docs: {
Introduction: ["introduction"],
"Wallet Guide": [
"wallet-guide/README",
{
type: "category",
label: "App Wallets",
items: [
"wallet-guide/apps",
"wallet-guide/trust-wallet",
"wallet-guide/ledger-live",
],
},
{
type: "category",
label: "Command-line Wallets",
items: [
"wallet-guide/cli",
{
type: "category",
label: "Paper Wallets",
items: ["paper-wallet/README", "paper-wallet/paper-wallet-usage"],
},
{
type: "category",
label: "Hardware Wallets",
items: ["hardware-wallets/README", "hardware-wallets/ledger"],
},
"file-system-wallet/README",
],
},
"wallet-guide/support",
],
"Command Line Guide": [
"cli/README",
"cli/install-solana-cli-tools",
"cli/conventions",
"cli/choose-a-cluster",
"cli/transfer-tokens",
"cli/manage-stake-accounts",
"offline-signing/README",
"offline-signing/durable-nonce",
],
"Solana Clusters": ["clusters"],
"Develop Applications": [
"apps/README",
"apps/rent",
"apps/webwallet",
"apps/tictactoe",
"apps/drones",
"transaction",
"apps/jsonrpc-api",
"apps/javascript-api",
"apps/builtins/README",
],
"Integration Guides": ["integrations/exchange"],
"Run a Validator": [
"running-validator/README",
"running-validator/validator-reqs",
"running-validator/validator-start",
"running-validator/validator-stake",
"running-validator/validator-monitor",
"running-validator/validator-info",
"running-validator/validator-troubleshoot",
],
"Tour de SOL": [
"tour-de-sol/README",
"tour-de-sol/useful-links",
{
type: "category",
label: "Registration",
items: [
"tour-de-sol/registration/how-to-register",
"tour-de-sol/registration/terms-of-participation",
"tour-de-sol/registration/rewards",
"tour-de-sol/registration/confidentiality",
"tour-de-sol/registration/validator-registration-and-rewards-faq",
],
},
{
type: "category",
label: "Participation",
items: [
"tour-de-sol/participation/validator-technical-requirements",
"tour-de-sol/participation/validator-public-key-registration",
"tour-de-sol/participation/steps-to-create-a-validator",
],
},
"tour-de-sol/submitting-bugs",
],
"Benchmark a Cluster": ["cluster/bench-tps", "cluster/performance-metrics"],
"Solana's Architecture": [
"cluster/README",
"cluster/synchronization",
"cluster/leader-rotation",
"cluster/fork-generation",
"cluster/managing-forks",
"cluster/turbine-block-propagation",
"cluster/vote-signing",
"cluster/stake-delegation-and-rewards",
],
"Anatomy of a Validator": [
"validator/README",
"validator/tpu",
"validator/tvu",
"validator/blockstore",
"validator/gossip",
"validator/runtime",
],
Terminology: ["terminology"],
History: ["history"],
"Implemented Design Proposals": [
{
type: "category",
label: "Economic Design",
items: [
"implemented-proposals/ed_overview/README",
{
type: "category",
label: "Validation Client Economics",
items: [
"implemented-proposals/ed_overview/ed_validation_client_economics/README",
"implemented-proposals/ed_overview/ed_validation_client_economics/ed_vce_state_validation_protocol_based_rewards",
"implemented-proposals/ed_overview/ed_validation_client_economics/ed_vce_state_validation_transaction_fees",
"implemented-proposals/ed_overview/ed_validation_client_economics/ed_vce_validation_stake_delegation",
],
},
"implemented-proposals/ed_overview/ed_storage_rent_economics",
"implemented-proposals/ed_overview/ed_economic_sustainability",
"implemented-proposals/ed_overview/ed_mvp",
"implemented-proposals/ed_overview/ed_references",
],
},
"implemented-proposals/transaction-fees",
"implemented-proposals/tower-bft",
"implemented-proposals/leader-leader-transition",
"implemented-proposals/leader-validator-transition",
"implemented-proposals/persistent-account-storage",
"implemented-proposals/reliable-vote-transmission",
"implemented-proposals/repair-service",
"implemented-proposals/testing-programs",
"implemented-proposals/readonly-accounts",
"implemented-proposals/embedding-move",
"implemented-proposals/staking-rewards",
"implemented-proposals/rent",
"implemented-proposals/durable-tx-nonces",
"implemented-proposals/validator-timestamp-oracle",
"implemented-proposals/commitment",
"implemented-proposals/snapshot-verification",
"implemented-proposals/cross-program-invocation",
"implemented-proposals/program-derived-addresses",
"implemented-proposals/abi-management",
],
"Accepted Design Proposals": [
"proposals/README",
"proposals/ledger-replication-to-implement",
"proposals/optimistic-confirmation-and-slashing",
"proposals/vote-signing-to-implement",
"proposals/cluster-test-framework",
"proposals/validator-proposal",
"proposals/simple-payment-and-state-verification",
"proposals/interchain-transaction-verification",
"proposals/snapshot-verification",
"proposals/bankless-leader",
"proposals/slashing",
"proposals/tick-verification",
"proposals/block-confirmation",
"proposals/rust-clients",
"proposals/optimistic_confirmation",
],
},
};

Binary files not shown: 10 images deleted (64 KiB – 542 KiB).

View File

@ -1,4 +1,6 @@
# Table of contents
---
title: Table of contents
---
* [Introduction](introduction.md)
* [Wallet Guide](wallet-guide/README.md)

View File

@ -1,4 +1,6 @@
# Programming Model
---
title: Programming Model
---
An _app_ interacts with a Solana cluster by sending it _transactions_ with one or more _instructions_. The Solana _runtime_ passes those instructions to _programs_ deployed by app developers beforehand. An instruction might, for example, tell a program to transfer _lamports_ from one _account_ to another or create an interactive contract that governs how lamports are transferred. Instructions are executed sequentially and atomically for each transaction. If any instruction is invalid, all account changes in the transaction are discarded.
@ -18,7 +20,7 @@ Each instruction specifies a single program account \(which must be marked execu
## Deploying Programs to a Cluster
![SDK tools](../.gitbook/assets/sdk-tools.svg)
![SDK tools](/img/sdk-tools.svg)
As shown in the diagram above, a program author creates a program and compiles it to an ELF shared object containing BPF bytecode and uploads it to the Solana cluster with a special _deploy_ transaction. The cluster makes it available to clients via a _program ID_. The program ID is an _address_ specified when deploying and is used to reference the program in subsequent transactions.
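For example, deploying a compiled program with the command-line tools looks roughly like this (a sketch; `my_program.so` is a placeholder for your compiled BPF shared object):

```bash
$ solana deploy my_program.so   # on success the CLI reports the new program ID
```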

View File

@ -1,4 +1,6 @@
# Builtin Programs
---
title: Builtin Programs
---
Solana contains a small handful of builtin programs, which are required to run
validator nodes. Unlike third-party programs, the builtin programs are part of
@ -18,15 +20,15 @@ programs, as well include instructions from third-party programs.
Create accounts and transfer lamports between them
* Program ID: `11111111111111111111111111111111`
* Instructions: [SystemInstruction](https://docs.rs/solana-sdk/LATEST_SOLANA_RELEASE_VERSION/solana_sdk/system_instruction/enum.SystemInstruction.html)
- Program ID: `11111111111111111111111111111111`
- Instructions: [SystemInstruction](https://docs.rs/solana-sdk/LATEST_SOLANA_RELEASE_VERSION/solana_sdk/system_instruction/enum.SystemInstruction.html)
## Config Program
Add configuration data to the chain and the list of public keys that are permitted to modify it
* Program ID: `Config1111111111111111111111111111111111111`
* Instructions: [config_instruction](https://docs.rs/solana-config-program/LATEST_SOLANA_RELEASE_VERSION/solana_config_program/config_instruction/index.html)
- Program ID: `Config1111111111111111111111111111111111111`
- Instructions: [config_instruction](https://docs.rs/solana-config-program/LATEST_SOLANA_RELEASE_VERSION/solana_config_program/config_instruction/index.html)
Unlike the other programs, the Config program does not define any individual
instructions. It has just one implicit instruction, a "store" instruction. Its
@ -37,25 +39,25 @@ data to store in it.
Create stake accounts and delegate them to validators
* Program ID: `Stake11111111111111111111111111111111111111`
* Instructions: [StakeInstruction](https://docs.rs/solana-stake-program/LATEST_SOLANA_RELEASE_VERSION/solana_stake_program/stake_instruction/enum.StakeInstruction.html)
- Program ID: `Stake11111111111111111111111111111111111111`
- Instructions: [StakeInstruction](https://docs.rs/solana-stake-program/LATEST_SOLANA_RELEASE_VERSION/solana_stake_program/stake_instruction/enum.StakeInstruction.html)
## Vote Program
Create vote accounts and vote on blocks
* Program ID: `Vote111111111111111111111111111111111111111`
* Instructions: [VoteInstruction](https://docs.rs/solana-vote-program/LATEST_SOLANA_RELEASE_VERSION/solana_vote_program/vote_instruction/enum.VoteInstruction.html)
- Program ID: `Vote111111111111111111111111111111111111111`
- Instructions: [VoteInstruction](https://docs.rs/solana-vote-program/LATEST_SOLANA_RELEASE_VERSION/solana_vote_program/vote_instruction/enum.VoteInstruction.html)
## BPF Loader
Add programs to the chain.
* Program ID: `BPFLoader1111111111111111111111111111111111`
* Instructions: [LoaderInstruction](https://docs.rs/solana-sdk/LATEST_SOLANA_RELEASE_VERSION/solana_sdk/loader_instruction/enum.LoaderInstruction.html)
- Program ID: `BPFLoader1111111111111111111111111111111111`
- Instructions: [LoaderInstruction](https://docs.rs/solana-sdk/LATEST_SOLANA_RELEASE_VERSION/solana_sdk/loader_instruction/enum.LoaderInstruction.html)
The BPF Loader marks itself as the "owner" of the executable account it
creates to store your program. When a user invokes an instruction via a
program ID, the Solana runtime will load both your executable account and its
owner, the BPF Loader. The runtime then passes your program to the BPF Loader
to process the instruction.
to process the instruction.
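These builtin program accounts can be inspected like any other account, for example (assuming the CLI is configured for a cluster):

```bash
$ solana account Vote111111111111111111111111111111111111111
```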

View File

@ -1,4 +1,6 @@
# Drones
---
title: Drones
---
This section defines an off-chain service called a _drone_, which acts as custodian of a user's private key. In its simplest form, it can be used to create _airdrop_ transactions, a token transfer from the drone's account to a client's account.
@ -20,7 +22,7 @@ Note: the Solana cluster will not parallelize transactions funded by the same fe
## Attack vectors
### Invalid recent\_blockhash
### Invalid recent_blockhash
The drone may prefer that its airdrops only target a particular Solana cluster. To do that, it listens to the cluster for new entry IDs and ensures any requests reference a recent one.
@ -41,4 +43,3 @@ A client may request multiple airdrops before the first has been submitted to th
If the transaction data size is smaller than the size of the returned signature \(or descriptive error\), a single client can flood the network. Considering that a simple `Transfer` operation requires two public keys \(each 32 bytes\) and a `fee` field, and that the returned signature is 64 bytes \(and a byte to indicate `Ok`\), consideration for this attack may not be required.
In the current design, the drone accepts TCP connections. This allows clients to DoS the service by simply opening lots of idle connections. Switching to UDP may be preferred. The transaction data will be smaller than a UDP packet since the transaction sent to the Solana cluster is already pinned to using UDP.

View File

@ -1,4 +1,5 @@
# JavaScript API
---
title: JavaScript API
---
See [solana-web3](https://solana-labs.github.io/solana-web3.js/).

File diff suppressed because it is too large Load Diff

View File

@ -1,4 +1,6 @@
# Storage Rent for Accounts
---
title: Storage Rent for Accounts
---
Keeping accounts alive on Solana incurs a storage cost called _rent_ because the cluster must actively maintain the data to process any future transactions on it. This is different from Bitcoin and Ethereum, where storing accounts doesn't incur any costs.
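To see the rent-exempt minimum for an account of a given size, the `getMinimumBalanceForRentExemption` RPC method can be queried directly, for example for a 50-byte account on devnet (URL as used elsewhere in these docs):

```bash
$ curl https://devnet.solana.com -X POST -H "Content-Type: application/json" -d '
  {"jsonrpc": "2.0", "id": 1, "method": "getMinimumBalanceForRentExemption", "params": [50]}
'
```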

View File

@ -1,4 +1,6 @@
# Example: Tic-Tac-Toe
---
title: "Example: Tic-Tac-Toe"
---
[Click here to play Tic-Tac-Toe](https://solana-example-tictactoe.herokuapp.com/) on the Solana testnet. Open the link and wait for another player to join, or open the link in a second browser tab to play against yourself. You will see that every move a player makes stores a transaction on the ledger.
@ -19,4 +21,3 @@ Next, follow the steps in the git repository's [README](https://github.com/solan
## Getting lamports to users
You may have noticed you interacted with the Solana cluster without first needing to acquire lamports to pay transaction fees. Under the hood, the web app creates a new ephemeral identity and sends a request to an off-chain service for a signed transaction authorizing a user to start a new game. The service is called a _drone_. When the app sends the signed transaction to the Solana cluster, the drone's lamports are spent to pay the transaction fee and start the game. In a real world app, the drone might request the user watch an ad or pass a CAPTCHA before signing over its lamports.

View File

@ -1,4 +1,6 @@
# Example Client: Web Wallet
---
title: "Example Client: Web Wallet"
---
## Build and run a web wallet locally
@ -13,4 +15,3 @@ $ git checkout $TAG
```
Next, follow the steps in the git repository's [README](https://github.com/solana-labs/example-webwallet/blob/master/README.md).

View File

@ -1,7 +1,9 @@
# Command-line Guide
---
title: Command-line Guide
---
In this section, we will describe how to use the Solana command-line tools to
create a *wallet*, to send and receive SOL tokens, and to participate in
create a _wallet_, to send and receive SOL tokens, and to participate in
the cluster by delegating stake.
To interact with a Solana cluster, we will use its command-line interface, also
@ -11,8 +13,10 @@ necessarily the easiest to use, but it provides the most direct, flexible, and
secure access to your Solana accounts.
## Getting Started
To get started using the Solana Command Line (CLI) tools:
- [Install the Solana Tools](install-solana-cli-tools.md)
- [Choose a Cluster](choose-a-cluster.md)
- [Create a Wallet](../wallet-guide/cli.md)
- [Check out our CLI conventions](conventions.md)
- [Install the Solana Tools](install-solana-cli-tools.md)
- [Choose a Cluster](choose-a-cluster.md)
- [Create a Wallet](../wallet-guide/cli.md)
- [Check out our CLI conventions](conventions.md)

View File

@ -1,8 +1,12 @@
# Connecting to a Cluster
---
title: Connecting to a Cluster
---
See [Solana Clusters](../clusters.md) for general information about the
available clusters.
## Configure the command-line tool
You can check what cluster the Solana command-line tool (CLI) is currently targeting by
running the following command:
@ -10,11 +14,12 @@ running the following command:
solana config get
```
Use the `solana config set` command to target a particular cluster. After setting
Use the `solana config set` command to target a particular cluster. After setting
a cluster target, any future subcommands will send/receive information from that
cluster.
For example, to target the Devnet cluster, run:
```bash
solana config set --url https://devnet.solana.com
```

View File

@ -1,4 +1,6 @@
# Using Solana CLI
---
title: Using Solana CLI
---
Before running any Solana CLI commands, let's go over some conventions that
you will see across all commands. First, the Solana CLI is actually a collection
@ -19,7 +21,7 @@ where you replace the text `<COMMAND>` with the name of the command you want
to learn more about.
The command's usage message will typically contain words such as `<AMOUNT>`,
`<ACCOUNT_ADDRESS>` or `<KEYPAIR>`. Each word is a placeholder for the *type* of
`<ACCOUNT_ADDRESS>` or `<KEYPAIR>`. Each word is a placeholder for the _type_ of
text you can execute the command with. For example, you can replace `<AMOUNT>`
with a number such as `42` or `100.42`. You can replace `<ACCOUNT_ADDRESS>` with
the base58 encoding of your public key, such as
@ -27,12 +29,13 @@ the base58 encoding of your public key, such as
## Keypair conventions
Many commands using the CLI tools require a value for a `<KEYPAIR>`. The value
Many commands using the CLI tools require a value for a `<KEYPAIR>`. The value
you should use for the keypair depends on what type of
[command line wallet you created](../wallet-guide/cli.md).
For example, to display any wallet's address
(also known as the keypair's pubkey), the CLI help document shows:
```bash
solana-keygen pubkey <KEYPAIR>
```
@ -49,9 +52,11 @@ enter the word `ASK` and the program will prompt you to enter your seed words
when you run the command.
To display the wallet address of a Paper Wallet:
```bash
solana-keygen pubkey ASK
```
#### File System Wallet
With a file system wallet, the keypair is stored in a file on your computer.
@ -59,6 +64,7 @@ Replace `<KEYPAIR>` with the complete file path to the keypair file.
For example, if the file system keypair file location is
`/home/solana/my_wallet.json`, to display the address, do:
```bash
solana-keygen pubkey /home/solana/my_wallet.json
```
@ -68,6 +74,7 @@ solana-keygen pubkey /home/solana/my_wallet.json
If you chose a hardware wallet, use your
[keypair URL](../hardware-wallets/README.md#specify-a-hardware-wallet-key),
such as `usb://ledger?key=0`.
```bash
solana-keygen pubkey usb://ledger?key=0
```
```

View File

@ -1,8 +1,14 @@
# Delegate Stake
This page describes the workflow and commands needed to create and manage stake
accounts, and to delegate your stake accounts to a validator using the Solana
command-line tools. The [stake accounts](../staking/stake-accounts.md)
document provides an overview of stake account features and concepts.
---
title: Delegate Stake
---
After you have [received SOL](transfer-tokens.md), you might consider putting
it to use by delegating _stake_ to a validator. Stake is what we call tokens
in a _stake account_. Solana weights validator votes by the amount of stake
delegated to them, which gives those validators more influence in determining
the next valid block of transactions in the blockchain. Solana then generates
new SOL periodically to reward stakers and validators. You earn more rewards
the more stake you delegate.
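At a high level, the flow described in the rest of this page looks like this (a sketch only; the keypair path, amount, and vote account address are placeholders):

```bash
$ solana-keygen new --outfile stake-account.json      # keypair for the new stake account
$ solana create-stake-account stake-account.json 1    # fund it with 1 SOL from the default wallet
$ solana delegate-stake stake-account.json <VOTE_ACCOUNT_ADDRESS>
$ solana stake-account stake-account.json             # inspect the delegation state
```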
## Create a Stake Account
To delegate stake, you will need to transfer some tokens into a stake account.
@ -87,8 +93,7 @@ solana create-stake-account --from <KEYPAIR> <STAKE_ACCOUNT_KEYPAIR> --seed <STR
number corresponding to which derived account this is. The first account might
be "0", then "1", and so on. The public key of `<STAKE_ACCOUNT_KEYPAIR>` acts
as the base address. The command derives a new address from the base address
and seed string. To see what stake address the command will derive, use `solana
create-address-with-seed`:
and seed string. To see what stake address the command will derive, use `solana create-address-with-seed`:
```bash
solana create-address-with-seed --from <PUBKEY> <SEED_STRING> STAKE
@ -190,6 +195,6 @@ keypair for the new account, and `<AMOUNT>` is the number of tokens to transfer
to the new account.
To split a stake account into a derived account address, use the `--seed`
option. See
option. See
[Derive Stake Account Addresses](#advanced-derive-stake-account-addresses)
for details.

View File

@ -1,28 +1,31 @@
# Install the Solana Tool Suite
---
title: Install the Solana Tool Suite
---
There are multiple ways to install the Solana tools on your computer
depending on your preferred workflow:
- [Use Solana's Install Tool (Simplest option)](#use-solanas-install-tool)
- [Download Prebuilt Binaries](#download-prebuilt-binaries)
- [Build from Source](#build-from-source)
- [Use Solana's Install Tool (Simplest option)](#use-solanas-install-tool)
- [Download Prebuilt Binaries](#download-prebuilt-binaries)
- [Build from Source](#build-from-source)
## Use Solana's Install Tool
### MacOS & Linux
- Open your favorite Terminal application
- Open your favorite Terminal application
- Install the Solana release
[LATEST_SOLANA_RELEASE_VERSION](https://github.com/solana-labs/solana/releases/tag/LATEST_SOLANA_RELEASE_VERSION) on your
machine by running:
- Install the Solana release
[LATEST_SOLANA_RELEASE_VERSION](https://github.com/solana-labs/solana/releases/tag/LATEST_SOLANA_RELEASE_VERSION) on your
machine by running:
```bash
curl -sSf https://raw.githubusercontent.com/solana-labs/solana/LATEST_SOLANA_RELEASE_VERSION/install/solana-install-init.sh | sh -s - LATEST_SOLANA_RELEASE_VERSION
```
- If you are connecting to a different testnet, you can replace `LATEST_SOLANA_RELEASE_VERSION` with the
release tag matching the software version of your desired testnet, or replace it
with the named channel `stable`, `beta`, or `edge`.
- If you are connecting to a different testnet, you can replace `LATEST_SOLANA_RELEASE_VERSION` with the
release tag matching the software version of your desired testnet, or replace it
with the named channel `stable`, `beta`, or `edge`.
- The following output indicates a successful update:
@ -36,59 +39,64 @@ Active release directory: /home/solana/.local/share/solana/install/active_releas
Update successful
```
- Depending on your system, the end of the installer messaging may prompt you
to
```bash
- Depending on your system, the end of the installer messaging may prompt you
to
```bash
Please update your PATH environment variable to include the solana programs:
```
- If you get the above message, copy and paste the recommended command below
it to update `PATH`
- Confirm you have the desired version of `solana` installed by running:
```bash
solana --version
- If you get the above message, copy and paste the recommended command below
it to update `PATH`
- Confirm you have the desired version of `solana` installed by running:
```bash
solana --version
```
- After a successful install, `solana-install update` may be used to easily
update the Solana software to a newer version at any time.
- After a successful install, `solana-install update` may be used to easily
update the Solana software to a newer version at any time.
***
---
### Windows
- Open a Command Prompt (`cmd.exe`) as an Administrator
- Search for Command Prompt in the Windows search bar. When the Command
Prompt app appears, right-click and select “Open as Administrator”.
If you are prompted by a pop-up window asking “Do you want to allow this app to
make changes to your device?”, click Yes.
- Open a Command Prompt (`cmd.exe`) as an Administrator
- Copy and paste the following command, then press Enter to download the Solana
installer into a temporary directory:
- Search for Command Prompt in the Windows search bar. When the Command
Prompt app appears, right-click and select “Open as Administrator”.
If you are prompted by a pop-up window asking “Do you want to allow this app to
make changes to your device?”, click Yes.
- Copy and paste the following command, then press Enter to download the Solana
installer into a temporary directory:
```bash
curl http://release.solana.com/LATEST_SOLANA_RELEASE_VERSION/solana-install-init-x86_64-pc-windows-gnu.exe --output C:\solana-install-tmp\solana-install-init.exe --create-dirs
```
- Copy and paste the following command, then press Enter to install the latest
version of Solana. If you see a security pop-up by your system, please select
to allow the program to run.
- Copy and paste the following command, then press Enter to install the latest
version of Solana. If you see a security pop-up by your system, please select
to allow the program to run.
```bash
C:\solana-install-tmp\solana-install-init.exe LATEST_SOLANA_RELEASE_VERSION
```
- When the installer is finished, press Enter.
- When the installer is finished, press Enter.
- Close the command prompt window and re-open a new command prompt window as a
normal user
- Search for "Command Prompt" in the search bar, then left click on the
Command Prompt app icon, no need to run as Administrator)
- Confirm you have the desired version of `solana` installed by entering:
```bash
solana --version
- Close the command prompt window and re-open a new command prompt window as a
normal user
- Search for "Command Prompt" in the search bar, then left click on the
Command Prompt app icon, no need to run as Administrator)
- Confirm you have the desired version of `solana` installed by entering:
```bash
solana --version
```
- After a successful install, `solana-install update` may be used to easily
update the Solana software to a newer version at any time.
- After a successful install, `solana-install update` may be used to easily
update the Solana software to a newer version at any time.
## Download Prebuilt Binaries
@ -99,7 +107,7 @@ manually download and install the binaries.
Download the binaries by navigating to
[https://github.com/solana-labs/solana/releases/latest](https://github.com/solana-labs/solana/releases/latest),
download **solana-release-x86\_64-unknown-linux-gnu.tar.bz2**, then extract the
download **solana-release-x86_64-unknown-linux-gnu.tar.bz2**, then extract the
archive:
```bash
@ -112,7 +120,7 @@ export PATH=$PWD/bin:$PATH
Download the binaries by navigating to
[https://github.com/solana-labs/solana/releases/latest](https://github.com/solana-labs/solana/releases/latest),
download **solana-release-x86\_64-apple-darwin.tar.bz2**, then extract the
download **solana-release-x86_64-apple-darwin.tar.bz2**, then extract the
archive:
```bash
@ -124,12 +132,12 @@ export PATH=$PWD/bin:$PATH
### Windows
- Download the binaries by navigating to
[https://github.com/solana-labs/solana/releases/latest](https://github.com/solana-labs/solana/releases/latest),
download **solana-release-x86\_64-pc-windows-gnu.tar.bz2**, then extract the
archive using WinZip or similar.
[https://github.com/solana-labs/solana/releases/latest](https://github.com/solana-labs/solana/releases/latest),
download **solana-release-x86_64-pc-windows-gnu.tar.bz2**, then extract the
archive using WinZip or similar.
- Open a Command Prompt and navigate to the directory into which you extracted
the binaries and run:
the binaries and run:
```bash
cd solana-release/

View File

@ -1,4 +1,6 @@
# Manage Stake Accounts
---
title: Manage Stake Accounts
---
If you want to delegate stake to many different validators, you will need
to create a separate stake account for each. If you follow the convention

View File

@ -1,10 +1,13 @@
# Send and Receive Tokens
---
title: Send and Receive Tokens
---
This page describes how to receive and send SOL tokens using the command line
tools with a command line wallet such as a [paper wallet](../paper-wallet/README.md),
a [file system wallet](../file-system-wallet/README.md), or a
[hardware wallet](../hardware-wallets/README.md). Before you begin, make sure
[hardware wallet](../hardware-wallets/README.md). Before you begin, make sure
you have created a wallet and have access to its address (pubkey) and the
signing keypair. Check out our
signing keypair. Check out our
[conventions for entering keypairs for different wallet types](../cli/conventions.md#keypair-conventions).
## Testing your Wallet
@ -13,15 +16,15 @@ Before sharing your public key with others, you may want to first ensure the
key is valid and that you indeed hold the corresponding private key.
In this example, we will create a second wallet in addition to your first wallet,
and then transfer some tokens to it. This will confirm that you can send and
and then transfer some tokens to it. This will confirm that you can send and
receive tokens on your wallet type of choice.
This test example uses our Developer Testnet, called devnet. Tokens issued
This test example uses our Developer Testnet, called devnet. Tokens issued
on devnet have **no** value, so don't worry if you lose them.
#### Airdrop some tokens to get started
First, *airdrop* yourself some play tokens on the devnet.
First, _airdrop_ yourself some play tokens on the devnet.
```bash
solana airdrop 10 <RECIPIENT_ACCOUNT_ADDRESS> --url https://devnet.solana.com
@ -85,6 +88,7 @@ where `<ACCOUNT_ADDRESS>` is either the public key from your keypair or the
recipient's public key.
#### Full example of test transfer
```bash
$ solana-keygen new --outfile my_solana_wallet.json # Creating my first wallet, a file system wallet
Generating a new keypair
@ -130,7 +134,7 @@ $ solana balance 7S3P4HxJpyyigGzodYwHtCxZyUQe9JiBMHyRWXArAaKv --url https://devn
To receive tokens, you will need an address for others to send tokens to. In
Solana, the wallet address is the public key of a keypair. There are a variety
of techniques for generating keypairs. The method you choose will depend on how
you choose to store keypairs. Keypairs are stored in wallets. Before receiving
you choose to store keypairs. Keypairs are stored in wallets. Before receiving
tokens, you will need to [create a wallet](../wallet-guide/cli.md).
Once completed, you should have a public key
for each keypair you generated. The public key is a long string of base58

View File

@ -1,4 +1,6 @@
# A Solana Cluster
---
title: A Solana Cluster
---
A Solana cluster is a set of validators working together to serve client transactions and maintain the integrity of the ledger. Many clusters may coexist. When two clusters share a common genesis block, they attempt to converge. Otherwise, they simply ignore the existence of the other. Transactions sent to the wrong one are quietly rejected. In this section, we'll discuss how a cluster is created, how nodes join the cluster, how they share the ledger, how they ensure the ledger is replicated, and how they cope with buggy and malicious nodes.

View File

@ -1,4 +1,6 @@
# Benchmark a Cluster
---
title: Benchmark a Cluster
---
The Solana git repository contains all the scripts you might need to spin up your own local testnet. Depending on what you're looking to achieve, you may want to run a different variation, as the full-fledged, performance-enhanced multinode testnet is considerably more complex to set up than a Rust-only, singlenode testnode. If you are looking to develop high-level features, such as experimenting with smart contracts, save yourself some setup headaches and stick to the Rust-only singlenode demo. If you're doing performance optimization of the transaction pipeline, consider the enhanced singlenode demo. If you're doing consensus work, you'll need at least a Rust-only multinode demo. If you want to reproduce our TPS metrics, run the enhanced multinode demo.
@ -92,17 +94,17 @@ What just happened? The client demo spins up several threads to send 500,000 tra
### Testnet Debugging
There are some useful debug messages in the code, you can enable them on a per-module and per-level basis. Before running a leader or validator set the normal RUST\_LOG environment variable.
There are some useful debug messages in the code, you can enable them on a per-module and per-level basis. Before running a leader or validator set the normal RUST_LOG environment variable.
For example
* To enable `info` everywhere and `debug` only in the solana::banking\_stage module:
- To enable `info` everywhere and `debug` only in the solana::banking_stage module:
```bash
$ export RUST_LOG=solana=info,solana::banking_stage=debug
```
* To enable BPF program logging:
- To enable BPF program logging:
```bash
$ export RUST_LOG=solana_bpf_loader=trace

View File

@ -1,4 +1,6 @@
# Fork Generation
---
title: Fork Generation
---
This section describes how forks naturally occur as a consequence of [leader rotation](leader-rotation.md).
@ -58,7 +60,7 @@ Validators vote based on a greedy choice to maximize their reward described in [
The diagram below represents a validator's view of the PoH stream with possible forks over time. L1, L2, etc. are leader slots, and `E`s represent entries from that leader during that leader's slot. The `x`s represent ticks only, and time flows downwards in the diagram.
![Fork generation](../.gitbook/assets/fork-generation.svg)
![Fork generation](/img/fork-generation.svg)
Note that an `E` appearing on 2 forks at the same slot is a slashable condition, so a validator observing `E3` and `E3'` can slash L3 and safely choose `x` for that slot. Once a validator commits to a fork, other forks can be discarded below that tick count. For any slot, validators need only consider a single "has entries" chain or a "ticks only" chain to be proposed by a leader. But multiple virtual entries may overlap as they link back to a previous slot.
@ -66,10 +68,10 @@ Note that an `E` appearing on 2 forks at the same slot is a slashable condition,
It's useful to consider leader rotation over PoH tick count as time division of the job of encoding state for the cluster. The following table presents the above tree of forks as a time-divided ledger.
| leader slot | L1 | L2 | L3 | L4 | L5 |
| :--- | :--- | :--- | :--- | :--- | :--- |
| data | E1 | E2 | E3 | E4 | E5 |
| ticks since prev | | | | x | xx |
| leader slot | L1 | L2 | L3 | L4 | L5 |
| :--------------- | :-- | :-- | :-- | :-- | :-- |
| data | E1 | E2 | E3 | E4 | E5 |
| ticks since prev | | | | x | xx |
Note that only data from leader L3 will be accepted during leader slot L3. Data from L3 may include "catchup" ticks back to a slot other than L2 if L3 did not observe L2's data. L4 and L5's transmissions include the "ticks to prev" PoH entries.

View File

@ -1,4 +1,6 @@
# Leader Rotation
---
title: Leader Rotation
---
At any given moment, a cluster expects only one validator to produce ledger entries. By having only one leader at a time, all validators are able to replay identical copies of the ledger. The drawback of only one leader at a time, however, is that a malicious leader is capable of censoring votes and transactions. Since censoring cannot be distinguished from the network dropping packets, the cluster cannot simply elect a single node to hold the leader role indefinitely. Instead, the cluster minimizes the influence of a malicious leader by rotating which node takes the lead.
@ -31,8 +33,8 @@ Two partitions that are generating half of the blocks each. Neither is coming to
In this unstable scenario, multiple valid leader schedules exist.
* A leader schedule is generated for every fork whose direct parent is in the previous epoch.
* The leader schedule is valid after the start of the next epoch for descendant forks until it is updated.
- A leader schedule is generated for every fork whose direct parent is in the previous epoch.
- The leader schedule is valid after the start of the next epoch for descendant forks until it is updated.
Each partition's schedule will diverge after the partition lasts more than an epoch. For this reason, the epoch duration should be selected to be much larger than the slot time and the expected length for a fork to be committed to root.
@ -73,8 +75,8 @@ The seed that is selected is predictable but unbiasable. There is no grinding at
A leader can bias the active set by censoring validator votes. Two possible ways exist for leaders to censor the active set:
* Ignore votes from validators
* Refuse to vote for blocks with votes from validators
- Ignore votes from validators
- Refuse to vote for blocks with votes from validators
To reduce the likelihood of censorship, the active set is calculated at the leader schedule offset boundary over an _active set sampling duration_. The active set sampling duration is long enough such that votes will have been collected by multiple leaders.

View File

@ -1,4 +1,6 @@
# Managing Forks
---
title: Managing Forks
---
The ledger is permitted to fork at slot boundaries. The resulting data structure forms a tree called a _blockstore_. When the validator interprets the blockstore, it must maintain state for each fork in the chain. We call each instance an _active fork_. It is the responsibility of a validator to weigh those forks, such that it may eventually select a fork.
@ -8,14 +10,14 @@ A validator selects a fork by submiting a vote to a slot leader on that fork. Th
An active fork is a sequence of checkpoints that has a length at least one longer than the rollback depth. The shortest fork will have a length exactly one longer than the rollback depth. For example:
![Forks](../.gitbook/assets/forks.svg)
![Forks](/img/forks.svg)
The following sequences are _active forks_:
* {4, 2, 1}
* {5, 2, 1}
* {6, 3, 1}
* {7, 3, 1}
- {4, 2, 1}
- {5, 2, 1}
- {6, 3, 1}
- {7, 3, 1}
## Pruning and Squashing
@ -23,12 +25,12 @@ A validator may vote on any checkpoint in the tree. In the diagram above, that's
Starting from the example above, with a rollback depth of 2, consider a vote on 5 versus a vote on 6. First, a vote on 5:
![Forks after pruning](../.gitbook/assets/forks-pruned.svg)
![Forks after pruning](/img/forks-pruned.svg)
The new root is 2, and any active forks that are not descendants from 2 are pruned.
Alternatively, a vote on 6:
![Forks](../.gitbook/assets/forks-pruned2.svg)
![Forks](/img/forks-pruned2.svg)
The tree remains with a root of 1, since the active fork starting at 6 is only 2 checkpoints from the root.

View File

@ -1,4 +1,6 @@
# Performance Metrics
---
title: Performance Metrics
---
Solana cluster performance is measured as average number of transactions per second that the network can sustain \(TPS\). And, how long it takes for a transaction to be confirmed by super majority of the cluster \(Confirmation Time\).
@ -21,4 +23,3 @@ The validator software is deployed to GCP n1-standard-16 instances with 1TB pd-s
solana-bench-tps is started after the network converges from a client machine with n1-standard-16 CPU-only instance with the following arguments: `--tx\_count=50000 --thread-batch-sleep 1000`
TPS and confirmation metrics are captured from the dashboard numbers over a 5 minute average of when the bench-tps transfer stage begins.

View File

@ -1,4 +1,6 @@
# Stake Delegation and Rewards
---
title: Stake Delegation and Rewards
---
Stakers are rewarded for helping to validate the ledger. They do this by delegating their stake to validator nodes. Those validators do the legwork of replaying the ledger and send votes to a per-node vote account to which stakers can delegate their stakes. The rest of the cluster uses those stake-weighted votes to select a block when forks arise. Both the validator and staker need some economic incentive to play their part. The validator needs to be compensated for its hardware and the staker needs to be compensated for the risk of getting its stake slashed. The economics are covered in [staking rewards](../implemented-proposals/staking-rewards.md). This section, on the other hand, describes the underlying mechanics of its implementation.
@ -22,18 +24,18 @@ The rewards process is split into two on-chain programs. The Vote program solves
VoteState is the current state of all the votes the validator has submitted to the network. VoteState contains the following state information:
* `votes` - The submitted votes data structure.
* `credits` - The total number of rewards this vote program has generated over its lifetime.
* `root_slot` - The last slot to reach the full lockout commitment necessary for rewards.
* `commission` - The commission taken by this VoteState for any rewards claimed by staker's Stake accounts. This is the percentage ceiling of the reward.
* Account::lamports - The accumulated lamports from the commission. These do not count as stakes.
* `authorized_voter` - Only this identity is authorized to submit votes. This field can only be modified by this identity.
* `node_pubkey` - The Solana node that votes in this account.
* `authorized_withdrawer` - the identity of the entity in charge of the lamports of this account, separate from the account's address and the authorized vote signer
- `votes` - The submitted votes data structure.
- `credits` - The total number of rewards this vote program has generated over its lifetime.
- `root_slot` - The last slot to reach the full lockout commitment necessary for rewards.
- `commission` - The commission taken by this VoteState for any rewards claimed by staker's Stake accounts. This is the percentage ceiling of the reward.
- Account::lamports - The accumulated lamports from the commission. These do not count as stakes.
- `authorized_voter` - Only this identity is authorized to submit votes. This field can only be modified by this identity.
- `node_pubkey` - The Solana node that votes in this account.
- `authorized_withdrawer` - the identity of the entity in charge of the lamports of this account, separate from the account's address and the authorized vote signer
### VoteInstruction::Initialize\(VoteInit\)
* `account[0]` - RW - The VoteState
- `account[0]` - RW - The VoteState
`VoteInit` carries the new vote account's `node_pubkey`, `authorized_voter`, `authorized_withdrawer`, and `commission`
@ -43,16 +45,16 @@ VoteState is the current state of all the votes the validator has submitted to t
Updates the account with a new authorized voter or withdrawer, according to the VoteAuthorize parameter \(`Voter` or `Withdrawer`\). The transaction must be signed by the Vote account's current `authorized_voter` or `authorized_withdrawer`.
* `account[0]` - RW - The VoteState
- `account[0]` - RW - The VoteState
`VoteState::authorized_voter` or `authorized_withdrawer` is set to to `Pubkey`.
### VoteInstruction::Vote\(Vote\)
* `account[0]` - RW - The VoteState
- `account[0]` - RW - The VoteState
`VoteState::lockouts` and `VoteState::credits` are updated according to voting lockout rules see [Tower BFT](../implemented-proposals/tower-bft.md)
* `account[1]` - RO - `sysvar::slot_hashes` A list of some N most recent slots and their hashes for the vote to be verified against.
* `account[2]` - RO - `sysvar::clock` The current network time, expressed in slots, epochs.
- `account[1]` - RO - `sysvar::slot_hashes` A list of some N most recent slots and their hashes for the vote to be verified against.
- `account[2]` - RO - `sysvar::clock` The current network time, expressed in slots, epochs.
### StakeState
@ -62,15 +64,15 @@ A StakeState takes one of four forms, StakeState::Uninitialized, StakeState::Ini
StakeState::Stake is the current delegation preference of the **staker** and contains the following state information:
* Account::lamports - The lamports available for staking.
* `stake` - the staked amount \(subject to warm up and cool down\) for generating rewards, always less than or equal to Account::lamports
* `voter_pubkey` - The pubkey of the VoteState instance the lamports are delegated to.
* `credits_observed` - The total credits claimed over the lifetime of the program.
* `activated` - the epoch at which this stake was activated/delegated. The full stake will be counted after warm up.
* `deactivated` - the epoch at which this stake was de-activated, some cool down epochs are required before the account is fully deactivated, and the stake available for withdrawal
- Account::lamports - The lamports available for staking.
- `stake` - the staked amount \(subject to warm up and cool down\) for generating rewards, always less than or equal to Account::lamports
- `voter_pubkey` - The pubkey of the VoteState instance the lamports are delegated to.
- `credits_observed` - The total credits claimed over the lifetime of the program.
- `activated` - the epoch at which this stake was activated/delegated. The full stake will be counted after warm up.
- `deactivated` - the epoch at which this stake was de-activated, some cool down epochs are required before the account is fully deactivated, and the stake available for withdrawal
* `authorized_staker` - the pubkey of the entity that must sign delegation, activation, and deactivation transactions
* `authorized_withdrawer` - the identity of the entity in charge of the lamports of this account, separate from the account's address, and the authorized staker
- `authorized_staker` - the pubkey of the entity that must sign delegation, activation, and deactivation transactions
- `authorized_withdrawer` - the identity of the entity in charge of the lamports of this account, separate from the account's address, and the authorized staker
### StakeState::RewardsPool
@ -82,17 +84,17 @@ The Stakes and the RewardsPool are accounts that are owned by the same `Stake` p
The Stake account is moved from Initialized to StakeState::Stake form, or from a deactivated (i.e. fully cooled-down) StakeState::Stake to activated StakeState::Stake. This is how stakers choose the vote account and validator node to which their stake account lamports are delegated. The transaction must be signed by the stake's `authorized_staker`.
* `account[0]` - RW - The StakeState::Stake instance. `StakeState::Stake::credits_observed` is initialized to `VoteState::credits`, `StakeState::Stake::voter_pubkey` is initialized to `account[1]`. If this is the initial delegation of stake, `StakeState::Stake::stake` is initialized to the account's balance in lamports, `StakeState::Stake::activated` is initialized to the current Bank epoch, and `StakeState::Stake::deactivated` is initialized to std::u64::MAX
* `account[1]` - R - The VoteState instance.
* `account[2]` - R - sysvar::clock account, carries information about current Bank epoch
* `account[3]` - R - sysvar::stakehistory account, carries information about stake history
* `account[4]` - R - stake::Config account, carries warmup, cooldown, and slashing configuration
- `account[0]` - RW - The StakeState::Stake instance. `StakeState::Stake::credits_observed` is initialized to `VoteState::credits`, `StakeState::Stake::voter_pubkey` is initialized to `account[1]`. If this is the initial delegation of stake, `StakeState::Stake::stake` is initialized to the account's balance in lamports, `StakeState::Stake::activated` is initialized to the current Bank epoch, and `StakeState::Stake::deactivated` is initialized to std::u64::MAX
- `account[1]` - R - The VoteState instance.
- `account[2]` - R - sysvar::clock account, carries information about current Bank epoch
- `account[3]` - R - sysvar::stakehistory account, carries information about stake history
- `account[4]` - R - stake::Config account, carries warmup, cooldown, and slashing configuration
### StakeInstruction::Authorize\(Pubkey, StakeAuthorize\)
Updates the account with a new authorized staker or withdrawer, according to the StakeAuthorize parameter \(`Staker` or `Withdrawer`\). The transaction must be signed by the Stake account's current `authorized_staker` or `authorized_withdrawer`. Any stake lock-up must have expired, or the lock-up custodian must also sign the transaction.
Updates the account with a new authorized staker or withdrawer, according to the StakeAuthorize parameter \(`Staker` or `Withdrawer`\). The transaction must be signed by the Stake account's current `authorized_staker` or `authorized_withdrawer`. Any stake lock-up must have expired, or the lock-up custodian must also sign the transaction.
* `account[0]` - RW - The StakeState
- `account[0]` - RW - The StakeState
`StakeState::authorized_staker` or `authorized_withdrawer` is set to `Pubkey`.
@ -101,8 +103,8 @@ Updates the account with a new authorized staker or withdrawer, according to the
A staker may wish to withdraw from the network. To do so he must first deactivate his stake, and wait for cool down.
The transaction must be signed by the stake's `authorized_staker`.
* `account[0]` - RW - The StakeState::Stake instance that is deactivating.
* `account[1]` - R - sysvar::clock account from the Bank that carries current epoch
- `account[0]` - RW - The StakeState::Stake instance that is deactivating.
- `account[1]` - R - sysvar::clock account from the Bank that carries current epoch
StakeState::Stake::deactivated is set to the current epoch + cool down. The account's stake will ramp down to zero by that epoch, and Account::lamports will be available for withdrawal.
@ -110,21 +112,21 @@ StakeState::Stake::deactivated is set to the current epoch + cool down. The acco
Lamports build up over time in a Stake account and any excess over activated stake can be withdrawn. The transaction must be signed by the stake's `authorized_withdrawer`.
* `account[0]` - RW - The StakeState::Stake from which to withdraw.
* `account[1]` - RW - Account that should be credited with the withdrawn lamports.
* `account[2]` - R - sysvar::clock account from the Bank that carries current epoch, to calculate stake.
* `account[3]` - R - sysvar::stake\_history account from the Bank that carries stake warmup/cooldown history
- `account[0]` - RW - The StakeState::Stake from which to withdraw.
- `account[1]` - RW - Account that should be credited with the withdrawn lamports.
- `account[2]` - R - sysvar::clock account from the Bank that carries current epoch, to calculate stake.
- `account[3]` - R - sysvar::stake_history account from the Bank that carries stake warmup/cooldown history
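The instruction and account layouts described in this section can be summarized with a hedged Rust sketch. The variant names and the withdraw amount parameter are illustrative; the real program's enum differs in detail.

```rust
// Illustrative summary of the stake instructions described above.
type Pubkey = [u8; 32];

enum StakeAuthorize {
    Staker,
    Withdrawer,
}

enum StakeInstruction {
    /// accounts: [stake (RW), vote (R), clock (R), stake_history (R), config (R)]
    DelegateStake,
    /// accounts: [stake (RW)]; signed by the current authorized staker or withdrawer
    Authorize(Pubkey, StakeAuthorize),
    /// accounts: [stake (RW), clock (R)]; signed by the authorized staker
    Deactivate,
    /// accounts: [stake (RW), recipient (RW), clock (R), stake_history (R)];
    /// signed by the authorized withdrawer; the u64 is the lamports to withdraw
    Withdraw(u64),
}
```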
## Benefits of the design
* Single vote for all the stakers.
* Clearing of the credit variable is not necessary for claiming rewards.
* Each delegated stake can claim its rewards independently.
* Commission for the work is deposited when a reward is claimed by the delegated stake.
- Single vote for all the stakers.
- Clearing of the credit variable is not necessary for claiming rewards.
- Each delegated stake can claim its rewards independently.
- Commission for the work is deposited when a reward is claimed by the delegated stake.
## Example Callflow
![Passive Staking Callflow](../.gitbook/assets/passive-staking-callflow.svg)
![Passive Staking Callflow](/img/passive-staking-callflow.svg)
## Staking Rewards
@ -171,22 +173,22 @@ Consider the situation of a single stake of 1,000 activated at epoch N, with net
At epoch N+1, the amount available to be activated for the network is 400 \(20% of 200\), and at epoch N, this example stake is the only stake activating, and so is entitled to all of the warmup room available.
| epoch | effective | activating | total effective | total activating |
| :--- | ---: | ---: | ---: | ---: |
| N-1 | | | 2,000 | 0 |
| N | 0 | 1,000 | 2,000 | 1,000 |
| N+1 | 400 | 600 | 2,400 | 600 |
| N+2 | 880 | 120 | 2,880 | 120 |
| N+3 | 1000 | 0 | 3,000 | 0 |
| :---- | --------: | ---------: | --------------: | ---------------: |
| N-1 | | | 2,000 | 0 |
| N | 0 | 1,000 | 2,000 | 1,000 |
| N+1 | 400 | 600 | 2,400 | 600 |
| N+2 | 880 | 120 | 2,880 | 120 |
| N+3 | 1000 | 0 | 3,000 | 0 |
Were 2 stakes \(X and Y\) to activate at epoch N, they would be awarded a portion of the 20% in proportion to their stakes. At each epoch effective and activating for each stake is a function of the previous epoch's state.
| epoch | X eff | X act | Y eff | Y act | total effective | total activating |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: |
| N-1 | | | | | 2,000 | 0 |
| N | 0 | 1,000 | 0 | 200 | 2,000 | 1,200 |
| N+1 | 333 | 667 | 67 | 133 | 2,400 | 800 |
| N+2 | 733 | 267 | 146 | 54 | 2,880 | 321 |
| N+3 | 1000 | 0 | 200 | 0 | 3,200 | 0 |
| :---- | ----: | ----: | ----: | ----: | --------------: | ---------------: |
| N-1 | | | | | 2,000 | 0 |
| N | 0 | 1,000 | 0 | 200 | 2,000 | 1,200 |
| N+1 | 333 | 667 | 67 | 133 | 2,400 | 800 |
| N+2 | 733 | 267 | 146 | 54 | 2,880 | 321 |
| N+3 | 1000 | 0 | 200 | 0 | 3,200 | 0 |
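The warm-up arithmetic behind both tables can be reproduced with a short sketch. It assumes the 20% warm-up rate from the example and splits each epoch's warm-up room across stakes in proportion to their activating amounts; integer rounding makes the two-stake figures differ from the table by a token or two.

```rust
// Advance one epoch of warm-up: newly effective stake is capped at
// `rate` times the total effective stake, shared proportionally.
fn warm_up_epoch(effective: &mut [u64], activating: &mut [u64], rate: f64) {
    let total_effective: u64 = effective.iter().sum();
    let total_activating: u64 = activating.iter().sum();
    if total_activating == 0 {
        return;
    }
    let room = ((total_effective as f64 * rate) as u64).min(total_activating);
    for (eff, act) in effective.iter_mut().zip(activating.iter_mut()) {
        let share = (room as f64 * *act as f64 / total_activating as f64) as u64;
        let newly_effective = share.min(*act);
        *eff += newly_effective;
        *act -= newly_effective;
    }
}

fn main() {
    // The single 1,000 stake from the first table, activating at epoch N
    // alongside 2,000 already-effective stake in the rest of the cluster.
    let mut effective = vec![2_000u64, 0];
    let mut activating = vec![0u64, 1_000];
    for epoch in 1..=3 {
        warm_up_epoch(&mut effective, &mut activating, 0.20);
        println!("N+{}: effective {} activating {}", epoch, effective[1], activating[1]);
    }
}
```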
### Withdrawal
@ -194,4 +196,4 @@ Only lamports in excess of effective+activating stake may be withdrawn at any ti
### Lock-up
Stake accounts support the notion of lock-up, wherein the stake account balance is unavailable for withdrawal until a specified time. Lock-up is specified as an epoch height, i.e. the minimum epoch height that must be reached by the network before the stake account balance is available for withdrawal, unless the transaction is also signed by a specified custodian. This information is gathered when the stake account is created, and stored in the Lockup field of the stake account's state. Changing the authorized staker or withdrawer is also subject to lock-up, as such an operation is effectively a transfer.
Stake accounts support the notion of lock-up, wherein the stake account balance is unavailable for withdrawal until a specified time. Lock-up is specified as an epoch height, i.e. the minimum epoch height that must be reached by the network before the stake account balance is available for withdrawal, unless the transaction is also signed by a specified custodian. This information is gathered when the stake account is created, and stored in the Lockup field of the stake account's state. Changing the authorized staker or withdrawer is also subject to lock-up, as such an operation is effectively a transfer.
@ -1,4 +1,6 @@
# Synchronization
---
title: Synchronization
---
Fast, reliable synchronization is the biggest reason Solana is able to achieve such high throughput. Traditional blockchains synchronize on large chunks of transactions called blocks. By synchronizing on blocks, a transaction cannot be processed until a duration called "block time" has passed. In Proof of Work consensus, these block times need to be very large \(~10 minutes\) to minimize the odds of multiple validators producing a new valid block at the same time. There's no such constraint in Proof of Stake consensus, but without reliable timestamps, a validator cannot determine the order of incoming blocks. The popular workaround is to tag each block with a [wallclock timestamp](https://en.bitcoin.it/wiki/Block_timestamp). Because of clock drift and variance in network latencies, the timestamp is only accurate within an hour or two. To workaround the workaround, these systems lengthen block times to provide reasonable certainty that the median timestamp on each block is always increasing.
@ -22,6 +24,5 @@ Proof of History is not a consensus mechanism, but it is used to improve the per
## More on Proof of History
* [water clock analogy](https://medium.com/solana-labs/proof-of-history-explained-by-a-water-clock-e682183417b8)
* [Proof of History overview](https://medium.com/solana-labs/proof-of-history-a-clock-for-blockchain-cf47a61a9274)
- [water clock analogy](https://medium.com/solana-labs/proof-of-history-explained-by-a-water-clock-e682183417b8)
- [Proof of History overview](https://medium.com/solana-labs/proof-of-history-a-clock-for-blockchain-cf47a61a9274)
@ -1,4 +1,6 @@
# Turbine Block Propagation
---
title: Turbine Block Propagation
---
A Solana cluster uses a multi-layer block propagation mechanism called _Turbine_ to broadcast transaction shreds to all nodes with minimal amount of duplicate messages. The cluster divides itself into small collections of nodes, called _neighborhoods_. Each node is responsible for sharing any data it receives with the other nodes in its neighborhood, as well as propagating the data on to a small set of nodes in other neighborhoods. This way each node only has to communicate with a small number of nodes.
@ -20,15 +22,15 @@ This way each node only has to communicate with a maximum of `2 * DATA_PLANE_FAN
The following diagram shows how the Leader sends shreds with a Fanout of 2 to Neighborhood 0 in Layer 0 and how the nodes in Neighborhood 0 share their data with each other.
![Leader sends shreds to Neighborhood 0 in Layer 0](../.gitbook/assets/data-plane-seeding.svg)
![Leader sends shreds to Neighborhood 0 in Layer 0](/img/data-plane-seeding.svg)
The following diagram shows how Neighborhood 0 fans out to Neighborhoods 1 and 2.
![Neighborhood 0 Fanout to Neighborhood 1 and 2](../.gitbook/assets/data-plane-fanout.svg)
![Neighborhood 0 Fanout to Neighborhood 1 and 2](/img/data-plane-fanout.svg)
Finally, the following diagram shows a two layer cluster with a Fanout of 2.
![Two layer cluster with a Fanout of 2](../.gitbook/assets/data-plane.svg)
![Two layer cluster with a Fanout of 2](/img/data-plane.svg)
### Configuration Values
@ -38,59 +40,62 @@ Currently, configuration is set when the cluster is launched. In the future, the
## Calculating the required FEC rate
Turbine relies on retransmission of packets between validators. Due to
Turbine relies on retransmission of packets between validators. Due to
retransmission, any network wide packet loss is compounded, and the
probability of the packet failing to reach its destination increases
on each hop. The FEC rate needs to take into account the network wide
on each hop. The FEC rate needs to take into account the network wide
packet loss, and the propagation depth.
A shred group is the set of data and coding packets that can be used
to reconstruct each other. Each shred group has a chance of failure,
to reconstruct each other. Each shred group has a chance of failure,
based on the likelihood that the number of failed packets exceeds
the FEC rate. If a validator fails to reconstruct the shred group,
then the block cannot be reconstructed, and the validator has to rely
on repair to fixup the blocks.
The probability of the shred group failing can be computed using the
binomial distribution. If the FEC rate is `16:4`, then the group size
binomial distribution. If the FEC rate is `16:4`, then the group size
is 20, and at least 4 of the shreds must fail for the group to fail.
This is equal to the sum of the probabilities of 4 or more trials failing
out of 20.
Probability of a block succeeding in turbine:
* Probability of packet failure: `P = 1 - (1 - network_packet_loss_rate)^2`
* FEC rate: `K:M`
* Number of trials: `N = K + M`
* Shred group failure rate: `S = SUM of i=0 -> M for binomial(prob_failure = P, trials = N, failures = i)`
* Shreds per block: `G`
* Block success rate: `B = (1 - S) ^ (G / N) `
* Binomial distribution for exactly `i` results with probability of P in N trials is defined as `(N choose i) * P^i * (1 - P)^(N-i)`
- Probability of packet failure: `P = 1 - (1 - network_packet_loss_rate)^2`
- FEC rate: `K:M`
- Number of trials: `N = K + M`
- Shred group failure rate: `S = SUM of i=0 -> M for binomial(prob_failure = P, trials = N, failures = i)`
- Shreds per block: `G`
- Block success rate: `B = (1 - S) ^ (G / N)`
- Binomial distribution for exactly `i` results with probability of P in N trials is defined as `(N choose i) * P^i * (1 - P)^(N-i)`
For example:
* Network packet loss rate is 15%.
* 50kpts network generates 6400 shreds per second.
* FEC rate increases the total shreds per block by the FEC ratio.
- Network packet loss rate is 15%.
- 50kpts network generates 6400 shreds per second.
- FEC rate increases the total shreds per block by the FEC ratio.
With a FEC rate: `16:4`
* `G = 8000`
* `P = 1 - 0.85 * 0.85 = 1 - 0.7225 = 0.2775`
* `S = SUM of i=0 -> 4 for binomial(prob_failure = 0.2775, trials = 20, failures = i) = 0.689414`
* `B = (1 - 0.689) ^ (8000 / 20) = 10^-203`
- `G = 8000`
- `P = 1 - 0.85 * 0.85 = 1 - 0.7225 = 0.2775`
- `S = SUM of i=0 -> 4 for binomial(prob_failure = 0.2775, trials = 20, failures = i) = 0.689414`
- `B = (1 - 0.689) ^ (8000 / 20) = 10^-203`
With FEC rate of `16:16`
* `G = 12800`
* `S = SUM of i=0 -> 32 for binomial(prob_failure = 0.2775, trials = 64, failures = i) = 0.002132`
* `B = (1 - 0.002132) ^ (12800 / 32) = 0.42583`
- `G = 12800`
- `S = SUM of i=0 -> 32 for binomial(prob_failure = 0.2775, trials = 64, failures = i) = 0.002132`
- `B = (1 - 0.002132) ^ (12800 / 32) = 0.42583`
With FEC rate of `32:32`
* `G = 12800`
* `S = SUM of i=0 -> 32 for binomial(prob_failure = 0.2775, trials = 64, failures = i) = 0.000048`
* `B = (1 - 0.000048) ^ (12800 / 64) = 0.99045`
- `G = 12800`
- `S = SUM of i=0 -> 32 for binomial(prob_failure = 0.2775, trials = 64, failures = i) = 0.000048`
- `B = (1 - 0.000048) ^ (12800 / 64) = 0.99045`
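These figures can be cross-checked directly from the binomial distribution. The sketch below reads `S` as the probability that more shreds are lost than the FEC code adds, which is the reading that reproduces the `16:4` numbers above; the values in `main` come from that example.

```rust
// C(n, k) * p^k * (1-p)^(n-k), computed iteratively in f64.
fn binomial_pmf(n: u64, k: u64, p: f64) -> f64 {
    let mut coeff = 1.0;
    for i in 0..k {
        coeff *= (n - i) as f64 / (i + 1) as f64;
    }
    coeff * p.powi(k as i32) * (1.0 - p).powi((n - k) as i32)
}

fn block_success_rate(packet_loss: f64, k: u64, m: u64, shreds_per_block: u64) -> f64 {
    // Packet failure probability after one retransmission hop.
    let p = 1.0 - (1.0 - packet_loss) * (1.0 - packet_loss);
    let n = k + m; // shred group size
    // The group fails when more than `m` of its `n` shreds are lost.
    let group_failure: f64 = (m + 1..=n).map(|i| binomial_pmf(n, i, p)).sum();
    (1.0 - group_failure).powf(shreds_per_block as f64 / n as f64)
}

fn main() {
    // 15% network packet loss, FEC 16:4, 8000 shreds per block.
    println!("B = {:e}", block_success_rate(0.15, 16, 4, 8_000));
}
```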
## Neighborhoods
The following diagram shows how two neighborhoods in different layers interact. To cripple a neighborhood, enough nodes \(erasure codes +1\) from the neighborhood above need to fail. Since each neighborhood receives shreds from multiple nodes in a neighborhood in the upper layer, we'd need a big network failure in the upper layers to end up with incomplete data.
![Inner workings of a neighborhood](../.gitbook/assets/data-plane-neighborhood.svg)
![Inner workings of a neighborhood](/img/data-plane-neighborhood.svg)
@ -1,4 +1,6 @@
# Secure Vote Signing
---
title: Secure Vote Signing
---
A validator receives entries from the current leader and submits votes confirming those entries are valid. This vote submission presents a security challenge, because forged votes that violate consensus rules could be used to slash the validator's stake.
@ -20,30 +22,30 @@ Currently, there is a 1:1 relationship between validators and vote signers, and
The vote signing service consists of a JSON RPC server and a request processor. At startup, the service starts the RPC server at a configured port and waits for validator requests. It expects the following types of requests: 1. Register a new validator node
* The request must contain validator's identity \(public key\)
* The request must be signed with the validator's private key
* The service drops the request if signature of the request cannot be
- The request must contain validator's identity \(public key\)
- The request must be signed with the validator's private key
- The service drops the request if signature of the request cannot be
verified
* The service creates a new voting asymmetric key for the validator, and
- The service creates a new voting asymmetric key for the validator, and
returns the public key as a response
* If a validator tries to register again, the service returns the public key
- If a validator tries to register again, the service returns the public key
from the pre-existing keypair
1. Sign a vote
* The request must contain a voting transaction and all verification data
* The request must be signed with the validator's private key
* The service drops the request if signature of the request cannot be
- The request must contain a voting transaction and all verification data
- The request must be signed with the validator's private key
- The service drops the request if signature of the request cannot be
verified
* The service verifies the voting data
* The service returns a signature for the transaction
- The service verifies the voting data
- The service returns a signature for the transaction
## Validator voting
@ -64,4 +66,3 @@ The validator looks up the votes submitted by all the nodes in the cluster for t
### New Vote Signing
The validator creates a "new vote" transaction and sends it to the signing service using JSON RPC. The RPC request also includes the vote verification data. On success, the RPC call returns the signature for the vote. On failure, RPC call returns the failure code.
@ -1,34 +1,40 @@
# Solana Clusters
---
title: Solana Clusters
---
Solana maintains several different clusters with different purposes.
Before you begin, make sure you have first
[installed the Solana command line tools](cli/install-solana-cli-tools.md).
Explorers:
* [http://explorer.solana.com/](https://explorer.solana.com/).
* [http://solanabeach.io/](http://solanabeach.io/).
- [http://explorer.solana.com/](https://explorer.solana.com/).
- [http://solanabeach.io/](http://solanabeach.io/).
## Devnet
* Devnet serves as a playground for anyone who wants to take Solana for a
test drive, as a user, token holder, app developer, or validator.
* Application developers should target Devnet.
* Potential validators should first target Devnet.
* Key differences between Devnet and Mainnet Beta:
* Devnet tokens are **not real**
* Devnet includes a token faucet for airdrops for application testing
* Devnet may be subject to ledger resets
* Devnet typically runs a newer software version than Mainnet Beta
* Devnet may be maintained by different validators than Mainnet Beta
* Gossip entrypoint for Devnet: `devnet.solana.com:8001`
* RPC URL for Devnet: `https://devnet.solana.com`
- Devnet serves as a playground for anyone who wants to take Solana for a
test drive, as a user, token holder, app developer, or validator.
- Application developers should target Devnet.
- Potential validators should first target Devnet.
- Key differences between Devnet and Mainnet Beta:
- Devnet tokens are **not real**
- Devnet includes a token faucet for airdrops for application testing
- Devnet may be subject to ledger resets
- Devnet typically runs a newer software version than Mainnet Beta
- Devnet may be maintained by different validators than Mainnet Beta
- Gossip entrypoint for Devnet: `devnet.solana.com:8001`
- RPC URL for Devnet: `https://devnet.solana.com`
##### Example `solana` command-line configuration
```bash
solana config set --url https://devnet.solana.com
```
##### Example `solana-validator` command-line
```bash
$ solana-validator \
--identity ~/validator-keypair.json \
@ -46,29 +52,30 @@ $ solana-validator \
The `--trusted-validator`s are operated by Solana
## Testnet
* Testnet is where we stress test recent release features on a live
cluster, particularly focused on network performance, stability and validator
behavior.
* [Tour de SOL](tour-de-sol/README.md) initiative runs on Testnet, where we
encourage malicious behavior and attacks on the network to help us find and
squash bugs or network vulnerabilities.
* Testnet tokens are **not real**
* Testnet may be subject to ledger resets.
* Testnet typically runs a newer software release than both Devnet and
Mainnet Beta
* Testnet may be maintained by different validators than Mainnet Beta
* Gossip entrypoint for Testnet: `35.203.170.30:8001`
* RPC URL for Testnet: `https://testnet.solana.com`
- Testnet is where we stress test recent release features on a live
cluster, particularly focused on network performance, stability and validator
behavior.
- [Tour de SOL](tour-de-sol/README.md) initiative runs on Testnet, where we
encourage malicious behavior and attacks on the network to help us find and
squash bugs or network vulnerabilities.
- Testnet tokens are **not real**
- Testnet may be subject to ledger resets.
- Testnet typically runs a newer software release than both Devnet and
Mainnet Beta
- Testnet may be maintained by different validators than Mainnet Beta
- Gossip entrypoint for Testnet: `35.203.170.30:8001`
- RPC URL for Testnet: `https://testnet.solana.com`
##### Example `solana` command-line configuration
```bash
solana config set --url https://testnet.solana.com
```
##### Example `solana-validator` command-line
```bash
$ solana-validator \
--identity ~/validator-keypair.json \
@ -87,28 +94,33 @@ $ solana-validator \
```
The identities of the `--trusted-validator`s are:
* `5D1fNXzvv5NjV1ysLjirC4WY92RNsVH18vjmcszZd8on` - testnet.solana.com (Solana)
* `Ft5fbkqNa76vnsjYNwjDZUXoTWpP7VYm3mtsaQckQADN` - Certus One
* `9QxCLckBiJc783jnMvXZubK4wH86Eqqvashtrwvcsgkv` - Algo|Stake
- `5D1fNXzvv5NjV1ysLjirC4WY92RNsVH18vjmcszZd8on` - testnet.solana.com (Solana)
- `Ft5fbkqNa76vnsjYNwjDZUXoTWpP7VYm3mtsaQckQADN` - Certus One
- `9QxCLckBiJc783jnMvXZubK4wH86Eqqvashtrwvcsgkv` - Algo|Stake
## Mainnet Beta
A permissionless, persistent cluster for early token holders and launch partners.
Currently smart contracts, rewards, and inflation are disabled.
* Tokens that are issued on Mainnet Beta are **real** SOL
* If you have paid money to purchase/be issued tokens, such as through our
CoinList auction, these tokens will be transferred on Mainnet Beta.
* Note: If you are using a non-command-line wallet such as
[Trust Wallet](wallet-guide/trust-wallet.md),
the wallet will always be connecting to Mainnet Beta.
* Gossip entrypoint for Mainnet Beta: `mainnet-beta.solana.com:8001`
* RPC URL for Mainnet Beta: `https://api.mainnet-beta.solana.com`
- Tokens that are issued on Mainnet Beta are **real** SOL
- If you have paid money to purchase/be issued tokens, such as through our
CoinList auction, these tokens will be transferred on Mainnet Beta.
- Note: If you are using a non-command-line wallet such as
[Trust Wallet](wallet-guide/trust-wallet.md),
the wallet will always be connecting to Mainnet Beta.
- Gossip entrypoint for Mainnet Beta: `mainnet-beta.solana.com:8001`
- RPC URL for Mainnet Beta: `https://api.mainnet-beta.solana.com`
##### Example `solana` command-line configuration
```bash
solana config set --url https://api.mainnet-beta.solana.com
```
##### Example `solana-validator` command-line
```bash
$ solana-validator \
--identity ~/validator-keypair.json \
docs/src/css/custom.css
@ -0,0 +1,69 @@
/* stylelint-disable docusaurus/copyright-header */
/**
* Any CSS included here will be global. The classic template
* bundles Infima by default. Infima is a CSS framework designed to
* work well for content-centric websites.
*/
/* You can override the default Infima variables here. */
@import url('https://fonts.googleapis.com/css2?family=Roboto');
:root {
--ifm-color-primary: #25c2a0;
--ifm-color-primary-dark: #409088;
--ifm-color-primary-darker: #387462;
--ifm-color-primary-darkest: #1b4e3f;
--ifm-color-primary-light: #42ba96;
--ifm-color-primary-lighter: #86b8b6;
--ifm-color-primary-lightest: #abd5c6;
--ifm-code-font-size: 95%;
--ifm-spacing-horizontal: 1em;
--ifm-font-family-base: "Roboto", system-ui, -apple-system, Segoe UI, Roboto, Ubuntu, Cantarell, Noto Sans, sans-serif, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', 'Segoe UI Symbol';
--ifm-footer-background-color: #232323;
}
@keyframes fadeInUp {
0% { opacity: 0; transform: translateY(1.5rem); }
}
main {
margin: 1rem 0 5rem 0;
}
.docusaurus-highlight-code-line {
background-color: rgb(72, 77, 91);
display: block;
margin: 0 calc(-1 * var(--ifm-pre-padding));
padding: 0 var(--ifm-pre-padding);
}
.card {
padding: 1rem;
margin-top: 2rem;
animation: fadeInUp 400ms backwards;
animation-delay: 150ms;
transition-property: all;
transition-duration: 200ms;
box-shadow: 0 8px 28px 4px rgba(86,91,115,0.15);
}
.card a {
text-decoration: none;
}
.card:hover {
transform: translate(0px, -5px);
}
.footer--dark {
background-color: #232323 !important;
}
footer .text--center {
padding: 2rem 0 0 0;
}
@ -1,15 +1,15 @@
# File System Wallet
---
title: File System Wallet
---
This document describes how to create and use a file system wallet with the
Solana CLI tools. A file system wallet exists as an unencrypted keypair file
Solana CLI tools. A file system wallet exists as an unencrypted keypair file
on your computer system's filesystem.
{% hint style="info" %}
File system wallets are the **least secure** method of storing SOL tokens.
Storing large amounts of tokens in a file system wallet is **not recommended**.
{% endhint %}
> File system wallets are the **least secure** method of storing SOL tokens. Storing large amounts of tokens in a file system wallet is **not recommended**.
## Before you Begin
Make sure you have
[installed the Solana Command Line Tools](../cli/install-solana-cli-tools.md)
@ -40,8 +40,8 @@ ErRr1caKzK8L8nn4xmEWtimYRiTCAZXjBtVphuZ5vMKy
```
This is the public key corresponding to the keypair in
`~/my-solana-wallet/my-keypair.json`. The public key of the keypair file is
your *wallet address*.
`~/my-solana-wallet/my-keypair.json`. The public key of the keypair file is
your _wallet address_.
## Verify your Address against your Keypair file
@ -57,7 +57,8 @@ The command will output "Success" if the given address matches the
the one in your keypair file, and "Failed" otherwise.
## Creating Multiple File System Wallet Addresses
You can create as many wallet addresses as you like. Simply re-run the
You can create as many wallet addresses as you like. Simply re-run the
steps in [Generate a File System Wallet](#generate-a-file-system-wallet-keypair)
and make sure to use a new filename or path with the `--outfile` argument.
Multiple wallet addresses can be useful if you want to transfer tokens between
@ -1,22 +1,25 @@
# Hardware Wallets
---
title: Hardware Wallets
---
Signing a transaction requires a private key, but storing a private
key on your personal computer or phone leaves it subject to theft.
Adding a password to your key adds security, but many people prefer
to take it a step further and move their private keys to a separate
physical device called a *hardware wallet*. A hardware wallet is a
physical device called a _hardware wallet_. A hardware wallet is a
small handheld device that stores private keys and provides some
interface for signing transactions.
The Solana CLI has first class support for hardware wallets. Anywhere
you use a keypair filepath (denoted as `<KEYPAIR>` in usage docs), you
can pass a *keypair URL* that uniquely identifies a keypair in a
can pass a _keypair URL_ that uniquely identifies a keypair in a
hardware wallet.
## Supported Hardware Wallets
The Solana CLI supports the following hardware wallets:
- [Ledger Nano S](ledger.md)
- [Ledger Nano S](ledger.md)
## Specify a Keypair URL
@ -44,7 +47,7 @@ usb://ledger/BsNsvfXqQTtJnagwFWdBS7FBXgnsK8VZ5CmuznN85swK?key=0/0
All derivation paths implicitly include the prefix `44'/501'`, which indicates
the path follows the [BIP44 specifications](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki)
and that any derived keys are Solana keys (Coin type 501). The single quote
and that any derived keys are Solana keys (Coin type 501). The single quote
indicates a "hardened" derivation. Because Solana uses Ed25519 keypairs, all
derivations are hardened and therefore adding the quote is optional and
unnecessary.
@ -1,4 +1,6 @@
# Ledger Hardware Wallet
---
title: Ledger Hardware Wallet
---
The Ledger Nano S hardware wallet offers secure storage of your Solana private
keys. The Solana Ledger app enables derivation of essentially infinite keys, and
@ -27,10 +29,10 @@ solana-keygen pubkey usb://ledger
This confirms your Ledger device is connected properly and in the correct state
to interact with the Solana CLI. The command returns your Ledger's unique
*wallet ID*. When you have multiple Nano S devices connected to the same
_wallet ID_. When you have multiple Nano S devices connected to the same
computer, you can use your wallet ID to specify which Ledger hardware wallet
you want to use. If you only plan to use a single Nano S on your computer
at a time, you don't need to include the wallet ID. For information on
you want to use. If you only plan to use a single Nano S on your computer
at a time, you don't need to include the wallet ID. For information on
using the wallet ID to use a specific Ledger, see
[Manage Multiple Hardware Wallets](#manage-multiple-hardware-wallets).
@ -45,7 +47,7 @@ your own accounts for different purposes, or use different keypairs on the
device as signing authorities for a stake account, for example.
All of the following commands will display different addresses, associated with
the keypair path given. Try them out!
the keypair path given. Try them out!
```bash
solana-keygen pubkey usb://ledger
@ -62,8 +64,9 @@ Just make a note of which keypair URL you used to derive any address you will be
using to receive tokens.
If you are only planning to use a single address/keypair on your device, a good
easy-to-remember path might be to use the address at `key=0`. View this address
easy-to-remember path might be to use the address at `key=0`. View this address
with:
```bash
solana-keygen pubkey usb://ledger?key=0
```
@ -76,12 +79,14 @@ associated keypair URL as the signer for transactions from that address.
To view the balance of any account, regardless of which wallet it uses, use the
`solana balance` command:
```bash
solana balance SOME_WALLET_ADDRESS
```
For example, if your address is `7cvkjYAkUYs4W8XcXsca7cBrEGFeSUjeZmKoNBvEwyri`,
then enter the following command to view the balance:
```bash
solana balance 7cvkjYAkUYs4W8XcXsca7cBrEGFeSUjeZmKoNBvEwyri
```
@ -91,15 +96,15 @@ You can also view the balance of any account address on the Accounts tab in the
and paste the address in the box to view the balance in your web browser.
Note: Any address with a balance of 0 SOL, such as a newly created one on your
Ledger, will show as "Not Found" in the explorer. Empty accounts and non-existent
accounts are treated the same in Solana. This will change when your account
Ledger, will show as "Not Found" in the explorer. Empty accounts and non-existent
accounts are treated the same in Solana. This will change when your account
address has some SOL in it.
### Send SOL from a Ledger Nano S
To send some tokens from an address controlled by your Nano S device, you will
need to use the device to sign a transaction, using the same keypair URL you
used to derive the address. To do this, make sure your Nano S is plugged in,
used to derive the address. To do this, make sure your Nano S is plugged in,
unlocked with the PIN, Ledger Live is not running, and the Solana App is open
on the device, showing "Application is Ready".
@ -112,12 +117,12 @@ from the associated address will decrease.
solana transfer RECIPIENT_ADDRESS AMOUNT --keypair KEYPAIR_URL_OF_SENDER
```
Below is a full example. First, an address is viewed at a certain keypair URL.
Second, the balance of that address is checked. Lastly, a transfer transaction
Below is a full example. First, an address is viewed at a certain keypair URL.
Second, the balance of that address is checked. Lastly, a transfer transaction
is entered to send `1` SOL to the recipient address `7cvkjYAkUYs4W8XcXsca7cBrEGFeSUjeZmKoNBvEwyri`.
When you hit Enter for a transfer command, you will be prompted to approve the
transaction details on your Ledger device. On the device, use the right and
left buttons to review the transaction details. If they look correct, click
transaction details on your Ledger device. On the device, use the right and
left buttons to review the transaction details. If they look correct, click
both buttons on the "Approve" screen, otherwise push both buttons on the "Reject"
screen.
@ -137,8 +142,8 @@ Signature: kemu9jDEuPirKNRKiHan7ycybYsZp7pFefAdvWZRq5VRHCLgXTXaFVw3pfh87MQcWX4kQ
After approving the transaction on your device, the program will display the
transaction signature, and wait for the maximum number of confirmations (32)
before returning. This only takes a few seconds, and then the transaction is
finalized on the Solana network. You can view details of this or any other
before returning. This only takes a few seconds, and then the transaction is
finalized on the Solana network. You can view details of this or any other
transaction by going to the Transaction tab in the
[Explorer](https://explorer.solana.com/transactions)
and paste in the transaction signature.
@ -148,7 +153,7 @@ and paste in the transaction signature.
### Manage Multiple Hardware Wallets
It is sometimes useful to sign a transaction with keys from multiple hardware
wallets. Signing with multiple wallets requires *fully qualified keypair URLs*.
wallets. Signing with multiple wallets requires _fully qualified keypair URLs_.
When the URL is not fully qualified, the Solana CLI will prompt you with
the fully qualified URLs of all connected hardware wallets, and ask you to
choose which wallet to use for each signature.
@ -183,7 +188,7 @@ on one of the public testnets.
You can use the command-line to install the latest Solana Ledger app release
before it has been validated by
the Ledger team and made available via Ledger Live. Note that because the app
the Ledger team and made available via Ledger Live. Note that because the app
is not installed via Ledger Live, you will need to approve installation from an
"unsafe" manager, as well as see the message, "This app is not genuine" each
time you open the app. Once the app is available on Ledger Live, you can
@ -262,9 +267,6 @@ solana-keygen pubkey usb://ledger\?key=0
Check out our [Wallet Support Page](../wallet-guide/support.md)
for ways to get help.
Read more about [sending and receiving tokens](../cli/transfer-tokens.md) and
[delegating stake](../cli/delegate-stake.md). You can use your Ledger keypair URL
anywhere you see an option or argument that accepts a `<KEYPAIR>`.
anywhere you see an option or argument that accepts a `<KEYPAIR>`.
@ -1,4 +1,6 @@
# History of the Solana Codebase
---
title: History of the Solana Codebase
---
In November of 2017, Anatoly Yakovenko published a whitepaper describing Proof
of History, a technique for keeping time between computers that do not trust
@ -1,4 +1,5 @@
# Implemented Design Proposals
---
title: Implemented Design Proposals
---
The following design proposals are fully implemented.
@ -1,4 +1,6 @@
# Solana ABI management process
---
title: Solana ABI management process
---
This document proposes the Solana ABI management process. The ABI management
process is an engineering practice and a supporting technical framework to avoid
@ -109,7 +111,7 @@ This part is a bit complex. There is three inter-depending parts: `AbiExample`,
First, the generated test creates an example instance of the digested type with
a trait called `AbiExample`, which should be implemented for all digested
types, like `Serialize`, and should return `Self`, like the `Default` trait. Usually,
it's provided via generic trait specialization for most common types. Also
it's provided via generic trait specialization for most common types. Also
it is possible to `derive` for `struct` and `enum` and can be hand-written if
needed.
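The relationship can be pictured with a small sketch; the trait and the example type below are illustrative stand-ins, not the framework's actual definitions.

```rust
// Illustrative only: a trait that can produce an example instance of a digested type.
trait AbiExample: Sized {
    fn example() -> Self;
}

#[derive(Default)]
struct Lockout {
    slot: u64,
    confirmation_count: u32,
}

// For many types the example can simply lean on Default, as noted above.
impl AbiExample for Lockout {
    fn example() -> Self {
        Self::default()
    }
}
```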
@ -1,4 +1,6 @@
# Commitment
---
title: Commitment
---
The commitment metric aims to give clients a measure of the network confirmation
and stake levels on a particular block. Clients can then use this information to
@ -47,9 +49,10 @@ banks are not included in the commitment calculations here.
Now we can naturally augment the above computation to also build a
`BlockCommitment` array for every bank `b` by:
1) Adding a `ForkCommitmentCache` to collect the `BlockCommitment` structs
2) Replacing `f` with `f'` such that the above computation also builds this
`BlockCommitment` for every bank `b`.
1. Adding a `ForkCommitmentCache` to collect the `BlockCommitment` structs
2. Replacing `f` with `f'` such that the above computation also builds this
`BlockCommitment` for every bank `b`.
We will proceed with the details of 2) as 1) is trivial.
@ -75,6 +78,7 @@ Now more specifically, we augment the above computation to:
```
where `f'` is defined as:
```text
fn f'(
stake: &mut Stake,
@ -1,4 +1,6 @@
# Cross-Program Invocation
---
title: Cross-Program Invocation
---
## Problem
@ -67,13 +69,13 @@ mod acme {
`invoke()` is built into Solana's runtime and is responsible for routing the given instruction to the `token` program via the instruction's `program_id` field.
Before invoking `pay()`, the runtime must ensure that `acme` didn't modify any accounts owned by `token`. It does this by applying the runtime's policy to the current state of the accounts at the time `acme` calls `invoke` vs. the initial state of the accounts at the beginning of the `acme`'s instruction. After `pay()` completes, the runtime must again ensure that `token` didn't modify any accounts owned by `acme` by again applying the runtime's policy, but this time with the `token` program ID. Lastly, after `pay_and_launch_missiles()` completes, the runtime must apply the runtime policy one more time, where it normally would, but using all updated `pre_*` variables. If executing `pay_and_launch_missiles()` up to `pay()` made no invalid account changes, `pay()` made no invalid changes, and executing from `pay()` until `pay_and_launch_missiles()` returns made no invalid changes, then the runtime can transitively assume `pay_and_launch_missiles()` as whole made no invalid account changes, and therefore commit all these account modifications.
Before invoking `pay()`, the runtime must ensure that `acme` didn't modify any accounts owned by `token`. It does this by applying the runtime's policy to the current state of the accounts at the time `acme` calls `invoke` vs. the initial state of the accounts at the beginning of the `acme`'s instruction. After `pay()` completes, the runtime must again ensure that `token` didn't modify any accounts owned by `acme` by again applying the runtime's policy, but this time with the `token` program ID. Lastly, after `pay_and_launch_missiles()` completes, the runtime must apply the runtime policy one more time, where it normally would, but using all updated `pre_*` variables. If executing `pay_and_launch_missiles()` up to `pay()` made no invalid account changes, `pay()` made no invalid changes, and executing from `pay()` until `pay_and_launch_missiles()` returns made no invalid changes, then the runtime can transitively assume `pay_and_launch_missiles()` as whole made no invalid account changes, and therefore commit all these account modifications.
### Instructions that require privileges
The runtime uses the privileges granted to the caller program to determine what privileges can be extended to the callee. Privileges in this context refer to signers and writable accounts. For example, if the instruction the caller is processing contains a signer or writable account, then the caller can invoke an instruction that also contains that signer and/or writable account.
The runtime uses the privileges granted to the caller program to determine what privileges can be extended to the callee. Privileges in this context refer to signers and writable accounts. For example, if the instruction the caller is processing contains a signer or writable account, then the caller can invoke an instruction that also contains that signer and/or writable account.
This privilege extension relies on the fact that programs are immutable. In the case of the `acme` program, the runtime can safely treat the transaction's signature as a signature of a `token` instruction. When the runtime sees the `token` instruction references `alice_pubkey`, it looks up the key in the `acme` instruction to see if that key corresponds to a signed account. In this case, it does and thereby authorizes the `token` program to modify Alice's account.
This privilege extension relies on the fact that programs are immutable. In the case of the `acme` program, the runtime can safely treat the transaction's signature as a signature of a `token` instruction. When the runtime sees the `token` instruction references `alice_pubkey`, it looks up the key in the `acme` instruction to see if that key corresponds to a signed account. In this case, it does and thereby authorizes the `token` program to modify Alice's account.
### Program signed accounts
@ -86,11 +88,11 @@ To sign an account with program derived addresses, a program may `invoke_signed(
invoke_signed(
&instruction,
accounts,
&[&["First addresses seed"],
&[&["First addresses seed"],
&["Second addresses first seed", "Second addresses second seed"]],
)?;
```
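For context, the addresses the program may sign for are derived from these seeds plus the calling program's id. A minimal sketch, assuming the `solana_sdk` crate's `Pubkey::create_program_address` helper is available as a dependency, might look like:

```rust
use solana_sdk::pubkey::Pubkey;

// Re-derive the program-signed address for the first seed set in the snippet above.
// Error handling is elided; derivation fails if the result lands on the Ed25519 curve.
fn derived_signer(program_id: &Pubkey) -> Pubkey {
    let seed: &[u8] = b"First addresses seed";
    Pubkey::create_program_address(&[seed], program_id)
        .expect("seeds mapped onto the curve; choose different seeds")
}
```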
### Reentrancy
Reentrancy is currently limited to direct self recursion capped at a fixed depth. This restriction prevents situations where a program might invoke another from an intermediary state without the knowledge that it might later be called back into. Direct recursion gives the program full control of its state at the point that it gets called back.
Reentrancy is currently limited to direct self recursion capped at a fixed depth. This restriction prevents situations where a program might invoke another from an intermediary state without the knowledge that it might later be called back into. Direct recursion gives the program full control of its state at the point that it gets called back.
@ -1,4 +1,6 @@
# Durable Transaction Nonces
---
title: Durable Transaction Nonces
---
## Problem
@ -11,8 +13,8 @@ offline network participants.
## Requirements
1) The transaction's signature needs to cover the nonce value
2) The nonce must not be reusable, even in the case of signing key disclosure
1. The transaction's signature needs to cover the nonce value
2. The nonce must not be reusable, even in the case of signing key disclosure
## A Contract-based Solution
@ -25,8 +27,8 @@ When making use of a durable nonce, the client must first query its value from
account data. A transaction is now constructed in the normal way, but with the
following additional requirements:
1) The durable nonce value is used in the `recent_blockhash` field
2) An `AdvanceNonceAccount` instruction is the first issued in the transaction
1. The durable nonce value is used in the `recent_blockhash` field
2. An `AdvanceNonceAccount` instruction is the first issued in the transaction
### Contract Mechanics
@ -63,7 +65,7 @@ WithdrawInstruction(to, lamports)
success
```
A client wishing to use this feature starts by creating a nonce account under
A client wishing to use this feature starts by creating a nonce account under
the system program. This account will be in the `Uninitialized` state with no
stored hash, and thus unusable.
@ -95,11 +97,7 @@ can be changed using the `AuthorizeNonceAccount` instruction. It takes one param
the `Pubkey` of the new authority. Executing this instruction grants full
control over the account and its balance to the new authority.
{% hint style="info" %}
`AdvanceNonceAccount`, `WithdrawNonceAccount` and `AuthorizeNonceAccount` all require the current
[nonce authority](../offline-signing/durable-nonce.md#nonce-authority) for the
account to sign the transaction.
{% endhint %}
> `AdvanceNonceAccount`, `WithdrawNonceAccount` and `AuthorizeNonceAccount` all require the current [nonce authority](../offline-signing/durable-nonce.md#nonce-authority) for the account to sign the transaction.
### Runtime Support
@ -114,11 +112,11 @@ instruction as the first instruction in the transaction.
If the runtime determines that a Durable Transaction Nonce is in use, it will
take the following additional actions to validate the transaction:
1) The `NonceAccount` specified in the `Nonce` instruction is loaded.
2) The `NonceState` is deserialized from the `NonceAccount`'s data field and
confirmed to be in the `Initialized` state.
3) The nonce value stored in the `NonceAccount` is tested to match against the
one specified in the transaction's `recent_blockhash` field.
1. The `NonceAccount` specified in the `Nonce` instruction is loaded.
2. The `NonceState` is deserialized from the `NonceAccount`'s data field and
confirmed to be in the `Initialized` state.
3. The nonce value stored in the `NonceAccount` is tested to match against the
one specified in the transaction's `recent_blockhash` field.
If all three of the above checks succeed, the transaction is allowed to continue
validation.
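A hypothetical sketch of those three checks, with stand-in types rather than the runtime's real ones:

```rust
// Illustrative types only; names do not match the actual runtime.
#[derive(PartialEq)]
struct Hash([u8; 32]);

enum NonceState {
    Uninitialized,
    Initialized { stored_nonce: Hash },
}

struct NonceAccount {
    state: NonceState, // 1. loaded from the account named by the Nonce instruction
}

/// Returns true when the durable-nonce checks described above pass.
fn validate_durable_nonce(nonce_account: &NonceAccount, recent_blockhash: &Hash) -> bool {
    match &nonce_account.state {
        // 2. the account must be in the Initialized state ...
        NonceState::Initialized { stored_nonce } => {
            // 3. ... and its stored nonce must match the transaction's recent_blockhash.
            stored_nonce == recent_blockhash
        }
        NonceState::Uninitialized => false,
    }
}
```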
@ -1,4 +1,6 @@
# Cluster Economics
---
title: Cluster Economics
---
**Subject to change.**
@ -12,6 +14,6 @@ Transaction fees are market-based participant-to-participant transfers, attached
A high-level schematic of Solana's crypto-economic design is shown below in **Figure 1**. The specifics of validation-client economics are described in sections: [Validation-client Economics](ed_validation_client_economics/README.md), [State-validation Protocol-based Rewards](ed_validation_client_economics/ed_vce_state_validation_protocol_based_rewards.md), [State-validation Transaction Fees](ed_validation_client_economics/ed_vce_state_validation_transaction_fees.md). Also, the section titled [Validation Stake Delegation](ed_validation_client_economics/ed_vce_validation_stake_delegation.md) closes with a discussion of validator delegation opportunities and marketplace. Additionally, in [Storage Rent Economics](ed_storage_rent_economics.md), we describe an implementation of storage rent to account for the externality costs of maintaining the active state of the ledger. An outline of features for an MVP economic design is discussed in the [Economic Design MVP](ed_mvp.md) section.
![](../../.gitbook/assets/economic_design_infl_230719.png)
![](/img/economic_design_infl_230719.png)
**Figure 1**: Schematic overview of Solana economic incentive design.
@ -1,4 +1,6 @@
# Economic Sustainability
---
title: Economic Sustainability
---
**Subject to change.**
@ -1,4 +1,6 @@
# Economic Design MVP
---
title: Economic Design MVP
---
**Subject to change.**
@ -6,7 +8,7 @@ The preceding sections, outlined in the [Economic Design Overview](../README.md)
## MVP Economic Features
* Faucet to deliver testnet SOLs to validators for staking and application development.
* Mechanism by which validators are rewarded via network inflation.
* Ability to delegate tokens to validator nodes
* Validator set commission fees on interest from delegated tokens.
- Faucet to deliver testnet SOLs to validators for staking and application development.
- Mechanism by which validators are rewarded via network inflation.
- Ability to delegate tokens to validator nodes
- Validator set commission fees on interest from delegated tokens.
@ -1,6 +1,7 @@
# References
---
title: References
---
1. [https://blog.ethereum.org/2016/07/27/inflation-transaction-fees-cryptocurrency-monetary-policy/](https://blog.ethereum.org/2016/07/27/inflation-transaction-fees-cryptocurrency-monetary-policy/)
2. [https://medium.com/solana-labs/how-to-create-decentralized-storage-for-a-multi-petabyte-digital-ledger-2499a3a8c281](https://medium.com/solana-labs/how-to-create-decentralized-storage-for-a-multi-petabyte-digital-ledger-2499a3a8c281)
3. [https://medium.com/solana-labs/how-to-create-decentralized-storage-for-a-multi-petabyte-digital-ledger-2499a3a8c281](https://medium.com/solana-labs/how-to-create-decentralized-storage-for-a-multi-petabyte-digital-ledger-2499a3a8c281)
@ -1,4 +1,6 @@
## Storage Rent Economics
---
title: Storage Rent Economics
---
Each transaction that is submitted to the Solana ledger imposes costs. Transaction fees paid by the submitter, and collected by a validator, in theory, account for the acute, transactional, costs of validating and adding that data to the ledger. Unaccounted in this process is the mid-term storage of active ledger state, necessarily maintained by the rotating validator set. This type of storage imposes costs not only to validators but also to the broader network as active state grows so does data transmission and validation overhead. To account for these costs, we describe here our preliminary design and implementation of storage rent.
@ -13,6 +15,3 @@ Method 2: Pay per byte
If an account has less than two-years worth of deposited rent the network charges rent on a per-epoch basis, in credit for the next epoch. This rent is deducted at a rate specified in genesis, in lamports per kilobyte-year.
For information on the technical implementation details of this design, see the [Rent](../rent.md) section.
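The two methods reduce to simple arithmetic. The sketch below uses a placeholder rate; the real value is set in the genesis configuration.

```rust
// Placeholder rate for illustration; the actual rate comes from genesis.
const LAMPORTS_PER_KILOBYTE_YEAR: u64 = 1_000_000;

/// Method 2: rent charged for holding `kilobytes` of account state for `years`.
fn rent_due(kilobytes: u64, years: f64) -> u64 {
    (kilobytes as f64 * LAMPORTS_PER_KILOBYTE_YEAR as f64 * years) as u64
}

/// Method 1: deposit that makes an account rent-exempt (two years' worth of rent).
fn rent_exempt_minimum(kilobytes: u64) -> u64 {
    2 * kilobytes * LAMPORTS_PER_KILOBYTE_YEAR
}
```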
@ -1,8 +1,9 @@
# Validation-client Economics
---
title: Validation-client Economics
---
**Subject to change.**
Validator-clients are eligible to receive protocol-based \(i.e. inflation-based\) rewards issued via stake-based annual interest rates \(calculated per epoch\) by providing compute \(CPU+GPU\) resources to validate and vote on a given PoH state. These protocol-based rewards are determined through an algorithmic disinflationary schedule as a function of total amount of circulating tokens. The network is expected to launch with an annual inflation rate around 15%, set to decrease by 15% per year until a long-term stable rate of 1-2% is reached. These issuances are to be split and distributed to participating validators, with around 90% of the issued tokens allocated for validator rewards. Because the network will be distributing a fixed amount of inflation rewards across the stake-weighted validator set, any individual validator's interest rate will be a function of the amount of staked SOL in relation to the circulating SOL.
Additionally, validator clients may earn revenue through fees via state-validation transactions. For clarity, we separately describe the design and motivation of these revenue distributions for validation-clients below: state-validation protocol-based rewards and state-validation transaction fees and rent.
@ -1,33 +1,35 @@
# State-validation Protocol-based Rewards
---
title: State-validation Protocol-based Rewards
---
**Subject to change.**
Validator-clients have two functional roles in the Solana network:
* Validate \(vote\) the current global state of that PoH.
* Be elected as leader on a stake-weighted round-robin schedule during which time they are responsible for collecting outstanding transactions and incorporating them into the PoH, thus updating the global state of the network and providing chain continuity.
- Validate \(vote\) the current global state of that PoH.
- Be elected as leader on a stake-weighted round-robin schedule during which time they are responsible for collecting outstanding transactions and incorporating them into the PoH, thus updating the global state of the network and providing chain continuity.
Validator-client rewards for these services are to be distributed at the end of each Solana epoch. As previously discussed, compensation for validator-clients is provided via a protocol-based annual inflation rate dispersed in proportion to the stake-weight of each validator \(see below\) along with leader-claimed transaction fees available during each leader rotation. I.e. during the time a given validator-client is elected as leader, it has the opportunity to keep a portion of each transaction fee, less a protocol-specified amount that is destroyed \(see [Validation-client State Transaction Fees](ed_vce_state_validation_transaction_fees.md)\).
The effective protocol-based annual interest rate \(%\) per epoch received by validation-clients is to be a function of:
* the current global inflation rate, derived from the pre-determined dis-inflationary issuance schedule \(see [Validation-client Economics](README.md)\)
* the fraction of staked SOLs out of the current total circulating supply,
* the up-time/participation \[% of available slots that validator had opportunity to vote on\] of a given validator over the previous epoch.
- the current global inflation rate, derived from the pre-determined dis-inflationary issuance schedule \(see [Validation-client Economics](README.md)\)
- the fraction of staked SOLs out of the current total circulating supply,
- the up-time/participation \[% of available slots that validator had opportunity to vote on\] of a given validator over the previous epoch.
The first factor is a function of protocol parameters only \(i.e. independent of validator behavior in a given epoch\) and results in a global validation reward schedule designed to incentivize early participation, provide clear monetary stability and provide optimal security in the network.
At any given point in time, a specific validator's interest rate can be determined based on the proportion of circulating supply that is staked by the network and the validator's uptime/activity in the previous epoch. For example, consider a hypothetical instance of the network with an initial circulating token supply of 250MM tokens with an additional 250MM vesting over 3 years. Additionally an inflation rate is specified at network launch of 7.5%, and a disinflationary schedule of 20% decrease in inflation rate per year \(the actual rates to be implemented are to be worked out during the testnet experimentation phase of mainnet launch\). With these broad assumptions, the 10-year inflation rate \(adjusted daily for this example\) is shown in **Figure 1**, while the total circulating token supply is illustrated in **Figure 2**. Neglected in this toy-model is the inflation suppression due to the portion of each transaction fee that is to be destroyed.
![](../../../.gitbook/assets/p_ex_schedule.png)
![](/img/p_ex_schedule.png)
**Figure 1:** In this example schedule, the annual inflation rate \[%\] reduces at around 20% per year, until it reaches the long-term, fixed, 1.5% rate.
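For concreteness, the example schedule in Figure 1 can be written out as a few lines of Rust. The 7.5% initial rate, ~20% yearly disinflation, and 1.5% floor are the example parameters from the text above; this is illustrative only.

```rust
// Example disinflationary schedule: the initial rate decays by `disinflation`
// each year until it hits the long-term floor.
fn inflation_rate(year: u32) -> f64 {
    let initial = 0.075;      // 7.5% at launch
    let disinflation = 0.20;  // ~20% decrease per year
    let long_term = 0.015;    // 1.5% long-term rate
    (initial * (1.0 - disinflation).powi(year as i32)).max(long_term)
}

fn main() {
    for year in 0..10 {
        println!("year {:2}: {:.2}%", year, inflation_rate(year) * 100.0);
    }
}
```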
![](../../../.gitbook/assets/p_ex_supply.png)
![](/img/p_ex_supply.png)
**Figure 2:** The total token supply over a 10-year period, based on an initial 250MM tokens with the disinflationary inflation schedule as shown in **Figure 1**. Over time, the interest rate, at a fixed network staked percentage, will reduce concordant with network inflation. Validation-client interest rates are designed to be higher in the early days of the network to incentivize participation and jumpstart the network economy. As previously mentioned, the inflation rate is expected to stabilize near 1-2% which also results in a fixed, long-term, interest rate to be provided to validator-clients. This value does not represent the total interest available to validator-clients as transaction fees for state-validation are not accounted for here. Given these example parameters, annualized validator-specific interest rates can be determined based on the global fraction of tokens bonded as stake, as well as their uptime/activity in the previous epoch. For the purpose of this example, we assume 100% uptime for all validators and a split in interest-based rewards between validators nodes of 80%/20%. Additionally, the fraction of staked circulating supply is assumed to be constant. Based on these assumptions, an annualized validation-client interest rate schedule as a function of % circulating token supply that is staked is shown in **Figure 3**.
![](../../../.gitbook/assets/p_ex_interest.png)
![](/img/p_ex_interest.png)
**Figure 3:** Shown here are example validator interest rates over time, neglecting transaction fees, segmented by fraction of total circulating supply bonded as stake.
@ -1,13 +1,15 @@
# State-validation Transaction Fees
---
title: State-validation Transaction Fees
---
**Subject to change.**
Each transaction sent through the network, to be processed by the current leader validation-client and confirmed as a global state transaction, must contain a transaction fee. Transaction fees offer many benefits in the Solana economic design, for example they:
* provide unit compensation to the validator network for the CPU/GPU resources necessary to process the state transaction,
* reduce network spam by introducing real cost to transactions,
* open avenues for a transaction market to incentivize validation-clients to collect and process submitted transactions in their function as leader,
* and provide potential long-term economic stability of the network through a protocol-captured minimum fee amount per transaction, as described below.
- provide unit compensation to the validator network for the CPU/GPU resources necessary to process the state transaction,
- reduce network spam by introducing real cost to transactions,
- open avenues for a transaction market to incentivize validation-clients to collect and process submitted transactions in their function as leader,
- and provide potential long-term economic stability of the network through a protocol-captured minimum fee amount per transaction, as described below.
Many current blockchain economies \(e.g. Bitcoin, Ethereum\) rely on protocol-based rewards to support the economy in the short term, with the assumption that the revenue generated through transaction fees will support the economy in the long term, when the protocol-derived rewards expire. In an attempt to create a sustainable economy through protocol-based rewards and transaction fees, a fixed portion of each transaction fee is destroyed, with the remaining fee going to the current leader processing the transaction. A scheduled global inflation rate provides a source for rewards distributed to validation-clients, through the process described above.
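As a minimal sketch of the split described above, the following assumes a 50% burn fraction purely for illustration; the protocol-captured portion is not specified in this section.

```rust
// Illustrative sketch of the fee split described above. The burn fraction
// here (50%) is an assumption for demonstration, not a protocol constant.
fn split_fee(fee_lamports: u64, burn_percent: u64) -> (u64, u64) {
    let burned = fee_lamports * burn_percent / 100; // portion destroyed
    let to_leader = fee_lamports - burned; // remainder paid to the current leader
    (burned, to_leader)
}

fn main() {
    let (burned, to_leader) = split_fee(5_000, 50);
    println!("burned: {}, leader reward: {}", burned, to_leader);
}
```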

View File

@ -1,27 +1,28 @@
# Validation Stake Delegation
---
title: Validation Stake Delegation
---
**Subject to change.**
Running a Solana validation-client requires a relatively modest upfront hardware capital investment. **Table 2** provides an example hardware configuration to support ~1M tx/s with estimated off-the-shelf costs:
| Component | Example | Estimated Cost |
| :--- | :--- | :--- |
| GPU | 2x 2080 Ti | $2500 |
| or | 4x 1080 Ti | $2800 |
| OS/Ledger Storage | Samsung 860 Evo 2TB | $370 |
| Accounts storage | 2x Samsung 970 Pro M.2 512GB | $340 |
| RAM | 32 Gb | $300 |
| Motherboard | AMD x399 | $400 |
| CPU | AMD Threadripper 2920x | $650 |
| Case | | $100 |
| Power supply | EVGA 1600W | $300 |
| Network | &gt; 500 mbps | |
| Network \(1\) | Google webpass business bay area 1gbps unlimited | $5500/mo |
| Network \(2\) | Hurricane Electric bay area colo 1gbps | $500/mo |
| Component | Example | Estimated Cost |
| :---------------- | :----------------------------------------------- | :------------- |
| GPU | 2x 2080 Ti | \$2500 |
| or | 4x 1080 Ti | \$2800 |
| OS/Ledger Storage | Samsung 860 Evo 2TB | \$370 |
| Accounts storage | 2x Samsung 970 Pro M.2 512GB | \$340 |
| RAM | 32 Gb | \$300 |
| Motherboard | AMD x399 | \$400 |
| CPU | AMD Threadripper 2920x | \$650 |
| Case | | \$100 |
| Power supply | EVGA 1600W | \$300 |
| Network | &gt; 500 mbps | |
| Network \(1\) | Google webpass business bay area 1gbps unlimited | \$5500/mo |
| Network \(2\) | Hurricane Electric bay area colo 1gbps | \$500/mo |
**Table 2:** Example high-end hardware setup for running a Solana client.
Despite the low barrier to entry as a validation-client, from a capital investment perspective, as in any developing economy, there will be much opportunity and need for trusted validation services as evidenced by node reliability, UX/UI, APIs and other software accessibility tools. Additionally, although Solana's validator node startup costs are nominal when compared to similar networks, they may still be somewhat restrictive for some potential participants. In the spirit of developing a true decentralized, permissionless network, these interested parties can become involved in the Solana network/economy via delegation of previously acquired tokens with a reliable validation node to earn a portion of the interest generated.
Delegation of tokens to validation-clients provides a way for passive Solana token holders to become part of the active Solana economy and earn interest rates proportional to the interest rate generated by the delegated validation-client. Additionally, this feature intends to create a healthy validation-client market, with potential validation-client nodes competing to build reliable, transparent and profitable delegation services.

View File

@ -1,4 +1,6 @@
# Embedding the Move Language
---
title: Embedding the Move Language
---
## Problem
@ -10,15 +12,15 @@ The biggest design difference between Solana's runtime and Libra's Move VM is ho
This proposal attempts to define a way to embed the Move VM such that:
* cross-module invocations within Move do not require the runtime's
- cross-module invocations within Move do not require the runtime's
cross-program runtime checks
* Move programs can leverage functionality in other Solana programs and vice
- Move programs can leverage functionality in other Solana programs and vice
versa
* Solana's runtime parallelism is exposed to batches of Move and non-Move
- Solana's runtime parallelism is exposed to batches of Move and non-Move
transactions
@ -33,4 +35,3 @@ All data accounts owned by Move modules must set their owners to the loader, `MO
### Interacting with Solana programs
To invoke instructions in non-Move programs, Solana would need to extend the Move VM with a `process_instruction()` system call. It would work the same as `process_instruction()` in Rust BPF programs.

View File

@ -1,4 +1,6 @@
# Cluster Software Installation and Updates
---
title: Cluster Software Installation and Updates
---
Currently, users are required to build the Solana cluster software themselves from the git repository and manually update it, which is error prone and inconvenient.
@ -93,11 +95,11 @@ To guard against rollback attacks, `solana-install` will refuse to install an up
A release archive is expected to be a tar file compressed with bzip2 with the following internal structure:
* `/version.yml` - a simple YAML file containing the field `"target"` - the
- `/version.yml` - a simple YAML file containing the field `"target"` - the
target tuple. Any additional fields are ignored.
* `/bin/` -- directory containing available programs in the release.
- `/bin/` -- directory containing available programs in the release.
`solana-install` will symlink this directory to
@ -105,7 +107,7 @@ A release archive is expected to be a tar file compressed with bzip2 with the fo
variable.
* `...` -- any additional files and directories are permitted
- `...` -- any additional files and directories are permitted
## solana-install Tool
@ -113,9 +115,9 @@ The `solana-install` tool is used by the user to install and update their cluste
It manages the following files and directories in the user's home directory:
* `~/.config/solana/install/config.yml` - user configuration and information about currently installed software version
* `~/.local/share/solana/install/bin` - a symlink to the current release. eg, `~/.local/share/solana-update/<update-pubkey>-<manifest_signature>/bin`
* `~/.local/share/solana/install/releases/<download_sha256>/` - contents of a release
- `~/.config/solana/install/config.yml` - user configuration and information about currently installed software version
- `~/.local/share/solana/install/bin` - a symlink to the current release. eg, `~/.local/share/solana-update/<update-pubkey>-<manifest_signature>/bin`
- `~/.local/share/solana/install/releases/<download_sha256>/` - contents of a release
### Command-line Interface
@ -212,4 +214,3 @@ ARGS:
The program will be restarted upon a successful software update
```

View File

@ -1,4 +1,6 @@
# Leader-to-Leader Transition
---
title: Leader-to-Leader Transition
---
This design describes how leaders transition production of the PoH ledger between each other as each leader generates its own slot.
@ -18,19 +20,19 @@ While a leader is actively receiving entries for the previous slot, the leader c
The downsides:
* Leader delays its own slot, potentially allowing the next leader more time to
- Leader delays its own slot, potentially allowing the next leader more time to
catch up.
The upsides compared to guards:
* All the space in a block is used for entries.
* The timeout is not fixed.
* The timeout is local to the leader, and therefore can be clever. The leader's heuristic can take into account turbine performance.
* This design doesn't require a ledger hard fork to update.
* The previous leader can redundantly transmit the last entry in the block to the next leader, and the next leader can speculatively decide to trust it to generate its block without verification of the previous block.
* The leader can speculatively generate the last tick from the last received entry.
* The leader can speculatively process transactions and guess which ones are not going to be encoded by the previous leader. This is also a censorship attack vector. The current leader may withhold transactions that it receives from the clients so it can encode them into its own slot. Once processed, entries can be replayed into PoH quickly.
- All the space in a block is used for entries.
- The timeout is not fixed.
- The timeout is local to the leader, and therefore can be clever. The leader's heuristic can take into account turbine performance.
- This design doesn't require a ledger hard fork to update.
- The previous leader can redundantly transmit the last entry in the block to the next leader, and the next leader can speculatively decide to trust it to generate its block without verification of the previous block.
- The leader can speculatively generate the last tick from the last received entry.
- The leader can speculatively process transactions and guess which ones are not going to be encoded by the previous leader. This is also a censorship attack vector. The current leader may withhold transactions that it receives from the clients so it can encode them into its own slot. Once processed, entries can be replayed into PoH quickly.
## Alternative design options
@ -42,13 +44,12 @@ If the next leader receives the _penultimate tick_ before it produces its own _f
The downsides:
* Every vote, and therefore confirmation, is delayed by a fixed timeout. 1 tick, or around 100ms.
* Average case confirmation time for a transaction would be at least 50ms worse.
* It is part of the ledger definition, so to change this behavior would require a hard fork.
* Not all the available space is used for entries.
- Every vote, and therefore confirmation, is delayed by a fixed timeout. 1 tick, or around 100ms.
- Average case confirmation time for a transaction would be at least 50ms worse.
- It is part of the ledger definition, so to change this behavior would require a hard fork.
- Not all the available space is used for entries.
The upsides compared to leader timeout:
* The next leader has received all the previous entries, so it can start processing transactions without recording them into PoH.
* The previous leader can redundantly transmit the last entry containing the _penultimate tick_ to the next leader. The next leader can speculatively generate the _last tick_ as soon as it receives the _penultimate tick_, even before verifying it.
- The next leader has received all the previous entries, so it can start processing transactions without recording them into PoH.
- The previous leader can redundantly transmit the last entry containing the _penultimate tick_ to the next leader. The next leader can speculatively generate the _last tick_ as soon as it receives the _penultimate tick_, even before verifying it.

View File

@ -1,4 +1,6 @@
# Leader-to-Validator Transition
---
title: Leader-to-Validator Transition
---
A validator typically spends its time validating blocks. If, however, a staker delegates its stake to a validator, it will occasionally be selected as a _slot leader_. As a slot leader, the validator is responsible for producing blocks during an assigned _slot_. A slot has a duration of some number of preconfigured _ticks_. The duration of those ticks is estimated with a _PoH Recorder_ described later in this document.
@ -48,4 +50,3 @@ The loop is synchronized to PoH and does a synchronous start and stop of the slo
the TVU may resume voting.
5. Goto 1.

View File

@ -1,4 +1,6 @@
# Persistent Account Storage
---
title: Persistent Account Storage
---
## Persistent Account Storage
@ -49,9 +51,9 @@ An account can be _garbage-collected_ when squashing makes it unreachable.
Three possible options exist:
* Maintain a HashSet of root forks. One is expected to be created every second. The entire tree can be garbage-collected later. Alternatively, if every fork keeps a reference count of accounts, garbage collection could occur any time an index location is updated.
* Remove any pruned forks from the index. Any remaining forks lower in number than the root can be considered root.
* Scan the index, migrate any old roots into the new one. Any remaining forks lower than the new root can be deleted later.
- Maintain a HashSet of root forks. One is expected to be created every second. The entire tree can be garbage-collected later. Alternatively, if every fork keeps a reference count of accounts, garbage collection could occur any time an index location is updated.
- Remove any pruned forks from the index. Any remaining forks lower in number than the root can be considered root.
- Scan the index, migrate any old roots into the new one. Any remaining forks lower than the new root can be deleted later.
## Append-only Writes
@ -85,10 +87,9 @@ To snapshot, the underlying memory-mapped files in the AppendVec need to be flus
## Performance
* Append-only writes are fast. SSDs and NVMEs, as well as all the OS level kernel data structures, allow for appends to run as fast as PCI or NVMe bandwidth will allow \(2,700 MB/s\).
* Each replay and banking thread writes concurrently to its own AppendVec.
* Each AppendVec could potentially be hosted on a separate NVMe.
* Each replay and banking thread has concurrent read access to all the AppendVecs without blocking writes.
* Index requires an exclusive write lock for writes. Single-thread performance for HashMap updates is on the order of 10m per second.
* Banking and Replay stages should use 32 threads per NVMe. NVMes have optimal performance with 32 concurrent readers or writers.
- Append-only writes are fast. SSDs and NVMEs, as well as all the OS level kernel data structures, allow for appends to run as fast as PCI or NVMe bandwidth will allow \(2,700 MB/s\).
- Each replay and banking thread writes concurrently to its own AppendVec.
- Each AppendVec could potentially be hosted on a separate NVMe.
- Each replay and banking thread has concurrent read access to all the AppendVecs without blocking writes.
- Index requires an exclusive write lock for writes. Single-thread performance for HashMap updates is on the order of 10m per second.
- Banking and Replay stages should use 32 threads per NVMe. NVMes have optimal performance with 32 concurrent readers or writers.
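A deliberately simplified sketch of an append-only account store follows; the real AppendVec is backed by memory-mapped files and supports lock-free readers, so this stand-in only illustrates the append-and-index access pattern described in the list above.

```rust
// Simplified illustration of an append-only account store; the real AppendVec
// is memory-mapped and allows concurrent lock-free readers.
struct StoredAccount {
    lamports: u64,
    data: Vec<u8>,
}

struct AppendVec {
    entries: Vec<StoredAccount>, // append-only: entries are never mutated in place
}

impl AppendVec {
    fn new() -> Self {
        Self { entries: Vec::new() }
    }

    // Appends a new account version and returns its offset (index).
    fn append(&mut self, account: StoredAccount) -> usize {
        self.entries.push(account);
        self.entries.len() - 1
    }

    // Readers address accounts by the offset recorded in the index.
    fn get(&self, offset: usize) -> Option<&StoredAccount> {
        self.entries.get(offset)
    }
}

fn main() {
    let mut store = AppendVec::new();
    let offset = store.append(StoredAccount { lamports: 42, data: vec![0; 8] });
    assert_eq!(store.get(offset).unwrap().lamports, 42);
    assert_eq!(store.get(offset).unwrap().data.len(), 8);
}
```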

View File

@ -1,4 +1,6 @@
# Program Derived Addresses
---
title: Program Derived Addresses
---
## Problem
@ -7,14 +9,14 @@ other programs as defined in the [Cross-Program Invocations](cross-program-invoc
design.
The lack of programmatic signature generation limits the kinds of programs
that can be implemented in Solana. A program may be given the
that can be implemented in Solana. A program may be given the
authority over an account and later want to transfer that authority to another.
This is impossible today because the program cannot act as the signer in the transaction that gives authority.
For example, if two users want
to make a wager on the outcome of a game in Solana, they must each
transfer their wager's assets to some intermediary that will honor
their agreement. Currently, there is no way to implement this intermediary
their agreement. Currently, there is no way to implement this intermediary
as a program in Solana because the intermediary program cannot transfer the
assets to the winner.
@ -22,24 +24,24 @@ This capability is necessary for many DeFi applications since they
require assets to be transferred to an escrow agent until some event
occurs that determines the new owner.
* Decentralized Exchanges that transfer assets between matching bid and
ask orders.
- Decentralized Exchanges that transfer assets between matching bid and
ask orders.
* Auctions that transfer assets to the winner.
- Auctions that transfer assets to the winner.
* Games or prediction markets that collect and redistribute prizes to
the winners.
- Games or prediction markets that collect and redistribute prizes to
the winners.
## Proposed Solution
The key to the design is two-fold:
1. Allow programs to control specific addresses, called Program-Addresses, in such a way that no external
user can generate valid transactions with signatures for those
addresses.
user can generate valid transactions with signatures for those
addresses.
2. Allow programs to programmatically sign for Program-Addresses that are
present in instructions invoked via [Cross-Program Invocations](cross-program-invocation.md).
present in instructions invoked via [Cross-Program Invocations](cross-program-invocation.md).
Given the two conditions, users can securely transfer or assign
the authority of on-chain assets to Program-Addresses and the program
@ -48,13 +50,13 @@ can then assign that authority elsewhere at its discretion.
### Private keys for Program Addresses
A Program-Address has no private key associated with it, and generating
a signature for it is impossible. While it has no private key of
a signature for it is impossible. While it has no private key of
its own, it can issue an instruction that includes the Program-Address as a signer.
### Hash-based generated Program Addresses
All 256-bit values are valid ed25519 curve points and valid ed25519 public
keys. All are equally secure and equally as hard to break.
keys. All are equally secure and equally as hard to break.
Based on this assumption, Program Addresses can be deterministically
derived from a base seed using a 256-bit preimage resistant hash function.
@ -81,7 +83,7 @@ pub fn create_address_with_seed(
```
Programs can deterministically derive any number of addresses by
using keywords. These keywords can symbolically identify how the addresses are used.
using keywords. These keywords can symbolically identify how the addresses are used.
```rust,ignore
//! Generate a derived program address
@ -146,9 +148,9 @@ fn transfer_one_token_from_escrow(
### Instructions that require signers
The addresses generated with `create_program_address` are indistinguishable
from any other public key. The only way for the runtime to verify that the
from any other public key. The only way for the runtime to verify that the
address belongs to a program is for the program to supply the keywords used
to generate the address.
The runtime will internally call `create_program_address`, and compare the
result against the addresses supplied in the instruction.
result against the addresses supplied in the instruction.
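A hedged sketch of that check follows. The hash is a standard-library stand-in for the 256-bit preimage-resistant hash the design calls for, and the `create_program_address` and `verify_signer` signatures here are illustrative, not the runtime's actual API.

```rust
// Illustrative only: re-derive the program address from the supplied seeds and
// compare it against the address claimed as a signer in the instruction.
// DefaultHasher stands in for the real 256-bit hash; Address stands in for a
// 32-byte public key.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

type Address = u64; // placeholder for a 32-byte public key

fn create_program_address(seeds: &[&[u8]], program_id: &[u8]) -> Address {
    let mut hasher = DefaultHasher::new();
    for seed in seeds {
        seed.hash(&mut hasher);
    }
    program_id.hash(&mut hasher);
    hasher.finish()
}

// The runtime re-derives the address from the seeds the program supplied and
// compares it against the address present in the instruction.
fn verify_signer(claimed: Address, seeds: &[&[u8]], program_id: &[u8]) -> bool {
    create_program_address(seeds, program_id) == claimed
}

fn main() {
    let program_id = b"escrow_program"; // hypothetical program id
    let seeds: &[&[u8]] = &[b"escrow", b"wager_42"]; // hypothetical keywords
    let derived = create_program_address(seeds, program_id);
    assert!(verify_signer(derived, seeds, program_id));
}
```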

View File

@ -1,4 +1,6 @@
# Read-Only Accounts
---
title: Read-Only Accounts
---
This design covers the handling of readonly and writable accounts in the [runtime](../validator/runtime.md). Multiple transactions that modify the same account must be processed serially so that they are always replayed in the same order. Otherwise, this could introduce non-determinism to the ledger. Some transactions, however, only need to read, and not modify, the data in particular accounts. Multiple transactions that only read the same account can be processed in parallel, since replay order does not matter, providing a performance benefit.
@ -10,7 +12,7 @@ Runtime transaction processing rules need to be updated slightly. Programs still
Readonly accounts have the following property:
* Read-only access to all account fields, including lamports (cannot be credited or debited), and account data
- Read-only access to all account fields, including lamports (cannot be credited or debited), and account data
Instructions that credit, debit, or modify the readonly account will fail.
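The scheduling behavior can be pictured with a loose `std::sync::RwLock` analogy: many readers may proceed concurrently, while a writer requires exclusive access. The runtime's real account locks differ in detail; this is only an illustration.

```rust
// A loose analogy for the account access rules described above: read-only
// "transactions" share the lock, a writable "transaction" takes it exclusively.
use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let account_lamports = Arc::new(RwLock::new(100u64));

    // Read-only "transactions" can run in parallel.
    let readers: Vec<_> = (0..4)
        .map(|_| {
            let acct = Arc::clone(&account_lamports);
            thread::spawn(move || {
                let balance = acct.read().unwrap();
                assert_eq!(*balance, 100);
            })
        })
        .collect();
    for r in readers {
        r.join().unwrap();
    }

    // A writable "transaction" needs exclusive access and runs serially.
    *account_lamports.write().unwrap() += 10;
    assert_eq!(*account_lamports.read().unwrap(), 110);
}
```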

View File

@ -1,4 +1,6 @@
# Reliable Vote Transmission
---
title: Reliable Vote Transmission
---
Validator votes are messages that have a critical function for consensus and continuous operation of the network. Therefore it is critical that they are reliably delivered and encoded into the ledger.
@ -56,4 +58,3 @@ Everything above plus the following:
4. Worst case 25mb memory overhead per node.
5. Sub 4 hops worst case to deliver to the entire network.
6. 80 shreds received by the leader for all the validator messages.

View File

@ -1,4 +1,6 @@
# Rent
---
title: Rent
---
Accounts on Solana may have owner-controlled state \(`Account::data`\) that's separate from the account's balance \(`Account::lamports`\). Since validators on the network need to maintain a working copy of this state in memory, the network charges a time-and-space based fee for this resource consumption, also known as Rent.
@ -42,11 +44,11 @@ As the overall consequence of this design, all of accounts is stored equally as
Collecting rent on an as-needed basis \(i.e. whenever accounts were loaded/accessed\) was considered. The issues with such an approach are:
* accounts loaded as "credit only" for a transaction could very reasonably be expected to have rent due,
- accounts loaded as "credit only" for a transaction could very reasonably be expected to have rent due,
but would not be writable during any such transaction
* a mechanism to "beat the bushes" \(i.e. go find accounts that need to pay rent\) is desirable,
- a mechanism to "beat the bushes" \(i.e. go find accounts that need to pay rent\) is desirable,
lest accounts that are loaded infrequently get a free ride
@ -54,6 +56,6 @@ Collecting rent on an as-needed basis \(i.e. whenever accounts were loaded/acces
Collecting rent via a system instruction was considered, as it would naturally have distributed rent to active and stake-weighted nodes and could have been done incrementally. However:
* it would have adversely affected network throughput
* it would require special-casing by the runtime, as accounts with non-SystemProgram owners may be debited by this instruction
* someone would have to issue the transactions
- it would have adversely affected network throughput
- it would require special-casing by the runtime, as accounts with non-SystemProgram owners may be debited by this instruction
- someone would have to issue the transactions

View File

@ -1,4 +1,6 @@
# Repair Service
---
title: Repair Service
---
## Repair Service
@ -19,25 +21,27 @@ repair these slots. If these slots happen to be part of the main chain, this
will halt replay progress on this node.
## Repair-related primitives
Epoch Slots:
Each validator advertises separately on gossip the various parts of an
`Epoch Slots`:
* The `stash`: An epoch-long compressed set of all completed slots.
* The `cache`: The Run-length Encoding (RLE) of the latest `N` completed
slots starting from some slot `M`, where `N` is the number of slots
that will fit in an MTU-sized packet.
Each validator advertises separately on gossip the various parts of an
`Epoch Slots`:
`Epoch Slots` in gossip are updated every time a validator receives a
complete slot within the epoch. Completed slots are detected by blockstore
and sent over a channel to RepairService. It is important to note that we
know that by the time a slot `X` is complete, the epoch schedule must exist
for the epoch that contains slot `X` because WindowService will reject
shreds for unconfirmed epochs.
- The `stash`: An epoch-long compressed set of all completed slots.
- The `cache`: The Run-length Encoding (RLE) of the latest `N` completed
slots starting from some slot `M`, where `N` is the number of slots
that will fit in an MTU-sized packet.
`Epoch Slots` in gossip are updated every time a validator receives a
complete slot within the epoch. Completed slots are detected by blockstore
and sent over a channel to RepairService. It is important to note that we
know that by the time a slot `X` is complete, the epoch schedule must exist
for the epoch that contains slot `X` because WindowService will reject
shreds for unconfirmed epochs.
Every `N/2` completed slots, the oldest `N/2` slots are moved from the
`cache` into the `stash`. The base value `M` for the RLE should also
be updated.
Every `N/2` completed slots, the oldest `N/2` slots are moved from the
`cache` into the `stash`. The base value `M` for the RLE should also
be updated.
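A simplified sketch of this cache/stash rotation follows. The real implementation compresses the stash and run-length encodes the cache; here plain collections stand in so the rotation logic is easier to follow, and the trigger condition is an approximation.

```rust
// Simplified illustration of the Epoch Slots cache/stash rotation described
// above: plain collections replace the compressed stash and RLE cache.
use std::collections::{BTreeSet, VecDeque};

struct EpochSlots {
    stash: BTreeSet<u64>, // all older completed slots for the epoch
    cache: VecDeque<u64>, // latest completed slots, starting at base `m`
    m: u64,               // base slot for the cache
    n: usize,             // cache capacity (MTU-sized in the real design)
}

impl EpochSlots {
    fn record_completed(&mut self, slot: u64) {
        self.cache.push_back(slot);
        // Roughly every N/2 completed slots, move the oldest N/2 into the
        // stash and advance the cache base.
        if self.cache.len() >= self.n {
            for _ in 0..self.n / 2 {
                if let Some(old) = self.cache.pop_front() {
                    self.stash.insert(old);
                }
            }
            self.m = *self.cache.front().unwrap_or(&slot);
        }
    }
}

fn main() {
    let mut slots = EpochSlots { stash: BTreeSet::new(), cache: VecDeque::new(), m: 0, n: 8 };
    for s in 0..20 {
        slots.record_completed(s);
    }
    println!("base: {}, cached: {}, stashed: {}", slots.m, slots.cache.len(), slots.stash.len());
}
```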
## Repair Request Protocols
The repair protocol makes best attempts to progress the forking structure of
@ -46,28 +50,29 @@ Blockstore.
The different protocol strategies to address the above challenges:
1. Shred Repair \(Addresses Challenge \#1\): This is the most basic repair
protocol, with the purpose of detecting and filling "holes" in the ledger.
Blockstore tracks the latest root slot. RepairService will then periodically
iterate every fork in blockstore starting from the root slot, sending repair
requests to validators for any missing shreds. It will send at most some `N`
repair requests per iteration. Shred repair should prioritize repairing
forks based on the leader's fork weight. Validators should only send repair
requests to validators who have marked that slot as completed in their
EpochSlots. Validators should prioritize repairing shreds in each slot
that they are responsible for retransmitting through turbine. Validators can
compute which shreds they are responsible for retransmitting because the
seed for turbine is based on leader id, slot, and shred index.
protocol, with the purpose of detecting and filling "holes" in the ledger.
Blockstore tracks the latest root slot. RepairService will then periodically
iterate every fork in blockstore starting from the root slot, sending repair
requests to validators for any missing shreds. It will send at most some `N`
repair requests per iteration. Shred repair should prioritize repairing
forks based on the leader's fork weight. Validators should only send repair
requests to validators who have marked that slot as completed in their
EpochSlots. Validators should prioritize repairing shreds in each slot
that they are responsible for retransmitting through turbine. Validators can
compute which shreds they are responsible for retransmitting because the
seed for turbine is based on leader id, slot, and shred index.
Note: Validators will only accept shreds within the current verifiable
epoch \(epoch the validator has a leader schedule for\).
2. Preemptive Slot Repair \(Addresses Challenge \#2\): The goal of this
protocol is to discover the chaining relationship of "orphan" slots that do not
currently chain to any known fork. Shred repair should prioritize repairing
orphan slots based on the leader's fork weight.
* Blockstore will track the set of "orphan" slots in a separate column family.
* RepairService will periodically make `Orphan` requests for each of
the orphans in blockstore.
protocol is to discover the chaining relationship of "orphan" slots that do not
currently chain to any known fork. Shred repair should prioritize repairing
orphan slots based on the leader's fork weight.
- Blockstore will track the set of "orphan" slots in a separate column family.
- RepairService will periodically make `Orphan` requests for each of
the orphans in blockstore.
`Orphan(orphan)` request - `orphan` is the orphan slot that the
requestor wants to know the parents of `Orphan(orphan)` response -
@ -77,9 +82,9 @@ orphan slots based on the leader's fork weight.
On receiving the responses `p`, where `p` is some shred in a parent slot,
validators will:
* Insert an empty `SlotMeta` in blockstore for `p.slot` if it doesn't
already exist.
* If `p.slot` does exist, update the parent of `p` based on `parents`
- Insert an empty `SlotMeta` in blockstore for `p.slot` if it doesn't
already exist.
- If `p.slot` does exist, update the parent of `p` based on `parents`
Note: that once these empty slots are added to blockstore, the
`Shred Repair` protocol should attempt to fill those slots.
@ -95,10 +100,9 @@ randomly select a validator in a stake-weighted fashion.
## Repair Response Protocol
When a validator receives a request for a shred `S`, they respond with the
shred if they have it.
shred if they have it.
When a validator receives a shred through a repair response, they check
`EpochSlots` to see if <= `1/3` of the network has marked this slot as
completed. If so, they resubmit this shred through its associated turbine
path, but only if this validator has not retransmitted this shred before.

View File

@ -1,4 +1,6 @@
# Snapshot Verification
---
title: Snapshot Verification
---
## Problem
@ -18,11 +20,11 @@ To verify the snapshot, we do the following:
On account store of non-zero lamport accounts, we hash the following data:
* Account owner
* Account data
* Account pubkey
* Account lamports balance
* Fork the account is stored on
- Account owner
- Account data
- Account pubkey
- Account lamports balance
- Fork the account is stored on
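As a hedged illustration of hashing the fields listed above, the sketch below uses the standard library's `DefaultHasher` as a stand-in for the real hash function; the types and field encoding are assumptions.

```rust
// Illustrative only: hash the account fields listed above. DefaultHasher is a
// stand-in for the actual hash function used on account store.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn account_store_hash(
    owner: &[u8; 32],
    data: &[u8],
    pubkey: &[u8; 32],
    lamports: u64,
    fork: u64,
) -> u64 {
    let mut hasher = DefaultHasher::new();
    owner.hash(&mut hasher);
    data.hash(&mut hasher);
    pubkey.hash(&mut hasher);
    lamports.hash(&mut hasher);
    fork.hash(&mut hasher); // the fork the account is stored on
    hasher.finish()
}

fn main() {
    let h = account_store_hash(&[1; 32], b"account data", &[2; 32], 1_000, 7);
    println!("account hash: {:x}", h);
}
```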
Use this resulting hash value as input to an expansion function which expands the hash value into an image value.
The function will create a 440 byte block of data where the first 32 bytes are the hash value, and the next 440 - 32 bytes are
@ -42,7 +44,7 @@ a validator bank to read that an account is not present when it really should be
An attack on the xor state could be made to influence its value:
Thus the 440 byte image size comes from this paper, avoiding xor collision with 0 \(or thus any other given bit pattern\): \[[https://link.springer.com/content/pdf/10.1007%2F3-540-45708-9\_19.pdf](https://link.springer.com/content/pdf/10.1007%2F3-540-45708-9_19.pdf)\]
Thus the 440 byte image size comes from this paper, avoiding xor collision with 0 \(or thus any other given bit pattern\): \[[https://link.springer.com/content/pdf/10.1007%2F3-540-45708-9_19.pdf](https://link.springer.com/content/pdf/10.1007%2F3-540-45708-9_19.pdf)\]
The math provides 128 bit security in this case:
@ -52,4 +54,3 @@ k=2^40 accounts
n=440
2^(40) * 2^(448 * 8 / 41) ~= O(2^(128))
```

View File

@ -1,16 +1,18 @@
# Staking Rewards
---
title: Staking Rewards
---
A Proof of Stake \(PoS\) design \(i.e. using the in-protocol asset, SOL, to provide secure consensus\) is outlined here. Solana implements a proof of stake reward/security scheme for validator nodes in the cluster. The purpose is threefold:
* Align validator incentives with that of the greater cluster through
- Align validator incentives with that of the greater cluster through
skin-in-the-game deposits at risk
* Avoid 'nothing at stake' fork voting issues by implementing slashing rules
- Avoid 'nothing at stake' fork voting issues by implementing slashing rules
aimed at promoting fork convergence
* Provide an avenue for validator rewards provided as a function of validator
- Provide an avenue for validator rewards provided as a function of validator
participation in the cluster.
@ -22,13 +24,13 @@ Solana's ledger validation design is based on a rotating, stake-weighted selecte
To become a Solana validator, one must deposit/lock-up some amount of SOL in a contract. This SOL will not be accessible for a specific time period. The precise duration of the staking lockup period has not been determined. However we can consider three phases of this time for which specific parameters will be necessary:
* _Warm-up period_: during which SOL is deposited and inaccessible to the node,
- _Warm-up period_: during which SOL is deposited and inaccessible to the node,
however PoH transaction validation has not begun. Most likely on the order of
days to weeks
* _Validation period_: a minimum duration for which the deposited SOL will be
- _Validation period_: a minimum duration for which the deposited SOL will be
inaccessible, at risk of slashing \(see slashing rules below\) and earning
@ -36,7 +38,7 @@ To become a Solana validator, one must deposit/lock-up some amount of SOL in a c
year.
* _Cool-down period_: a duration of time following the submission of a
- _Cool-down period_: a duration of time following the submission of a
'withdrawal' transaction. During this period validation responsibilities have
@ -53,4 +55,3 @@ Solana's trustless sense of time and ordering provided by its PoH data structure
As discussed in the [Economic Design](../implemented-proposals/ed_overview/README.md) section, annual validator interest rates are to be specified as a function of total percentage of circulating supply that has been staked. The cluster rewards validators who are online and actively participating in the validation process throughout the entirety of their _validation period_. For validators that go offline/fail to validate transactions during this period, their annual reward is effectively reduced.
Similarly, we may consider an algorithmic reduction in a validator's active staked amount in the case that they are offline. I.e., if a validator is inactive for some amount of time, either due to a partition or otherwise, the amount of their stake that is considered active \(eligible to earn rewards\) may be reduced. This design would be structured to help long-lived partitions to eventually reach finality on their respective chains as the % of non-voting total stake is reduced over time until a supermajority can be achieved by the active validators in each partition. Similarly, upon re-engaging, the active amount staked will come back online at some defined rate. Different rates of stake reduction may be considered depending on the size of the partition/active set.

View File

@ -1,18 +1,20 @@
# Testing Programs
---
title: Testing Programs
---
Applications send transactions to a Solana cluster and query validators to confirm the transactions were processed and to check each transaction's result. When the cluster doesn't behave as anticipated, it could be for a number of reasons:
* The program is buggy
* The BPF loader rejected an unsafe program instruction
* The transaction was too big
* The transaction was invalid
* The Runtime tried to execute the transaction when another one was accessing
- The program is buggy
- The BPF loader rejected an unsafe program instruction
- The transaction was too big
- The transaction was invalid
- The Runtime tried to execute the transaction when another one was accessing
the same account
* The network dropped the transaction
* The cluster rolled back the ledger
* A validator responded to queries maliciously
- The network dropped the transaction
- The cluster rolled back the ledger
- A validator responded to queries maliciously
## The AsyncClient and SyncClient Traits
@ -49,4 +51,3 @@ Below the TPU level is the Bank. The Bank doesn't do signature verification or g
## Unit-testing with the Runtime
Below the Bank is the Runtime. The Runtime is the ideal test environment for unit-testing. By statically linking the Runtime into a native program implementation, the developer gains the shortest possible edit-compile-run loop. Without any dynamic linking, stack traces include debug symbols and program errors are straightforward to troubleshoot.

View File

@ -1,12 +1,14 @@
# Tower BFT
---
title: Tower BFT
---
This design describes Solana's _Tower BFT_ algorithm. It addresses the following problems:
* Some forks may not end up accepted by the supermajority of the cluster, and voters need to recover from voting on such forks.
* Many forks may be votable by different voters, and each voter may see a different set of votable forks. The selected forks should eventually converge for the cluster.
* Reward based votes have an associated risk. Voters should have the ability to configure how much risk they take on.
* The [cost of rollback](tower-bft.md#cost-of-rollback) needs to be computable. It is important to clients that rely on some measurable form of Consistency. The costs to break consistency need to be computable, and increase super-linearly for older votes.
* ASIC speeds are different between nodes, and attackers could employ Proof of History ASICS that are much faster than the rest of the cluster. Consensus needs to be resistant to attacks that exploit the variability in Proof of History ASIC speed.
- Some forks may not end up accepted by the supermajority of the cluster, and voters need to recover from voting on such forks.
- Many forks may be votable by different voters, and each voter may see a different set of votable forks. The selected forks should eventually converge for the cluster.
- Reward based votes have an associated risk. Voters should have the ability to configure how much risk they take on.
- The [cost of rollback](tower-bft.md#cost-of-rollback) needs to be computable. It is important to clients that rely on some measurable form of Consistency. The costs to break consistency need to be computable, and increase super-linearly for older votes.
- ASIC speeds are different between nodes, and attackers could employ Proof of History ASICS that are much faster than the rest of the cluster. Consensus needs to be resistant to attacks that exploit the variability in Proof of History ASIC speed.
For brevity this design assumes that a single voter with a stake is deployed as an individual validator in the cluster.
@ -35,35 +37,35 @@ Before a vote is pushed to the stack, all the votes leading up to vote with a lo
For example, a vote stack with the following state:
| vote | vote time | lockout | lock expiration time |
| ---: | ---: | ---: | ---: |
| 4 | 4 | 2 | 6 |
| 3 | 3 | 4 | 7 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
| ---: | --------: | ------: | -------------------: |
| 4 | 4 | 2 | 6 |
| 3 | 3 | 4 | 7 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
_Vote 5_ is at time 9, and the resulting state is
| vote | vote time | lockout | lock expiration time |
| ---: | ---: | ---: | ---: |
| 5 | 9 | 2 | 11 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
| ---: | --------: | ------: | -------------------: |
| 5 | 9 | 2 | 11 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
_Vote 6_ is at time 10
| vote | vote time | lockout | lock expiration time |
| ---: | ---: | ---: | ---: |
| 6 | 10 | 2 | 12 |
| 5 | 9 | 4 | 13 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
| ---: | --------: | ------: | -------------------: |
| 6 | 10 | 2 | 12 |
| 5 | 9 | 4 | 13 |
| 2 | 2 | 8 | 10 |
| 1 | 1 | 16 | 17 |
At time 10 the new votes caught up to the previous votes. But _vote 2_ expires at 10, so when _vote 7_ at time 11 is applied, the votes including and above _vote 2_ will be popped.
| vote | vote time | lockout | lock expiration time |
| ---: | ---: | ---: | ---: |
| 7 | 11 | 2 | 13 |
| 1 | 1 | 16 | 17 |
| ---: | --------: | ------: | -------------------: |
| 7 | 11 | 2 | 13 |
| 1 | 1 | 16 | 17 |
The lockout for vote 1 will not increase from 16 until the stack contains 5 votes.
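The tables above can be reproduced with a small simulation: expired votes (and everything stacked above them) are popped before a new vote is pushed, and lockouts beneath a new vote only ever grow. The sketch below is an illustration of the tables, not the validator's implementation.

```rust
// Simplified simulation of the vote stack shown in the tables above.
#[derive(Debug)]
struct Vote {
    slot: u64,    // "vote" column
    time: u64,    // "vote time" column
    lockout: u64, // "lockout" column
}

impl Vote {
    fn expiration(&self) -> u64 {
        self.time + self.lockout // "lock expiration time" column
    }
}

fn push_vote(stack: &mut Vec<Vote>, slot: u64, time: u64) {
    // Pop the deepest expired vote and everything stacked above it.
    if let Some(i) = stack.iter().position(|v| v.expiration() < time) {
        stack.truncate(i);
    }
    stack.push(Vote { slot, time, lockout: 2 });
    // Lockouts deeper in the stack double as votes pile on top, but never shrink.
    let len = stack.len() as u32;
    for (idx, v) in stack.iter_mut().enumerate() {
        v.lockout = v.lockout.max(2u64.pow(len - idx as u32));
    }
}

fn main() {
    let mut stack = Vec::new();
    for (slot, time) in [(1, 1), (2, 2), (3, 3), (4, 4), (5, 9), (6, 10), (7, 11)] {
        push_vote(&mut stack, slot, time);
    }
    // Matches the final table: vote 7 (lockout 2) on top of vote 1 (lockout 16).
    for v in stack.iter().rev() {
        println!("vote {}: time {}, lockout {}, expires {}", v.slot, v.time, v.lockout, v.expiration());
    }
}
```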
@ -85,18 +87,18 @@ Each validator can independently set a threshold of cluster commitment to a fork
The following parameters need to be tuned:
* Number of votes in the stack before dequeue occurs \(32\).
* Rate of growth for lockouts in the stack \(2x\).
* Starting default lockout \(2\).
* Threshold depth for minimum cluster commitment before committing to the fork \(8\).
* Minimum cluster commitment size at threshold depth \(50%+\).
- Number of votes in the stack before dequeue occurs \(32\).
- Rate of growth for lockouts in the stack \(2x\).
- Starting default lockout \(2\).
- Threshold depth for minimum cluster commitment before committing to the fork \(8\).
- Minimum cluster commitment size at threshold depth \(50%+\).
### Free Choice
A "Free Choice" is an unenforcible validator action. There is no way for the protocol to encode and enforce these actions since each validator can modify the code and adjust the algorithm. A validator that maximizes self-reward over all possible futures should behave in such a way that the system is stable, and the local greedy choice should result in a greedy choice over all possible futures. A set of validator that are engaging in choices to disrupt the protocol should be bound by their stake weight to the denial of service. Two options exits for validator:
* a validator can outrun previous validator in virtual generation and submit a concurrent fork
* a validator can withhold a vote to observe multiple forks before voting
- a validator can outrun previous validator in virtual generation and submit a concurrent fork
- a validator can withhold a vote to observe multiple forks before voting
In both cases, the validators in the cluster have several forks to pick from concurrently, even though each fork represents a different height. In both cases, it is impossible for the protocol to detect if the validator behavior is intentional or not.
@ -129,8 +131,8 @@ This attack is then limited to censoring the previous leaders fees, and individu
An attacker generates a concurrent fork from an older block to try to roll back the cluster. In this attack, the concurrent fork is competing with forks that have already been voted on. This attack is limited by the exponential growth of the lockouts.
* 1 vote has a lockout of 2 slots. Concurrent fork must be at least 2 slots ahead, and be produced in 1 slot. Therefore requires an ASIC 2x faster.
* 2 votes have a lockout of 4 slots. Concurrent fork must be at least 4 slots ahead and produced in 2 slots. Therefore requires an ASIC 2x faster.
* 3 votes have a lockout of 8 slots. Concurrent fork must be at least 8 slots ahead and produced in 3 slots. Therefore requires an ASIC 2.6x faster.
* 10 votes have a lockout of 1024 slots. 1024/10, or 102.4x faster ASIC.
* 20 votes have a lockout of 2^20 slots. 2^20/20, or 52,428.8x faster ASIC.
- 1 vote has a lockout of 2 slots. Concurrent fork must be at least 2 slots ahead, and be produced in 1 slot. Therefore requires an ASIC 2x faster.
- 2 votes have a lockout of 4 slots. Concurrent fork must be at least 4 slots ahead and produced in 2 slots. Therefore requires an ASIC 2x faster.
- 3 votes have a lockout of 8 slots. Concurrent fork must be at least 8 slots ahead and produced in 3 slots. Therefore requires an ASIC 2.6x faster.
- 10 votes have a lockout of 1024 slots. 1024/10, or 102.4x faster ASIC.
- 20 votes have a lockout of 2^20 slots. 2^20/20, or 52,428.8x faster ASIC.

View File

@ -1,4 +1,6 @@
# Deterministic Transaction Fees
---
title: Deterministic Transaction Fees
---
Transactions currently include a fee field that indicates the maximum fee a slot leader is permitted to charge to process a transaction. The cluster, on the other hand, agrees on a minimum fee. If the network is congested, the slot leader may prioritize the transactions offering higher fees. That means the client won't know how much was collected until the transaction is confirmed by the cluster and the remaining balance is checked. It smells of exactly what we dislike about Ethereum's "gas", non-determinism.
@ -14,14 +16,14 @@ Before sending a transaction to the cluster, a client may submit the transaction
## Fee Parameters
In the first implementation of this design, the only fee parameter is `lamports_per_signature`. The more signatures the cluster needs to verify, the higher the fee. The exact number of lamports is determined by the ratio of SPS to the SPS target. At the end of each slot, the cluster lowers `lamports_per_signature` when SPS is below the target and raises it when above the target. The minimum value for `lamports_per_signature` is 50% of the target `lamports_per_signature` and the maximum value is 10x the target \`lamports\_per\_signature'
In the first implementation of this design, the only fee parameter is `lamports_per_signature`. The more signatures the cluster needs to verify, the higher the fee. The exact number of lamports is determined by the ratio of SPS to the SPS target. At the end of each slot, the cluster lowers `lamports_per_signature` when SPS is below the target and raises it when above the target. The minimum value for `lamports_per_signature` is 50% of the target `lamports_per_signature` and the maximum value is 10x the target \`lamports_per_signature'
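A hedged sketch of that per-slot adjustment follows. Only the direction of the adjustment and the 50% to 10x clamp come from the text; the 5% step size is an assumption for illustration.

```rust
// Illustrative sketch of the per-slot fee adjustment described above.
fn adjust_lamports_per_signature(current: f64, target: f64, sps: f64, sps_target: f64) -> f64 {
    let step = 0.05; // assumed adjustment rate per slot, for illustration only
    let next = if sps > sps_target {
        current * (1.0 + step) // congestion: raise the fee
    } else {
        current * (1.0 - step) // slack: lower the fee
    };
    // Clamp to [50% of target, 10x target] as described above.
    next.max(0.5 * target).min(10.0 * target)
}

fn main() {
    let target = 10_000.0;
    let mut fee = target;
    for slot in 0..5 {
        fee = adjust_lamports_per_signature(fee, target, 12_000.0, 10_000.0);
        println!("slot {}: lamports_per_signature = {:.0}", slot, fee);
    }
}
```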
Future parameters might include:
* `lamports_per_pubkey` - cost to load an account
* `lamports_per_slot_distance` - higher cost to load very old accounts
* `lamports_per_byte` - cost per size of account loaded
* `lamports_per_bpf_instruction` - cost to run a program
- `lamports_per_pubkey` - cost to load an account
- `lamports_per_slot_distance` - higher cost to load very old accounts
- `lamports_per_byte` - cost per size of account loaded
- `lamports_per_bpf_instruction` - cost to run a program
## Attacks

View File

@ -1,4 +1,6 @@
# Validator Timestamp Oracle
---
title: Validator Timestamp Oracle
---
Third-party users of Solana sometimes need to know the real-world time a block
was produced, generally to meet compliance requirements for external auditors or
@ -10,17 +12,18 @@ The general outline of the proposed implementation is as follows:
- At regular intervals, each validator records its observed time for a known slot
on-chain (via a Timestamp added to a slot Vote)
- A client can request a block time for a rooted block using the `getBlockTime`
RPC method. When a client requests a timestamp for block N:
RPC method. When a client requests a timestamp for block N:
1. A validator determines a "cluster" timestamp for a recent timestamped slot
before block N by observing all the timestamped Vote instructions recorded on
the ledger that reference that slot, and determining the stake-weighted mean
timestamp.
before block N by observing all the timestamped Vote instructions recorded on
the ledger that reference that slot, and determining the stake-weighted mean
timestamp.
2. This recent mean timestamp is then used to calculate the timestamp of
block N using the cluster's established slot duration
block N using the cluster's established slot duration
Requirements:
- Any validator replaying the ledger in the future must come up with the same
time for every block since genesis
- Estimated block times should not drift more than an hour or so before resolving
@ -43,8 +46,7 @@ records its observed time by including a timestamp in its Vote instruction
submission. The corresponding slot for the timestamp is the newest Slot in the
Vote vector (`Vote::slots.iter().max()`). It is signed by the validator's
identity keypair as a usual Vote. In order to enable this reporting, the Vote
struct needs to be extended to include a timestamp field, `timestamp:
Option<UnixTimestamp>`, which will be set to `None` in most Votes.
struct needs to be extended to include a timestamp field, `timestamp: Option<UnixTimestamp>`, which will be set to `None` in most Votes.
This proposal suggests that Vote instructions with `Some(timestamp)` be issued
every 30min, which should be short enough to prevent block times drifting very
@ -67,7 +69,7 @@ A validator's vote account will hold its most recent slot-timestamp in VoteState
### Vote Program
The on-chain Vote program needs to be extended to process a timestamp sent with
a Vote instruction from validators. In addition to its current process\_vote
a Vote instruction from validators. In addition to its current process_vote
functionality (including loading the correct Vote account and verifying that the
transaction signer is the expected validator), this process needs to compare the
timestamp and corresponding slot to the currently stored values to verify that
@ -86,7 +88,7 @@ let timestamp_slot = floor(current_slot / timestamp_interval);
Then the validator needs to gather all Vote WithTimestamp transactions from the
ledger that reference that slot, using `Blockstore::get_slot_entries()`. As these
transactions could have taken some time to reach and be processed by the leader,
the validator needs to scan several completed blocks after the timestamp\_slot to
the validator needs to scan several completed blocks after the timestamp_slot to
get a reasonable set of Timestamps. The exact number of slots will need to be
tuned: More slots will enable greater cluster participation and more timestamp
datapoints; fewer slots will speed up timestamp filtering.
@ -109,5 +111,5 @@ let block_n_timestamp = mean_timestamp + (block_n_slot_offset * slot_duration);
```
where `block_n_slot_offset` is the difference between the slot of block N and
the timestamp\_slot, and `slot_duration` is derived from the cluster's
the timestamp_slot, and `slot_duration` is derived from the cluster's
`slots_per_year` stored in each Bank
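A sketch of the stake-weighted mean and offset arithmetic outlined above follows. The structures, sample stakes, and the 400ms slot duration are assumptions for illustration; on a real cluster the slot duration is derived from `slots_per_year`.

```rust
// Illustrative only: compute a stake-weighted mean timestamp and project it
// forward to block N. Sample data and slot duration are assumptions.
struct TimestampVote {
    stake: u64,     // lamports staked by the voting validator
    timestamp: i64, // UnixTimestamp reported for the timestamp slot
}

fn stake_weighted_mean(votes: &[TimestampVote]) -> i64 {
    let total_stake: u64 = votes.iter().map(|v| v.stake).sum();
    let weighted_sum: i128 = votes
        .iter()
        .map(|v| v.timestamp as i128 * v.stake as i128)
        .sum();
    (weighted_sum / total_stake as i128) as i64
}

fn main() {
    let votes = vec![
        TimestampVote { stake: 4_000_000, timestamp: 1_600_000_000 },
        TimestampVote { stake: 1_000_000, timestamp: 1_600_000_010 },
    ];
    let mean_timestamp = stake_weighted_mean(&votes);

    // block_n_timestamp = mean_timestamp + block_n_slot_offset * slot_duration
    let block_n_slot_offset = 25i64; // slots between the timestamp slot and block N
    let slot_duration_secs = 0.4; // assumed; derived from slots_per_year on a real cluster
    let block_n_timestamp =
        mean_timestamp + (block_n_slot_offset as f64 * slot_duration_secs) as i64;
    println!("cluster mean: {}, block N: {}", mean_timestamp, block_n_timestamp);
}
```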

View File

@ -1,4 +1,6 @@
# Add Solana to Your Exchange
---
title: Add Solana to Your Exchange
---
This guide describes how to add Solana's native token SOL to your cryptocurrency
exchange.
@ -13,6 +15,7 @@ To run an api node:
1. [Install the Solana command-line tool suite](../cli/install-solana-cli-tools.md)
2. Boot the node with at least the following parameters:
```bash
solana-validator \
--ledger <LEDGER_PATH> \
@ -27,18 +30,19 @@ solana-validator \
--no-untrusted-rpc
```
Customize `--ledger` to your desired ledger storage location, and `--rpc-port` to the port you want to expose.
Customize `--ledger` to your desired ledger storage location, and `--rpc-port` to the port you want to expose.
The `--entrypoint`, `--expected-genesis-hash`, and `--expected-shred-version` parameters are all specific to the cluster you are joining. The shred version will change on any hard forks in the cluster, so including `--expected-shred-version` ensures you are receiving current data from the cluster you expect.
[Current parameters for Mainnet Beta](../clusters.md#example-solana-validator-command-line-2)
The `--entrypoint`, `--expected-genesis-hash`, and `--expected-shred-version` parameters are all specific to the cluster you are joining. The shred version will change on any hard forks in the cluster, so including `--expected-shred-version` ensures you are receiving current data from the cluster you expect.
[Current parameters for Mainnet Beta](../clusters.md#example-solana-validator-command-line-2)
The `--limit-ledger-size` parameter allows you to specify how many ledger [shreds](../terminology.md#shred) your node retains on disk. If you do not include this parameter, the ledger will keep the entire ledger until it runs out of disk space. A larger value like `--limit-ledger-size 250000000000` is good for a couple days
The `--limit-ledger-size` parameter allows you to specify how many ledger [shreds](../terminology.md#shred) your node retains on disk. If you do not include this parameter, the ledger will keep the entire ledger until it runs out of disk space. A larger value like `--limit-ledger-size 250000000000` is good for a couple days
Specifying one or more `--trusted-validator` parameters can protect you from booting from a malicious snapshot. [More on the value of booting with trusted validators](../running-validator/validator-start.md#trusted-validators)
Specifying one or more `--trusted-validator` parameters can protect you from booting from a malicious snapshot. [More on the value of booting with trusted validators](../running-validator/validator-start.md#trusted-validators)
Optional parameters to consider:
- `--private-rpc` prevents your RPC port from being published for use by other nodes
- `--rpc-bind-address` allows you to specify a different IP address to bind the RPC port
Optional parameters to consider:
- `--private-rpc` prevents your RPC port from being published for use by other nodes
- `--rpc-bind-address` allows you to specify a different IP address to bind the RPC port
### Automatic Restarts
@ -102,17 +106,18 @@ The easiest way to track all the deposit accounts for your exchange is to poll
for each confirmed block and inspect for addresses of interest, using the
JSON-RPC service of your Solana api node.
* To identify which blocks are available, send a [`getConfirmedBlocks` request](../apps/jsonrpc-api.md#getconfirmedblocks),
passing the last block you have already processed as the start-slot parameter:
- To identify which blocks are available, send a [`getConfirmedBlocks` request](../apps/jsonrpc-api.md#getconfirmedblocks),
passing the last block you have already processed as the start-slot parameter:
```bash
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0","id":1,"method":"getConfirmedBlocks","params":[5]}' localhost:8899
{"jsonrpc":"2.0","result":[5,6,8,9,11],"id":1}
```
Not every slot produces a block, so there may be gaps in the sequence of integers.
* For each block, request its contents with a [`getConfirmedBlock` request](../apps/jsonrpc-api.md#getconfirmedblock):
- For each block, request its contents with a [`getConfirmedBlock` request](../apps/jsonrpc-api.md#getconfirmedblock):
```bash
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0","id":1,"method":"getConfirmedBlock","params":[5, "json"]}' localhost:8899
@ -195,8 +200,8 @@ can request the block from RPC in binary format, and parse it using either our
You can also query the transaction history of a specific address.
* Send a [`getConfirmedSignaturesForAddress`](../apps/jsonrpc-api.md#getconfirmedsignaturesforaddress)
request to the api node, specifying a range of recent slots:
- Send a [`getConfirmedSignaturesForAddress`](../apps/jsonrpc-api.md#getconfirmedsignaturesforaddress)
request to the api node, specifying a range of recent slots:
```bash
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0","id":1,"method":"getConfirmedSignaturesForAddress","params":["6H94zdiaYfRfPfKjYLjyr2VFBg6JHXygy84r3qhc3NsC", 0, 10]}' localhost:8899
@ -212,8 +217,8 @@ curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0","id":1,"m
}
```
* For each signature returned, get the transaction details by sending a
[`getConfirmedTransaction`](../apps/jsonrpc-api.md#getconfirmedtransaction) request:
- For each signature returned, get the transaction details by sending a
[`getConfirmedTransaction`](../apps/jsonrpc-api.md#getconfirmedtransaction) request:
```bash
curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0","id":1,"method":"getConfirmedTransaction","params":["dhjhJp2V2ybQGVfELWM1aZy98guVVsxRCB5KhNiXFjCBMK5KEyzV8smhkVvs3xwkAug31KnpzJpiNPtcD5bG1t6", "json"]}' localhost:8899
@ -312,6 +317,7 @@ more on [blockhash expiration](#blockhash-expiration) below.
First, get a recent blockhash using the [`getFees` endpoint](../apps/jsonrpc-api.md#getfees)
or the CLI command:
```bash
solana fees --url http://localhost:8899
```

View File

@ -1,4 +1,6 @@
# Introduction
---
title: Introduction
---
## What is Solana?

View File

@ -1,12 +1,15 @@
# Offline Transaction Signing
---
title: Offline Transaction Signing
---
Some security models require keeping signing keys, and thus the signing
process, separated from transaction creation and network broadcast. Examples
include:
* Collecting signatures from geographically disparate signers in a
[multi-signature scheme](../cli/usage.md#multiple-witnesses)
* Signing transactions using an [airgapped](https://en.wikipedia.org/wiki/Air_gap_(networking))
signing device
- Collecting signatures from geographically disparate signers in a
[multi-signature scheme](../cli/usage.md#multiple-witnesses)
- Signing transactions using an [airgapped](<https://en.wikipedia.org/wiki/Air_gap_(networking)>)
signing device
This document describes using Solana's CLI to separately sign and submit a
transaction.
@ -14,27 +17,29 @@ transaction.
## Commands Supporting Offline Signing
At present, the following commands support offline signing:
* [`create-stake-account`](../cli/usage.md#solana-create-stake-account)
* [`deactivate-stake`](../cli/usage.md#solana-deactivate-stake)
* [`delegate-stake`](../cli/usage.md#solana-delegate-stake)
* [`split-stake`](../cli/usage.md#solana-split-stake)
* [`stake-authorize`](../cli/usage.md#solana-stake-authorize)
* [`stake-set-lockup`](../cli/usage.md#solana-stake-set-lockup)
* [`transfer`](../cli/usage.md#solana-transfer)
* [`withdraw-stake`](../cli/usage.md#solana-withdraw-stake)
- [`create-stake-account`](../cli/usage.md#solana-create-stake-account)
- [`deactivate-stake`](../cli/usage.md#solana-deactivate-stake)
- [`delegate-stake`](../cli/usage.md#solana-delegate-stake)
- [`split-stake`](../cli/usage.md#solana-split-stake)
- [`stake-authorize`](../cli/usage.md#solana-stake-authorize)
- [`stake-set-lockup`](../cli/usage.md#solana-stake-set-lockup)
- [`transfer`](../cli/usage.md#solana-transfer)
- [`withdraw-stake`](../cli/usage.md#solana-withdraw-stake)
## Signing Transactions Offline
To sign a transaction offline, pass the following arguments on the command line
1) `--sign-only`, prevents the client from submitting the signed transaction
to the network. Instead, the pubkey/signature pairs are printed to stdout.
2) `--blockhash BASE58_HASH`, allows the caller to specify the value used to
fill the transaction's `recent_blockhash` field. This serves a number of
purposes, namely:
* Eliminates the need to connect to the network and query a recent blockhash
via RPC
* Enables the signers to coordinate the blockhash in a multiple-signature
scheme
1. `--sign-only`, prevents the client from submitting the signed transaction
to the network. Instead, the pubkey/signature pairs are printed to stdout.
2. `--blockhash BASE58_HASH`, allows the caller to specify the value used to
fill the transaction's `recent_blockhash` field. This serves a number of
purposes, namely:
   - Eliminates the need to connect to the network and query a recent blockhash
     via RPC
   - Enables the signers to coordinate the blockhash in a multiple-signature
     scheme
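Roughly, the two flags combine like this for a simple transfer, assuming the offline signer is also the funding account (placeholders stand in for the real recipient address and blockhash); the next section walks through a complete payment example:
```bash
solana transfer <RECIPIENT_ADDRESS> 1 --sign-only --blockhash <BASE58_HASH>
```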
### Example: Offline Signing a Payment
@ -60,10 +65,11 @@ Signers (Pubkey=Signature):
To submit a transaction that has been signed offline to the network, pass the
following arguments on the command line
1) `--blockhash BASE58_HASH`, must be the same blockhash as was used to sign
2) `--signer BASE58_PUBKEY=BASE58_SIGNATURE`, one for each offline signer. This
includes the pubkey/signature pairs directly in the transaction rather than
signing it with any local keypair(s)
1. `--blockhash BASE58_HASH`, must be the same blockhash as was used to sign
2. `--signer BASE58_PUBKEY=BASE58_SIGNATURE`, one for each offline signer. This
includes the pubkey/signature pairs directly in the transaction rather than
signing it with any local keypair(s)
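In rough terms, submission looks like this for the same simple transfer (placeholders stand in for the recipient, the blockhash used during signing, and the pubkey/signature pair printed by the offline signer); the next section shows a complete example:
```bash
solana transfer <RECIPIENT_ADDRESS> 1 \
  --blockhash <BASE58_HASH> \
  --signer <BASE58_PUBKEY>=<BASE58_SIGNATURE>
```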
### Example: Submitting an Offline Signed Payment

View File

@ -1,4 +1,6 @@
# Durable Transaction Nonces
---
title: Durable Transaction Nonces
---
Durable transaction nonces are a mechanism for getting around the typical
short lifetime of a transaction's [`recent_blockhash`](../transaction.md#recent-blockhash).
@ -19,15 +21,16 @@ creation of more complex account ownership arrangements and derived account
addresses not associated with a keypair. The `--nonce-authority <AUTHORITY_KEYPAIR>`
argument is used to specify this account and is supported by the following
commands
* `create-nonce-account`
* `new-nonce`
* `withdraw-from-nonce-account`
* `authorize-nonce-account`
- `create-nonce-account`
- `new-nonce`
- `withdraw-from-nonce-account`
- `authorize-nonce-account`
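For example, advancing a nonce whose authority is held by a separate keypair might look roughly like this (a sketch only; both file names are placeholders):
```bash
solana new-nonce nonce-keypair.json --nonce-authority nonce-authority.json
```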
### Nonce Account Creation
The durable transaction nonce feature uses an account to store the next nonce
value. Durable nonce accounts must be [rent-exempt](../implemented-proposals/rent.md#two-tiered-rent-regime),
value. Durable nonce accounts must be [rent-exempt](../implemented-proposals/rent.md#two-tiered-rent-regime),
so they need to carry the minimum balance to achieve this.
A nonce account is created by first generating a new keypair, then creating the account on chain
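The keypair-generation step can be done with `solana-keygen` (a minimal sketch; the output path below simply matches the file name used in the following commands):
```bash
solana-keygen new -o nonce-keypair.json
```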
@ -45,15 +48,9 @@ solana create-nonce-account nonce-keypair.json 1
2SymGjGV4ksPdpbaqWFiDoBz8okvtiik4KE9cnMQgRHrRLySSdZ6jrEcpPifW4xUpp4z66XM9d9wM48sA7peG2XL
```
{% hint style="info" %}
To keep the keypair entirely offline, use the [Paper Wallet](../paper-wallet/README.md)
keypair generation [instructions](../paper-wallet/paper-wallet-usage.md#seed-phrase-generation.md)
instead
{% endhint %}
> To keep the keypair entirely offline, use the [Paper Wallet](../paper-wallet/README.md) keypair generation [instructions](../paper-wallet/paper-wallet-usage.md#seed-phrase-generation.md) instead
{% hint style="info" %}
[Full usage documentation](../cli/usage.md#solana-create-nonce-account)
{% endhint %}
> [Full usage documentation](../cli/usage.md#solana-create-nonce-account)
### Querying the Stored Nonce Value
@ -73,9 +70,7 @@ solana nonce nonce-keypair.json
8GRipryfxcsxN8mAGjy8zbFo9ezaUsh47TsPzmZbuytU
```
{% hint style="info" %}
[Full usage documentation](../cli/usage.md#solana-get-nonce)
{% endhint %}
> [Full usage documentation](../cli/usage.md#solana-get-nonce)
### Advancing the Stored Nonce Value
@ -94,9 +89,7 @@ solana new-nonce nonce-keypair.json
44jYe1yPKrjuYDmoFTdgPjg8LFpYyh1PFKJqm5SC1PiSyAL8iw1bhadcAX1SL7KDmREEkmHpYvreKoNv6fZgfvUK
```
{% hint style="info" %}
[Full usage documentation](../cli/usage.md#solana-new-nonce)
{% endhint %}
> [Full usage documentation](../cli/usage.md#solana-new-nonce)
### Display Nonce Account
@ -116,9 +109,7 @@ minimum balance required: 0.00136416 SOL
nonce: DZar6t2EaCFQTbUP4DHKwZ1wT8gCPW2aRfkVWhydkBvS
```
{% hint style="info" %}
[Full usage documentation](../cli/usage.md#solana-nonce-account)
{% endhint %}
> [Full usage documentation](../cli/usage.md#solana-nonce-account)
### Withdraw Funds from a Nonce Account
@ -136,13 +127,9 @@ solana withdraw-from-nonce-account nonce-keypair.json ~/.config/solana/id.json 0
3foNy1SBqwXSsfSfTdmYKDuhnVheRnKXpoPySiUDBVeDEs6iMVokgqm7AqfTjbk7QBE8mqomvMUMNQhtdMvFLide
```
{% hint style="info" %}
Close a nonce account by withdrawing the full balance
{% endhint %}
> Close a nonce account by withdrawing the full balance
{% hint style="info" %}
[Full usage documentation](../cli/usage.md#solana-withdraw-from-nonce-account)
{% endhint %}
> [Full usage documentation](../cli/usage.md#solana-withdraw-from-nonce-account)
### Assign a New Authority to a Nonce Account
@ -160,21 +147,21 @@ solana authorize-nonce-account nonce-keypair.json nonce-authority.json
3F9cg4zN9wHxLGx4c3cUKmqpej4oa67QbALmChsJbfxTgTffRiL3iUehVhR9wQmWgPua66jPuAYeL1K2pYYjbNoT
```
{% hint style="info" %}
[Full usage documentation](../cli/usage.md#solana-authorize-nonce-account)
{% endhint %}
> [Full usage documentation](../cli/usage.md#solana-authorize-nonce-account)
## Other Commands Supporting Durable Nonces
To make use of durable nonces with other CLI subcommands, two arguments must be
supported.
* `--nonce`, specifies the account storing the nonce value
* `--nonce-authority`, specifies an optional [nonce authority](#nonce-authority)
- `--nonce`, specifies the account storing the nonce value
- `--nonce-authority`, specifies an optional [nonce authority](#nonce-authority)
The following subcommands have received this treatment so far
* [`pay`](../cli/usage.md#solana-pay)
* [`delegate-stake`](../cli/usage.md#solana-delegate-stake)
* [`deactivate-stake`](../cli/usage.md#solana-deactivate-stake)
- [`pay`](../cli/usage.md#solana-pay)
- [`delegate-stake`](../cli/usage.md#solana-delegate-stake)
- [`deactivate-stake`](../cli/usage.md#solana-deactivate-stake)
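Combining the two arguments with one of these subcommands might look roughly like this for `delegate-stake` (a sketch only; the account files and addresses are placeholders, and the blockhash is the value stored in the nonce account); the `pay` case is walked through in full below:
```bash
solana delegate-stake <STAKE_ACCOUNT_ADDRESS> <VOTE_ACCOUNT_ADDRESS> \
  --nonce nonce.json \
  --nonce-authority nonce-authority.json \
  --blockhash <STORED_NONCE_VALUE>
```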
### Example Pay Using Durable Nonce
@ -205,10 +192,7 @@ $ solana airdrop -k alice.json 10
Now Alice needs a nonce account. Create one
{% hint style="info" %}
Here, no separate [nonce authority](#nonce-authority) is employed, so `alice.json`
has full authority over the nonce account
{% endhint %}
> Here, no separate [nonce authority](#nonce-authority) is employed, so `alice.json` has full authority over the nonce account
```bash
$ solana create-nonce-account -k alice.json nonce.json 1
@ -231,9 +215,7 @@ Error: Io(Custom { kind: Other, error: "Transaction \"33gQQaoPc9jWePMvDAeyJpcnSP
Alice retries the transaction, this time specifying her nonce account and the
blockhash stored there
{% hint style="info" %}
Remember, `alice.json` is the [nonce authority](#nonce-authority) in this example
{% endhint %}
> Remember, `alice.json` is the [nonce authority](#nonce-authority) in this example
```bash
$ solana nonce-account nonce.json
@ -241,6 +223,7 @@ balance: 1 SOL
minimum balance required: 0.00136416 SOL
nonce: F7vmkY3DTaxfagttWjQweib42b6ZHADSx94Tw8gHx3W7
```
```bash
$ solana pay -k alice.json --blockhash F7vmkY3DTaxfagttWjQweib42b6ZHADSx94Tw8gHx3W7 --nonce nonce.json bob.json 1
HR1368UKHVZyenmH7yVz5sBAijV6XAPeWbEiXEGVYQorRMcoijeNAbzZqEZiH8cDB8tk65ckqeegFjK8dHwNFgQ
@ -248,13 +231,14 @@ HR1368UKHVZyenmH7yVz5sBAijV6XAPeWbEiXEGVYQorRMcoijeNAbzZqEZiH8cDB8tk65ckqeegFjK8
#### - Success!
The transaction succeeds! Bob receives 1 SOL from Alice and Alice's stored
The transaction succeeds! Bob receives 1 SOL from Alice and Alice's stored
nonce advances to a new value
```bash
$ solana balance -k bob.json
1 SOL
```
```bash
$ solana nonce-account nonce.json
balance: 1 SOL

133
docs/src/pages/index.js Normal file
View File

@ -0,0 +1,133 @@
import React from "react";
import clsx from "clsx";
import Layout from "@theme/Layout";
import Link from "@docusaurus/Link";
import useDocusaurusContext from "@docusaurus/useDocusaurusContext";
import useBaseUrl from "@docusaurus/useBaseUrl";
import styles from "./styles.module.css";
const features = [
{
title: <>Run a Validator</>,
imageUrl: "docs/running-validator/README",
description: <>Learn how to start a validator on the Solana cluster.</>,
},
{
title: <>Launch an Application</>,
imageUrl: "docs/apps/README",
description: <>Build superfast applications with one API.</>,
},
{
title: <>Participate in Tour de SOL</>,
imageUrl: "docs/tour-de-sol/README",
description: (
<>
Participate in our incentivised testnet and earn rewards by finding
bugs.
</>
),
},
{
title: <>Integrate the SOL token into your Exchange</>,
imageUrl: "docs/integrations/exchange",
description: (
<>
Follow our extensive integration guide to ensure a seamless user
experience.
</>
),
},
{
title: <>Create or Configure a Solana Wallet</>,
imageUrl: "docs/wallet-guide/README",
description: (
<>
Whether you need to create a wallet, check the balance of your funds, or
take a look at what's out there for housing SOL tokens, start here.
</>
),
},
{
title: <>Learn About Solana's Architecture</>,
imageUrl: "docs/cluster/README",
description: (
<>
Familiarize yourself with the high level architecture of a Solana
cluster.
</>
),
}, //
// {
// title: <>Understand Our Economic Design</>,
// imageUrl: "docs/implemented-proposals/ed_overview/README",
// description: (
// <>
// Solana's Economic Design provides a scalable blueprint for long term
// economic development and prosperity.
// </>
// ),
// }
];
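// Feature renders a single card; despite its name, `imageUrl` is resolved with
// useBaseUrl and used as the card's link target rather than as an image source.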
function Feature({ imageUrl, title, description }) {
const imgUrl = useBaseUrl(imageUrl);
return (
<div className={clsx("col col--4", styles.feature)}>
{imgUrl && (
<Link className="navbar__link" to={imgUrl}>
<div className="card">
<div className="card__header">
<h3>{title}</h3>
</div>
<div className="card__body">
<p>{description}</p>
</div>
</div>
</Link>
)}
</div>
);
}
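// Home lays out the feature cards in a grid inside the standard Docusaurus
// Layout; the original template hero banner remains commented out in the JSX.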
function Home() {
const context = useDocusaurusContext();
const { siteConfig = {} } = context;
return (
<Layout
title="Homepage"
description="Description will go into a meta tag in <head />"
>
{/* <header className={clsx("hero hero--primary", styles.heroBanner)}> */}
{/* <div className="container">
<h1 className="hero__title">{siteConfig.title}</h1>
<p className="hero__subtitle">{siteConfig.tagline}</p> */}
{/* <div className={styles.buttons}>
<Link
className={clsx(
'button button--outline button--secondary button--lg',
styles.getStarted,
)}
to={useBaseUrl('docs/')}>
Get Started
</Link>
</div> */}
{/* </div> */}
{/* </header> */}
<main>
{features && features.length > 0 && (
<section className={styles.features}>
<div className="container">
<div className="row cards__container">
{features.map((props, idx) => (
<Feature key={idx} {...props} />
))}
</div>
</div>
</section>
)}
</main>
</Layout>
);
}
export default Home;

View File

@ -0,0 +1,37 @@
/* stylelint-disable docusaurus/copyright-header */
/**
* CSS files with the .module.css suffix will be treated as CSS modules
* and scoped locally.
*/
.heroBanner {
padding: 4rem 0;
text-align: center;
position: relative;
overflow: hidden;
}
@media screen and (max-width: 966px) {
.heroBanner {
padding: 2rem;
}
}
.buttons {
display: flex;
align-items: center;
justify-content: center;
}
.features {
display: flex;
align-items: center;
padding: 2rem 0;
width: 100%;
}
.featureImage {
height: 200px;
width: 200px;
}

View File

@ -1,12 +1,11 @@
# Paper Wallet
---
title: Paper Wallet
---
This document describes how to create and use a paper wallet with the Solana CLI
tools.
{% hint style="info" %}
We do not intend to advise on how to *securely* create or manage paper wallets.
Please research the security concerns carefully.
{% endhint %}
> We do not intend to advise on how to _securely_ create or manage paper wallets. Please research the security concerns carefully.
## Overview
@ -17,4 +16,4 @@ support keypair input via seed phrases.
To learn more about the BIP39 standard, visit the Bitcoin BIPs Github repository
[here](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki).
{% page-ref page="usage.md" %}
[Usage](paper-wallet-usage.md)

View File

@ -1,14 +1,12 @@
# Paper Wallet Usage
---
title: Paper Wallet Usage
---
Solana commands can be run without ever saving a keypair to disk on a machine.
If avoiding writing a private key to disk is a security concern of yours, you've
come to the right place.
{% hint style="warning" %}
Even using this secure input method, it's still possible that a private key gets
written to disk by unencrypted memory swaps. It is the user's responsibility to
protect against this scenario.
{% endhint %}
> Even using this secure input method, it's still possible that a private key gets written to disk by unencrypted memory swaps. It is the user's responsibility to protect against this scenario.
## Before You Begin
@ -30,10 +28,7 @@ The seed phrase and passphrase can be used together as a paper wallet. As long
as you keep your seed phrase and passphrase stored safely, you can use them to
access your account.
{% hint style="info" %}
For more information about how seed phrases work, review this
[Bitcoin Wiki page](https://en.bitcoin.it/wiki/Seed_phrase).
{% endhint %}
> For more information about how seed phrases work, review this [Bitcoin Wiki page](https://en.bitcoin.it/wiki/Seed_phrase).
### Seed Phrase Generation
@ -50,26 +45,20 @@ have not made any errors.
solana-keygen new --no-outfile
```
{% hint style="warning" %}
If the `--no-outfile` flag is **omitted**, the default behavior is to write the
keypair to `~/.config/solana/id.json`, resulting in a
[file system wallet](../file-system-wallet/README.md)
{% endhint %}
> If the `--no-outfile` flag is **omitted**, the default behavior is to write the keypair to `~/.config/solana/id.json`, resulting in a [file system wallet](../file-system-wallet/README.md)
The output of this command will display a line like this:
```bash
pubkey: 9ZNTfG4NyQgxy2SWjSiQoUyBPEvXT2xo7fKc5hPYYJ7b
```
The value shown after `pubkey:` is your *wallet address*.
The value shown after `pubkey:` is your _wallet address_.
**Note:** In working with paper wallets and file system wallets, the terms "pubkey"
and "wallet address" are sometimes used interchangably.
{% hint style="info" %}
For added security, increase the seed phrase word count using the `--word-count`
argument
{% endhint %}
> For added security, increase the seed phrase word count using the `--word-count` argument
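For example, to generate a longer phrase without writing a keypair file (assuming 24 is the desired word count):
```bash
solana-keygen new --no-outfile --word-count 24
```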
For full usage details run:
@ -88,10 +77,7 @@ through entering your seed phrase and a passphrase if you chose to use one.
solana-keygen pubkey ASK
```
{% hint style="info" %}
Note that you could potentially use different passphrases for the same seed
phrase. Each unique passphrase will yield a different keypair.
{% endhint %}
> Note that you could potentially use different passphrases for the same seed phrase. Each unique passphrase will yield a different keypair.
The `solana-keygen` tool uses the same BIP39 standard English word list as it
does to generate seed phrases. If your seed phrase was generated with another
@ -104,17 +90,12 @@ solana-keygen pubkey ASK --skip-seed-phrase-validation
```
After entering your seed phrase with `solana-keygen pubkey ASK` the console
will display a string of base-58 characters. This is the *wallet address*
will display a string of base-58 characters. This is the _wallet address_
associated with your seed phrase.
{% hint style="info" %}
Copy the derived address to a USB stick for easy usage on networked computers
{% endhint %}
> Copy the derived address to a USB stick for easy usage on networked computers
{% hint style="info" %}
A common next step is to [check the balance](#checking-account-balance) of the
account associated with a public key
{% endhint %}
> A common next step is to [check the balance](#checking-account-balance) of the account associated with a public key
For full usage details run:
@ -142,7 +123,7 @@ keypair generated from your seed phrase, and "Failed" otherwise.
All that is needed to check an account balance is the public key of an account.
To retrieve public keys securely from a paper wallet, follow the
[Public Key Derivation](#public-key-derivation) instructions on an
[air gapped computer](https://en.wikipedia.org/wiki/Air_gap_\(networking\)).
[air gapped computer](<https://en.wikipedia.org/wiki/Air_gap_(networking)>).
Public keys can then be typed manually or transferred via a USB stick to a
networked machine.
@ -160,7 +141,8 @@ solana balance <PUBKEY>
```
## Creating Multiple Paper Wallet Addresses
You can create as many wallet addresses as you like. Simply re-run the
You can create as many wallet addresses as you like. Simply re-run the
steps in [Seed Phrase Generation](#seed-phrase-generation) or
[Public Key Derivation](#public-key-derivation) to create a new address.
Multiple wallet addresses can be useful if you want to transfer tokens between

Some files were not shown because too many files have changed in this diff