added wormhole-local-validator and evm messenger docs

spacemandev 2022-06-30 23:14:08 -05:00
parent 5f7785bf91
commit d61c6e921b
68 changed files with 369 additions and 8490 deletions


@@ -6,13 +6,6 @@ This project uses Foundry to compile and deploy EVM contracts. You can find inst
 The javascript dependencies can be installed via `npm install` in this folder.
-You will also need Docker; you can get either [Docker Desktop](https://docs.docker.com/get-docker/) if you're developing on your computer or if you're in a headless vm, install [Docker Engine](https://docs.docker.com/engine/)
-## Run Guardiand
-After you have the dependencies installed, we'll need to spin up the EVM chains, deploy the Wormhole contracts to them, then startup a Wormhole Guardian to observe and sign VAAs. We have provided a script to automate this all for you.
-Simply run `npm run guardiand` and wait while the Wormhole Guardian builds a docker image. The first time you run this command, it might take a while (up to 550 seconds on a modern laptop!). After the image is built however, it'll be relatively fast to bring it up and down.
 ## Test Scripts
 After you have Guardiand running, you can run the basic test with `npm run test`. This will:
 - Deploy a simple Messenger contract (found in chains/evm/src/Messenger.sol) to each EVM chain


@@ -4,8 +4,6 @@
     "description": "A simple template for getting started with xDapps.",
     "main": "starter.js",
     "scripts": {
-        "guardiand": "sh wormhole.sh",
-        "cleanup": "docker kill guardiand && docker rm guardiand && npx pm2 kill",
         "test": "sh tests/evm0-evm1.sh"
     },
     "keywords": [],


@@ -6,10 +6,8 @@
         "rpc": "http://localhost:8545",
         "privateKey": "0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d",
         "bridgeAddress": "0xC89Ce4735882C9F0f0FE26686c53074E09B0D550",
-        "deployedAddress": "0xf19a2a01b70519f67adb309a994ec8c69a967e8b",
-        "emittedVAAs": [
-            "AQAAAAABAOKdOtGAsVPWjD9EXXXqpi/MmWkJRbqvStBGPpzkTyf3XaPUn3lyKSCqyBuivoD2iIlfF0lC/txAO8TlzjVVt3sAYrn3kQAAAAAAAgAAAAAAAAAAAAAAAPGaKgG3BRn2etswmplOyMaaln6LAAAAAAAAAAABRnJvbTogZXZtMFxuTXNnOiBIZWxsbyBXb3JsZCE="
-        ]
+        "deployedAddress": "",
+        "emittedVAAs": []
     },
     "evm1": {
         "type": "evm",
@@ -17,10 +15,8 @@
         "rpc": "http://localhost:8546",
         "privateKey": "0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d",
         "bridgeAddress": "0xC89Ce4735882C9F0f0FE26686c53074E09B0D550",
-        "deployedAddress": "0x4cfb3f70bf6a80397c2e634e5bdd85bc0bb189ee",
-        "emittedVAAs": [
-            "AQAAAAABAGdr/ZQbi3Uvn5IlrOV+hvTMcpsTXCw6LXRmH1Tll3QtFtWo5genUwstcmCL20psf/vPH7wB7GSiwKS8DZw5h58AYrn3mwAAAAAABAAAAAAAAAAAAAAAAEz7P3C/aoA5fC5jTlvdhbwLsYnuAAAAAAAAAAABRnJvbTogZXZtMVxuTXNnOiBIZWxsbyBXb3JsZCE="
-        ]
+        "deployedAddress": "",
+        "emittedVAAs": []
     }
     },
     "wormhole": {


@@ -1,20 +1,14 @@
-# EVM Messenger
-Simple messenger project that sends a "Hello World" message between two EVM chains using Wormhole.
+# EVM Token Bridge
+Attests and sends tokens from one EVM contract to another on another EVM chain.
 ## Dependencies
 This project uses Foundry to compile and deploy EVM contracts. You can find install instructions at [`https://getfoundry.sh`](http://getfoundry.sh)
 The javascript dependencies can be installed via `npm install` in this folder.
-You will also need Docker; you can get either [Docker Desktop](https://docs.docker.com/get-docker/) if you're developing on your computer or if you're in a headless vm, install [Docker Engine](https://docs.docker.com/engine/)
-## Run Guardiand
-After you have the dependencies installed, we'll need to spin up the EVM chains, deploy the Wormhole contracts to them, then startup a Wormhole Guardian to observe and sign VAAs. We have provided a script to automate this all for you.
-Simply run `npm run guardiand` and wait while the Wormhole Guardian builds a docker image. The first time you run this command, it might take a while (up to 550 seconds on a modern laptop!). After the image is built however, it'll be relatively fast to bring it up and down.
 ## Test Scripts
-After you have Guardiand running, you can run the basic test with `npm run test`. This will:
+You can run the basic test with `npm run test`. This will:
 - Deploy a Treasury contract
 - Attest the TKN ERC20 token from Chain0 (ETH) to Chain1 (BSC)
 - Mint 100 TKN tokens to the Treasury on ETH


@@ -4,8 +4,6 @@
     "description": "A simple template for transferring tokens using EVM.",
     "main": "starter.js",
     "scripts": {
-        "guardiand": "sh wormhole.sh",
-        "cleanup": "docker kill guardiand && docker rm guardiand && npx pm2 kill",
         "test": "sh tests/treasury_bridge.sh"
     },
     "keywords": [],


@@ -1,113 +0,0 @@
#!/usr/bin/env bash
npm run cleanup
if ! docker info > /dev/null 2>&1 ; then
echo "This script uses docker, and it isn't running - please start docker and try again!"
exit 1
fi
# Check if the wormhole/ repo exists.
# If it doesn't, clone it and build guardiand
if [ ! -d "./wormhole" ]
then
git clone https://github.com/certusone/wormhole
cd wormhole/
DOCKER_BUILDKIT=1 docker build --target go-export -f Dockerfile.proto -o type=local,dest=node .
DOCKER_BUILDKIT=1 docker build --target node-export -f Dockerfile.proto -o type=local,dest=. .
cd node/
echo "Have patience, this step takes upwards of 500 seconds!"
if [ $(uname -m) = "arm64" ]; then
echo "Building Guardian for linux/amd64"
DOCKER_BUILDKIT=1 docker build --platform linux/amd64 -f Dockerfile -t guardian .
else
echo "Building Guardian natively"
DOCKER_BUILDKIT=1 docker build -f Dockerfile -t guardian .
fi
cd ../../
fi
# Start EVM Chain 0
npx pm2 start 'ganache -p 8545 -m "myth like bonus scare over problem client lizard pioneer submit female collect" --block-time 1' --name evm0
# Start EVM Chain 1
npx pm2 start 'ganache -p 8546 -m "myth like bonus scare over problem client lizard pioneer submit female collect" --block-time 1' --name evm1
#Install Wormhole Eth Dependencies
cd wormhole/ethereum
npm i
cp .env.test .env
npm run build
# Deploy Wormhole Contracts to EVM Chain 0
npm run migrate && npx truffle exec scripts/deploy_test_token.js && npx truffle exec scripts/register_solana_chain.js && npx truffle exec scripts/register_terra_chain.js && npx truffle exec scripts/register_bsc_chain.js && npx truffle exec scripts/register_algo_chain.js
# Deploy Wormhole Contracts to EVM Chain 1
perl -pi -e 's/CHAIN_ID=0x2/CHAIN_ID=0x4/g' .env && perl -pi -e 's/8545/8546/g' truffle-config.js
npm run migrate && npx truffle exec scripts/deploy_test_token.js && npx truffle exec scripts/register_solana_chain.js && npx truffle exec scripts/register_terra_chain.js && npx truffle exec scripts/register_eth_chain.js && npx truffle exec scripts/register_algo_chain.js && nc -lkp 2000 0.0.0.0
perl -pi -e 's/CHAIN_ID=0x4/CHAIN_ID=0x2/g' .env && perl -pi -e 's/8546/8545/g' truffle-config.js
cd ../../
# Run Guardiand
if [ $(uname -m) = "arm64" ]; then
docker run -d --name guardiand -p 7070:7070 -p 7071:7071 -p 7073:7073 --platform linux/amd64 --hostname guardian-0 --cap-add=IPC_LOCK --entrypoint /guardiand guardian node \
--unsafeDevMode --guardianKey /tmp/bridge.key --publicRPC "[::]:7070" --publicWeb "[::]:7071" --adminSocket /tmp/admin.sock --dataDir /tmp/data \
--ethRPC ws://host.docker.internal:8545 \
--ethContract "0xC89Ce4735882C9F0f0FE26686c53074E09B0D550" \
--bscRPC ws://host.docker.internal:8546 \
--bscContract "0xC89Ce4735882C9F0f0FE26686c53074E09B0D550" \
--polygonRPC ws://host.docker.internal:8545 \
--avalancheRPC ws://host.docker.internal:8545 \
--auroraRPC ws://host.docker.internal:8545 \
--fantomRPC ws://host.docker.internal:8545 \
--oasisRPC ws://host.docker.internal:8545 \
--karuraRPC ws://host.docker.internal:8545 \
--acalaRPC ws://host.docker.internal:8545 \
--klaytnRPC ws://host.docker.internal:8545 \
--celoRPC ws://host.docker.internal:8545 \
--moonbeamRPC ws://host.docker.internal:8545 \
--neonRPC ws://host.docker.internal:8545 \
--terraWS ws://host.docker.internal:8545 \
--terra2WS ws://host.docker.internal:8545 \
--terraLCD https://host.docker.internal:1317 \
--terra2LCD http://host.docker.internal:1317 \
--terraContract terra18vd8fpwxzck93qlwghaj6arh4p7c5n896xzem5 \
--terra2Contract terra18vd8fpwxzck93qlwghaj6arh4p7c5n896xzem5 \
--solanaContract Bridge1p5gheXUvJ6jGWGeCsgPKgnE3YgdGKRVCMY9o \
--solanaWS ws://host.docker.internal:8900 \
--solanaRPC http://host.docker.internal:8899 \
--algorandIndexerRPC ws://host.docker.internal:8545 \
--algorandIndexerToken "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" \
--algorandAlgodToken "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" \
--algorandAlgodRPC https://host.docker.internal:4001 \
--algorandAppID "4"
else
docker run -d --name guardiand --network host --hostname guardian-0 --cap-add=IPC_LOCK --entrypoint /guardiand guardian node \
--unsafeDevMode --guardianKey /tmp/bridge.key --publicRPC "[::]:7070" --publicWeb "[::]:7071" --adminSocket /tmp/admin.sock --dataDir /tmp/data \
--ethRPC ws://localhost:8545 \
--ethContract "0xC89Ce4735882C9F0f0FE26686c53074E09B0D550" \
--bscRPC ws://localhost:8546 \
--bscContract "0xC89Ce4735882C9F0f0FE26686c53074E09B0D550" \
--polygonRPC ws://localhost:8545 \
--avalancheRPC ws://localhost:8545 \
--auroraRPC ws://localhost:8545 \
--fantomRPC ws://localhost:8545 \
--oasisRPC ws://localhost:8545 \
--karuraRPC ws://localhost:8545 \
--acalaRPC ws://localhost:8545 \
--klaytnRPC ws://localhost:8545 \
--celoRPC ws://localhost:8545 \
--moonbeamRPC ws://localhost:8545 \
--neonRPC ws://localhost:8545 \
--terraWS ws://localhost:8545 \
--terra2WS ws://localhost:8545 \
--terraLCD https://terra-terrad:1317 \
--terra2LCD http://localhost:1317 \
--terraContract terra18vd8fpwxzck93qlwghaj6arh4p7c5n896xzem5 \
--terra2Contract terra18vd8fpwxzck93qlwghaj6arh4p7c5n896xzem5 \
--solanaContract Bridge1p5gheXUvJ6jGWGeCsgPKgnE3YgdGKRVCMY9o \
--solanaWS ws://localhost:8900 \
--solanaRPC http://localhost:8899 \
--algorandIndexerRPC ws://localhost:8545 \
--algorandIndexerToken "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" \
--algorandAlgodToken "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" \
--algorandAlgodRPC https://localhost:4001 \
--algorandAppID "4"
fi
echo "Guardiand Running! To look at logs: \"docker logs guardiand -f\""


@@ -1,2 +0,0 @@
node_modules/
xdapp.config.json


@@ -1,18 +0,0 @@
# Messenger
This program passes messages between the various connected chains.
It has a config (xdapp.config.json), which you'll usually find in xDapp projects, that outlines the RPC endpoints for the various networks.
It also has messenger.js, the main orchestration file in charge of carrying out the various tasks for each network, like deploying code and interacting with contracts.
## Test
```
npm run test
```
## xdapp.config.json
TODO
## messenger.js
TODO
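Until those sections are written, here is a minimal sketch of the per-network shape messenger.js expects in xdapp.config.json, assembled from the config and code elsewhere in this commit (the `wormholeChainId` field is read by messenger.js even though the config hunk above doesn't show it):
```js
// One entry in xdapp.config.json's "networks" map, expressed as a JS object.
const evm0 = {
    type: "evm",                  // messenger.js also has branches for "algorand" and "solana"
    wormholeChainId: 2,           // Wormhole chain ID of this network
    rpc: "http://localhost:8545", // local ganache endpoint
    privateKey: "0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d",
    bridgeAddress: "0xC89Ce4735882C9F0f0FE26686c53074E09B0D550", // Wormhole core bridge
    deployedAddress: "",          // filled in by `node messenger.js evm0 deploy`
    emittedVAAs: []               // appended to by the send_msg command
};
```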


@@ -1,318 +0,0 @@
#!/usr/bin/python3
"""
Copyright 2022 Wormhole Project Contributors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from algosdk import account, mnemonic, abi
from algosdk.encoding import decode_address, encode_address
from algosdk.future import transaction
from algosdk.logic import get_application_address
from algosdk.v2client.algod import AlgodClient
from base64 import b64decode
from pyteal.ast import *
from pyteal.compiler import *
from pyteal.ir import *
from pyteal.types import *
from typing import List, Tuple, Dict, Any, Optional, Union
import pprint
import sys
class PendingTxnResponse:
def __init__(self, response: Dict[str, Any]) -> None:
self.poolError: str = response["pool-error"]
self.txn: Dict[str, Any] = response["txn"]
self.applicationIndex: Optional[int] = response.get("application-index")
self.assetIndex: Optional[int] = response.get("asset-index")
self.closeRewards: Optional[int] = response.get("close-rewards")
self.closingAmount: Optional[int] = response.get("closing-amount")
self.confirmedRound: Optional[int] = response.get("confirmed-round")
self.globalStateDelta: Optional[Any] = response.get("global-state-delta")
self.localStateDelta: Optional[Any] = response.get("local-state-delta")
self.receiverRewards: Optional[int] = response.get("receiver-rewards")
self.senderRewards: Optional[int] = response.get("sender-rewards")
self.innerTxns: List[Any] = response.get("inner-txns", [])
self.logs: List[bytes] = [b64decode(l) for l in response.get("logs", [])]
class Account:
"""Represents a private key and address for an Algorand account"""
def __init__(self, privateKey: str) -> None:
self.sk = privateKey
self.addr = account.address_from_private_key(privateKey)
print (privateKey)
print (" " + self.getMnemonic())
print (" " + self.addr)
def getAddress(self) -> str:
return self.addr
def getPrivateKey(self) -> str:
return self.sk
def getMnemonic(self) -> str:
return mnemonic.from_private_key(self.sk)
@classmethod
def FromMnemonic(cls, m: str) -> "Account":
return cls(mnemonic.to_private_key(m))
def fullyCompileContract(client: AlgodClient, contract: Expr) -> bytes:
teal = compileTeal(contract, mode=Mode.Application, version=6)
response = client.compile(teal)
return response
def clear_app():
return Int(1)
devMode = True
def approve_app():
me = Global.current_application_address()
# This bit of magic causes the line number of the assert to show
# up in the small bit of info shown when an assert trips. This
# tells you the actual line number of the assert that caused the
# txn to fail.
def MagicAssert(a) -> Expr:
if devMode:
from inspect import currentframe
return Assert(And(a, Int(currentframe().f_back.f_lineno)))
else:
return Assert(a)
# potential badness
def assert_common_checks(e) -> Expr:
return MagicAssert(And(
e.rekey_to() == Global.zero_address(),
e.close_remainder_to() == Global.zero_address(),
e.asset_close_to() == Global.zero_address()
))
def sendMessage():
return Seq(
InnerTxnBuilder.Begin(),
InnerTxnBuilder.SetFields(
{
TxnField.type_enum: TxnType.ApplicationCall,
TxnField.application_id: App.globalGet(Bytes("coreid")),
TxnField.application_args: [Bytes("publishMessage"), Txn.application_args[1], Itob(Int(0))],
TxnField.accounts: [Txn.accounts[1]],
TxnField.note: Bytes("publishMessage"),
TxnField.fee: Int(0),
}
),
InnerTxnBuilder.Submit(),
# It is the way...
Approve()
)
@Subroutine(TealType.uint64)
def checkedGet(v) -> Expr:
maybe = App.globalGetEx(Txn.application_id(), v)
# If we assert here, it means we have not registered the emitter
return Seq(maybe, MagicAssert(maybe.hasValue()), maybe.value())
def receiveMessage():
off = ScratchVar()
emitter = ScratchVar()
sequence = ScratchVar()
tidx = ScratchVar()
return Seq([
# First, lets make sure we are looking at the correct vaa version...
MagicAssert(Btoi(Extract(Txn.application_args[1], Int(0), Int(1))) == Int(1)),
# From the vaa, I will grab the emitter and sequence number
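# (VAA v1 layout: 1-byte version, 4-byte guardian set index, 1-byte signature
# count at offset 5, then 66 bytes per signature; the body starts with a
# 4-byte timestamp and a 4-byte nonce, so the chain/emitter pair lives at
# sig_count * 66 + 6 + 8 = sig_count * 66 + 14)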
off.store(Btoi(Extract(Txn.application_args[1], Int(5), Int(1))) * Int(66) + Int(14)), # The offset of the chain/emitter
emitter.store(Extract(Txn.application_args[1], off.load(), Int(34))),
sequence.store(Btoi(Extract(Txn.application_args[1], off.load() + Int(34), Int(8)))),
# Should be going up and never repeating.. If you want
# something that can take messages in any order, look at
# checkForDuplicate() in the token_bridge contract. It is
# kind of a heavy lift but it can be done
MagicAssert(sequence.load() > checkedGet(emitter.load())),
App.globalPut(emitter.load(), sequence.load()),
# Now lets check to see if this vaa was actually signed by
# the guardians. We do this by confirming that the
# previous txn in the group was to the wormhole core and
# against the verifyVAA method. If that passed, then the
# vaa must be legit
MagicAssert(Txn.group_index() > Int(0)),
tidx.store(Txn.group_index() - Int(1)),
MagicAssert(And(
# Lets see if the vaa we are about to process was actually verified by the core
Gtxn[tidx.load()].type_enum() == TxnType.ApplicationCall,
Gtxn[tidx.load()].application_id() == App.globalGet(Bytes("coreid")),
Gtxn[tidx.load()].application_args[0] == Bytes("verifyVAA"),
Gtxn[tidx.load()].sender() == Txn.sender(),
# we are all taking about the same vaa?
Gtxn[tidx.load()].application_args[1] == Txn.application_args[1],
# We all opted into the same accounts?
Gtxn[tidx.load()].accounts[0] == Txn.accounts[0],
Gtxn[tidx.load()].accounts[1] == Txn.accounts[1],
Gtxn[tidx.load()].accounts[2] == Txn.accounts[2],
)),
# check for hackery
assert_common_checks(Gtxn[tidx.load()]),
assert_common_checks(Txn),
# ... boiler plate is done...
# What is the offset into the vaa of the actual payload?
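# (6-byte header + sig_count * 66 bytes of signatures + the 51-byte body
# prefix: timestamp 4 + nonce 4 + chain 2 + emitter 32 + sequence 8 +
# consistency 1, so the payload begins at sig_count * 66 + 57)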
off.store(Btoi(Extract(Txn.application_args[1], Int(5), Int(1))) * Int(66) + Int(57)),
# Lets extract it and log it...
MagicAssert(Len(Txn.application_args[1]) > off.load()),
Log(Extract(Txn.application_args[1], off.load(), Len(Txn.application_args[1]) - off.load())),
# It is the way...
Approve()
])
# You could wrap your governance in a vaa from a trusted
# governance emitter. For the purposes of this demo, we are
# skipping that. Again, you could look at the core contract or
# the portal contract in wormhole to see examples of doing
# governance with vaa's.
def registerEmitter():
return Seq([
# The chain comes in as 8 bytes, we will take the last two bytes, append the emitter to it, and set it to zero
App.globalPut(Concat(Txn.application_args[2], Txn.application_args[1]), Int(0)),
# It is the way...
Approve()
])
METHOD = Txn.application_args[0]
router = Cond(
[METHOD == Bytes("registerEmitter"), registerEmitter()],
[METHOD == Bytes("receiveMessage"), receiveMessage()],
[METHOD == Bytes("sendMessage"), sendMessage()],
)
on_create = Seq( [
App.globalPut(Bytes("coreid"), Btoi(Txn.application_args[0])),
Return(Int(1))
])
on_update = Seq( [
Return(Int(0))
] )
on_delete = Seq( [
Return(Int(0))
] )
on_optin = Seq( [
Return(Int(1))
] )
return Cond(
[Txn.application_id() == Int(0), on_create],
[Txn.on_completion() == OnComplete.UpdateApplication, on_update],
[Txn.on_completion() == OnComplete.DeleteApplication, on_delete],
[Txn.on_completion() == OnComplete.OptIn, on_optin],
[Txn.on_completion() == OnComplete.NoOp, router]
)
def get_test_app(client: AlgodClient) -> Tuple[bytes, bytes]:
APPROVAL_PROGRAM = fullyCompileContract(client, approve_app())
CLEAR_STATE_PROGRAM = fullyCompileContract(client, clear_app())
return APPROVAL_PROGRAM, CLEAR_STATE_PROGRAM
def waitForTransaction(
client: AlgodClient, txID: str, timeout: int = 10
) -> PendingTxnResponse:
lastStatus = client.status()
lastRound = lastStatus["last-round"]
startRound = lastRound
while lastRound < startRound + timeout:
pending_txn = client.pending_transaction_info(txID)
if pending_txn.get("confirmed-round", 0) > 0:
return PendingTxnResponse(pending_txn)
if pending_txn["pool-error"]:
raise Exception("Pool error: {}".format(pending_txn["pool-error"]))
lastStatus = client.status_after_block(lastRound + 1)
lastRound += 1
raise Exception(
"Transaction {} not confirmed after {} rounds".format(txID, timeout)
)
if __name__ == "__main__":
#algod_address = "https://testnet-api.algonode.cloud"
#algod_address = "https://mainnet-api.algonode.cloud"
algod_address = sys.argv[3]
algod_token="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
client = AlgodClient(algod_token, algod_address)
approval, clear = get_test_app(client)
globalSchema = transaction.StateSchema(num_uints=2, num_byte_slices=40)
# localSchema is IMPORTANT.. you need 16 byte slices
localSchema = transaction.StateSchema(num_uints=0, num_byte_slices=16)
sender = Account.FromMnemonic(sys.argv[2])
coreid = int(sys.argv[1])
txn = transaction.ApplicationCreateTxn(
sender=sender.getAddress(),
on_complete=transaction.OnComplete.NoOpOC,
approval_program=b64decode(approval["result"]),
clear_program=b64decode(clear["result"]),
global_schema=globalSchema,
local_schema=localSchema,
sp=client.suggested_params(),
app_args = [coreid]
)
signedTxn = txn.sign(sender.getPrivateKey())
print("creating app")
client.send_transaction(signedTxn)
response = waitForTransaction(client, signedTxn.get_txid())
messenger=response.applicationIndex
#print("done.. Handing it some money")
txn = transaction.PaymentTxn(sender = sender.getAddress(), sp = client.suggested_params(), receiver = get_application_address(response.applicationIndex), amt = 100000)
signedTxn = txn.sign(sender.getPrivateKey())
client.send_transaction(signedTxn)
#pprint.pprint({"testapp": str(testapp), "address": get_application_address(testapp), "emitterAddress": decode_address(get_application_address(testapp)).hex()})
print("App ID:", messenger)
print("Address: ", get_application_address(messenger))


@@ -1,12 +0,0 @@
{
"name": "algorand",
"version": "1.0.0",
"description": "",
"main": "index.js",
"devDependencies": {},
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "MIT"
}


@@ -1,27 +0,0 @@
attrs==21.4.0
cffi==1.15.0
colorama==0.4.4
execnet==1.9.0
future-fstrings==1.2.0
iniconfig==1.1.1
msgpack==1.0.3
packaging==21.3
pluggy==1.0.0
py==1.11.0
pycparser==2.21
pycryptodomex==3.12.0
pydantic==1.9.0
PyNaCl==1.5.0
pyparsing==3.0.6
pyteal==v0.10.1
py-algorand-sdk==1.10.0b1
pytest==6.2.5
pytest-depends==1.0.1
pytest-forked==1.4.0
pytest-xdist==2.5.0
PyYAML==6.0
toml==0.10.2
typing-extensions==4.0.1
uvarint==1.2.0
eth_abi==2.1.1
coincurve==16.0.0


@@ -1,2 +0,0 @@
cache/
out/


@@ -1,7 +0,0 @@
[default]
src = 'src'
out = 'out'
libs = ['lib']
solc_version = '0.8.10'
# See more config options https://github.com/foundry-rs/foundry/tree/master/config

@@ -1 +0,0 @@
Subproject commit 1680d7fb3e00b7b197a7336e7c88e838c7e6a3ec


@@ -1,53 +0,0 @@
//SPDX-License-Identifier: Unlicense
pragma solidity ^0.8.0;
import "./Wormhole/IWormhole.sol";
contract Messenger {
string private current_msg;
address private wormhole_core_bridge_address = address(0xC89Ce4735882C9F0f0FE26686c53074E09B0D550);
IWormhole core_bridge = IWormhole(wormhole_core_bridge_address);
uint32 nonce = 0;
mapping(uint16 => bytes32) _applicationContracts;
address owner;
mapping(bytes32 => bool) _completedMessages;
constructor(){
owner = msg.sender;
}
function sendMsg(bytes memory str) public returns (uint64 sequence) {
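// publishMessage(nonce, payload, consistencyLevel): the final argument tells the guardians how finalized the emitting block must be before they sign the VAA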
sequence = core_bridge.publishMessage(nonce, str, 1);
nonce = nonce+1;
}
function receiveEncodedMsg(bytes memory encodedMsg) public {
(IWormhole.VM memory vm, bool valid, string memory reason) = core_bridge.parseAndVerifyVM(encodedMsg);
//1. Check Wormhole Guardian Signatures
// If the VM is NOT valid, will return the reason it's not valid
// If the VM IS valid, reason will be blank
require(valid, reason);
//2. Check if the Emitter Chain contract is registered
require(_applicationContracts[vm.emitterChainId] == vm.emitterAddress, "Invalid Emitter Address!");
//3. Check that the message hasn't already been processed
require(!_completedMessages[vm.hash], "Message already processed");
_completedMessages[vm.hash] = true;
//Do the thing
current_msg = string(vm.payload);
}
function getCurrentMsg() public view returns (string memory){
return current_msg;
}
/**
Registers its sibling applications on other chains as the only ones that can send this instance messages
*/
function registerApplicationContracts(uint16 chainId, bytes32 applicationAddr) public {
require(msg.sender == owner, "Only owner can register new chains!");
_applicationContracts[chainId] = applicationAddr;
}
}
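For context, a minimal sketch of driving this contract end to end, in the style of the messenger.js orchestration file elsewhere in this commit (assumes ethers v5, a Forge-built ABI, and placeholder values for the deployed address and the guardian-signed VAA):
```js
import fs from "fs";
import { ethers } from "ethers";

const deployedAddress = "<your Messenger address>";          // placeholder
const vaaBase64 = "<signed VAA from the guardian REST API>"; // placeholder

const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");
const signer = new ethers.Wallet(process.env.PRIVATE_KEY, provider);
const abi = JSON.parse(fs.readFileSync("./chains/evm/out/Messenger.sol/Messenger.json").toString()).abi;
const messenger = new ethers.Contract(deployedAddress, abi, signer);

// Emit a message; the core bridge log from this tx carries the sequence number.
await (await messenger.sendMsg(Buffer.from("Hello World!"))).wait();

// On the destination chain's Messenger, submit the guardian-signed VAA.
await messenger.receiveEncodedMsg(Buffer.from(vaaBase64, "base64"));
console.log(await messenger.getCurrentMsg());
```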


@@ -1,42 +0,0 @@
// contracts/Messages.sol
// SPDX-License-Identifier: Apache 2
pragma solidity ^0.8.0;
import "./Structs.sol";
interface IWormhole is Structs {
event LogMessagePublished(address indexed sender, uint64 sequence, uint32 nonce, bytes payload, uint8 consistencyLevel);
function publishMessage(
uint32 nonce,
bytes memory payload,
uint8 consistencyLevel
) external payable returns (uint64 sequence);
function parseAndVerifyVM(bytes calldata encodedVM) external view returns (Structs.VM memory vm, bool valid, string memory reason);
function verifyVM(Structs.VM memory vm) external view returns (bool valid, string memory reason);
function verifySignatures(bytes32 hash, Structs.Signature[] memory signatures, Structs.GuardianSet memory guardianSet) external pure returns (bool valid, string memory reason) ;
function parseVM(bytes memory encodedVM) external pure returns (Structs.VM memory vm);
function getGuardianSet(uint32 index) external view returns (Structs.GuardianSet memory) ;
function getCurrentGuardianSetIndex() external view returns (uint32) ;
function getGuardianSetExpiry() external view returns (uint32) ;
function governanceActionIsConsumed(bytes32 hash) external view returns (bool) ;
function isInitialized(address impl) external view returns (bool) ;
function chainId() external view returns (uint16) ;
function governanceChainId() external view returns (uint16);
function governanceContract() external view returns (bytes32);
function messageFee() external view returns (uint256) ;
}


@@ -1,40 +0,0 @@
// contracts/Structs.sol
// SPDX-License-Identifier: Apache 2
pragma solidity ^0.8.0;
interface Structs {
struct Provider {
uint16 chainId;
uint16 governanceChainId;
bytes32 governanceContract;
}
struct GuardianSet {
address[] keys;
uint32 expirationTime;
}
struct Signature {
bytes32 r;
bytes32 s;
uint8 v;
uint8 guardianIndex;
}
struct VM {
uint8 version;
uint32 timestamp;
uint32 nonce;
uint16 emitterChainId;
bytes32 emitterAddress;
uint64 sequence;
uint8 consistencyLevel;
bytes payload;
uint32 guardianSetIndex;
Signature[] signatures;
bytes32 hash;
}
}


@@ -1,7 +0,0 @@
.anchor
.DS_Store
target
**/*.rs.bk
node_modules
test-ledger


@@ -1,8 +0,0 @@
.anchor
.DS_Store
target
node_modules
dist
build
test-ledger


@@ -1,14 +0,0 @@
[features]
seeds = false
[programs.localnet]
solana = "Fg6PaFpoGXkYsidMpWTK6W2BeZ7FEfcYkg476zPFsLnS"
[registry]
url = "https://anchor.projectserum.com"
[provider]
cluster = "localnet"
wallet = "/Users/spacemandev/.config/solana/id.json"
[scripts]
test = "yarn run ts-mocha -p ./tsconfig.json -t 1000000 tests/**/*.ts"

File diff suppressed because it is too large.


@@ -1,4 +0,0 @@
[workspace]
members = [
"programs/*"
]


@@ -1,12 +0,0 @@
// Migrations are an early feature. Currently, they're nothing more than this
// single deploy script that's invoked from the CLI, injecting a provider
// configured from the workspace's Anchor.toml.
const anchor = require("@project-serum/anchor");
module.exports = async function (provider) {
// Configure client to use the provider.
anchor.setProvider(provider);
// Add your deploy script here.
};


@@ -1,19 +0,0 @@
{
"scripts": {
"lint:fix": "prettier */*.js \"*/**/*{.js,.ts}\" -w",
"lint": "prettier */*.js \"*/**/*{.js,.ts}\" --check"
},
"dependencies": {
"@project-serum/anchor": "^0.24.2"
},
"devDependencies": {
"chai": "^4.3.4",
"mocha": "^9.0.3",
"ts-mocha": "^8.0.0",
"@types/bn.js": "^5.1.0",
"@types/chai": "^4.3.0",
"@types/mocha": "^9.0.0",
"typescript": "^4.3.5",
"prettier": "^2.6.2"
}
}


@@ -1,26 +0,0 @@
[package]
name = "messenger"
version = "0.1.0"
description = "Simple messenger xdapp"
edition = "2021"
[lib]
crate-type = ["cdylib", "lib"]
name = "solana"
[features]
no-entrypoint = []
no-idl = []
no-log-ix-name = []
cpi = ["no-entrypoint"]
default = []
[profile.release]
overflow-checks = true
[dependencies]
anchor-lang = "0.24.2"
sha3 = "0.10.1"
byteorder = "1.4.3"
borsh = "0.9.3"
hex = "0.4.3"


@@ -1,2 +0,0 @@
[target.bpfel-unknown-unknown.dependencies.std]
features = []


@@ -1 +0,0 @@
pub const CORE_BRIDGE_ADDRESS: &str = "Bridge1p5gheXUvJ6jGWGeCsgPKgnE3YgdGKRVCMY9o";


@@ -1,146 +0,0 @@
use anchor_lang::prelude::*;
use crate::constants::*;
use crate::state::*;
use std::str::FromStr;
use anchor_lang::solana_program::sysvar::{rent, clock};
use crate::wormhole::*;
use hex::decode;
#[derive(Accounts)]
pub struct Initialize<'info> {
#[account(
init,
seeds=[b"config".as_ref()],
payer=owner,
bump,
space=8+32+32+1024
)]
pub config: Account<'info, Config>,
#[account(mut)]
pub owner: Signer<'info>,
pub system_program: Program<'info, System>
}
#[derive(Accounts)]
#[instruction(chain_id:u16, emitter_addr:String)]
pub struct RegisterChain<'info> {
#[account(mut)]
pub owner: Signer<'info>,
pub system_program: Program<'info, System>,
#[account(
constraint = config.owner == owner.key()
)]
pub config: Account<'info, Config>,
#[account(
init,
seeds=[b"EmitterAddress".as_ref(), chain_id.to_be_bytes().as_ref()],
payer=owner,
bump,
space=8+2+256
)]
pub emitter_acc: Account<'info, EmitterAddrAccount>,
}
#[derive(Accounts)]
pub struct SendMsg<'info>{
#[account(
constraint = core_bridge.key() == Pubkey::from_str(CORE_BRIDGE_ADDRESS).unwrap()
)]
/// CHECK: If someone passes in the wrong account, Guardians won't read the message
pub core_bridge: AccountInfo<'info>,
#[account(
seeds = [
b"Bridge".as_ref()
],
bump,
seeds::program = Pubkey::from_str(CORE_BRIDGE_ADDRESS).unwrap(),
mut
)]
/// CHECK: If someone passes in the wrong account, Guardians won't read the message
pub wormhole_config: AccountInfo<'info>,
#[account(
seeds = [
b"fee_collector".as_ref()
],
bump,
seeds::program = Pubkey::from_str(CORE_BRIDGE_ADDRESS).unwrap(),
mut
)]
/// CHECK: If someone passes in the wrong account, Guardians won't read the message
pub wormhole_fee_collector: AccountInfo<'info>,
#[account(
seeds = [
b"emitter".as_ref(),
],
bump,
mut
)]
/// CHECK: If someone passes in the wrong account, Guardians won't read the message
pub wormhole_derived_emitter: AccountInfo<'info>,
#[account(
seeds = [
b"Sequence".as_ref(),
wormhole_derived_emitter.key().to_bytes().as_ref()
],
bump,
seeds::program = Pubkey::from_str(CORE_BRIDGE_ADDRESS).unwrap(),
mut
)]
/// CHECK: If someone passes in the wrong account, Guardians won't read the message
pub wormhole_sequence: AccountInfo<'info>,
#[account(mut)]
pub wormhole_message_key: Signer<'info>,
#[account(mut)]
pub payer: Signer<'info>,
pub system_program: Program<'info, System>,
#[account(
constraint = clock.key() == clock::id()
)]
/// CHECK: The account constraint will make sure it's the right clock var
pub clock: AccountInfo<'info>,
#[account(
constraint = rent.key() == rent::id()
)]
/// CHECK: The account constraint will make sure it's the right rent var
pub rent: AccountInfo<'info>,
#[account(mut)]
pub config: Account<'info, Config>,
}
#[derive(Accounts)]
#[instruction()]
pub struct ConfirmMsg<'info>{
#[account(mut)]
pub payer: Signer<'info>,
pub system_program: Program<'info, System>,
#[account(
init,
seeds=[
&decode(&emitter_acc.emitter_addr.as_str()).unwrap()[..],
emitter_acc.chain_id.to_be_bytes().as_ref(),
(PostedMessageData::try_from_slice(&core_bridge_vaa.data.borrow())?.0).sequence.to_be_bytes().as_ref()
],
payer=payer,
bump,
space=8
)]
pub processed_vaa: Account<'info, ProcessedVAA>,
pub emitter_acc: Account<'info, EmitterAddrAccount>,
/// This requires some fancy hashing, so confirm its derived address in the function itself.
#[account(
constraint = core_bridge_vaa.to_account_info().owner == &Pubkey::from_str(CORE_BRIDGE_ADDRESS).unwrap()
)]
/// CHECK: This account is owned by Core Bridge so we trust it
pub core_bridge_vaa: AccountInfo<'info>,
#[account(mut)]
pub config: Account<'info, Config>,
}
#[derive(Accounts)]
pub struct Debug<'info>{
#[account(
constraint = core_bridge_vaa.to_account_info().owner == &Pubkey::from_str(CORE_BRIDGE_ADDRESS).unwrap()
)]
/// CHECK: This account is owned by Core Bridge so we trust it
pub core_bridge_vaa: AccountInfo<'info>,
}


@@ -1,10 +0,0 @@
use anchor_lang::prelude::*;
#[error_code]
pub enum MessengerError {
#[msg("Posted VAA Key Mismatch")]
VAAKeyMismatch,
#[msg("Posted VAA Emitter Chain ID or Address Mismatch")]
VAAEmitterMismatch,
}


@@ -1,164 +0,0 @@
use anchor_lang::prelude::*;
use anchor_lang::solana_program::instruction::Instruction;
use anchor_lang::solana_program::system_instruction::transfer;
use anchor_lang::solana_program::borsh::try_from_slice_unchecked;
use sha3::Digest;
use byteorder::{
BigEndian,
WriteBytesExt,
};
use std::io::{
Cursor,
Write,
};
use std::str::FromStr;
use hex::decode;
mod context;
mod constants;
mod state;
mod wormhole;
mod errors;
use wormhole::*;
use context::*;
use constants::*;
use errors::*;
declare_id!("Fg6PaFpoGXkYsidMpWTK6W2BeZ7FEfcYkg476zPFsLnS");
#[program]
pub mod messenger {
use anchor_lang::solana_program::program::invoke_signed;
use super::*;
pub fn initialize(ctx: Context<Initialize>) -> Result<()> {
ctx.accounts.config.owner = ctx.accounts.owner.key();
ctx.accounts.config.nonce = 1;
Ok(())
}
pub fn register_chain(ctx:Context<RegisterChain>, chain_id:u16, emitter_addr:String) -> Result<()> {
ctx.accounts.emitter_acc.chain_id = chain_id;
ctx.accounts.emitter_acc.emitter_addr = emitter_addr;
Ok(())
}
pub fn send_msg(ctx:Context<SendMsg>, msg:String) -> Result<()> {
//Look Up Fee
let bridge_data:BridgeData = try_from_slice_unchecked(&ctx.accounts.wormhole_config.data.borrow_mut())?;
//Send Fee
invoke_signed(
&transfer(
&ctx.accounts.payer.key(),
&ctx.accounts.wormhole_fee_collector.key(),
bridge_data.config.fee
),
&[
ctx.accounts.payer.to_account_info(),
ctx.accounts.wormhole_fee_collector.to_account_info()
],
&[]
)?;
//Send Post Msg Tx
let sendmsg_ix = Instruction {
program_id: ctx.accounts.core_bridge.key(),
accounts: vec![
AccountMeta::new(ctx.accounts.wormhole_config.key(), false),
AccountMeta::new(ctx.accounts.wormhole_message_key.key(), true),
AccountMeta::new_readonly(ctx.accounts.wormhole_derived_emitter.key(), true),
AccountMeta::new(ctx.accounts.wormhole_sequence.key(), false),
AccountMeta::new(ctx.accounts.payer.key(), true),
AccountMeta::new(ctx.accounts.wormhole_fee_collector.key(), false),
AccountMeta::new_readonly(ctx.accounts.clock.key(), false),
AccountMeta::new_readonly(ctx.accounts.rent.key(), false),
AccountMeta::new_readonly(ctx.accounts.system_program.key(), false),
],
data: (
wormhole::Instruction::PostMessage,
PostMessageData {
nonce: ctx.accounts.config.nonce,
payload: msg.as_bytes().try_to_vec()?,
consistency_level: wormhole::ConsistencyLevel::Confirmed,
},
).try_to_vec()?,
};
invoke_signed(
&sendmsg_ix,
&[
ctx.accounts.wormhole_config.to_account_info(),
ctx.accounts.wormhole_message_key.to_account_info(),
ctx.accounts.wormhole_derived_emitter.to_account_info(),
ctx.accounts.wormhole_sequence.to_account_info(),
ctx.accounts.payer.to_account_info(),
ctx.accounts.wormhole_fee_collector.to_account_info(),
ctx.accounts.clock.to_account_info(),
ctx.accounts.rent.to_account_info(),
ctx.accounts.system_program.to_account_info(),
],
&[
&[
&b"emitter".as_ref(),
&[*ctx.bumps.get("wormhole_derived_emitter").unwrap()]
]
]
)?;
ctx.accounts.config.nonce += 1;
Ok(())
}
pub fn confirm_msg(ctx:Context<ConfirmMsg>) -> Result<()> {
//Hash a VAA Extract and derive a VAA Key
let vaa = PostedMessageData::try_from_slice(&ctx.accounts.core_bridge_vaa.data.borrow())?.0;
let serialized_vaa = serialize_vaa(&vaa);
let mut h = sha3::Keccak256::default();
h.write(serialized_vaa.as_slice()).unwrap();
let vaa_hash: [u8; 32] = h.finalize().into();
let (vaa_key, _) = Pubkey::find_program_address(&[
b"PostedVAA",
&vaa_hash
], &Pubkey::from_str(CORE_BRIDGE_ADDRESS).unwrap());
if ctx.accounts.core_bridge_vaa.key() != vaa_key {
return err!(MessengerError::VAAKeyMismatch);
}
// Already checked that the SignedVaa is owned by core bridge in account constraint logic
//Check that the emitter chain and address match up with the vaa
if vaa.emitter_chain != ctx.accounts.emitter_acc.chain_id ||
vaa.emitter_address != &decode(&ctx.accounts.emitter_acc.emitter_addr.as_str()).unwrap()[..] {
return err!(MessengerError::VAAEmitterMismatch)
}
ctx.accounts.config.current_msg = String::from_utf8(vaa.payload).unwrap();
Ok(())
}
pub fn debug(ctx:Context<Debug>) -> Result<()> {
let vaa = PostedMessageData::try_from_slice(&ctx.accounts.core_bridge_vaa.data.borrow())?.0;
msg!("{:?}", vaa);
Ok(())
}
}
// Convert a full VAA structure into the serialization of its unique components, this structure is
// what is hashed and verified by Guardians.
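// Body layout, all big-endian: timestamp (u32) | nonce (u32) | emitter_chain (u16) |
// emitter_address ([u8; 32]) | sequence (u64) | consistency_level (u8) | payload.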
pub fn serialize_vaa(vaa: &MessageData) -> Vec<u8> {
let mut v = Cursor::new(Vec::new());
v.write_u32::<BigEndian>(vaa.vaa_time).unwrap();
v.write_u32::<BigEndian>(vaa.nonce).unwrap();
v.write_u16::<BigEndian>(vaa.emitter_chain.clone() as u16).unwrap();
v.write(&vaa.emitter_address).unwrap();
v.write_u64::<BigEndian>(vaa.sequence).unwrap();
v.write_u8(vaa.consistency_level).unwrap();
v.write(&vaa.payload).unwrap();
v.into_inner()
}


@@ -1,20 +0,0 @@
use anchor_lang::prelude::*;
#[account]
#[derive(Default)]
pub struct Config{
pub owner: Pubkey,
pub nonce: u32,
pub current_msg: String
}
#[account]
#[derive(Default)]
pub struct EmitterAddrAccount{
pub chain_id: u16,
pub emitter_addr: String
}
//Empty account, we just need to check that it *exists*
#[account]
pub struct ProcessedVAA {}


@@ -1,111 +0,0 @@
use anchor_lang::prelude::*;
use borsh::{BorshDeserialize, BorshSerialize};
use std::{
io::Write,
};
#[derive(AnchorDeserialize, AnchorSerialize)]
pub struct PostMessageData {
/// Unique nonce for this message
pub nonce: u32,
/// Message payload
pub payload: Vec<u8>,
/// Commitment Level required for an attestation to be produced
pub consistency_level: ConsistencyLevel,
}
#[derive(AnchorDeserialize, AnchorSerialize)]
pub enum ConsistencyLevel {
Confirmed,
Finalized
}
#[derive(AnchorDeserialize, AnchorSerialize)]
pub enum Instruction{
Initialize,
PostMessage,
PostVAA,
SetFees,
TransferFees,
UpgradeContract,
UpgradeGuardianSet,
VerifySignatures,
}
#[derive(AnchorDeserialize, AnchorSerialize, Clone)]
pub struct BridgeData {
/// The current guardian set index, used to decide which signature sets to accept.
pub guardian_set_index: u32,
/// Lamports in the collection account
pub last_lamports: u64,
/// Bridge configuration, which is set once upon initialization.
pub config: BridgeConfig,
}
#[derive(AnchorDeserialize, AnchorSerialize, Clone)]
pub struct BridgeConfig {
/// Period for how long a guardian set is valid after it has been replaced by a new one. This
/// guarantees that VAAs issued by that set can still be submitted for a certain period. In
/// this period we still trust the old guardian set.
pub guardian_set_expiration_time: u32,
/// Amount of lamports that needs to be paid to the protocol to post a message
pub fee: u64,
}
#[derive(Debug)]
#[repr(transparent)]
pub struct PostedMessageData(pub MessageData);
#[derive(Debug, Default, BorshDeserialize, BorshSerialize)]
pub struct MessageData {
/// Header of the posted VAA
pub vaa_version: u8,
/// Level of consistency requested by the emitter
pub consistency_level: u8,
/// Time the vaa was submitted
pub vaa_time: u32,
/// Account where signatures are stored
pub vaa_signature_account: Pubkey,
/// Time the posted message was created
pub submission_time: u32,
/// Unique nonce for this message
pub nonce: u32,
/// Sequence number of this message
pub sequence: u64,
/// Emitter of the message
pub emitter_chain: u16,
/// Emitter of the message
pub emitter_address: [u8; 32],
/// Message payload
pub payload: Vec<u8>,
}
impl AnchorSerialize for PostedMessageData {
fn serialize<W: Write>(&self, writer: &mut W) -> std::io::Result<()> {
writer.write(b"msg")?;
BorshSerialize::serialize(&self.0, writer)
}
}
impl AnchorDeserialize for PostedMessageData {
fn deserialize(buf: &mut &[u8]) -> std::io::Result<Self> {
*buf = &buf[3..];
Ok(PostedMessageData(
<MessageData as BorshDeserialize>::deserialize(buf)?,
))
}
}


@@ -1,16 +0,0 @@
import * as anchor from "@project-serum/anchor";
import { Program } from "@project-serum/anchor";
import { Solana } from "../target/types/solana";
describe("solana", () => {
// Configure the client to use the local cluster.
anchor.setProvider(anchor.AnchorProvider.env());
const program = anchor.workspace.Solana as Program<Solana>;
it("Is initialized!", async () => {
// Add your test here.
const tx = await program.methods.initialize().rpc();
console.log("Your transaction signature", tx);
});
});


@@ -1,10 +0,0 @@
{
"compilerOptions": {
"types": ["mocha", "chai"],
"typeRoots": ["./node_modules/@types"],
"lib": ["es2015"],
"module": "commonjs",
"target": "es6",
"esModuleInterop": true
}
}

File diff suppressed because it is too large.


@@ -1,334 +0,0 @@
import { exec } from "child_process";
import fs from "fs";
import { ethers } from 'ethers';
import algo from "algosdk";
import {
getEmitterAddressAlgorand,
getEmitterAddressEth,
getEmitterAddressSolana,
getEmitterAddressTerra,
parseSequenceFromLogEth,
parseSequenceFromLogAlgorand,
uint8ArrayToHex
} from "@certusone/wormhole-sdk";
import {
optin,
submitVAAHeader
} from "@certusone/wormhole-sdk/lib/cjs/algorand/Algorand.js";
import fetch from 'node-fetch';
async function main() {
let config = JSON.parse(fs.readFileSync('./xdapp.config.json').toString());
let network = config.networks[process.argv[2]];
if (!network){
throw new Error("Network not defined in config file.")
}
if(process.argv[3] == "deploy") {
if(network.type == "evm"){
console.log(`Deploying EVM network: ${process.argv[2]} to ${network.rpc}`);
exec(
`cd chains/evm && forge build && forge create --legacy --rpc-url ${network.rpc} --private-key ${network.privateKey} src/Messenger.sol:Messenger && exit`,
((err, out, errStr) => {
if(err){
throw new Error(err);
}
if(out) {
console.log(out);
network.deployedAddress = out.split("Deployed to: ")[1].split('\n')[0].trim();
network.emittedVAAs = []; //Resets the emittedVAAs
config.networks[process.argv[2]] = network;
fs.writeFileSync('./xdapp.config.json', JSON.stringify(config, null, 4));
}
})
);
} else if (network.type == "algorand"){
console.log(`Deploying Algorand network: ${process.argv[2]} to ${network.rpc}`);
exec(
`cd chains/algorand && python3 messenger.py ${network.bridgeAddress} '${network.mnemonic}' ${network.rpc}:${network.port}`,
((err,out,errStr) => {
if(err) {
throw new Error(err);
}
if(out){
console.log(out);
network.appId = parseInt(out.split("App ID:")[1].split("Address")[0].trim());
network.deployedAddress = out.split("Address: ")[1].trim();
network.emittedVAAs = [];
config.networks[process.argv[2]] = network;
fs.writeFileSync('./xdapp.config.json', JSON.stringify(config, null, 4));
}
})
)
} else if (network.type == "solana") {
//node exec solana deployer
/**
* solana config set --url $TILT_RPC_IP:8899
cd solana-project && anchor build && solana airdrop 100 -k test_keypair.json && sleep 5 && cd ../
cd solana-deployer && cargo build --release && cargo run --release -- -m=8 --payer=../solana-project/test_keypair.json --program-kp-path=../solana-project/solana_project-keypair.json --program-path=../solana-project/target/deploy/solana_project.so -r=$TILT_RPC_IP:8899 -s=1 -t=5 --thread-count=8 && cd ../
sleep 10
*/
} else {
throw new Error("Invalid Network Type!");
}
} else if (process.argv[3] == "register_chain") {
if(!network.deployedAddress){
throw new Error("Deploy to this network first!");
}
const targetNetwork = config.networks[process.argv[4]];
if(!targetNetwork.deployedAddress){
throw new Error("Target Network not deployed yet!");
}
let emitterAddr;
if(targetNetwork.type == "evm"){
emitterAddr = Buffer.from(getEmitterAddressEth(targetNetwork.deployedAddress), "hex");
} else if (targetNetwork.type == "algorand") {
emitterAddr = Buffer.from(getEmitterAddressAlgorand(targetNetwork.appId), "hex");
} else if (targetNetwork.type == "solana") {
emitterAddr = Buffer.from(await getEmitterAddressSolana(targetNetwork.deployedAddress), "hex");
} else if (targetNetwork.type == "terra") {
emitterAddr = Buffer.from(await getEmitterAddressTerra(targetNetwork.deployedAddress), "hex");
}
if(network.type == "evm"){
const signer = new ethers.Wallet(network.privateKey)
.connect(new ethers.providers.JsonRpcProvider(network.rpc));
const messenger = new ethers.Contract(
network.deployedAddress,
JSON.parse(fs.readFileSync('./chains/evm/out/Messenger.sol/Messenger.json').toString()).abi,
signer,
{
gasPrice: '2000000000'
}
);
await messenger.registerApplicationContracts(targetNetwork.wormholeChainId, emitterAddr);
} else if (network.type == "algorand"){
const algodClient = new algo.Algodv2(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
network.rpc,
network.port
);
const sender = algo.mnemonicToSecretKey(network.mnemonic);
const params = await algodClient.getTransactionParams().do();
let txs = [];
txs.push({
tx: algo.makeApplicationCallTxnFromObject({
appArgs: [
Uint8Array.from(Buffer.from("registerEmitter")),
Uint8Array.from(emitterAddr),
algo.bigIntToBytes(BigInt(targetNetwork.wormholeChainId), 2)
],
appIndex: network.appId,
from: sender.addr,
onComplete: algo.OnApplicationComplete.NoOpOC,
suggestedParams: params,
}),
signer: null,
});
await signSendAndConfirmAlgorand(algodClient, txs, sender);
}
console.log(`Network(${process.argv[2]}) Registered Emitter: ${targetNetwork.deployedAddress} from Chain: ${targetNetwork.wormholeChainId}`);
} else if (process.argv[3] == "send_msg") {
if(!network.deployedAddress){
throw new Error("Deploy to this network first!");
}
if(network.type == "evm"){
const signer = new ethers.Wallet(network.privateKey)
.connect(new ethers.providers.JsonRpcProvider(network.rpc));
const messenger = new ethers.Contract(
network.deployedAddress,
JSON.parse(fs.readFileSync('./chains/evm/out/Messenger.sol/Messenger.json').toString()).abi,
signer,
{
gasPrice: '2000000000'
}
);
const tx = await (await messenger.sendMsg(Buffer.from(process.argv[4]), {gasPrice: '2000000000'})).wait();
await new Promise((r) => setTimeout(r, 5000));
const emitterAddr = getEmitterAddressEth(messenger.address);
const seq = parseSequenceFromLogEth(
tx,
network.bridgeAddress
);
console.log(`${config.wormhole.restAddress}/v1/signed_vaa/${network.wormholeChainId}/${emitterAddr}/${seq}`);
const vaaBytes = await (
await fetch(
`${config.wormhole.restAddress}/v1/signed_vaa/${network.wormholeChainId}/${emitterAddr}/${seq}`
)
).json();
if(!network.emittedVAAs){
network.emittedVAAs = [vaaBytes.vaaBytes];
} else {
network.emittedVAAs.push(vaaBytes.vaaBytes);
}
config.networks[process.argv[2]] = network;
fs.writeFileSync('./xdapp.config.json', JSON.stringify(config, null, 2));
console.log(`Network(${process.argv[2]}) Emitted VAA: `, vaaBytes.vaaBytes);
} else if (network.type == "algorand"){
const algodClient = new algo.Algodv2(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
network.rpc,
network.port
);
const sender = algo.mnemonicToSecretKey(network.mnemonic);
const params = await algodClient.getTransactionParams().do();
let txs = [];
//Opt in to allow the core bridge to store data with the Algo contract
const messengerPubkey = uint8ArrayToHex(algo.decodeAddress(network.deployedAddress).publicKey);
const { addr: emitterAddr, txs: emitterOptInTxs } = await optin(
algodClient,
sender.addr,
BigInt(network.bridgeAddress),
BigInt(0),
messengerPubkey
);
txs.push(...emitterOptInTxs);
let accts = [
emitterAddr,
algo.getApplicationAddress(network.bridgeAddress),
];
let appTxn = algo.makeApplicationCallTxnFromObject({
appArgs: [
Uint8Array.from(Buffer.from("sendMessage")),
Uint8Array.from(Buffer.from(process.argv[4]))
],
accounts: accts,
appIndex: network.appId,
foreignApps: [network.bridgeAddress],
from: sender.addr,
onComplete: algo.OnApplicationComplete.NoOpOC,
suggestedParams: params,
});
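// Assumption: the fee is doubled so that, with Algorand fee pooling, this call
// also pays for the inner transaction the core bridge submits while emitting the message.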
appTxn.fee *= 2;
txs.push({tx: appTxn, signer: null});
const receipt = await signSendAndConfirmAlgorand(algodClient, txs, sender);
const emitAddr = getEmitterAddressAlgorand(network.appId);
const seq = parseSequenceFromLogAlgorand(receipt);
await new Promise((r) => setTimeout(r, 10000));
const vaaBytes = await (
await fetch(
`${config.wormhole.restAddress}/v1/signed_vaa/${network.wormholeChainId}/${emitAddr}/${seq}`
)
).json();
if(!network.emittedVAAs){
network.emittedVAAs = [vaaBytes.vaaBytes];
} else {
network.emittedVAAs.push(vaaBytes.vaaBytes);
}
config.networks[process.argv[2]] = network;
fs.writeFileSync('./xdapp.config.json', JSON.stringify(config, null, 2));
console.log(`Network(${process.argv[2]}) Emitted VAA: `, vaaBytes.vaaBytes);
}
} else if (process.argv[3] == "submit_vaa") {
if(!network.deployedAddress){
throw new Error("Deploy to this network first!");
}
const targetNetwork = config.networks[process.argv[4]];
const vaaBytes = isNaN(parseInt(process.argv[5])) ?
targetNetwork.emittedVAAs.pop() :
targetNetwork.emittedVAAs[parseInt(process.argv[5])];
if(network.type == "evm"){
const signer = new ethers.Wallet(network.privateKey)
.connect(new ethers.providers.JsonRpcProvider(network.rpc));
const messenger = new ethers.Contract(
network.deployedAddress,
JSON.parse(fs.readFileSync('./chains/evm/out/Messenger.sol/Messenger.json').toString()).abi,
signer,
{
gasPrice: '2000000000'
}
);
const tx = await messenger.receiveEncodedMsg(Buffer.from(vaaBytes, "base64"));
console.log(`Submitted VAA: ${vaaBytes}\nTX: ${tx.hash}`);
} else if (network.type == "algorand"){
const algodClient = new algo.Algodv2(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
network.rpc,
network.port
);
const sender = algo.mnemonicToSecretKey(network.mnemonic);
const params = await algodClient.getTransactionParams().do();
let txs = [];
let sstate = await submitVAAHeader(algodClient, BigInt(network.bridgeAddress), Uint8Array.from(Buffer.from(vaaBytes, "base64")), sender.addr, BigInt(network.appId))
txs = sstate.txs;
let accts = sstate.accounts;
txs.push({
tx: algo.makeApplicationCallTxnFromObject({
appArgs: [
Uint8Array.from(Buffer.from(("receiveMessage"))),
Uint8Array.from(Buffer.from(vaaBytes, "base64"))
],
accounts: accts,
appIndex: network.appId,
from: sender.addr,
onComplete: algo.OnApplicationComplete.NoOpOC,
suggestedParams: params,
}),
signer: null,
});
const ret = await signSendAndConfirmAlgorand(algodClient, txs, sender);
console.log(ret);
}
} else if (process.argv[3] == "get_current_msg") {
if(!network.deployedAddress){
throw new Error("Deploy to this network first!");
}
if(network.type == "evm"){
const signer = new ethers.Wallet(network.privateKey)
.connect(new ethers.providers.JsonRpcProvider(network.rpc));
const messenger = new ethers.Contract(
network.deployedAddress,
JSON.parse(fs.readFileSync('./chains/evm/out/Messenger.sol/Messenger.json').toString()).abi,
signer,
{
gasPrice: '2000000000'
}
);
console.log(`${process.argv[2]} Current Msg: `, await messenger.getCurrentMsg());
}
} else {
throw new Error("Unknown command!")
}
}
async function signSendAndConfirmAlgorand(
algodClient,
txs,
wallet
) {
algo.assignGroupID(txs.map((tx) => tx.tx));
const signedTxns = [];
for (const tx of txs) {
if (tx.signer) {
signedTxns.push(await tx.signer.signTxn(tx.tx));
} else {
signedTxns.push(tx.tx.signTxn(wallet.sk));
}
}
await algodClient.sendRawTransaction(signedTxns).do();
const result = await algo.waitForConfirmation(
algodClient,
txs[txs.length - 1].tx.txID(),
1
);
return result;
}
main();

File diff suppressed because it is too large.


@@ -1,24 +0,0 @@
{
"name": "messenger",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "sh tests/eth0-eth1.sh"
},
"keywords": [],
"author": "",
"license": "MIT",
"workspaces": [
"chains/evm",
"chains/algorand"
],
"type": "module",
"dependencies": {
"@certusone/wormhole-sdk": "^0.3.3",
"algosdk": "^1.16.0",
"byteify": "^2.0.10",
"ethers": "^5.6.6",
"node-fetch": "^2.6.7"
}
}


@@ -1,12 +0,0 @@
node messenger.js eth0 deploy
node messenger.js eth1 deploy
sleep 5
node messenger.js eth0 register_chain eth1
node messenger.js eth1 register_chain eth0
node messenger.js eth0 send_msg "From: eth0\nMsg: Hello World!"
node messenger.js eth1 submit_vaa eth0 latest
node messenger.js eth1 send_msg "From: eth1\nMsg: Hello World!"
node messenger.js eth0 submit_vaa eth1 latest
sleep 10
node messenger.js eth0 get_current_msg
node messenger.js eth1 get_current_msg


@@ -0,0 +1 @@
test-ledger/


@@ -0,0 +1,26 @@
# Wormhole Local Validator
This repository contains a set of scripts for getting started with Wormhole: the Wormhole local validator itself, code to spin up local EVM and Solana chains, and deployment code to add the Wormhole contracts to those chains.
## Dependencies
You will also need Docker; get [Docker Desktop](https://docs.docker.com/get-docker/) if you're developing on your computer, or, if you're in a headless VM, install [Docker Engine](https://docs.docker.com/engine/). Make sure Docker is running before you run any of the following commands.
To run the EVM chains, you will need [Ganache](https://github.com/trufflesuite/ganache#command-line-use) installed.
To run the Solana chain, you will need the [Solana CLI tools](https://docs.solana.com/cli/install-solana-cli-tools) installed.
## Run EVM Chains
`npm run evm0` will start up an EVM chain with Wormhole Chain ID 2 (like ETH) and deploy the Wormhole Core Bridge (`0xC89Ce4735882C9F0f0FE26686c53074E09B0D550`), Token Bridge (`0x0290FB167208Af455bB137780163b7B7a9a10C16`), and NFT Bridge (`0x26b4afb60d6c903165150c6f0aa14f8016be4aec`) contracts to it.
`npm run evm1` will start up an EVM chain with Wormhole Chain ID 4 (like BSC) and deploy the Wormhole Core Bridge (`0xC89Ce4735882C9F0f0FE26686c53074E09B0D550`), Token Bridge (`0x0290FB167208Af455bB137780163b7B7a9a10C16`), and NFT Bridge (`0x26b4afb60d6c903165150c6f0aa14f8016be4aec`) contracts to it.
They'll also deploy a Test Token (TKN at `0x2D8BE6BF0baA74e0A907016679CaE9190e80dD0A`), a test NFT (`0x5b9b42d6e4B2e4Bf8d42Eba32D46918e10899B66`), and a WETH contract (`0xDDb64fE46a91D46ee29420539FC25FD07c5FEa3E`).
They'll use the standard Wormhole test mnemonic (`myth like bonus scare over problem client lizard pioneer submit female collect`), with the first key used for deployment and payment (Public Key: `0x90F8bf6A479f320ead074411a4B0e7944Ea8c9C1`, Private Key: `0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d`).
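As a quick sanity check that a chain is up and the contracts landed, something like the following works (a sketch; it assumes you install `ethers` yourself, since it is not a dependency of this package):
```js
import { ethers } from "ethers";

// evm0 from `npm run evm0`; use port 8546 for evm1.
const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");

// The Core Bridge address listed above should have bytecode behind it.
const code = await provider.getCode("0xC89Ce4735882C9F0f0FE26686c53074E09B0D550");
console.log("core bridge deployed:", code !== "0x");

// The first test-mnemonic account pays for deployments.
const wallet = new ethers.Wallet(
    "0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d",
    provider
);
console.log("deployer balance:", (await provider.getBalance(wallet.address)).toString());
```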
## Run Solana Chain
`npm run solana` will start up a Solana chain and load in Core Bridge (`Bridge1p5gheXUvJ6jGWGeCsgPKgnE3YgdGKRVCMY9o`) and Token Bridge (`B6RHG3mfcckmrYN1UhmJzyS1XX3fZKbkeUcpJe9Sy3FE`) accounts.
TODO: Add emitter registrations for token bridge.
## Run Wormhole
After you have the dependencies installed and the chains running, you can run Wormhole.
Simply run `npm run wormhole` and wait while the Wormhole Guardian builds a docker image. The first time you run this command, it might take a while (up to 550 seconds on a modern laptop!). After the image is built however, it'll be relatively fast to bring it up and down.
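Once the guardian is watching your chains, you can fetch a signed VAA for a given (chain, emitter, sequence) triple over the guardian's public REST port. A sketch, assuming the REST endpoint is exposed on port 7071 (the `--publicWeb` port this setup uses) and that `node-fetch` is installed; the emitter and sequence placeholders come from your own xdapp:
```js
import fetch from "node-fetch";

const chainId = 2;                        // Wormhole chain ID of the emitting chain
const emitter = "<64-char-hex-emitter>";  // placeholder: 32-byte emitter address, hex-encoded
const sequence = 0;                       // placeholder: sequence from the publish tx

const url = `http://localhost:7071/v1/signed_vaa/${chainId}/${emitter}/${sequence}`;
const { vaaBytes } = await (await fetch(url)).json(); // base64-encoded signed VAA
console.log(vaaBytes);
```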


@@ -0,0 +1,14 @@
#!/usr/bin/env bash
# Start EVM Chain 0
npx pm2 delete evm0
npx pm2 start 'ganache -p 8545 -m "myth like bonus scare over problem client lizard pioneer submit female collect" --block-time 1' --name evm0
#Install Wormhole Eth Dependencies
cd wormhole/ethereum
npm i
cp .env.test .env
npm run build
# Deploy Wormhole Contracts to EVM Chain 0
npm run migrate && npx truffle exec scripts/deploy_test_token.js && npx truffle exec scripts/register_solana_chain.js && npx truffle exec scripts/register_terra_chain.js && npx truffle exec scripts/register_bsc_chain.js && npx truffle exec scripts/register_algo_chain.js


@@ -0,0 +1,16 @@
#!/usr/bin/env bash
# Start EVM Chain 1
npx pm2 kill -n evm1
npx pm2 start 'ganache -p 8546 -m "myth like bonus scare over problem client lizard pioneer submit female collect" --block-time 1' --name evm1
# Install Wormhole Eth Dependencies
cd wormhole/ethereum
npm i
cp .env.test .env
npm run build
# Deploy Wormhole Contracts to EVM Chain 1
perl -pi -e 's/CHAIN_ID=0x2/CHAIN_ID=0x4/g' .env && perl -pi -e 's/8545/8546/g' truffle-config.js
npm run migrate && npx truffle exec scripts/deploy_test_token.js && npx truffle exec scripts/register_solana_chain.js && npx truffle exec scripts/register_terra_chain.js && npx truffle exec scripts/register_eth_chain.js && npx truffle exec scripts/register_algo_chain.js && nc -lkp 2000 0.0.0.0
perl -pi -e 's/CHAIN_ID=0x4/CHAIN_ID=0x2/g' .env && perl -pi -e 's/8546/8545/g' truffle-config.js

View File

@ -4,9 +4,12 @@
"description": "A simple template for getting started with xDapps.", "description": "A simple template for getting started with xDapps.",
"main": "starter.js", "main": "starter.js",
"scripts": { "scripts": {
"guardiand": "sh wormhole.sh", "setup": "sh setup.sh",
"cleanup": "docker kill guardiand && docker rm guardiand && npx pm2 kill", "wormhole": "npm run setup && sh wormhole.sh",
"test": "sh tests/treasury_bridge.sh" "evm0": "npm run setup && sh evm0.sh",
"evm1": "npm run setup && sh evm1.sh",
"solana": "npm run setup && sh solana.sh",
"cleanup": "docker kill guardiand & docker rm guardiand & npx pm2 kill"
}, },
"keywords": [], "keywords": [],
"author": "", "author": "",
@ -14,11 +17,7 @@
"workspaces": [], "workspaces": [],
"type": "module", "type": "module",
"dependencies": { "dependencies": {
"@certusone/wormhole-sdk": "^0.3.3",
"byteify": "^2.0.10",
"ethers": "^5.6.9",
"ganache": "^7.3.1", "ganache": "^7.3.1",
"node-fetch": "^3.2.6",
"pm2": "^5.2.0" "pm2": "^5.2.0"
} }
} }

View File

@ -0,0 +1,21 @@
#!/usr/bin/env bash
# Check if wormhole/ repo exists.
# If it doesn't, then clone and build guardiand
if [ ! -d "./wormhole" ]
then
git clone https://github.com/certusone/wormhole
cd wormhole/
DOCKER_BUILDKIT=1 docker build --target go-export -f Dockerfile.proto -o type=local,dest=node .
DOCKER_BUILDKIT=1 docker build --target node-export -f Dockerfile.proto -o type=local,dest=. .
cd node/
echo "Have patience, this step takes upwards of 500 seconds!"
if [ $(uname -m) = "arm64" ]; then
echo "Building Guardian for linux/amd64"
DOCKER_BUILDKIT=1 docker build --platform linux/amd64 -f Dockerfile -t guardian .
else
echo "Building Guardian natively"
DOCKER_BUILDKIT=1 docker build -f Dockerfile -t guardian .
fi
cd ../../
fi

View File

@ -0,0 +1,27 @@
# CORE BRIDGE
[[test.genesis]]
address = "Bridge1p5gheXUvJ6jGWGeCsgPKgnE3YgdGKRVCMY9o"
program = "./core/core_bridge.so"
[[test.validator.account]]
address = "FKoMTctsC7vJbEqyRiiPskPnuQx2tX1kurmvWByq5uZP"
filename = "./core/bridge_config.json"
[[test.validator.account]]
address = "6MxkvoEwgB9EqQRLNhvYaPGhfcLtBtpBqdQugr3AZUgD"
filename = "./core/guardian_set.json"
[[test.validator.account]]
address = "GXBsgBD3LDn3vkRZF6TfY5RqgajVZ4W5bMAdiAaaUARs"
filename = "./core/fee_collector.json"
# TOKEN BRIDGE
[[test.genesis]]
address = "B6RHG3mfcckmrYN1UhmJzyS1XX3fZKbkeUcpJe9Sy3FE"
program = "./token/token_bridge.so"
[[test.validator.account]]
address = "3GwVs8GSLdo4RUsoXTkGQhojauQ1sXcDNjm7LSDicw19"
filename = "./token/token_config.json"
# NFT BRIDGE

View File

@ -0,0 +1,13 @@
{
"pubkey": "FKoMTctsC7vJbEqyRiiPskPnuQx2tX1kurmvWByq5uZP",
"account": {
"lamports": 1057920,
"data": [
"AAAAAACYDQAAAAAAgFEBAGQAAAAAAAAA",
"base64"
],
"owner": "Bridge1p5gheXUvJ6jGWGeCsgPKgnE3YgdGKRVCMY9o",
"executable": false,
"rentEpoch": 0
}
}

View File

@ -0,0 +1,13 @@
{
"pubkey": "GXBsgBD3LDn3vkRZF6TfY5RqgajVZ4W5bMAdiAaaUARs",
"account": {
"lamports": 890880,
"data": [
"",
"base64"
],
"owner": "11111111111111111111111111111111",
"executable": false,
"rentEpoch": 0
}
}

View File

@ -0,0 +1,13 @@
{
"pubkey": "6MxkvoEwgB9EqQRLNhvYaPGhfcLtBtpBqdQugr3AZUgD",
"account": {
"lamports": 1141440,
"data": [
"AAAAAAEAAAC++kKdV80Yt/ik2RotqatK8F0PvoX2jWIAAAAA",
"base64"
],
"owner": "Bridge1p5gheXUvJ6jGWGeCsgPKgnE3YgdGKRVCMY9o",
"executable": false,
"rentEpoch": 0
}
}

View File

@ -0,0 +1,13 @@
{
"pubkey": "3GwVs8GSLdo4RUsoXTkGQhojauQ1sXcDNjm7LSDicw19",
"account": {
"lamports": 1113600,
"data": [
"AsgGMSy+W3nviqbBfj9CPY/f4dRpCfsfbN9l7o4ub6o=",
"base64"
],
"owner": "B6RHG3mfcckmrYN1UhmJzyS1XX3fZKbkeUcpJe9Sy3FE",
"executable": false,
"rentEpoch": 0
}
}

View File

@ -0,0 +1,12 @@
#!/usr/bin/env bash
# Start Solana
npx pm2 kill -n solana
npx pm2 start "solana-test-validator" --name solana -- -r \
--bpf-program Bridge1p5gheXUvJ6jGWGeCsgPKgnE3YgdGKRVCMY9o ./solana-accounts/core/core_bridge.so \
--account FKoMTctsC7vJbEqyRiiPskPnuQx2tX1kurmvWByq5uZP ./solana-accounts/core/bridge_config.json \
--account GXBsgBD3LDn3vkRZF6TfY5RqgajVZ4W5bMAdiAaaUARs ./solana-accounts/core/fee_collector.json \
--account 6MxkvoEwgB9EqQRLNhvYaPGhfcLtBtpBqdQugr3AZUgD ./solana-accounts/core/guardian_set.json \
--bpf-program B6RHG3mfcckmrYN1UhmJzyS1XX3fZKbkeUcpJe9Sy3FE ./solana-accounts/token/token_bridge.so \
--account 3GwVs8GSLdo4RUsoXTkGQhojauQ1sXcDNjm7LSDicw19 ./solana-accounts/token/token_config.json

View File

@ -1,49 +1,4 @@
 #!/usr/bin/env bash
-npm run cleanup
-if [! docker info > /dev/null ] ; then
-    echo "This script uses docker, and it isn't running - please start docker and try again!"
-    exit 1
-fi
-# Check if wormhole/ repo exists.
-# If it doens't then clone and build guardiand
-if [ ! -d "./wormhole" ]
-then
-    git clone https://github.com/certusone/wormhole
-    cd wormhole/
-    DOCKER_BUILDKIT=1 docker build --target go-export -f Dockerfile.proto -o type=local,dest=node .
-    DOCKER_BUILDKIT=1 docker build --target node-export -f Dockerfile.proto -o type=local,dest=. .
-    cd node/
-    echo "Have patience, this step takes upwards of 500 seconds!"
-    if [ $(uname -m) = "arm64" ]; then
-        echo "Building Guardian for linux/amd64"
-        DOCKER_BUILDKIT=1 docker build --platform linux/amd64 -f Dockerfile -t guardian .
-    else
-        echo "Building Guardian natively"
-        DOCKER_BUILDKIT=1 docker build -f Dockerfile -t guardian .
-    fi
-    cd ../../
-fi
-# Start EVM Chain 0
-npx pm2 start 'ganache -p 8545 -m "myth like bonus scare over problem client lizard pioneer submit female collect" --block-time 1' --name evm0
-# Start EVM Chain 1
-npx pm2 start 'ganache -p 8546 -m "myth like bonus scare over problem client lizard pioneer submit female collect" --block-time 1' --name evm1
-#Install Wormhole Eth Dependencies
-cd wormhole/ethereum
-npm i
-cp .env.test .env
-npm run build
-# Deploy Wormhole Contracts to EVM Chain 0
-npm run migrate && npx truffle exec scripts/deploy_test_token.js && npx truffle exec scripts/register_solana_chain.js && npx truffle exec scripts/register_terra_chain.js && npx truffle exec scripts/register_bsc_chain.js && npx truffle exec scripts/register_algo_chain.js
-# Deploy Wormhole Contracts to EVM Chain 1
-perl -pi -e 's/CHAIN_ID=0x2/CHAIN_ID=0x4/g' .env && perl -pi -e 's/8545/8546/g' truffle-config.js
-npm run migrate && npx truffle exec scripts/deploy_test_token.js && npx truffle exec scripts/register_solana_chain.js && npx truffle exec scripts/register_terra_chain.js && npx truffle exec scripts/register_eth_chain.js && npx truffle exec scripts/register_algo_chain.js && nc -lkp 2000 0.0.0.0
-perl -pi -e 's/CHAIN_ID=0x4/CHAIN_ID=0x2/g' .env && perl -pi -e 's/8546/8545/g' truffle-config.js
-cd ../../
 # Run Guardiand
 if [ $(uname -m) = "arm64" ]; then
     docker run -d --name guardiand -p 7070:7070 -p 7071:7071 -p 7073:7073 --platform linux/amd64 --hostname guardian-0 --cap-add=IPC_LOCK --entrypoint /guardiand guardian node \

View File

@ -1,12 +0,0 @@
# xDapp Starter
Simple starter template with Guardiand script and two EVM chains.
## Dependencies
The javascript dependencies can be installed via `npm install` in this folder.
You will also need Docker; you can get either [Docker Desktop](https://docs.docker.com/get-docker/) if you're developing on your computer or if you're in a headless vm, install [Docker Engine](https://docs.docker.com/engine/)
## Run Guardiand
After you have the dependencies installed, we'll need to spin up the EVM chains, deploy the Wormhole contracts to them, then startup a Wormhole Guardian to observe and sign VAAs. We have provided a script to automate this all for you.
Simply run `npm run guardiand` and wait while the Wormhole Guardian builds a docker image. The first time you run this command, it might take a while (up to 550 seconds on a modern laptop!). After the image is built however, it'll be relatively fast to bring it up and down.

View File

@ -1,113 +0,0 @@
#!/usr/bin/env bash
npm run cleanup
if [! docker info > /dev/null ] ; then
echo "This script uses docker, and it isn't running - please start docker and try again!"
exit 1
fi
# Check if wormhole/ repo exists.
# If it doens't then clone and build guardiand
if [ ! -d "./wormhole" ]
then
git clone https://github.com/certusone/wormhole
cd wormhole/
DOCKER_BUILDKIT=1 docker build --target go-export -f Dockerfile.proto -o type=local,dest=node .
DOCKER_BUILDKIT=1 docker build --target node-export -f Dockerfile.proto -o type=local,dest=. .
cd node/
echo "Have patience, this step takes upwards of 500 seconds!"
if [ $(uname -m) = "arm64" ]; then
echo "Building Guardian for linux/amd64"
DOCKER_BUILDKIT=1 docker build --platform linux/amd64 -f Dockerfile -t guardian .
else
echo "Building Guardian natively"
DOCKER_BUILDKIT=1 docker build -f Dockerfile -t guardian .
fi
cd ../../
fi
# Start EVM Chain 0
npx pm2 start 'ganache -p 8545 -m "myth like bonus scare over problem client lizard pioneer submit female collect" --block-time 1' --name evm0
# Start EVM Chain 1
npx pm2 start 'ganache -p 8546 -m "myth like bonus scare over problem client lizard pioneer submit female collect" --block-time 1' --name evm1
#Install Wormhole Eth Dependencies
cd wormhole/ethereum
npm i
cp .env.test .env
npm run build
# Deploy Wormhole Contracts to EVM Chain 0
npm run migrate && npx truffle exec scripts/deploy_test_token.js && npx truffle exec scripts/register_solana_chain.js && npx truffle exec scripts/register_terra_chain.js && npx truffle exec scripts/register_bsc_chain.js && npx truffle exec scripts/register_algo_chain.js
# Deploy Wormhole Contracts to EVM Chain 1
perl -pi -e 's/CHAIN_ID=0x2/CHAIN_ID=0x4/g' .env && perl -pi -e 's/8545/8546/g' truffle-config.js
npm run migrate && npx truffle exec scripts/deploy_test_token.js && npx truffle exec scripts/register_solana_chain.js && npx truffle exec scripts/register_terra_chain.js && npx truffle exec scripts/register_eth_chain.js && npx truffle exec scripts/register_algo_chain.js && nc -lkp 2000 0.0.0.0
perl -pi -e 's/CHAIN_ID=0x4/CHAIN_ID=0x2/g' .env && perl -pi -e 's/8546/8545/g' truffle-config.js
cd ../../
# Run Guardiand
if [ $(uname -m) = "arm64" ]; then
docker run -d --name guardiand -p 7070:7070 -p 7071:7071 -p 7073:7073 --platform linux/amd64 --hostname guardian-0 --cap-add=IPC_LOCK --entrypoint /guardiand guardian node \
--unsafeDevMode --guardianKey /tmp/bridge.key --publicRPC "[::]:7070" --publicWeb "[::]:7071" --adminSocket /tmp/admin.sock --dataDir /tmp/data \
--ethRPC ws://host.docker.internal:8545 \
--ethContract "0xC89Ce4735882C9F0f0FE26686c53074E09B0D550" \
--bscRPC ws://host.docker.internal:8546 \
--bscContract "0xC89Ce4735882C9F0f0FE26686c53074E09B0D550" \
--polygonRPC ws://host.docker.internal:8545 \
--avalancheRPC ws://host.docker.internal:8545 \
--auroraRPC ws://host.docker.internal:8545 \
--fantomRPC ws://host.docker.internal:8545 \
--oasisRPC ws://host.docker.internal:8545 \
--karuraRPC ws://host.docker.internal:8545 \
--acalaRPC ws://host.docker.internal:8545 \
--klaytnRPC ws://host.docker.internal:8545 \
--celoRPC ws://host.docker.internal:8545 \
--moonbeamRPC ws://host.docker.internal:8545 \
--neonRPC ws://host.docker.internal:8545 \
--terraWS ws://host.docker.internal:8545 \
--terra2WS ws://host.docker.internal:8545 \
--terraLCD https://host.docker.internal:1317 \
--terra2LCD http://host.docker.internal:1317 \
--terraContract terra18vd8fpwxzck93qlwghaj6arh4p7c5n896xzem5 \
--terra2Contract terra18vd8fpwxzck93qlwghaj6arh4p7c5n896xzem5 \
--solanaContract Bridge1p5gheXUvJ6jGWGeCsgPKgnE3YgdGKRVCMY9o \
--solanaWS ws://host.docker.internal:8900 \
--solanaRPC http://host.docker.internal:8899 \
--algorandIndexerRPC ws://host.docker.internal:8545 \
--algorandIndexerToken "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" \
--algorandAlgodToken "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" \
--algorandAlgodRPC https://host.docker.internal:4001 \
--algorandAppID "4"
else
docker run -d --name guardiand --network host --hostname guardian-0 --cap-add=IPC_LOCK --entrypoint /guardiand guardian node \
--unsafeDevMode --guardianKey /tmp/bridge.key --publicRPC "[::]:7070" --publicWeb "[::]:7071" --adminSocket /tmp/admin.sock --dataDir /tmp/data \
--ethRPC ws://localhost:8545 \
--ethContract "0xC89Ce4735882C9F0f0FE26686c53074E09B0D550" \
--bscRPC ws://localhost:8546 \
--bscContract "0xC89Ce4735882C9F0f0FE26686c53074E09B0D550" \
--polygonRPC ws://localhost:8545 \
--avalancheRPC ws://localhost:8545 \
--auroraRPC ws://localhost:8545 \
--fantomRPC ws://localhost:8545 \
--oasisRPC ws://localhost:8545 \
--karuraRPC ws://localhost:8545 \
--acalaRPC ws://localhost:8545 \
--klaytnRPC ws://localhost:8545 \
--celoRPC ws://localhost:8545 \
--moonbeamRPC ws://localhost:8545 \
--neonRPC ws://localhost:8545 \
--terraWS ws://localhost:8545 \
--terra2WS ws://localhost:8545 \
--terraLCD https://terra-terrad:1317 \
--terra2LCD http://localhost:1317 \
--terraContract terra18vd8fpwxzck93qlwghaj6arh4p7c5n896xzem5 \
--terra2Contract terra18vd8fpwxzck93qlwghaj6arh4p7c5n896xzem5 \
--solanaContract Bridge1p5gheXUvJ6jGWGeCsgPKgnE3YgdGKRVCMY9o \
--solanaWS ws://localhost:8900 \
--solanaRPC http://localhost:8899 \
--algorandIndexerRPC ws://localhost:8545 \
--algorandIndexerToken "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" \
--algorandAlgodToken "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" \
--algorandAlgodRPC https://localhost:4001 \
--algorandAppID "4"
fi
echo "Guardiand Running! To look at logs: \"docker logs guardiand -f\""

View File

@ -1,25 +0,0 @@
{
"networks": {
"evm0": {
"type": "evm",
"wormholeChainId": 2,
"rpc": "http://localhost:8545",
"privateKey": "0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d",
"bridgeAddress": "0xC89Ce4735882C9F0f0FE26686c53074E09B0D550",
"tokenBridgeAddress": "0x0290FB167208Af455bB137780163b7B7a9a10C16",
"testToken": "0x2D8BE6BF0baA74e0A907016679CaE9190e80dD0A"
},
"evm1": {
"type": "evm",
"wormholeChainId": 4,
"rpc": "http://localhost:8546",
"privateKey": "0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d",
"tokenBridgeAddress": "0x0290FB167208Af455bB137780163b7B7a9a10C16",
"bridgeAddress": "0xC89Ce4735882C9F0f0FE26686c53074E09B0D550",
"testToken": "0x2D8BE6BF0baA74e0A907016679CaE9190e80dD0A"
}
},
"wormhole": {
"restAddress": "http://localhost:7071"
}
}

View File

@ -58,7 +58,7 @@
 # xDapp Development
 - [Before You Start](./development/overview.md)
-- [Guardiand](./development/guardiand.md)
+- [Wormhole Local Validator](./development/wormhole-local-validator.md)
 - [Tilt Installation](./development/tilt/overview.md)
   - [MacOS](./development/tilt/mac.md)
   - [Linux](./development/tilt/linux.md)
@ -66,7 +66,6 @@
 - [Project Scaffold](./development/scaffold/overview.md)
 - [Sending Messages](./development/messages/sending/overview.md)
   - [EVM](./development/messages/sending/evm.md)
 - [Registering xDapps](./development/messages/registration/overview.md)
   - [EVM](./development/messages/registration/evm.md)
 - [Relaying Messages](./development/messages/relaying/overview.md)

View File

@ -1,10 +0,0 @@
# Guardiand
Guardiand is a way to spin up a guardian node to point to RPC endpoints for running blockchains. By default, the script will spin up 2 Ganache chains, but this can be modified and expanded upon quite easily by editting the `wormhole.sh` file found in most projects.
## Prerequistes
- Ganache
- Docker
### FAQ & Common Problems
- Anvil isn't working
While we reccomend Foundry's Forge tool for compling and deploying code elsewhere in these docs, we *do not* at this time reccomend using anvil for guardiand; this is because guardiand is spec'd against go-ethereum, and anvil is out of spec for how it reports block headers (non left padding to normalize length), which means go-ethereum freaks out and can't read anvil headers.

View File

@ -1,13 +1,13 @@
 # Wormhole Development Overview
-Getting started with Wormhole development ususally starts with testing your contract code locally -> deploying to testnet -> deploying to mainnet.
-In each of these environments, we need to have atleast 2 different blockchains, as well as atleast one Guardian node running to observe and sign VAAs.
-## Environments
-### Localhost
-- Guardiand: This is the simplest, custom environment. It's BYOB (Bring your own Blockchain), where you can run your own local validator nodes and connect them to a single guardian running on docker. Initial setup can take upwards of 500 seconds, but after the image is built, bringing it up and down is usually <1 minute.
-- Tilt: A full fledged Kubernetes deployment of *every* chain connected to Wormhole, along with a Guardian node. Usually takes 30 min to spin up fully, but comes with all chains running out of the box.
+To get started with cross-chain development, first you're going to need a local environment to test your xdapp code on. The general flow for a cross-chain message goes from an application deployed to chain A, to the Wormhole contract on chain A, to the Guardian network, and then on to chain B.
+To simulate all of this locally, we need to be able to deploy some chains, deploy the Wormhole contracts to these chains, and then run at least one Wormhole validator to pick up messages. Later, we might even introduce a relayer to automatically submit messages, but that's currently only supported for Mainnet Token Bridge transfers of native and stable coins. Developers currently have to use either a manual relaying method or an app-specific relayer (more on that in the Relayer section).
+First, before we set up an xdapp project, we'll need to choose a local environment to run the Wormhole Guardian Network: either Wormhole Local Validator or Tilt.
+- [Wormhole Local Validator](./wormhole-local-validator.md): This is the simplest, custom environment. It's BYOB (Bring Your Own Blockchain): you run your own local validator nodes and connect them to a single guardian running on Docker. Initial setup can take upwards of 500 seconds, but after the image is built, bringing it up and down usually takes less than a minute. It requires installing the validator node software locally on your computer, or somewhere you can run it.
+- [Tilt](./tilt/overview.md): A full-fledged Kubernetes deployment of *every* chain connected to Wormhole, along with a Guardian node. It usually takes about 30 minutes to spin up fully, but comes with all chains running out of the box.
 ### Testnet
 If you want to test on the various test and devnets of existing connected chains, there's a single guardian node watching for transactions on various test networks. You can find the contracts [here](../reference/contracts.md) and the rpc node [here](../reference/rpcnodes.md).
@ -15,4 +15,7 @@ If you want to test on the various test and devnets of existing connected chains
 One thing to watch out for is that because testnet only has a single guardian running, there's a small chance that your VAAs do not get processed. This rate is *not* indicative of performance on mainnet, where there are 19 guardians watching for transactions.
 ### Mainnet
 When you're ready to deploy to mainnet, you can find the mainnet contracts [here](../reference/contracts.md) and the mainnet rpc nodes [here](../reference/rpcnodes.md).
+## Next Steps
+To get started, first clone a local host environment (WLV or Tilt), then proceed to the first project, the [evm-messenger]().

View File

@ -18,7 +18,4 @@ They all take in a context object that's made up of the
 This file parses command line args and filters calls to chain management handlers.
 ### xdapp.config.json
 The config file contains all the information about the network rpc nodes, accounts, and other constants used to communicate with contracts deployed to the selected chains.
-### wormhole.sh
-This is a script that spins up chains using PM2 and guardiand via docker. It'll clone the wormhole repo. It is NOT necessary to do this for every project, if you're creating multiple xdapps, maybe have one folder that you run guardiand from so you're not rebuilding it every time you start a new project.

View File

@ -0,0 +1,28 @@
# Wormhole Local Validator
The Wormhole Local Validator is available [here](https://github.com/certusone/xdapp-book/tree/main/projects/wormhole-local-validator). It contains the Wormhole local validator, along with code to spin up local EVM and Solana validators and deploy the Wormhole contracts to them.
## Dependencies
You will need Docker; if you're developing on your computer, install [Docker Desktop](https://docs.docker.com/get-docker/), and if you're in a headless VM, install [Docker Engine](https://docs.docker.com/engine/). Make sure Docker is running before you use any of the following commands.
To run the EVM chains, you will need [Ganache](https://github.com/trufflesuite/ganache#command-line-use) installed.
To run the Solana chain, you will need the [Solana CLI tools](https://docs.solana.com/cli/install-solana-cli-tools) installed.
## Run EVM Chains
`npm run evm0` will start up an EVM chain with Wormhole Chain ID 2 (like ETH) and deploy the Wormhole Core Bridge (`0xC89Ce4735882C9F0f0FE26686c53074E09B0D550`), Token Bridge (`0x0290FB167208Af455bB137780163b7B7a9a10C16`), and NFT Bridge (`0x26b4afb60d6c903165150c6f0aa14f8016be4aec`) contracts to it.
`npm run evm1` will start up an EVM chain with Wormhole Chain ID 4 (like BSC) and deploy the Wormhole Core Bridge (`0xC89Ce4735882C9F0f0FE26686c53074E09B0D550`), Token Bridge (`0x0290FB167208Af455bB137780163b7B7a9a10C16`), and NFT Bridge (`0x26b4afb60d6c903165150c6f0aa14f8016be4aec`) contracts to it.
They'll also deploy a Test Token (TKN, at `0x2D8BE6BF0baA74e0A907016679CaE9190e80dD0A`), a test NFT (`0x5b9b42d6e4B2e4Bf8d42Eba32D46918e10899B66`), and a WETH contract (`0xDDb64fE46a91D46ee29420539FC25FD07c5FEa3E`).
Both chains use the standard Wormhole test mnemonic (`myth like bonus scare over problem client lizard pioneer submit female collect`), with the first derived key used for deployment and payment (public key `0x90F8bf6A479f320ead074411a4B0e7944Ea8c9C1`, private key `0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d`).
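As a quick sanity check that a chain is up and the test key is funded, a minimal sketch like the following (assuming ethers v5, which other projects in this repo use) connects to evm0 with that key:
```js
// Minimal sketch (assumes ethers v5): connect to the evm0 chain started by
// `npm run evm0` using the standard test private key, and print the balance.
import { ethers } from "ethers";

const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");
const wallet = new ethers.Wallet(
  "0x4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d",
  provider
);

console.log(await wallet.getAddress()); // 0x90F8bf6A479f320ead074411a4B0e7944Ea8c9C1
console.log(ethers.utils.formatEther(await wallet.getBalance()), "ETH");
```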
## Run Solana Chain
`npm run solana` will start up a Solana chain and load in Core Bridge (`Bridge1p5gheXUvJ6jGWGeCsgPKgnE3YgdGKRVCMY9o`) and Token Bridge (`B6RHG3mfcckmrYN1UhmJzyS1XX3fZKbkeUcpJe9Sy3FE`) accounts.
TODO: Add emitter registrations for token bridge.
## Run Wormhole
After you have the dependencies installed and the chains running, you can run Wormhole.
Simply run `npm run wormhole` and wait while the Wormhole Guardian builds a Docker image. The first time you run this command, it might take a while (up to 550 seconds on a modern laptop!). Once the image is built, however, bringing it up and down is relatively fast.
### FAQ & Common Problems
- Anvil isn't working
While we recommend Foundry's Forge tool for compiling and deploying code elsewhere in these docs, we *do not* at this time recommend using Anvil with Guardiand. Guardiand is spec'd against go-ethereum, and Anvil is out of spec in how it reports block headers (it doesn't left-pad them to a normalized length), so go-ethereum can't parse Anvil's headers.

View File

@ -0,0 +1,17 @@
# Orchestrator.js
A JS client that deploys and calls the functions of the two Messenger contracts on two chains.
## Deploy
Uses [forge](https://getfoundry.sh) to compile and deploy the code. Stores the deployed address to be used later.
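In practice this is just shelling out to forge; a rough sketch (illustrative names and paths, not the exact orchestrator code):
```js
// Rough sketch: compile and deploy Messenger.sol with forge, then capture
// the "Deployed to:" address from its output. The cwd is an assumption
// about where the forge project lives relative to the script.
import { exec } from "child_process";
import util from "util";

const run = util.promisify(exec);

async function deploy(rpc, privateKey) {
  const { stdout } = await run(
    `forge create --rpc-url ${rpc} --private-key ${privateKey} src/Messenger.sol:Messenger`,
    { cwd: "chains/evm" }
  );
  return stdout.match(/Deployed to: (0x[0-9a-fA-F]{40})/)[1];
}
```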
## Register Chain
Takes the deployed address from the target chain and registers it on the source chain. No Wormhole interaction is necessary for this step.
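For illustration, a registration call might look like the following sketch (assuming ethers v5; `sourceMessenger`, `targetChainId`, and `targetAddress` are hypothetical names). Wormhole identifies emitters by 32-byte addresses, so the 20-byte EVM address is left-padded with zeros:
```js
// Hypothetical sketch (ethers v5): register the target chain's Messenger as
// the only contract the source chain's Messenger will accept messages from.
import { ethers } from "ethers";

async function registerSibling(sourceMessenger, targetChainId, targetAddress) {
  // EVM addresses are 20 bytes; Wormhole emitter addresses are bytes32,
  // so left-pad with zeros.
  const emitter = ethers.utils.hexZeroPad(targetAddress, 32);
  const tx = await sourceMessenger.registerApplicationContracts(
    targetChainId, // Wormhole chain ID, e.g. 2 for evm0, 4 for evm1
    emitter
  );
  await tx.wait();
}
```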
## Send Msg
Calls the `sendMsg()` function on the source chain, which publishes the payload through the Wormhole core bridge. Once the guardian has observed and signed the message, fetches the resulting VAA and stores it.
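In sketch form (illustrative names; assumes ethers v5, node-fetch, and the guardian REST address from xdapp.config.json), the send-and-fetch flow looks roughly like this:
```js
// Rough sketch of send-and-fetch (not the exact orchestrator code): publish
// a message, then fetch the signed VAA from the guardian's REST API. Real
// code retries until the guardian has observed and signed the message.
import { ethers } from "ethers";
import fetch from "node-fetch";

async function sendAndFetchVaa(messenger, emitterChainId, restAddress) {
  const tx = await messenger.sendMsg(
    Buffer.from("From: evm0\nMsg: Hello World!")
  );
  await tx.wait();

  // Emitters are identified by their zero-padded 32-byte address (no 0x).
  const emitter = ethers.utils.hexZeroPad(messenger.address, 32).substring(2);
  // Sequence 0 works for the first message; real code parses the sequence
  // out of the transaction logs.
  const url = `${restAddress}/v1/signed_vaa/${emitterChainId}/${emitter}/0`;
  const { vaaBytes } = await (await fetch(url)).json();
  return vaaBytes; // base64-encoded signed VAA
}
```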
## Submit VAA
Manually relays the VAA to the target chain.
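A manual relay is just a normal transaction on the target chain; a sketch (again with illustrative names):
```js
// Sketch: decode the base64 VAA fetched above and hand it to
// receiveEncodedMsg() on the target chain's Messenger contract.
async function submitVaa(targetMessenger, vaaBase64) {
  const tx = await targetMessenger.receiveEncodedMsg(
    Buffer.from(vaaBase64, "base64")
  );
  await tx.wait();
  // The relayed payload is now the target chain's current message:
  console.log(await targetMessenger.getCurrentMsg());
}
```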
## Get Current Msg
Returns the chain's currently stored message.

View File

@ -0,0 +1,105 @@
# Messenger.sol
Messenger.sol is an application contract on EVM that is capable of communicating with the Wormhole core bridge.
We start by hard-coding the Wormhole core bridge address and creating an interface handle to it.
```solidity
//SPDX-License-Identifier: Unlicense
pragma solidity ^0.8.0;
import "./Wormhole/IWormhole.sol";
contract Messenger {
string private current_msg;
address private wormhole_core_bridge_address = address(0xC89Ce4735882C9F0f0FE26686c53074E09B0D550);
IWormhole core_bridge = IWormhole(wormhole_core_bridge_address);
// Used to calculate the Sequence for each message sent from this contract
uint32 nonce = 0;
// Chain ID => Application Contract mapping to ensure we only process messages from contracts we want to.
mapping(uint16 => bytes32) _applicationContracts;
address owner;
// Track which messages we've already processed so we don't double process messages.
mapping(bytes32 => bool) _completedMessages;
}
```
## Constructor
Nothing fancy, just setting the owner of the contract to the deployer. The owner is used later to register sibling contracts on foreign chains.
```solidity
constructor(){
owner = msg.sender;
}
```
## SendMsg
Takes in a bytes payload and calls the Wormhole Core Bridge to publish the bytes as a message.
The `publishMessage()` function of the core bridge takes three arguments:
- Nonce: a number that uniquely identifies this message, used to make sure the target chain doesn't double-process the same message
- Payload: the bytes payload
- Confirmations: the number of blocks the guardians should wait before signing this VAA. For low-security applications this number can be low, but if you're on a chain that often reorgs many blocks (like Polygon), you might want to set it high enough to ensure your source-chain transaction doesn't get lost after the guardians sign it.
```solidity
function sendMsg(bytes memory str) public returns (uint64 sequence) {
sequence = core_bridge.publishMessage(nonce, str, 1);
nonce = nonce+1;
}
```
## ReceiveEncodedMsg
`receiveEncodedMsg()` takes in a VAA as bytes. It then calls the Core Bridge to verify that the signatures match those of the guardians, that the VAA comes from a contract on a foreign chain we actually want to listen to, and that the message hasn't been processed already. If all those checks pass, we can decode the payload (in this case we know it's a string) and set the contract's current_msg to that payload.
```solidity
function receiveEncodedMsg(bytes memory encodedMsg) public {
(IWormhole.VM memory vm, bool valid, string memory reason) = core_bridge.parseAndVerifyVM(encodedMsg);
//1. Check Wormhole Guardian Signatures
// If the VM is NOT valid, will return the reason it's not valid
// If the VM IS valid, reason will be blank
require(valid, reason);
//2. Check if the Emitter Chain contract is registered
require(_applicationContracts[vm.emitterChainId] == vm.emitterAddress, "Invalid Emitter Address!");
//3. Check that the message hasn't already been processed
require(!_completedMessages[vm.hash], "Message already processed");
_completedMessages[vm.hash] = true;
//Do the thing
current_msg = string(vm.payload);
}
```
## GetCurrentMsg
A simple method that returns the currently stored message.
```solidity
function getCurrentMsg() public view returns (string memory){
return current_msg;
}
```
## RegisterApplicationContracts
Generally, you want to register and track which contracts on foreign chains you accept VAAs from, because anyone could deploy a contract and emit a VAA that looks like one you want to accept.
```solidity
/**
    Registers its sibling applications on other chains as the only ones that can send messages to this instance
*/
function registerApplicationContracts(uint16 chainId, bytes32 applicationAddr) public {
require(msg.sender == owner, "Only owner can register new chains!");
_applicationContracts[chainId] = applicationAddr;
}
```

View File

@ -0,0 +1,23 @@
# EVM Messenger
The EVM messenger project is a very simple contract that sends messages from a contract on one EVM chain to its sibling contract on another. Before you get started with this project, make sure you have a local Wormhole Guardian Network running (either [WLV](../../development/wormhole-local-validator.md) or [Tilt](../../development/tilt/overview.md)). If you're running WLV, you'll also need to spin up evm0 and evm1 so we have two EVM chains to send messages back and forth.
Let's break down the files you're going to find in [evm-messenger](https://github.com/certusone/xdapp-book/tree/main/projects/evm-messenger) folder.
### Chains
First, the `chains/` folder contains the source code that's actually deployed to the EVM chains. The `evm/` folder found inside was generated using [`forge init`](https://getfoundry.sh). There are two files of note in this folder: `src/Wormhole/IWormhole.sol` and `src/Messenger.sol`.
The IWormhole file is the Wormhole Core Bridge interface, and it's required if your app wants to be able to talk to the Wormhole Core Bridge. It outlines the functions and return values you can expect from the Wormhole contract.
The second file, Messenger, is covered in our breakdown of the EVM code [here](./messenger.md).
### Tests
We have a very simple test script written in bash. It's less of a test script and more of a happy-path walkthrough. It makes use of Orchestrator.js (see below) to call the functions on our EVM contract in order.
We first deploy the code, register the applications on each chain, and then send a message.
### Orchestrator
Orchestrator is a very simple JS client that takes arguments from the command line to call the various functions on our contract. We break down everything the orchestrator does [here](./client.md).
### xdapp.config.json
This maintains constants such as the chains' RPC endpoints and the private keys used to deploy code, and also includes the Wormhole RPC endpoint.
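A script can pull these values straight out of the config; a minimal sketch (field names follow the xdapp.config.json files shown elsewhere in this commit):
```js
// Minimal sketch: load xdapp.config.json and read the evm0 network settings.
import fs from "fs";

const config = JSON.parse(fs.readFileSync("xdapp.config.json", "utf8"));
const { rpc, privateKey, bridgeAddress } = config.networks.evm0;
const wormholeRest = config.wormhole.restAddress; // e.g. http://localhost:7071
```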

View File

@ -1,2 +0,0 @@
# Messenger

View File

@ -1,14 +0,0 @@
# Messenger Preqs
## EVM
- Foundry
## Algorand
- Python3
- pip3 install -r requirements.txt
## Solana
- Rust
- Anchor > 0.24.0
- Solana Cli > 1.9.14

View File

@ -1,5 +1,2 @@
 # Projects
+The projects for this repository are located [here](https://github.com/certusone/xdapp-book/tree/main/projects).
-## xDapp Starter
-## Messenger