LocalTerra was updated to v2.4.0 and the LCD no longer returns
HTTP errors on some transaction failures (running out of gas still returns
an HTTP error). This broke some of the integration tests, because
the TypeScript SDK would throw on those HTTP errors.
https://github.com/terra-money/LocalTerra/releases/tag/v2.4.0
* Tilt devnet deployment for IBC generic messaging
* Address review comments from kcsongor and hendrikhofstadt
* Add IBC channel whitelist updates to wormchain and terra devnet deploy scripts
* VAAs had guardian set index three instead of zero
* ci: update addresses
* Remove message.block_height and message.tx_index from attributes
* Remove unnecessary contracts from terra2 devnet deployment
* Update wormhole-ibc address on terra2
* Update wormhole-ibc guardian set on terra2 devnet deployment
* IBC relayer testnet deployment fixes
* Wormchain update whitelist fix
---------
Co-authored-by: Bruce Riley <briley@jumptrading.com>
Co-authored-by: Evan Gray <battledingo@gmail.com>
* cosmwasm: add wormchain-ibc-receiver and wormhole-ibc contracts
* Address review comments from jynnantonix and hendrikhofstadt
* Fix lint errors and test failures
* Update naming to reflect new mapping of channelId -> chainId
* Return errors in ibc handlers that should never be called
* Remove contract name and version logic from migration handlers
* Add query handlers to wormhole-ibc contract
* Add wormchain channel id whitelisting to wormhole-ibc contract
* Increase packet timeout to 1 year
* Rebase on main, update imports to new names
* Add governance replay protection to both contracts
* wormhole_ibc SubmitUpdateChannelChain should only handle a single VAA
* Better error messages
* Better logging and strip null characters from the channel_id in the governance VAA
* Add brackets back for empty query methods
* Update Cargo.lock
* Only send wormhole wasm event attributes via IBC and add attribute whitelist on the receiver end
* tilt: fix terra2 deploy
* Update based on comments from jynnantonix
---------
Co-authored-by: Evan Gray <battledingo@gmail.com>
* sdk: update wormhole-core to wormhole-sdk and fix lib name to be wormhole_sdk
* cosmwasm: update wormhole and token bridge cosmwasm package/lib names
* Fix terra2 deployment script with new artifact names
While sending tokens to another address on the same chain via Wormhole
is quite inefficient, it's not strictly disallowed, and we do have some
VAAs on Solana that do this. Explicitly check for this case and allow
it.
There are no restrictions on native tokens sent in this way, but wrapped
transfers are still subject to some checks: the wrapped account for
the token must exist and it must have a balance larger than the amount
being transferred. If either of those checks fails, it means the sender
acquired wrapped tokens that did not go through the accountant, so the
transfer should be blocked and require manual intervention.
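As a rough sketch of that check (names and the exact balance comparison here
are illustrative, not the actual accountant code):

```rust
// Hypothetical sketch only; `WrappedAccount` and `check_same_chain_transfer`
// are made-up names and the comparison may differ from the contract.
struct WrappedAccount {
    balance: u64,
}

fn check_same_chain_transfer(
    wrapped: Option<&WrappedAccount>,
    amount: u64,
) -> Result<(), String> {
    match wrapped {
        // The wrapped account must exist and cover the transferred amount.
        Some(acc) if acc.balance >= amount => Ok(()),
        // Otherwise the sender holds wrapped tokens that never went through
        // the accountant, so the transfer needs manual intervention.
        Some(_) => Err("wrapped balance does not cover the transfer".into()),
        None => Err("wrapped account does not exist".into()),
    }
}

fn main() {
    let acc = WrappedAccount { balance: 100 };
    assert!(check_same_chain_transfer(Some(&acc), 50).is_ok());
    assert!(check_same_chain_transfer(Some(&acc), 500).is_err());
    assert!(check_same_chain_transfer(None, 1).is_err());
}
```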
Now that we can calculate the digest of an Observation there's no need
to store the whole thing on-chain. Instead only store the observation
digest, tx_hash, and emitter chain (the tx_hash is necessary because
it's not included in the digest and the emitter chain is used for
servicing missing observation queries). When adding new observations
we can check for equality by comparing the digests and tx hashes rather
than comparing the whole object.
This should further reduce the size of the on-chain state.
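In sketch form (field names and types here are assumptions, not the contract's
actual definitions), the stored record and the equality check look something
like:

```rust
// Illustrative only; the real contract's types and field names may differ.
#[derive(Clone, PartialEq, Eq)]
struct StoredObservation {
    digest: [u8; 32],   // digest of the full Observation
    tx_hash: Vec<u8>,   // not covered by the digest, so stored separately
    emitter_chain: u16, // needed to answer missing-observation queries
}

impl StoredObservation {
    /// Two observations refer to the same transfer when both the digest and
    /// the tx hash match; the full Observation is never needed on-chain.
    fn matches(&self, other: &Self) -> bool {
        self.digest == other.digest && self.tx_hash == other.tx_hash
    }
}

fn main() {
    let a = StoredObservation {
        digest: [0; 32],
        tx_hash: vec![1, 2, 3],
        emitter_chain: 2,
    };
    assert!(a.matches(&a.clone()));
}
```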
When submitting observations to the accounting contract, clients sign
the entire batch once. There's no point storing this signature in the
on-chain data for each observation because it's already stored as part
of the chain's transaction history and the signature would be different
if an observation was submitted as part of a different batch (or the
same batch in a different order) even if the observation itself didn't
change.
Also, nothing actually made use of this signature data. (Yes,
technically it was returned by some queries but the usefulness of
the signature by itself is questionable without the full batch of
observations that were signed).
All we really care about is the index of the guardian anyway so use
a bitset to keep track of the indices of all the guardians that have
signed an observation. We use a u128 for the bitset out of an abundance
of caution in case the number of guardians increases in the future.
Dealing with more than 128 guardians is left as a problem for future
wormhole contributors if we ever get to that point.
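A minimal sketch of the bitset (the name `SignatureSet` is illustrative, not
the contract's actual type):

```rust
// A u128 leaves room for up to 128 guardian indices.
#[derive(Default)]
struct SignatureSet {
    bits: u128,
}

impl SignatureSet {
    fn add(&mut self, guardian_index: u8) {
        debug_assert!(guardian_index < 128);
        self.bits |= 1u128 << guardian_index;
    }

    fn contains(&self, guardian_index: u8) -> bool {
        self.bits & (1u128 << guardian_index) != 0
    }

    fn count(&self) -> u32 {
        self.bits.count_ones()
    }
}

fn main() {
    let mut set = SignatureSet::default();
    set.add(0);
    set.add(12);
    assert!(set.contains(12) && !set.contains(5));
    assert_eq!(set.count(), 2);
}
```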
When submitting a batch of observations, we don't want an observation
for an already committed transfer to fail the entire batch. This leads
to more complexity in the guardian and also delays all the legitimate
observations by at least one more block (~5 seconds).
Fix this by returning the transfer status of each observation as part
of the response data. Observations for committed transfers will get
a `TransferStatus::Committed` response without failing the tx as long
as the digest of the observation matches the digest of the committed
transfer. Digest mismatches are still an error and will fail the entire
batch.
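A hedged sketch of that behaviour (type and variant names here are
illustrative, not the accountant's actual schema):

```rust
#[derive(Debug, PartialEq)]
enum TransferStatus {
    Pending,
    Committed,
}

struct Observation {
    digest: [u8; 32],
}

struct CommittedTransfer {
    digest: [u8; 32],
}

fn handle_observation(
    obs: &Observation,
    committed: Option<&CommittedTransfer>,
) -> Result<TransferStatus, String> {
    match committed {
        // Already committed with a matching digest: report it, don't fail.
        Some(c) if c.digest == obs.digest => Ok(TransferStatus::Committed),
        // A digest mismatch is still a hard error.
        Some(_) => Err("digest mismatch".into()),
        // Not committed yet: the observation stays pending.
        None => Ok(TransferStatus::Pending),
    }
}

fn handle_batch(
    batch: &[(Observation, Option<CommittedTransfer>)],
) -> Result<Vec<TransferStatus>, String> {
    // Collecting into Result makes a digest mismatch fail the whole batch,
    // while already-committed transfers simply show up as `Committed`.
    batch
        .iter()
        .map(|(obs, committed)| handle_observation(obs, committed.as_ref()))
        .collect()
}

fn main() {
    let obs = Observation { digest: [7; 32] };
    let committed = CommittedTransfer { digest: [7; 32] };
    let statuses = handle_batch(&[(obs, Some(committed))]).unwrap();
    assert_eq!(statuses, vec![TransferStatus::Committed]);
}
```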
Add the payload as an explicit field to the `TransferWithPayload` enum
variant. This is a generic parameter that defaults to `Box<RawMessage>`
for maximum flexibility (and to avoid leaking lifetimes higher up the
stack) but users are encouraged to replace this default type parameter
with an explicit `&RawMessage` in places where the serde_wormhole data
format is used.
The main benefit of this change is that the payload is now included as
part of the actual message and no longer requires callers to awkwardly
append it after serialization. This is especially useful in human-
readable formats like JSON (see the `transfer_with_payload` test in
token.rs for an example of this simplification).
The main downside is that this now requires explicit type annotations
when using the non-payload3 variants so that the compiler will pick up
the default generic parameter. This is a relatively minor inconvenience
and the benefit appears to be worth the cost.
There should be no functional change.
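For illustration, a simplified sketch of the shape described above (the
variant set is abbreviated and `RawMessage` is stood in by a plain byte
slice, so this is not the SDK's actual definition):

```rust
type RawMessage = [u8]; // stand-in for the SDK's RawMessage type

#[allow(dead_code)]
enum Message<P = Box<RawMessage>> {
    Transfer { amount: u64 },
    TransferWithPayload { amount: u64, payload: P },
}

fn main() {
    // Non-payload3 variants need an explicit annotation so the compiler
    // picks up the default parameter for `P`...
    let transfer: Message = Message::Transfer { amount: 1 };

    // ...while payload3 messages can borrow the payload directly, which is
    // what `&RawMessage` enables where the serde_wormhole format is used.
    let with_payload = Message::TransferWithPayload {
        amount: 1,
        payload: &b"hello"[..],
    };
    let _ = (transfer, with_payload);
}
```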
- updates terra2 devnet chain timeout_commit to "1s" since the timeout_commit of "0.5s" is too fast and leads to Terra2's clock going into the future.
- updates terra2 devnet chain unbonding_time to "1814400s" which is the default value and translates to a valid trusting period for IBC connectivity.
The RawMessage type provides a more flexible way to handle trailing
payloads so replace all usage of the `*_with_payload` functions to use
`RawMessage` instead.
There should be no functional change.
Add a RawMessage type that can be used to defer parsing parts of a
payload, similar to the `json.RawMessage` from Go. The implementation
is inspired by `serde_json::RawValue`, which does a similar thing.
When serializing, RawMessage will serialize to a base64-encoded string
if it detects that the data format is human readable (like JSON).
Otherwise it will simply forward the raw bytes to the serializer.
RawMessage has both borrowed and boxed versions. The borrowed version
is the most efficient as it enables zero-copy handling of the input data
but also requires that the input data already contains raw bytes and is
not suitable when dealing with human-readable formats like JSON.
The boxed version is more flexible as it supports byte slices, base64-
encoded strings, and byte sequences but is slightly less efficient as it
requires copying or decoding the input data.
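A sketch of the human-readable branch described above (this mirrors the idea
rather than the SDK's actual implementation, and assumes the `serde` crate and
the `base64` crate's top-level `encode` helper):

```rust
use serde::{Serialize, Serializer};

// Illustrative borrowed raw-message type; the real RawMessage also has a
// boxed form and a Deserialize impl that accepts bytes or base64 strings.
struct RawBytes<'a>(&'a [u8]);

impl Serialize for RawBytes<'_> {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        if serializer.is_human_readable() {
            // JSON-like formats get a base64-encoded string...
            serializer.serialize_str(&base64::encode(self.0))
        } else {
            // ...while binary formats receive the raw bytes untouched.
            serializer.serialize_bytes(self.0)
        }
    }
}
```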
Use cw_transcode to ensure that event attribute values are always
encoded as proper JSON, making it easier for clients to parse them back
into structured data.
This also lets us reuse the input messages for the events, reducing the
number of different structs that we need to track.
Rather than forcing clients to guess whether a transfer is pending or
committed, use a single `TransferStatus` query that returns whether
the transfer is still pending or already committed.
This will make it easier for clients to keep the pending and committed
transfer state in sync to avoid unnecessary overhead.
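A hedged sketch of what such a query might look like (names are illustrative,
not the contract's exact schema):

```rust
struct TransferKey {
    emitter_chain: u16,
    emitter_address: [u8; 32],
    sequence: u64,
}

enum QueryMsg {
    // A single query answers both questions, so clients no longer have to
    // issue separate pending/committed lookups and reconcile the results.
    TransferStatus(TransferKey),
}

enum TransferStatusResponse {
    Pending,   // the transfer is still awaiting quorum
    Committed, // the transfer has already been committed
}
```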
The cw_transcode crate provides a way to transcode any arbitrary Rust
struct into a `cosmwasm_std::Event` via that struct's `Serialize` impl,
ensuring that the event attribute values are encoded as proper JSON.
This will make it easier for client code to parse the event back into
structured data without having to write custom parsing code for each
individual event type.
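To illustrate the idea (this mirrors the mechanism rather than cw_transcode's
actual API, assumes serde/serde_json, and the `Commitment` struct is a made-up
example):

```rust
use serde::Serialize;

// Made-up example struct standing in for a real contract message.
#[derive(Serialize)]
struct Commitment {
    emitter_chain: u16,
    sequence: u64,
    amount: String,
}

// Turn any Serialize value into (key, JSON value) attribute pairs, the same
// shape that ends up on a `cosmwasm_std::Event`.
fn to_attributes<T: Serialize>(value: &T) -> Result<Vec<(String, String)>, serde_json::Error> {
    let map: serde_json::Map<String, serde_json::Value> =
        serde_json::from_value(serde_json::to_value(value)?)?;
    map.into_iter()
        .map(|(k, v)| Ok((k, serde_json::to_string(&v)?)))
        .collect()
}

fn main() -> Result<(), serde_json::Error> {
    let c = Commitment { emitter_chain: 2, sequence: 7, amount: "1000".into() };
    for (key, value) in to_attributes(&c)? {
        // Every attribute value is valid JSON, e.g. `amount: "1000"`.
        println!("{key}: {value}");
    }
    Ok(())
}
```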
When we fail to handle an observation in a batch, include the transfer
key as part of the error context so that it's easier to figure out which
observation caused the error.
Add a query for guardians to check if there are any pending transfers
with missing observations. The guardians can use this information to
trigger re-observations of those transactions.
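A rough sketch of the query shape, with illustrative names (the real
contract's schema may differ):

```rust
struct MissingObservationsQuery {
    guardian_set: u32, // guardian set the caller belongs to
    index: u8,         // the calling guardian's index within that set
}

struct MissingObservation {
    chain_id: u16,
    tx_hash: Vec<u8>, // enough information for the guardian to re-observe
}

struct MissingObservationsResponse {
    missing: Vec<MissingObservation>,
}
```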
Ensure that converting `Chain` to/from a u16 or to/from a string is
always isomorphic. This requires changing the `FromStr` impl so that it
can handle strings like "Unknown(27)".
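A self-contained sketch of the round-trip, with the chain list abbreviated to
two variants (the real `Chain` enum covers every Wormhole chain):

```rust
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum Chain {
    Solana,
    Ethereum,
    Unknown(u16),
}

impl std::fmt::Display for Chain {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Chain::Solana => write!(f, "Solana"),
            Chain::Ethereum => write!(f, "Ethereum"),
            Chain::Unknown(id) => write!(f, "Unknown({id})"),
        }
    }
}

impl FromStr for Chain {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "Solana" => Ok(Chain::Solana),
            "Ethereum" => Ok(Chain::Ethereum),
            // Accept the "Unknown(27)" form so Display -> FromStr round-trips.
            _ => s
                .strip_prefix("Unknown(")
                .and_then(|rest| rest.strip_suffix(')'))
                .and_then(|id| id.parse().ok())
                .map(Chain::Unknown)
                .ok_or_else(|| format!("unknown chain: {s}")),
        }
    }
}

fn main() {
    let c: Chain = "Unknown(27)".parse().unwrap();
    assert_eq!(c, Chain::Unknown(27));
    assert_eq!(c.to_string().parse::<Chain>().unwrap(), c);
}
```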
Now that we're keeping track of transfer digests, initializing any on-
chain state through the `InstantiateMsg` doesn't make a lot of sense:
any state initialized this way is unverified and this message doesn't
contain enough information to generate the transfer digests.
Rather than trying to add in the necessary fields, just drop the message
completely since it won't be used in production. It's currently only
used to initialize on-chain state for tests but the same thing can be
accomplished through the `ModifyBalance` and `SubmitVAAs` methods.
Keep track of the digests of committed transfers so that they can be
used later when handling duplicate observations / VAAs. When processing
an observation or VAA with the same (chain, address, sequence) tuple as
a committed transfer, return a "message already processed" error when
the digests match and a "digest mismatch" error when they don't. The
latter implies a very serious issue because transfer details shouldn't
change once they have been observed by a quorum of guardians.
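In sketch form (names and error kinds here are illustrative; the real contract
has richer error types):

```rust
enum DuplicateError {
    AlreadyProcessed, // "message already processed"
    DigestMismatch,   // "digest mismatch"
}

fn classify_duplicate(committed: &[u8; 32], incoming: &[u8; 32]) -> DuplicateError {
    if committed == incoming {
        // Same digest: the message was simply submitted again.
        DuplicateError::AlreadyProcessed
    } else {
        // The same (chain, address, sequence) tuple now carries different
        // details, which should never happen after a quorum of guardians
        // has observed the transfer.
        DuplicateError::DigestMismatch
    }
}
```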
Now that the accounting contract can handle chain registrations on
its own, there's no need to query the tokenbridge contract. Remove
references to it from `InstantiateMsg` and the internal state.
Add support for handling chain registration VAAs for the tokenbridge
contract. This will let us deploy accounting without also having to
deploy the tokenbridge.