rm dupe bigtable docs

justinschuldt 2022-03-10 14:02:45 -06:00 committed by Justin Schuldt
parent 062b339365
commit 9c2485b676
3 changed files with 0 additions and 143 deletions

(Image file diff suppressed; 21 KiB before removal.)

@@ -1,87 +0,0 @@
## Wormhole event BigTable schema
### Row Keys
Row keys contain the MessageID, delimited by colons, like so: `EmitterChain:EmitterAddress:Sequence`.
- `EmitterAddress` is left-padded with `0`s to 32 bytes, then hex encoded.
- `Sequence` is left-padded with `0`s to 16 characters so rows are ordered by the sequence in which they occurred; BigTable rows are sorted lexicographically by row key.
Only row key data is indexed, so BigTable can only be queried by row key. You cannot query based on the value of a column; however, you may filter results based on column values.
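
For illustration, a minimal sketch in Go of building such a row key; the helper function and example values are hypothetical, not the project's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// leftPadZeros pads s with leading zeros until it is width characters long.
func leftPadZeros(s string, width int) string {
	if len(s) >= width {
		return s
	}
	return strings.Repeat("0", width-len(s)) + s
}

// makeRowKey builds "EmitterChain:EmitterAddress:Sequence" as described above:
// the hex-encoded emitter address is padded to 32 bytes (64 hex characters) and
// the sequence to 16 characters so lexicographic order matches numeric order.
func makeRowKey(emitterChain uint16, emitterAddressHex string, sequence uint64) string {
	address := leftPadZeros(strings.TrimPrefix(emitterAddressHex, "0x"), 64)
	seq := leftPadZeros(fmt.Sprintf("%d", sequence), 16)
	return fmt.Sprintf("%d:%s:%s", emitterChain, address, seq)
}

func main() {
	// Hypothetical emitter on chain ID 1 with sequence 42.
	fmt.Println(makeRowKey(1, "c10f79", 42))
}
```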
### Column Families
BigTable requires that columns are within a "Column family". Families group columns that store related data. Grouping columns is useful for efficient reads, as a read may specify which families should be returned (a read sketch follows the list below).
The column families listed below represent data unique to a phase of the attestation lifecycle.
- `MessagePublication` holds data about a user's interaction with a Wormhole contract. Contains data from the Guardian's VAA struct.
- `QuorumState` stores the signed VAA once quorum is reached.
- `TokenTransferPayload` stores the decoded payload of transfer messages.
- `AssetMetaPayload` stores the decoded payload of asset metadata messages.
- `NFTTransferPayload` stores the decoded payload of NFT transfer messages.
- `TokenTransferDetails` stores information about the transfer.
- `ChainDetails` stores chain-native data supplemented from external source(s).
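
A minimal read sketch using the Go BigTable client; the project, instance, table name, and row key below are placeholders. It fetches one message's row and asks for only the `QuorumState` family:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"cloud.google.com/go/bigtable"
)

func main() {
	ctx := context.Background()

	// Placeholder GCP project and instance names.
	client, err := bigtable.NewClient(ctx, "example-project", "example-instance")
	if err != nil {
		log.Fatalf("NewClient: %v", err)
	}
	defer client.Close()
	tbl := client.Open("wormhole-events") // hypothetical table name

	// Hypothetical row key: EmitterChain:EmitterAddress:Sequence.
	rowKey := "1:" + strings.Repeat("0", 58) + "c10f79:0000000000000042"

	// Request only the QuorumState family so large payload columns are not returned.
	row, err := tbl.ReadRow(ctx, rowKey, bigtable.RowFilter(bigtable.FamilyFilter("QuorumState")))
	if err != nil {
		log.Fatalf("ReadRow: %v", err)
	}
	for _, item := range row["QuorumState"] {
		fmt.Printf("%s: %d bytes\n", item.Column, len(item.Value))
	}
}
```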
### Column Qualifiers
Each column qualifier below is prefixed with its column family. (A write sketch showing how a few of these columns might be populated follows the ChainDetails family.)
#### MessagePublication
- `MessagePublication:Version` Version of the VAA schema.
- `MessagePublication:GuardianSetIndex` The index of the active Guardian set.
- `MessagePublication:Timestamp` Timestamp when the VAA was created by the Guardian.
- `MessagePublication:Nonce` Nonce of the user's transaction.
- `MessagePublication:Sequence` Sequence from the interaction with the Wormhole contract.
- `MessagePublication:EmitterChain` The chain the message was emitted on.
- `MessagePublication:EmitterAddress` The address of the contract that emitted the message.
- `MessagePublication:InitiatingTxID` The transaction identifier of the user's interaction with the contract.
- `MessagePublication:Payload` The payload of the user's message.
#### QuorumState
- `QuorumState:SignedVAA` The VAA with the signatures that contributed to quorum.
#### TokenTransferPayload
- `TokenTransferPayload:PayloadId` The payload identifier of the payload.
- `TokenTransferPayload:Amount` The amount of the transfer.
- `TokenTransferPayload:OriginAddress` The address the transfer originates from.
- `TokenTransferPayload:OriginChain` The chain identifier of the chain the transfer originates from.
- `TokenTransferPayload:TargetAddress` The destination address of the transfer.
- `TokenTransferPayload:TargetChain` The destination chain identifier of the transfer.
#### AssetMetaPayload
- `AssetMetaPayload:PayloadId` The payload identifier of the payload.
- `AssetMetaPayload:TokenAddress` The address of the token, left-padded with `0`s to 32 bytes.
- `AssetMetaPayload:TokenChain` The chain identifier of the chain the token is native to.
- `AssetMetaPayload:Decimals` The number of decimals of the token.
- `AssetMetaPayload:Symbol` The ticker symbol of the token.
- `AssetMetaPayload:Name` The name of the token.
#### NFTTransferPayload
- `NFTTransferPayload:PayloadId` The payload identifier of the payload.
- `NFTTransferPayload:OriginAddress` The address the transfer originates from.
- `NFTTransferPayload:OriginChain` The chain identifier of the chain the transfer originates from.
- `NFTTransferPayload:Symbol` The symbol of the NFT.
- `NFTTransferPayload:Name` The name of the NFT.
- `NFTTransferPayload:TokenId` The token identifier of the NFT.
- `NFTTransferPayload:URI` The URI of the NFT.
- `NFTTransferPayload:TargetAddress` The destination address of the transfer.
- `NFTTransferPayload:TargetChain` The destination chain identifier of the transfer.
#### TokenTransferDetails
- `TokenTransferDetails:Amount` The amount transferred.
- `TokenTransferDetails:NotionalUSD` The notional value of the transfer in USD.
- `TokenTransferDetails:OriginSymbol` The symbol of the token sent to Wormhole.
- `TokenTransferDetails:OriginName` The name of the token sent to Wormhole.
- `TokenTransferDetails:OriginTokenAddress` The address of the token sent to Wormhole.
#### ChainDetails
- `ChainDetails:SenderAddress` The native address that sent the message.
- `ChainDetails:ReceiverAddress` The native address that received the message.
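
A write sketch (assuming the Go BigTable client) showing how a few of the `MessagePublication` columns above might be populated for one row; the package name, table handle, and values are placeholders, not the project's actual writer:

```go
package example

import (
	"context"

	"cloud.google.com/go/bigtable"
)

// writeMessagePublication fills a few of the MessagePublication columns for one row.
// A real writer would populate every column listed above; values here are placeholders.
func writeMessagePublication(ctx context.Context, tbl *bigtable.Table, rowKey string) error {
	ts := bigtable.Now()
	mut := bigtable.NewMutation()
	// Each Set call names the column family, column qualifier, timestamp, and value.
	mut.Set("MessagePublication", "Version", ts, []byte{1})
	mut.Set("MessagePublication", "EmitterChain", ts, []byte("1"))
	mut.Set("MessagePublication", "Sequence", ts, []byte("42"))
	mut.Set("MessagePublication", "Payload", ts, []byte("raw-payload-bytes"))
	return tbl.Apply(ctx, rowKey, mut)
}
```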


@@ -1,53 +0,0 @@
# Centralized datastore for Wormhole visualizations
## Objective
Persist transient Guardian events in a database along with on-chain data, for easier introspection via a block-explorer style GUI.
## Background
Events observed and broadcast between Guardians are transient. Before a message is fully attested by the Guardians, an end user has no way to determine where their event is within the attestation lifecycle. Saving the attestation state along with the message identifiers would allow discovery interfaces to be developed.
Building a GUI that allows querying and viewing Wormhole data by a single on-chain identifier would make using Wormhole a friendlier experience. Such a GUI would be difficult to build without an off-chain datastore that captures the entire lifecycle of Wormhole events.
## Goals
- Persist user intent with the relevant metadata (sender address, transaction hash/signature).
- Expose the Guardian network's Verifiable Action Approval state: individual signatures and if/when quorum was reached.
- Record the transaction hash/signature of all transactions performed by Guardians relevant to the User's intent.
- Allow querying by a transaction identifier and retrieving associated data.
## Non-Goals
- Centrally persisted Wormhole data does not aim to be a source of truth.
- Centrally persisted Wormhole data will not be publicly available for programmatic consumption.
## Overview
A Guardian can be configured to publish Wormhole events to a database. This will enable a discovery interface where users can query for Wormhole events, as well as for message counts and statistics.
![Wormhole data flow](Wormhole-data-flow.svg)
## Detailed Design
A Google Cloud BigTable instance will be set up to store data about Wormhole events, with the schema described in the following section. BigTable is preferred because it does not require a global schema and because it handles large amounts of historic data efficiently via row-key sharding.
A block-explorer style web app will use BigTable to retrieve VAA state to create a discovery interface for Wormhole. The explorer web app could allow users to query for Wormhole events by a single identifier, similar to other block explorers, where a user may enter an address or a transaction identifier and see the relevant data.
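
As a sketch of such an explorer query (assuming the Go BigTable client), all messages from one emitter can be listed with a row-key prefix scan, since row keys begin with `EmitterChain:EmitterAddress`; the package, function, and parameter names are illustrative:

```go
package example

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigtable"
)

// listEmitterMessages prints the row key of every message sent by one emitter contract.
// Because row keys begin with EmitterChain:EmitterAddress, this is a simple prefix scan.
func listEmitterMessages(ctx context.Context, tbl *bigtable.Table, chainID, paddedEmitterAddress string) error {
	prefix := chainID + ":" + paddedEmitterAddress + ":"
	return tbl.ReadRows(ctx, bigtable.PrefixRange(prefix),
		func(r bigtable.Row) bool {
			fmt.Println(r.Key()) // keys come back in sequence order thanks to the zero padding
			return true          // keep scanning
		},
		// Fetch only the MessagePublication family to keep responses small.
		bigtable.RowFilter(bigtable.FamilyFilter("MessagePublication")),
	)
}
```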
### API / database schema
BigTable schema: [Wormhole event schema](./bigtable_event_schema.md)
## Caveats
It is undetermined how costly it will be to query for multiple transactions (rows) in the case of bridging tokens, for example retrieving the `assetMeta` transaction along with the `transfer` message transaction.
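
A sketch of that multi-row read (assuming the Go BigTable client): both rows are fetched in a single `ReadRows` call via an explicit row-key list; the package and function names are illustrative.

```go
package example

import (
	"context"
	"fmt"

	"cloud.google.com/go/bigtable"
)

// readTransferWithAssetMeta fetches the transfer row and the related assetMeta row
// in one ReadRows call using an explicit list of row keys.
func readTransferWithAssetMeta(ctx context.Context, tbl *bigtable.Table, transferKey, assetMetaKey string) error {
	keys := bigtable.RowList{transferKey, assetMetaKey}
	return tbl.ReadRows(ctx, keys, func(r bigtable.Row) bool {
		for family, items := range r {
			fmt.Printf("%s / %s: %d columns\n", r.Key(), family, len(items))
		}
		return true
	})
}
```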
## Alternatives Considered
### Database schema
Saving each Protobuf SignedObservation as its own row was considered. However, building a picture of the state of the user's intent with only SignedObservations is not ideal, as the logic to interpret the results would need to come from somewhere, and additional data would need to be sourced.
Using VAA "digest" as BigTable RowKey was considered. Using the VAA digest would make database writes easy within the existing codebase. However, indexing on digest would heavily penalize reads as the digest will not be known to the user, so a full table scan would be required for every user request.