* add cache to protocols stats endpoint
add mock for cache
add unit test case for cache miss
change from to current
* add configs
* add missing config for api-service.yaml
* add cache ttl for staging-mainnet
* [Issue:1052] Create job for fetching contributor stats and storing in db
revert unnecessary changes on api/handlers/stats
revert changes in go.mod and go.sum
revert change in go.work
add schedule for contributors stats job
change response parsing order
changes due to draft-pr review
move on with contributors activity implementation
change to every hour
fix typo
change contributor stats implementation to do a single write transaction
normalize contributors activity timestamps to UTC
add cronjob schedule for contributors
[Issue:1052][Part 2] Create endpoint to expose contributors stats and activities (#1123)
* add endpoint for retrieving stats and activity
* remove model.go file and move types to service file
* add unit tests to contributors service
* integrate new contributors controller
* fix more stuff
fix unit-tests
changes due to pr review
fix query
fix unit-tests
fix total_value_secure
move constants to common pkg
remove extra changes
rename contributor to protocols
finish renames
Changes for deployment
adjust for the different response types returned by different protocols' contributors
fix controller test
big refactor in activity job and stats job since protocols are returning different formats
api responding fine
remove unnecessary generics
target dbconsts
fix
Delete deploy/common/env/staging-mainnet.env
undo unwanted changes
re-add staging-mainnet.env
fix unit-tests
add missing protocols_stats/activity_version
remove property protocols_json
fix JOB_ID env var in protocols-activity.yaml
fix typos in env vars configs
change to numbers
changes due to own review
add new line
* add swagger docs
* migrate vaa to globaltransaction origintx
* Add deploy configuration for job to migrate vaas to originTx
* Add option to run migration process by range of date
* Update go dependencies for jobs
* Fix kind of job in deployment
---------
Co-authored-by: Fernando Torres <fert1335@gmail.com>
### Description
This pull request removes duplicated code related to MongoDB connection/disconnection attempts. This code was copied across all 8 microservices.
The functionality is now unified under the `common/dbutil` package.
### Description
Tracking issue: https://github.com/wormhole-foundation/wormhole-explorer/issues/569
This pull request adds support for the "Base" blockchain in different parts of the codebase:
* The functions `domain.TranslateEmitterAddress`, `domain.EncodeTrxHashByChainID` and `domain.DecodeNativeAddressToHex`.
* The `tx-tracker` service: it now connects to a Base RPC node to fetch origin transaction metadata.
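Since Base is EVM-compatible, translating its emitter addresses likely follows the usual EVM convention: a Wormhole emitter address is a 32-byte value, and the native address is its last 20 bytes rendered as 0x-prefixed hex. The helper below is an illustration of that convention, not the actual `domain.TranslateEmitterAddress` implementation.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// translateEVMEmitterAddress converts a 32-byte Wormhole emitter address
// (hex-encoded) into an EVM native address by keeping the last 20 bytes.
// Hypothetical sketch; the real function dispatches on chain ID.
func translateEVMEmitterAddress(emitterHex string) (string, error) {
	b, err := hex.DecodeString(strings.TrimPrefix(emitterHex, "0x"))
	if err != nil {
		return "", err
	}
	if len(b) != 32 {
		return "", fmt.Errorf("expected 32 bytes, got %d", len(b))
	}
	return "0x" + hex.EncodeToString(b[12:]), nil
}

func main() {
	native, err := translateEVMEmitterAddress(
		"00000000000000000000000067e8a40816a983fbe3294aaebd0cc2391815b86b")
	fmt.Println(native, err)
}
```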
Parse amount from metadata fields
Modify parser client to ParseWithMetadata
Add support to new vaa payload parser parse endpoint
Co-authored-by: Fernando Torres <fert1335@gmail.com>
* redis prefix support for caches
* fly support for prefix
* unit tests
* redis prefix for notional cache updater
* fix test
* fix tests
* use redis-prefix from config map
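The idea behind the prefix change can be sketched as follows: every cache key is namespaced with a per-environment prefix taken from the config map, so multiple deployments can share one Redis instance without key collisions. The key layout below is illustrative only.

```go
package main

import "fmt"

// prefixedKey namespaces a cache key with an environment prefix.
// An empty prefix leaves the key unchanged, preserving the old behavior.
// Hypothetical helper; the real cache client composes keys internally.
func prefixedKey(prefix, key string) string {
	if prefix == "" {
		return key
	}
	return prefix + ":" + key
}

func main() {
	fmt.Println(prefixedKey("staging-mainnet", "wormscan:notional"))
}
```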
### Description
Tracking issue: https://github.com/wormhole-foundation/wormhole-explorer/issues/451.
The Wormhole Scan UI needs to provide a link to the emitter address for each VAA.
This pull request adds a field containing decoded emitter addresses in the following endpoints:
* `GET /api/v1/transactions`: field `emitterNativeAddress`
* `GET /api/v1/vaas*`: field `emiterNativeAddr`
Add txHash encoding backfiller
Handle txHash base58 encoding for solana in tx-tracker
Add temporary field _originTxHash in vaas and vaaIdTxHash collections as a backup
Add cobra to fly backfiller
Co-authored-by: walker-16 <agpazos85@gmail.com>
### Summary
Tracking issue: https://github.com/wormhole-foundation/wormhole-explorer/issues/385
This pull request implements a new endpoint, `GET /api/v1/transactions`, which will be consumed by the wormhole explorer UI.
The endpoint returns a paginated list of transactions, in which each element contains a brief overview of the transaction (ID, txHash, status, etc.).
It exposes offset-based pagination via the parameters `page` and `pageSize`. Also, results can be obtained for a specific address by using the `address` query parameter.
The response model looks like this:
```json
{
  "transactions": [
    {
      "id": "1/5ec18c34b47c63d17ab43b07b9b2319ea5ee2d163bce2e467000174e238c8e7f/12965",
      "timestamp": "2023-06-08T19:30:19Z",
      "txHash": "a302c4ab2d6b9a6003951d2e91f8fdbb83cfa20f6ffb588b95ef0290aab37066",
      "originChain": 1,
      "status": "ongoing"
    },
    {
      "id": "22/0000000000000000000000000000000000000000000000000000000000000001/18308",
      "timestamp": "2023-06-08T19:17:14Z",
      "txHash": "00000000000000000000000000000000000000000000000000000000000047e7",
      "originChain": 22,
      "destinationAddress": "0x00000000000000000000000067e8a40816a983fbe3294aaebd0cc2391815b86b",
      "destinationChain": 5,
      "tokenAmount": "0.12",
      "usdAmount": "0.12012",
      "symbol": "USDC",
      "status": "completed"
    },
    ...
  ]
}
```
### Limitations of the current implementation
1. Doesn't return the total number of results (computing it could cause a performance issue when filtering by address)
2. Can only filter by receiver address (we don't have sender information in the database yet)
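The offset-based pagination described above can be sketched as a translation from `page`/`pageSize` into a MongoDB-style skip/limit pair. The validation bounds here are assumptions for illustration; the real handler may use different defaults.

```go
package main

import "fmt"

// pagination converts page/pageSize query parameters into skip/limit
// values for the database query. Hypothetical sketch of the handler logic.
func pagination(page, pageSize int64) (skip, limit int64, err error) {
	if page < 0 || pageSize < 1 {
		return 0, 0, fmt.Errorf("invalid pagination: page=%d pageSize=%d", page, pageSize)
	}
	return page * pageSize, pageSize, nil
}

func main() {
	// Page 2 with 50 results per page skips the first 100 documents.
	skip, limit, _ := pagination(2, 50)
	fmt.Println(skip, limit)
}
```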
When initializing the notional cache, if there was more than one page of results, the client would run into an infinite loop.
This issue was causing several services to stall on startup in the staging environment (API, analytics, etc).
## Description
Previously, the notional cache was using the `float64` type to manipulate price data. Since floating point types can't represent price data accurately, this commit changes the codebase to use a lossless representation (i.e.: `decimal.Decimal`).
### Summary
Tracking issue: https://github.com/wormhole-foundation/wormhole-explorer/issues/344
Before this pull request, there were two separate token databases (one being used by the InfluxDB backfiller, and another one in the `common/` module being used by the analytics service).
Having two different token databases resulted in inconsistencies, due to each of these databases containing different tokens.
This PR unifies those two databases into a single one, under the `common/` module.
### Summary
Tracking issue: https://github.com/wormhole-foundation/wormhole-explorer/issues/276
This pull request implements the endpoint `GET /api/v1/top-assets-by-volume`, which returns the assets that have the highest volume. Internally, the endpoint uses data summarized daily to speed up query execution times.
This endpoint has a mandatory query parameter named `timerange`, which must be set to `7d`, `15d` or `30d`.
### Summary
In order to compute volume metrics for each token, we need metadata about the token that is not present in the VAAs (e.g.: decimals, symbol).
That information is statically defined in `common/domain/tokenbridge.go`. This pull request adds the most relevant tokens (i.e.: the tokens that contribute the most volume) to the existing definitions. It is very likely that more token definitions will be added in the future as needed.
The token metadata that was previously defined in the `notional` package (symbols, coingecko IDs) was also moved to `common/domain/tokenbridge.go`.
Tracking issue https://github.com/wormhole-foundation/wormhole-explorer/issues/281
### Summary
This pull request adds volume metrics to influxdb. Also, it adds the 24h volume metric to `GET /api/v1/scorecards`.
Tracking issues: https://github.com/wormhole-foundation/wormhole-explorer/issues/221, https://github.com/wormhole-foundation/wormhole-explorer/issues/280
### Changes:
* The `parser` service no longer generates metrics for influxdb. All metrics-related code was removed from that service and moved to the analytics service instead.
* New volume metrics were added to the analytics service.
* The notional cache was modified to use token names (i.e.: ticker symbols) as keys instead of chain IDs.
* The notional cache reader was moved to the `common/client/cache` package.
* A little bit of duplicated code between the cache reader and writer was removed.
* A 24h volume metric was added to `GET /api/v1/scorecards`.
* A dictionary that stores token metadata was added under `common/domain/tokenbridge.go`. More tokens will be added to it in the near future.
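The cache-key change in the list above (chain IDs replaced by ticker symbols) can be sketched as follows. The exact key layout is an assumption for illustration purposes; only the switch to symbol-based keys is stated in the changes.

```go
package main

import (
	"fmt"
	"strings"
)

// notionalKey builds a notional-cache key from a token's ticker symbol
// instead of a chain ID, so one price entry serves a token across all
// chains it exists on. The key format shown is hypothetical.
func notionalKey(symbol string) string {
	return "WORMSCAN:NOTIONAL:SYMBOL:" + strings.ToUpper(symbol)
}

func main() {
	fmt.Println(notionalKey("usdc"))
}
```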