- changelog updated
- block stream errors are now handled as a special case of error; retry logic is triggered, but at most 3 times, in case the service is truly down
- the failure is not passed on to the clients, so false-positive errors are reduced, as well as the delay in sync time
[#1351] Recover from block stream issues (#1352)
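The bounded-retry behavior described above can be sketched as follows (illustrative names; this is not the SDK's actual API, just the idea of swallowing up to 3 failures before surfacing one):

```swift
// Sketch of bounded retry for block-stream failures; names are illustrative.
enum StreamError: Error { case serviceDown }

func withBlockStreamRetry<T>(maxAttempts: Int = 3, _ body: () throws -> T) throws -> T {
    var lastError: Error = StreamError.serviceDown
    for _ in 0..<maxAttempts {
        do {
            return try body()
        } catch {
            // Swallow the failure so clients never see a false positive;
            // remember it in case the service is truly down.
            lastError = error
        }
    }
    // Only after all attempts fail is the error surfaced to the caller.
    throw lastError
}
```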
- typo fixed
- changelog update
- the sync time has been reduced by ~33%; the progress reporting frequency has been lowered 5-fold
- this is just the first step and a quick improvement before we introduce an advanced solution, covered in #1353
[#1346] Troubleshooting synchronization (#1354)
- typo fixed
- the logs are split so each entry is not one huge string
- the log method is async
- added a new log with balances
[#1336] Tweaks for sdk metrics
- wait a bit so the logs are sorted by time
[#1336] Tweaks for sdk metrics
- cleanup
[#1336] Tweaks for sdk metrics
- changelog update
[#1336] Tweaks for sdk metrics
- checkpoints updated
[#1336] Tweaks for sdk metrics
- changelog typos fixed
[#1336] Tweaks for sdk metrics
- mocks generated
- the logger has been extended to also log the level
- there is only a partial match between the SDK logger levels, OSLogEntryLogLevel, and OSLogType, so only debug, info, and error are fully matched
- this is the base for the exporter on the client's side
[#1325] Log metrics
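The partial level match mentioned above could be folded into a mapping along these lines (both enums here are simplified stand-ins, not the real SDK or OSLog types; the choice to fold event/warning into info is an assumption for illustration):

```swift
// Illustrative mapping between hypothetical SDK log levels and an
// OSLogType-like set; only debug, info, and error map one-to-one.
enum SDKLogLevel { case debug, info, event, warning, error }
enum OSLevel { case debug, info, error }

func osLevel(for level: SDKLogLevel) -> OSLevel {
    switch level {
    case .debug: return .debug
    // event and warning have no exact OSLog counterpart in this sketch;
    // fold them into the closest fully matched level.
    case .info, .event, .warning: return .info
    case .error: return .error
    }
}
```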
- typos fixed
[#1325] Log metrics
- scan metric logs added
[#1325] Log metrics
- Scan & Enhance logs
[#1325] Log metrics
- checkpoints updated
- every CBP action is measured separately and collects its data; when the sync is done, an overview of the run is dumped to the logger
- the next run clears out the previous data and starts collecting fresh reports
[#1325] Log metrics (#1327)
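A minimal sketch of that collect-then-dump-then-reset cycle (names are illustrative, not the SDK's SDKMetrics API):

```swift
import Foundation

// Per-action sync metrics: each action accumulates its run times, the
// overview is produced when the sync finishes, and a new run starts fresh.
final class SyncMetrics {
    private(set) var durations: [String: [TimeInterval]] = [:]

    func record(action: String, duration: TimeInterval) {
        durations[action, default: []].append(duration)
    }

    /// Called when a new sync run starts; previous reports are dropped.
    func reset() { durations.removeAll() }

    /// Called when the sync finishes; returns the per-action totals that
    /// would be dumped to the logger.
    func overview() -> [String: TimeInterval] {
        durations.mapValues { $0.reduce(0, +) }
    }
}
```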
- changelog update
[#1325] Log metrics (#1327)
- SDKMetrics updated to be mockable
- unit test updated
[#1325] Log metrics (#1327)
- performance tests cleaned out
[#1325] Log metrics (#1327)
- Network tests buildable again
Closes #1315
This PR introduces small changes on each commit.
Things done:
Rename Checkpoint+Constants to Checkpoint+helpers
Move `Checkpoint` from the Model folder to the Checkpoint folder
Remove unused function `ofLatestCheckpoint` from BlockHeight
Create a protocol called `CheckpointSource` that contains the
relevant functionality to get checkpoints from Bundle
Create a set of tests that check that functionality is maintained
when a `CheckpointSource` is used instead of Checkpoint helpers
Implement `BundleCheckpointSource` and add Tests
Code clean up: move `BundleCheckpointURLProvider` to its own file
Code clean up: `Checkpoint+helpers` match file header
Replace use of `Checkpoint.birthday(with:network)` with CheckpointSource
Revert "Remove unused function `ofLatestCheckpoint` from BlockHeight"
addresses PR comment from @daira
This reverts commit d0e154ded7, since it
modifies a public API and it was not the goal of this PR.
Update Sources/ZcashLightClientKit/Checkpoint/BundleCheckpointSource.swift
Use a decent Date Format
Co-authored-by: Daira Emma Hopwood <daira@jacaranda.org>
Improve documentation on BundleCheckpointURLProvider
Co-authored-by: Daira Emma Hopwood <daira@jacaranda.org>
Improve documentation on BundleCheckpointURLProvider
Co-authored-by: Daira Emma Hopwood <daira@jacaranda.org>
use YYYY-mm-dd on file header
author: @daira
Co-authored-by: Daira Emma Hopwood <daira@jacaranda.org>
Add test that verifies that the exact height is returned if available
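The `CheckpointSource` idea above can be sketched like this (the types and member names are simplified stand-ins for the SDK's real `CheckpointSource`/`BundleCheckpointSource`, with an in-memory list standing in for bundled checkpoint files):

```swift
// Sketch: a protocol that hides where checkpoints come from.
struct Checkpoint { let height: Int }

protocol CheckpointSource {
    /// Returns the closest checkpoint at or below the given height.
    func birthday(for height: Int) -> Checkpoint
}

struct InMemoryCheckpointSource: CheckpointSource {
    /// Sorted by height; stands in for checkpoint files in a Bundle.
    /// Assumed non-empty for this sketch.
    let checkpoints: [Checkpoint]

    func birthday(for height: Int) -> Checkpoint {
        // The exact height is returned if available (matching the test
        // added in this PR); otherwise the nearest lower checkpoint wins.
        checkpoints.last { $0.height <= height } ?? checkpoints[0]
    }
}
```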
- checkpoints updated
[#1310] Release 2-0-3
- FFI version bumped
- other dependencies bumped as well
[#1310] Release 2-0-3
- checkpoints mentioned in the changelog
- The enhance action is driven by the lastEnhancedHeight value: the range is computed from it, and blocks are enhanced 1000 at a time. The value wasn't reset with the newly suggested ScanRanges, so when some higher ranges were processed first, all lower heights were skipped
- fixed and covered with a unit test
[#1308] Enhancing seems to not process all ranges (#1309)
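The bug and the fix can be illustrated with a small range computation (hypothetical helper, not the SDK's code): if `lastEnhancedHeight` isn't reset before a lower suggested range, the computed start lands past the range and everything below is skipped.

```swift
// Sketch: derive the next enhance range from lastEnhancedHeight, in batches
// of at most 1000 blocks. Names are illustrative.
func nextEnhanceRange(
    lastEnhancedHeight: Int?,
    scanned: ClosedRange<Int>,
    batchSize: Int = 1000
) -> ClosedRange<Int>? {
    // Resume one past the last enhanced height, but never before the range.
    let start = max(scanned.lowerBound, (lastEnhancedHeight ?? scanned.lowerBound - 1) + 1)
    guard start <= scanned.upperBound else { return nil }
    return start...min(start + batchSize - 1, scanned.upperBound)
}
```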
- changelog update
- the simplest fix for this issue is to set the number of attempts to "infinity"
- a smarter solution will require better retry logic in general, covered in #1304
- There was a reset missing in the rewind() call. This method directly affects the state of the compact block processor, but the ActionContext was handled only in the synchronization pipeline; a manual reset was needed to reset the last enhanced height.
- tests added
- Removed deprecated zip-313 fee of 1000 Zatoshi
- the default is now 10,000 Zatoshi, the minimum defined by zip-317
[#1294] Remove all uses of the incorrect 1000-ZAT fee
- changelog update
- InternalSyncStatus as well as SyncStatus were extended with a stopped state; this is needed to distinguish between upToDate and being stopped via the stop() method, as previously the stopped state of the block processor was mapped to upToDate
- Execution of queries in TransactionSQLDAO was not thread safe. Since all queries go through the execute() method, it now uses a lock
[#1281] Database is locked
- a global lock is used; all Rust backend code accessing the data DB is under the lock
- TransactionSQLDAO is under the same lock as well
[#1281] Database is locked
- refactor + fbDB locked as well
[#1281] Database is locked
- lock around scalars added
[#1281] Database is locked
- comments addressed
- scalarLocked helper implemented
- connection().run and .transition locked as well
[#1281] Database is locked
- db used so it's not called twice
[#1281] Database is locked
- AccountEntity protected via globalDBLock as well
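The global-lock pattern from the commits above can be sketched as a single funnel that every database access goes through (illustrative names; a recursive lock is used so helpers like the scalar wrapper can nest inside an already-locked call):

```swift
import Foundation

// Sketch: one process-wide lock guarding all DB access paths
// (execute(), scalars, connection runs, fsBlockDb, AccountEntity, ...).
let globalDBLock = NSRecursiveLock()

func withDBLock<T>(_ body: () throws -> T) rethrows -> T {
    globalDBLock.lock()
    defer { globalDBLock.unlock() }
    return try body()
}
```

Using NSRecursiveLock rather than a plain lock means a locked helper can safely call another locked helper on the same thread without deadlocking.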
This fixes an issue where the transaction index was incorrectly being
used to filter out transactions from the contents of the
`v_transactions` view, and updates the query to account for the fact
that both the block time and the transaction index may be NULL in the
results of this view.
- latestBlockHeight (the chain tip) reported in the SynchronizerState to the clients is now at most 10 minutes old, typically updated with every sync run (10-30 s)
- fully and max scanned heights update fixed
- new mocks provided and tests fixed
The improved performance provided by `shardtree` is sufficient to allow
100-block scan ranges throughout the sandblasted range, and limiting to
10 blocks results in significant overhead.
A future release will switch to an adaptive strategy which can
dynamically adjust download and scan range sizes based upon observed
output counts and scanning times to provide consistent throughput.
Although this is not the best way to solve it, it addresses the issue
in a single statement and follows an existing pattern in the code.
Ideally the error should be thrown earlier, in RustBackend itself,
or fall back to some specific value that makes sense in the domain.
Closes #1267
- more comments resolved
- totalProgressRange removed from the SDK
- ScanRange now takes the given value into account and initializes properly; tests added
- tests fixed
- cbp_state_machine.png as well as the .puml files updated to reflect the State Machine changes after SbS
- one small cleanup: clearCache no longer needs to be called twice, only after enhance (a missed removal from linear sync)
- CompactBlockProgress has been updated to use syncProgress only
- CompactBlockProgressUpdate removed
- BlockProgress removed
- enhance and fetch progresses removed
- progressPartialUpdate refactored to syncProgress
- tests updated
- getScanProgress is used for reporting the syncing progress
[#1238] Report sync progress with the new getScanProgress
- comment added
[#1238] Report sync progress with the new getScanProgress (#1239)
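The progress reporting described here and in the commits below reduces to a simple ratio of blocks counted by the scan action to the total computed range, independent of the sync algorithm. A minimal sketch (illustrative names, not the SDK's CompactBlockProgress):

```swift
// Sketch: sync progress as scanned-blocks / total-range, which works for
// both linear sync and Spend-before-Sync.
struct SyncProgress {
    let totalRange: ClosedRange<Int>
    private(set) var scannedCount = 0

    init(totalRange: ClosedRange<Int>) { self.totalRange = totalRange }

    /// Called whenever the scan action finishes processing some blocks.
    mutating func scanActionFinished(processedBlocks: Int) {
        scannedCount += processedBlocks
    }

    /// Final progress is the ratio between the two values.
    var progress: Float {
        Float(scannedCount) / Float(totalRange.count)
    }
}
```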
- TODOs + offline tests fixed
- the concept of linear syncing is fully removed from the SDK; it's fully replaced with Spend-before-Sync
- BlockDAO: the blocks table is no longer used and has been removed from the SDK, along with all its associated getLastBlocks/ScannedHeights methods
- the concept of pending transactions removed from the SDK
- unit tests refactored
- My assumption was right: the way the State Machine is done requires .clearCache to be called with every "restart"
- I was able to reproduce errors when clearCache wasn't called, so updateChainTipAction needed to be updated to prepare clean conditions for the suggested scan ranges
- tests updated
- the end of scan range is now properly filled
[#1223] Requested height [over latestheight] does not exist in the block cache
- the end of scan range is now properly filled
- unit tests for this change, checking if range.upperBound is set as expected
[#1223] Requested height [over latestheight] does not exist in the block cache
- code cleanup
- when the getSubtreeRoots call fails on a timeout, connectivity is not present and the action must terminate by throwing an error; this way, when connectivity is back, the State Machine starts over and getSubtreeRoots gets a chance to properly decide whether SbS is supported
[#1179] Handle false positive getSubtreeRoots when connectivity is down
- unit test for the getSubtreeRoots timeout added
[#1179] Handle false positive getSubtreeRoots when connectivity is down (#1222)
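The decision described above can be sketched like this (hypothetical types and names; the point is that a timeout throws instead of being misread as "SbS unsupported"):

```swift
import Foundation

// Sketch: deciding SbS support from a getSubtreeRoots result.
enum SubtreeRootsError: Error { case connectivityTimeout }

struct SubtreeRootsResult {
    let roots: [Data]
    let timedOut: Bool
}

func decideSbSSupport(_ result: SubtreeRootsResult) throws -> Bool {
    if result.timedOut {
        // Connectivity is down: terminate the action so the state machine
        // restarts and decides again once connectivity is back.
        throw SubtreeRootsError.connectivityTimeout
    }
    // Only a genuine empty/non-empty answer decides SbS support.
    return !result.roots.isEmpty
}
```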
- warning fixed
- WIP
[#1176] Cover Spend before Sync with tests
- next batch of updates
[#1176] Cover Spend before Sync with tests
- last batch of fixes and new tests
[#1176] Cover Spend before Sync with tests
- package.resolved updated
[#1176] Cover Spend before Sync with tests (#1212)
- added tests for the brand new actions related to Spend before Sync
- RewindActionTests
- UpdateChainTipActionTests
- UpdateSubtreeRootsActionTests
- ProcessSuggestedScanRangesActionTests
- the computation of progress changed: the total range is computed, so it works for any kind of sync algorithm
- the progress depends on the finished scan action; whenever it processes some blocks, they are counted
- the final progress is the ratio between these new values
- The State Machine has been slightly updated so it measures the time since it last updated the chain tip. If that happened more than 10 minutes ago, it calls the .updateChainTip action once again before the download-scan-enhance loop continues
- updated unit test
[#1206] Frequent call of update chain tip (#1207)
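The 10-minute check can be sketched as a small timestamp guard (illustrative names, not the SDK's state machine code):

```swift
import Foundation

// Sketch: updateChainTip runs again only if the last update is older than
// 10 minutes; otherwise the download-scan-enhance loop continues as-is.
struct ChainTipTimer {
    var lastUpdate: Date? = nil
    let interval: TimeInterval = 10 * 60

    /// Returns true when .updateChainTip should run, and records the time.
    mutating func shouldUpdateChainTip(now: Date = Date()) -> Bool {
        guard let last = lastUpdate, now.timeIntervalSince(last) < interval else {
            lastUpdate = now
            return true
        }
        return false
    }
}
```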
- whenever updateChainTip is called, it's followed by suggestScanRanges logic
- RewindAction added
- Rust's isContinuityError() is emulated on the iOS side
- the verify-scan-range step is now properly handled, with a rewind as well as a check for the continuity error
[#1189] Implement continuity check and RewindAction (#1195)
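A continuity check of this kind boils down to verifying that a newly scanned block directly extends the previously scanned one; otherwise the state machine rewinds. The fields and logic below are illustrative, not the actual emulation of Rust's isContinuityError():

```swift
// Sketch: a block breaks continuity if its height or previous-hash link
// doesn't match the block scanned before it.
struct BlockHeader {
    let height: Int
    let hash: String
    let prevHash: String
}

func isContinuityError(previous: BlockHeader, next: BlockHeader) -> Bool {
    next.height != previous.height + 1 || next.prevHash != previous.hash
}
```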
- TODO cleanup
- cleaned up the code
- a ScanAlgorithm enum added to the SDK
- the preferred sync algorithm is set to .linear by default but can be changed to Spend before Sync via an Initializer.init parameter
[#1188] Working prototype of SbS
- error codes for failure states in the SbS State Machine changes added
[#1188] Working prototype of SbS (#1192)
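The configuration surface described above amounts to something like this (names approximate the commit's description rather than the final public API; the config struct is a stand-in for the Initializer parameters):

```swift
// Sketch: selectable sync algorithm, defaulting to linear.
enum ScanAlgorithm {
    case linear
    case spendBeforeSync
}

struct InitializerConfig {
    var preferredSyncAlgorithm: ScanAlgorithm = .linear
}
```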
- offline tests fixed