* SVM: hoist `program_modification_slot` up from bank
* SVM: add `check_program_modification_slot` to processing config
* SVM: hoist `program_match_criteria` up from bank
* SVM: drop `get_program_match_criteria` from callbacks
There is often a desire to examine, replay, or otherwise inspect older blocks. If the
blocks are recent enough, they can be pulled from an actively running
node. Otherwise, the blocks must be pulled down from warehouse node
archives. These archives are uploaded on a per-epoch basis so they are
quite large, and can take multiple hours to download and decompress.
With the addition of Entry data to BigTable, blocks can be recreated
from BigTable data. Namely, we can recreate the Entries with proper PoH
and transaction data. We can then shred them so that they are in the
same format as blocks produced by the cluster.
This change introduces a new command that will read BigTable data and
insert shreds into a local Blockstore. The new command is:
$ agave-ledger-tool bigtable shreds ...
Several important notes about the change:
- Shreds for some slot S will not be signed by the actual leader for
slot S. Instead, shreds will be signed with a "dummy" keypair. The
shred signatures do not affect the ability to replay the block.
- Entry PoH data does not go back to genesis in BigTable. This data
could be extracted and uploaded from the existing rocksdb archives;
however, that work is not planned as far as I know. --allow-mock-poh
can be passed to generate filler PoH data (see the sketch after these
notes). Blocks created with this flag are replayable by passing
--skip-poh-verify to ledger-tool.
- A snapshot will be unpacked to determine items such as the shred
version, tick hash rate, and ticks per slot. This snapshot must be in
the same epoch as the requested slots.
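As a minimal illustration of the mock PoH note above, filler PoH could look
like tick entries chained from an arbitrary seed hash instead of the real
PoH stream. This is a sketch only, not the actual --allow-mock-poh
implementation; the function name is hypothetical:

```rust
use solana_entry::entry::Entry;
use solana_sdk::hash::{hash, Hash};

// Hypothetical helper: build `ticks_per_slot` tick entries whose hashes
// chain from a filler seed. The entries are well-formed but will not pass
// PoH verification, hence the need for --skip-poh-verify during replay.
fn mock_poh_entries(ticks_per_slot: u64, hashes_per_tick: u64) -> Vec<Entry> {
    let mut prev = Hash::default(); // filler seed, not the parent blockhash
    (0..ticks_per_slot)
        .map(|_| {
            prev = hash(prev.as_ref());
            Entry {
                num_hashes: hashes_per_tick, // claimed, not actually computed
                hash: prev,
                transactions: vec![],
            }
        })
        .collect()
}
```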
The bank-hash command in ledger-tool was recently deprecated. However,
the command is used by some of the scripts that coordinate starting up
a fresh cluster. So, the deprecation of bank-hash broke those scripts.
This change fixes the scripts by doing the following:
- Makes --print-bank-hash support --output json
- Updates scripts to install jq on provisioned nodes
- Updates remote-node.sh to parse the bank hash from JSON using jq
* Rename ComputeBudget::max_invoke_stack_height to max_instruction_stack_depth
The new name is consistent with the existing
ComputeBudget::max_instruction_trace_length.
Also expose compute_budget::MAX_INSTRUCTION_DEPTH.
* bpf_loader: use an explicit thread-local pool for stack and heap memory
Use a fixed thread-local pool to hold stack and heap memory. This
mitigates the long-standing issue of jemalloc causing TLB shootdowns to
serve such frequent large allocations.
Because we need 1 stack and 1 heap region per instruction, and the
current max instruction nesting is hardcoded to 5, the pre-allocated
size is (MAX_STACK + MAX_HEAP) * 5 * NUM_THREADS. With the current
limits, that's about 2.5MB per thread. Note that this is memory that
would eventually get allocated anyway; we're just pre-allocating it now.
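A simplified sketch of the idea follows; this is not the exact agave
implementation, and the names and sizes are illustrative, taken from the
limits quoted above:

```rust
use std::cell::RefCell;

const MAX_STACK: usize = 256 * 1024; // illustrative, per current limits
const MAX_HEAP: usize = 256 * 1024;
const MAX_DEPTH: usize = 5; // hardcoded max instruction nesting

thread_local! {
    // (MAX_STACK + MAX_HEAP) * MAX_DEPTH = ~2.5MB pre-allocated per thread.
    static POOL: RefCell<Vec<Box<[u8]>>> = RefCell::new(
        (0..MAX_DEPTH)
            .map(|_| vec![0u8; MAX_STACK + MAX_HEAP].into_boxed_slice())
            .collect(),
    );
}

// Hand out one combined stack+heap region per instruction nesting level.
fn acquire() -> Box<[u8]> {
    POOL.with(|p| p.borrow_mut().pop().expect("nesting deeper than MAX_DEPTH"))
}

// Return a region to the pool, zeroing it so the next borrower sees no
// stale data (see the zeroing test below).
fn release(mut buf: Box<[u8]>) {
    buf.fill(0);
    POOL.with(|p| p.borrow_mut().push(buf));
}
```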
* programs/sbf: add test for stack/heap zeroing
Add TEST_STACK_HEAP_ZEROED which tests that stack and heap regions are
zeroed across reuse from the memory pool.
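As a rough sketch of what such a test checks, stated in terms of the
illustrative pool above (the real TEST_STACK_HEAP_ZEROED exercises an SBF
program end to end):

```rust
#[test]
fn test_stack_heap_zeroed() {
    let mut buf = acquire();
    buf.fill(0xaa); // dirty the region, as a program using it would
    release(buf);
    let buf = acquire(); // reuse the same region from the pool
    assert!(buf.iter().all(|&b| b == 0), "region must be zeroed on reuse");
}
```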
* Adjust replay-related metrics for unified scheduler
* Fix grammar
* Don't compute slowest for unified scheduler
* Rename to is_unified_scheduler_enabled
* Hoist uses to top of file
* Conditionally disable replay-slot-end-to-end-stats
* Remove the misleading "fairly balanced" text
* port join from itertools and use it in program_stubs.rs
* move itertools to dev-dependencies of solana-program
* add comment to join fn
* more concise replacement for join fn (see the sketch after this list)
Co-authored-by: Jon C <me@jonc.dev>
* remove join fn
---------
Co-authored-by: Jon C <me@jonc.dev>
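For reference, a minimal sketch of the kind of inline replacement that makes
a dedicated join fn unnecessary (this is an assumption about the final
shape, not a copy of program_stubs.rs): map the items to Strings, collect,
and use the standard library's slice join.

```rust
fn main() {
    let items = [1, 2, 3];
    // Equivalent to itertools::join(items, ", ") using only std.
    let joined = items
        .iter()
        .map(|x| x.to_string())
        .collect::<Vec<_>>()
        .join(", ");
    assert_eq!(joined, "1, 2, 3");
}
```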
* Removes ProgramTest from simulation tests.
* Removes ProgramTest from sysvar syscall tests.
* Workaround for rustc crash caused by 16-byte aligned memcpy.
* Deduplicates test_program_sbf_sanity.
* Moves mem and remaining_compute_units into test_program_sbf_sanity().
* Removes unused dev-dependencies in Cargo.toml.
* Removes crate-type = lib from Cargo.tomls.
* Adds SBF_OUT_DIR env to CI script.
* Adds "sysvar" to build.rs.
* transaction-status: Use string instead of int for `amount` (see the sketch below)
* Add a changelog entry
* Update CHANGELOG.md
Co-authored-by: Tyera <teulberg@gmail.com>
---------
Co-authored-by: Tyera <teulberg@gmail.com>
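The motivation for the string type: JSON readers commonly decode integers as
IEEE-754 doubles, which cannot faithfully represent u64 values above
2^53 - 1. A hedged sketch of the shape; the struct and field layout here are
illustrative, not the exact transaction-status types:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct TokenAmountLike {
    // Was an int in the JSON output; now a string such as
    // "18446744073709551615" so no precision is lost in clients.
    amount: String,
    decimals: u8,
}
```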
Consider this scenario:
- Program increases the length of an account
- Program starts a CPI and adds this account as a read-only account
- In fn update_callee_account() we resize the account, which may
change the pointer
- Once the CPI finishes, the program continues and may read/write
the account. The mapping must be up to date, or else we use stale
pointers.
Note that we always call callee_account.set_data_length(), which
may change the pointer. In testing I found that resizing a vector
from 10240 down to 127 sometimes changes its pointer. So, always
update the pointer.
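A self-contained demonstration of the hazard (not agave code): shrinking a
Vec can move its base pointer once the allocator is asked to release the
excess capacity.

```rust
fn main() {
    let mut v = vec![0u8; 10240];
    let before = v.as_ptr();
    v.truncate(127);
    v.shrink_to_fit(); // the allocator may relocate the smaller buffer
    let after = v.as_ptr();
    // Often prints true, but the outcome is allocator-dependent.
    println!("pointer moved: {}", before != after);
}
```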
* extract curve25519 crate
* remove obsolete comment
* fix Cargo.toml files
* fix imports
* update lock file
* remove unused deps from zk-token-sdk
* fmt
* add solana-curve25519 patch
* add missing override to programs/sbf/Cargo.toml
* copy over an allow()
* move new crate to curves dir
* use workspace version
* add back missing dev dep
* add missing dependencies to programs/sbf
* fmt
* move dep to the correct dependency table
* remove #[cfg(not(target_os = "solana"))] above errors mod
AccountsBackgroundService performs several operations that can take a
long time to complete and do not check the exit flag mid-operation.
Thus, ledger-tool can get hung up for a while waiting for ABS to
finish. However, many ledger-tool commands do not need ABS to have
finished.
So, return a handle to the ABS thread and allow the caller to decide
whether to join ABS or not. As of right now, create-snapshot is the
only command that requires ABS to have finished before continuing.
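A minimal sketch of the pattern, with illustrative names rather than the
exact agave signatures: spawn ABS, hand the JoinHandle back, and let each
command decide whether to block on it.

```rust
use std::thread::{self, JoinHandle};

// Illustrative stand-in for starting AccountsBackgroundService.
fn start_accounts_background_service() -> JoinHandle<()> {
    thread::spawn(|| {
        // ... long-running flush/clean/shrink work that rarely checks exit ...
    })
}

fn run_command(needs_abs_finished: bool) {
    let abs_handle = start_accounts_background_service();
    // ... command body ...
    if needs_abs_finished {
        // Only create-snapshot currently requires ABS to have finished.
        abs_handle.join().expect("ABS thread panicked");
    }
}
```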
There are several arguments to control snapshot configuration in the
various ledger-tool commands. The inclusion of args in each command
is inconsistent, especially for commands outside of main.rs.
This change consolidates the snapshot-related arguments into a single
function to help create consistency and reduce duplicate code.
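A sketch of the consolidation, using clap v2-style builders as in
ledger-tool; the helper name and the argument set shown are illustrative,
not exhaustive.

```rust
use clap::{App, Arg};

// Hypothetical helper: attach the shared snapshot arguments to any command.
fn add_snapshot_args(app: App<'static, 'static>) -> App<'static, 'static> {
    app.arg(
        Arg::with_name("snapshot_archive_path")
            .long("snapshot-archive-path")
            .value_name("DIR")
            .takes_value(true)
            .help("Use DIR as the snapshot archive location"),
    )
    .arg(
        Arg::with_name("maximum_full_snapshots_to_retain")
            .long("maximum-full-snapshots-to-retain")
            .value_name("NUM")
            .takes_value(true)
            .help("Maximum number of full snapshot archives to keep"),
    )
}
```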
* Add num_partitions field to Rewards proto definition
* Add type to hold rewards plus num_partitions (see the sketch after this list)
* Add Bank method to get rewards plus num_partitions for recording
* Update Blockstore::write_rewards to use num_partitions
* Update RewardsRecorderService to handle num_partitions
* Populate num_partitions in ReplayStage::record_rewards
* Write num_partitions to Bigtable
* Reword KeyedRewardsAndNumPartitions method
* Clone immediately
* Determine epoch boundary by checking parent epoch
* Rename UiConfirmedBlock field
* nit: fix comment typo
* Add test_get_rewards_and_partitions
* Add pre-activation test
* Add should_record unit test
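For orientation, a hedged sketch of the holder type's shape; the names
follow the commit titles above, but the exact definition lives in the
runtime.

```rust
use solana_sdk::{pubkey::Pubkey, reward_info::RewardInfo};

pub struct KeyedRewardsAndNumPartitions {
    pub keyed_rewards: Vec<(Pubkey, RewardInfo)>,
    // Set at the epoch boundary when partitioned rewards are active.
    pub num_partitions: Option<u64>,
}

impl KeyedRewardsAndNumPartitions {
    // Assumed semantics behind the should_record unit test mentioned above:
    // record when there are rewards or a partition count to persist.
    pub fn should_record(&self) -> bool {
        !self.keyed_rewards.is_empty() || self.num_partitions.is_some()
    }
}
```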