quic: use smallvec, save one allocation per packet
Use smallvec to hold chunks. Streams are packet-sized, so we don't expect
them to have many chunks. This saves us an allocation for each packet.
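A minimal, stdlib-only sketch of the small-vector idea (the real change uses the `smallvec` crate; `ChunkList` and its two-slot inline capacity are illustrative, not the actual code): chunks stay inline on the stack in the common packet-sized case, and only an unexpectedly long stream pays for a heap allocation.

```rust
// Illustrative small-vector: up to N chunks live inline; a rare long
// stream spills to a heap Vec exactly once.
enum ChunkList<T, const N: usize> {
    Inline { buf: [Option<T>; N], len: usize }, // no heap allocation
    Heap(Vec<T>),                               // rare spill path
}

impl<T, const N: usize> ChunkList<T, N> {
    fn new() -> Self {
        Self::Inline { buf: [(); N].map(|_| None), len: 0 }
    }

    fn push(&mut self, item: T) {
        if let Self::Inline { buf, len } = self {
            if *len < N {
                buf[*len] = Some(item);
                *len += 1;
                return;
            }
            // Spill: the stream has more chunks than the inline
            // capacity; move everything to the heap once.
            let spilled: Vec<T> = buf.iter_mut().filter_map(Option::take).collect();
            *self = Self::Heap(spilled);
        }
        if let Self::Heap(v) = self {
            v.push(item);
        }
    }

    fn spilled(&self) -> bool {
        matches!(self, Self::Heap(_))
    }
}

fn main() {
    // Streams are packet-sized, so two inline slots cover the common case.
    let mut chunks: ChunkList<u32, 2> = ChunkList::new();
    chunks.push(1);
    chunks.push(2);
    assert!(!chunks.spilled()); // common case: no heap allocation
    chunks.push(3);
    assert!(chunks.spilled()); // rare case: pay for the allocation once
}
```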
* Find the bank hash of the heaviest fork, replay if necessary.
* Make it more explicit how heaviest fork slot is selected.
* Use process_single_slot instead of process_blockstore_from_root; the latter
may re-insert banks that are already frozen.
* Put BlockstoreProcessError into the error message.
* Check that all existing blocks link to correct parent before replay.
* Use the default number of threads instead.
* Check whether block is full and other small fixes.
* Fix root_bank and move comments to function level.
* Remove the extra parent link check.
* Introduce SchedulingStateMachine
* Apply all typo fixes from code review
Co-authored-by: Andrew Fitzgerald <apfitzge@gmail.com>
* Update word wrapping
* Clarify Token::assume_exclusive_mutating_thread()
* Use slice instead of &Vec<_>
* Improve non-const explanation
* Document consecutive readonly rescheduling opt.
* Make test_gradual_locking terminate for miri
* Avoid unnecessary Task::clone()
* Rename: lock_{status,result} and no attempt_...()
* Add safety comment for get_account_locks_unchecked
* Reduce and comment about Page::blocked_tasks cap.
* Document SchedulingStateMachine::schedule_task()
* Add justification of closure in create_task
* Use the From trait for PageUsage
* Replace unneeded if-let with .expect()
* Add helpful comments for peculiar crossbeam usage
* Fix typo
* Make bug-bounty-exempt statement more clear
* Add test_enfoced_get_account_locks_verification
* Fix typos...
* Big rename: Page => UsageQueue
* Document UsageQueueLoader
* Various minor cleanings for beautifier diff
* Ensure reinitialize() is maintained for new fields
* Remove unneeded impl Send for TokenCell & doc upd.
* Apply typo fixes from code review
Co-authored-by: Andrew Fitzgerald <apfitzge@gmail.com>
* Merge similar tests into one
* Remove test_debug
* Remove assertions of task_index()
* Fix UB in TokenCell
* Make schedule_task doc comment simpler
* Document deschedule_task
* Simplify unlock_usage_queue() args
* Add comment for try_unblock() -> None
* Switch to Option<Usage> for fewer assert!s
* Add assert_matches!() to UsageQueue methods
* Add panicking test case for ::reinitialize()
* Use UsageFromTask
* Rename: LockAttempt => LockContext
* Move locking and unlocking methods to usage queue
* Remove outdated comment...
* Remove redundant fn: pop_unblocked_usage_from_task
* Document the index of task
* Clarify comment a bit
* Update .current_usage inside try_lock()
* Use inspect_err to simplify code
* fix ci...
* Use ()...
* Rename: schedule{,_next}_unblocked_task()
* Rename: {try_lock,unlock}_{for_task,usage_queues}
* Test solana-unified-scheduler-logic under miri
* Test UB to illustrate limitation of TokenCell
* Test UB of using multiple tokens at the same time
---------
Co-authored-by: Andrew Fitzgerald <apfitzge@gmail.com>
The IP echo server currently spins up a worker thread for every hardware
thread on the machine. Observing some data for nodes,
- MNB validators and RPC nodes look to get several hundred of these
requests per day
- MNB entrypoint nodes look to get 2-3 requests per second on average
In both instances, the current threadpool is severely overprovisioned,
which is a waste of resources. This PR plumbs a flag to control the
number of worker threads for this pool, as well as setting a default of
two threads for this server. Two threads allow one thread to always
listen on the TCP port while the other processes requests.
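The sizing logic above can be sketched as follows (a hedged illustration; the function name and the clamping behavior are assumptions, not the actual validator code): the default of two is declared in one place and used when the operator does not override it.

```rust
/// Illustrative helper: pick the worker-thread count for the IP echo
/// server. Defaults to two threads so one can always sit in accept()
/// on the TCP port while the other processes a request.
fn ip_echo_server_threads(requested: Option<usize>) -> usize {
    const DEFAULT_THREADS: usize = 2; // replaces the old per-CPU sizing
    // Clamp to at least one thread so a bad override can't disable the server.
    requested.unwrap_or(DEFAULT_THREADS).max(1)
}

fn main() {
    assert_eq!(ip_echo_server_threads(None), 2);    // new default
    assert_eq!(ip_echo_server_threads(Some(8)), 8); // operator override
    assert_eq!(ip_echo_server_threads(Some(0)), 1); // clamped
}
```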
Previously, entry verification had a dedicated threadpool used to verify
PoH hashes as well as some basic transaction verification via
Bank::verify_transaction(). It should also be noted that the entry
verification code provides logic to offload to a GPU if one is present.
Regardless of whether a GPU is present, some of the verification
must be done on a CPU. Moreover, the CPU verification of entries and
transaction execution are serial operations; entry verification finishes
before replay moves on to transaction execution.
So, tx execution and entry verification are not competing for CPU cycles
at the same time and can use the same pool.
One exception to the above statement is that if someone is using the
feature to replay forks in parallel, then hypothetically, different
forks may end up competing for the same resources at the same time.
However, that was already true given that we had pools shared between
replay of multiple forks. So, this change doesn't really change much
for that case, but it will reduce overhead in the single-fork case,
which is the vast majority of the time.
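The sharing argument above can be sketched with a toy pool (stdlib-only and purely illustrative; `Pool` and the counters are not the actual replay code): because the two phases run one after the other, a single set of long-lived workers serves both without contention.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::mpsc::channel;
use std::sync::{Arc, Mutex};
use std::thread::{self, JoinHandle};

type Job = Box<dyn FnOnce() + Send>;

/// Toy shared thread pool: workers pull jobs from one channel.
struct Pool {
    tx: Option<std::sync::mpsc::Sender<Job>>,
    workers: Vec<JoinHandle<()>>,
}

impl Pool {
    fn new(n: usize) -> Self {
        let (tx, rx) = channel::<Job>();
        let rx = Arc::new(Mutex::new(rx));
        let workers = (0..n)
            .map(|_| {
                let rx = Arc::clone(&rx);
                thread::spawn(move || loop {
                    // Take the lock only long enough to receive one job.
                    let job = { rx.lock().unwrap().recv() };
                    match job {
                        Ok(job) => job(),
                        Err(_) => break, // channel closed: pool dropped
                    }
                })
            })
            .collect();
        Pool { tx: Some(tx), workers }
    }

    fn run(&self, job: impl FnOnce() + Send + 'static) {
        self.tx.as_ref().unwrap().send(Box::new(job)).unwrap();
    }
}

impl Drop for Pool {
    fn drop(&mut self) {
        drop(self.tx.take()); // close the channel so workers exit
        for w in self.workers.drain(..) {
            w.join().unwrap();
        }
    }
}

fn main() {
    let pool = Pool::new(2);
    let (done_tx, done_rx) = channel();

    // Phase 1: "entry verification" saturates the shared pool.
    let verified = Arc::new(AtomicUsize::new(0));
    for _ in 0..8 {
        let (verified, done_tx) = (Arc::clone(&verified), done_tx.clone());
        pool.run(move || {
            verified.fetch_add(1, Ordering::SeqCst);
            done_tx.send(()).unwrap();
        });
    }
    for _ in 0..8 {
        done_rx.recv().unwrap(); // verification finishes first...
    }
    assert_eq!(verified.load(Ordering::SeqCst), 8);

    // Phase 2: ...and only then does "transaction execution" start,
    // reusing the very same workers with no contention between phases.
    let executed = Arc::new(AtomicUsize::new(0));
    for _ in 0..8 {
        let (executed, done_tx) = (Arc::clone(&executed), done_tx.clone());
        pool.run(move || {
            executed.fetch_add(1, Ordering::SeqCst);
            done_tx.send(()).unwrap();
        });
    }
    for _ in 0..8 {
        done_rx.recv().unwrap();
    }
    assert_eq!(executed.load(Ordering::SeqCst), 8);
}
```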
* save progress
* rename threads handler
* added writer for txs
* after extracting structure to handle tx confirmations
* extract LogWriter
* Replace pair TimestampedTransaction with struct
* add compute_unit_price to TimestampedTransaction
* add cu_price to LogWriter
* add block time to the logs
* Fix warnings
* add comments and restructure code
* some small improvements
* Renamed conformation_processing.rs to log_transaction_service.rs
* address numerous PR comments
* split LogWriter into two structs
* simplify code of LogWriters
* extract process_blocks
* specify commitment in LogTransactionService
* break thread loop if receiver happens to be dropped
* update start_slot when processing blocks
* address pr comments
* fix clippy error
* minor changes
* fix ms problem
* fix bug with time in clear transaction map
* add set compute units arg for program deploy
* update master changes
* remove duplicates
* fixes and tests
* remove extra lines
* feedback
* Use simulation to determine compute units consumed
* feedback
---------
Co-authored-by: NagaprasadVr <nagaprasadvr246@gmail.com>
* runtime: do fewer syscalls in remap_append_vec_file
Use renameat2(src, dest, NOREPLACE) as an atomic version of if
statx(dest).is_err() { rename(src, dest) }.
We have high inode contention during storage rebuild and this saves 1
fs syscall for each appendvec.
* Address review feedback
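The renameat2 change above swaps a racy check-then-rename pair for one atomic syscall. Rust's std does not expose renameat2, so this stdlib-only sketch approximates the RENAME_NOREPLACE semantics with `hard_link`, which likewise either creates the destination name or fails with `AlreadyExists` in a single atomic step (the real code calls renameat2 directly; `move_no_replace` is an illustrative name):

```rust
use std::fs;
use std::io::ErrorKind;
use std::path::Path;

/// Move src to dst, failing atomically if dst already exists.
/// Approximates renameat2(src, dst, RENAME_NOREPLACE): there is no
/// separate existence check (the old statx) that another thread could
/// race against.
fn move_no_replace(src: &Path, dst: &Path) -> std::io::Result<()> {
    fs::hard_link(src, dst)?; // atomic: errors with EEXIST if dst exists
    fs::remove_file(src)      // then drop the old name
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("remap_demo");
    let _ = fs::remove_dir_all(&dir);
    fs::create_dir_all(&dir)?;
    let (src, dst) = (dir.join("appendvec.src"), dir.join("appendvec.dst"));

    fs::write(&src, b"data")?;
    move_no_replace(&src, &dst)?; // succeeds: dst did not exist yet
    assert!(dst.exists() && !src.exists());

    fs::write(&src, b"other")?;
    let err = move_no_replace(&src, &dst).unwrap_err(); // dst now exists
    assert_eq!(err.kind(), ErrorKind::AlreadyExists);

    fs::remove_dir_all(&dir)
}
```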
The default value was previously being determined down where the thread
pool is created. Providing a default value at the CLI level is
consistent with other args, and gives an operator better visibility into
what the default will actually be.
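The pattern above can be sketched as follows (hedged illustration only: the flag name `--replay-threads`, the default of 4, and the hand-rolled parser are assumptions, not the actual CLI code): the default lives in the argument definition itself, so help output and the parsed value agree, instead of the fallback hiding where the pool is constructed.

```rust
// Default declared at the CLI layer, visible wherever args are described.
const DEFAULT_REPLAY_THREADS: usize = 4; // assumed value, for illustration

/// Toy parser: read `--replay-threads N`, falling back to the
/// CLI-level default rather than one buried in pool construction.
fn parse_replay_threads(args: &[String]) -> usize {
    args.windows(2)
        .find(|w| w[0] == "--replay-threads")
        .and_then(|w| w[1].parse().ok())
        .unwrap_or(DEFAULT_REPLAY_THREADS)
}

fn main() {
    let none: Vec<String> = vec![];
    assert_eq!(parse_replay_threads(&none), 4); // default is discoverable

    let some: Vec<String> = vec!["--replay-threads".into(), "8".into()];
    assert_eq!(parse_replay_threads(&some), 8); // operator override
}
```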
RuntimeConfig doesn't use anything SVM-specific and logically belongs
in program-runtime rather than SVM. This change moves the definition
of the RuntimeConfig struct from the SVM crate to program-runtime and
adjusts `use` statements accordingly.