Commit Graph

1351 Commits

Author SHA1 Message Date
Andrew Fitzgerald 257ba2f0b1
Add benchmark for execute_batch (#34717) 2024-01-13 09:09:04 -08:00
Justin Starry 5f74fc4f16
Update genesis processing to have a fallback collector id for tests (#34135)
* Update genesis processing to have a fallback collector id for tests

* DCOU-ify the collector id for tests parameter (#1902)

* wrap test_collector_id in DCOU

* rename param to collector_id_for_tests

* fix program test

* fix dcou

---------

Co-authored-by: Brooks <brooks@prumo.org>
2024-01-10 08:34:41 +08:00
Brooks abe699b7b4
Adds newline to fastboot's CLI help (#34712) 2024-01-09 15:28:39 -05:00
steviez dce3ce3734
Adjust blockstore open logs to say blockstore instead of database (#34672)
Additionally, make the logs before/after the open more similar so that it is
clearer, when skimming logs, that they correspond to each other.
2024-01-05 21:23:39 -06:00
Ashwin Sekar 19088411ff
blockstore: populate duplicate shred proofs for merkle root conflicts (#34270)
* blockstore: populate duplicate shred proofs for merkle root conflicts

* pr feedback: check test case

* pr feedback: comment

* pr feedback: match statement, shred_id, comment

* add feature flag

* pr feedback: rename ff var and perform_merkle_check

* pr feedback: move panic to callers in get_shred_from_just_inserted_or_db

* avoid unnecessary write if proof is already present
2024-01-03 12:15:52 -05:00
Nick Frostbutter fc2a8794be
[docs] updated readme and fix links (#34565)
* feat: updated readme

* fix: updated links

* fix: proposal links

* fix: more links

* fix: json-rpc links

* fix: more links

* fix: zk links

* fix: managing forks

* fix: links for deprecated methods
2024-01-03 09:06:06 -05:00
Ashwin Sekar cc584a0c19
blockstore: write only dirty erasure meta and merkle root metas (#34269)
* blockstore: write only dirty erasure meta and merkle root metas

* pr feedback: use enum to distinguish clean/dirty

* pr feedback: comments, rename

* pr feedback: use AsRef
2023-12-22 16:26:50 -05:00
GoodDaisy 03386cc7b9
Fix typos (#34459)
* Fix typos

* Fix typos

* fix typo
2023-12-21 13:06:00 -07:00
Ryo Onodera d2b5afc410
Finish unified scheduler plumbing with min impl (#34300)
* Finalize unified scheduler plumbing with min impl

* Fix comment

* Rename leftover type name...

* Make logging text less ambiguous

* Make PhantomData simpler without already used S

* Make TaskHandler stateless again

* Introduce HandlerContext to simplify TaskHandler

* Add comment for coexistence of Pool::{new,new_dyn}

* Fix grammar

* Remove confusing const for upcoming changes

* Demote InstalledScheduler::context() into dcou

* Delay drop of context up to return_to_pool()-ing

* Revert "Demote InstalledScheduler::context() into dcou"

This reverts commit 049a126c905df0ba8ad975c5cb1007ae90a21050.

* Revert "Delay drop of context up to return_to_pool()-ing"

This reverts commit 60b1bd2511a714690b0b2331e49bc3d0c72e3475.

* Make context handling really type-safe

* Update comment

* Fix grammar...

* Refine type aliases for boxed traits

* Swap the tuple order for readability & semantics

* Simplify PooledScheduler::result_with_timings type

* Restore .in_sequence()

* Use where for aesthetics

* Simplify if...

* Fix typo...

* Polish ::schedule_execution() a bit

* Fix rebase conflicts..

* Make test more readable

* Fix test failures after rebase...
2023-12-19 09:50:41 +09:00
behzad nouri 750023530c
makes last erasure batch size >= 64 shreds (#34330) 2023-12-13 06:48:00 +00:00
steviez 70cab76495
Remove deletion of TransactionStatusIndex entries (#34023)
These entries are legacy code at this point; however, older release
branches require these entries to be present. Also, while it would be
nice to clean up these entries immediately, they only occupy a small
amount of space so having them linger a little longer isn't a big deal.
2023-12-12 22:53:51 -06:00
behzad nouri d5eee01950
adds feature gated code to drop legacy shreds (#34328) 2023-12-06 22:47:46 +00:00
Lucas Steuernagel 1877fdb273
Use BankForks on tests - Part 4 (#34271)
* Use BankForks on tests - Part 4

* Ensure the correct slot is set
2023-12-06 13:32:04 -03:00
Andrew Fitzgerald 2294801954
Do not derive Copy for EpochSchedule and Rent (#32767) 2023-12-01 07:57:25 -08:00
Ashwin Sekar d84dcd37bc
blockstore: use u32 for fec_set_index in erasure set index store key (#34268)
* blockstore: use u32 for fec_set_index in erasure set index store key

* pr feedback u64::from
2023-11-30 19:17:49 -05:00
steviez 479b7ee9f2
Bubble up errors in bank_fork_utils instead of exiting process (#34277)
There are operations in bank_fork_utils that may fail; we explicitly
call std::process::exit() on several of these. Granted, we may end up
exiting the process higher up the call stack anyway, but bubbling the errors
up allows a caller that could handle the error to do so.
2023-11-30 16:35:59 -06:00
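The commit above swaps explicit std::process::exit() calls for error propagation. A minimal sketch of that general pattern, with hypothetical function names (load_snapshot is not taken from the source):

```rust
use std::{fs, io, path::Path, process};

// Before: any failure terminates the whole process at this call site.
fn load_snapshot_or_exit(path: &Path) -> Vec<u8> {
    fs::read(path).unwrap_or_else(|err| {
        eprintln!("failed to read snapshot: {err}");
        process::exit(1);
    })
}

// After: the error is bubbled up, so a caller that can recover may do so,
// and the binary's entry point can still choose to exit on it.
fn load_snapshot(path: &Path) -> io::Result<Vec<u8>> {
    fs::read(path)
}
```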
steviez 71c1782c74
Allow Blockstore to open unknown columns (#34174)
As we develop new features or modifications, we occasionally need to
introduce new columns to the Blockstore. Adding a new column introduces
a compatibility break given that opening the database in Primary mode
(R/W access) requires opening all columns. Reverting to an old software
version that is unaware of the new column is obviously problematic.

In the past, we have addressed this by backporting minimal "stub" PRs to
older versions. This is annoying, and only allows compatibility for the
one or two versions that we backport to.

This PR adds a change to automatically detect all columns, and create
default column descriptors for columns we were unaware of. As a result,
older software versions can open a Blockstore that was modified by a
newer software version, even if that new version added columns that the
old version is unaware of.
2023-11-30 13:24:56 -06:00
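A rough sketch of the approach described above, using the rocksdb crate directly rather than the Blockstore's actual column machinery; KNOWN_COLUMNS and tuned_options_for are illustrative placeholders, not names from the codebase:

```rust
use rocksdb::{ColumnFamilyDescriptor, Options, DB};

/// Columns this software version was built with (illustrative subset).
const KNOWN_COLUMNS: &[&str] = &["meta", "data_shred", "code_shred"];

fn open_with_unknown_columns(path: &str) -> Result<DB, rocksdb::Error> {
    // Ask RocksDB which column families actually exist on disk; the list may
    // include columns added by a newer software version.
    let detected = DB::list_cf(&Options::default(), path)?;

    // Build a descriptor for every detected column. Columns we do not
    // recognize get default options so that opening in Primary mode succeeds.
    let descriptors = detected.into_iter().map(|name| {
        let opts = if KNOWN_COLUMNS.contains(&name.as_str()) {
            tuned_options_for(&name)
        } else {
            Options::default()
        };
        ColumnFamilyDescriptor::new(name, opts)
    });

    DB::open_cf_descriptors(&Options::default(), path, descriptors)
}

// Placeholder for whatever per-column tuning the known columns use.
fn tuned_options_for(_name: &str) -> Options {
    Options::default()
}
```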
Ashwin Sekar e1165aaf00
blockstore: populate merkle root metas column (#34097) 2023-11-29 11:14:24 -05:00
Tyera 573ec81fbb
storage-bigtable: Upload entries (#34099)
* Add entries table to bt init

* Add entries to storage-proto

* Use new Blockstore method in bigtable_upload

* Add LedgerStorage::upload_confirmed_block_with_entries and use in bigtable_upload

* Upload entries to bigtable
2023-11-28 11:47:22 -07:00
Brooks 5c7ab5dc08
ledger-tool does *not* fastboot by default (#34228) 2023-11-27 13:48:28 -05:00
steviez 9a7b681f0c
Remove key_size() method from Column trait (#34021)
This helper simply called std::mem::size_of::<Self::Index>(). However, all
of the underlying functions that create keys manually copy fields into a
byte array. The fields are copied end-to-end, whereas size_of() might
include alignment bytes.

For example, a (u64, u32) only has 12 bytes of "data", but it would
have size 16 due to the 4 alignment padding bytes that would be
added to get the u32 (size 4) aligned with the u64 (size 8).
2023-11-19 23:05:32 -06:00
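A self-contained illustration of the alignment arithmetic above; the slot/index field names are only illustrative:

```rust
fn main() {
    // size_of() includes alignment padding: (u64, u32) needs 8-byte
    // alignment, so its 12 bytes of data are padded out to 16.
    assert_eq!(std::mem::size_of::<(u64, u32)>(), 16);

    // Serializing the fields end-to-end, as the key-construction code does,
    // produces only 12 bytes.
    let (slot, index): (u64, u32) = (42, 7);
    let mut key = [0u8; 12];
    key[..8].copy_from_slice(&slot.to_be_bytes());
    key[8..].copy_from_slice(&index.to_be_bytes());
    assert_eq!(key.len(), 12);
}
```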
Brooks e02f25d5a2
Removes filler accounts (#34115) 2023-11-19 20:36:57 -05:00
Ryo Onodera 07f17096eb
Correctly store ALTs themselves into minimized snapshot (#34134) 2023-11-18 13:20:37 +09:00
steviez 29947ba867
Consolidate repeated code in Rocks::open() (#34131)
The function matches the access type and calls a different RocksDB
function depending on whether we have primary or secondary access. But,
lots of the code is the same for these two paths so de-duplicate the
repeated sections.
2023-11-17 10:18:08 -06:00
Ashwin Sekar 6a5b8e86f3
shred: expose merkle root for use in blockstore (#34063)
* shred: expose merkle root for use in blockstore

* pr feedback: sorted, keep Result return type

* convert Result<Hash> -> Option<Hash>
2023-11-15 20:13:50 +00:00
Ashwin Sekar 693d5768c8
blockstore: make merkle root Optional in MerkleRootMeta column (#34091) 2023-11-15 13:09:18 -05:00
steviez d1d4c1c654
Condense Blockstore RPC API datapoints (#34045)
Currently, the RPC APIs that touch the Blockstore emit a datapoint for
each call. For an RPC node serving many requests, these datapoints
could get quite noisy, both in logs and in traffic to the metrics
agent.

So, instead of submitting a datapoint for every call, accumulate the
number of calls in a struct and report that entire struct periodically.
2023-11-14 12:19:14 -06:00
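A minimal sketch of the accumulate-then-report pattern described above; the struct and field names are illustrative, and println! stands in for the actual metrics submission:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::time::{Duration, Instant};

/// Accumulates per-method call counts instead of emitting a datapoint per call.
#[derive(Default)]
struct RpcApiMetrics {
    num_get_block: AtomicUsize,
    num_get_block_time: AtomicUsize,
}

impl RpcApiMetrics {
    fn report(&self) {
        // swap(0, ..) reads and resets each counter for the next interval.
        println!(
            "rpc-blockstore-api: get_block={} get_block_time={}",
            self.num_get_block.swap(0, Ordering::Relaxed),
            self.num_get_block_time.swap(0, Ordering::Relaxed),
        );
    }
}

fn main() {
    let metrics = RpcApiMetrics::default();
    let mut last_report = Instant::now();

    // Simulated request loop: count cheaply per call, report periodically.
    for _ in 0..10_000 {
        metrics.num_get_block.fetch_add(1, Ordering::Relaxed);
        if last_report.elapsed() > Duration::from_secs(10) {
            metrics.report();
            last_report = Instant::now();
        }
    }
    metrics.report();
}
```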
Tyera 0e91e96967
Geyser: add starting entry to ReplicaEntryInfo(V2) (#33963)
* Add ReplicaEntryInfoV2

* Add starting_transaction_index field to EntryNotification

* Populate starting_transaction_index in replay stage

* Cache and populate starting_transaction_index in banking stage

* Build ReplicaEntryInfoV2
2023-11-14 09:49:26 -07:00
Brooks 725ab37bf4
clippy: Replaces .get(0) with .first() (#34048) 2023-11-13 17:22:17 -05:00
Ashwin Sekar e457c02879
add merkle root meta column to blockstore (#33979)
* add merkle root meta column to blockstore

* pr feedback: remove write/reads to column

* pr feedback: u64 -> u32 + revert

* pr feedback: fec_set_index u32, use Self::Index

* pr feedback: key size 16 -> 12
2023-11-11 21:14:18 -05:00
steviez b91da2242d
Change Blockstore max_root from RwLock<Slot> to AtomicU64 (#33998)
The Blockstore currently maintains a RwLock<Slot> of the maximum root
it has seen inserted. The value is initialized during
Blockstore::open() and updated during calls to Blockstore::set_roots().
The max root is queried fairly often for several use cases, and caching
the value is cheaper than constructing an iterator to look it up every
time.

However, the access pattern of this RwLock matches that of an atomic.
That is, there is no critical section of code that is run while the
lock is held. Rather, read/write locks are acquired in order to read/
update, respectively. So, change the RwLock<u64> to an AtomicU64.
2023-11-10 17:27:43 -06:00
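A stand-in sketch of the access pattern described above (not the real Blockstore type): readers simply load the value and writers store a larger one, so an atomic is sufficient:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

struct MaxRoot(AtomicU64);

impl MaxRoot {
    fn new(initial: u64) -> Self {
        Self(AtomicU64::new(initial))
    }

    // Readers: a plain atomic load replaces acquiring a read lock.
    fn max_root(&self) -> u64 {
        self.0.load(Ordering::Relaxed)
    }

    // Writers: fetch_max keeps the largest value even if updates race.
    fn set_roots(&self, roots: &[u64]) {
        if let Some(&max) = roots.iter().max() {
            self.0.fetch_max(max, Ordering::Relaxed);
        }
    }
}

fn main() {
    let max_root = MaxRoot::new(0);
    max_root.set_roots(&[3, 7, 5]);
    assert_eq!(max_root.max_root(), 7);
}
```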
steviez 1057ba8406
Use is_trusted bool in insert_shreds() instead of manually adjusting root (#34010)
The test_duplicate_with_pruned_ancestor test needs to get around a
limitation where the shreds with a parent older than the latest root are
discarded. The previous approach manually adjusted the root value in the
blockstore; this is not ideal in that it is fiddling with the inner
workings of Blockstore.

So, use the is_trusted argument in Blockstore::insert_shreds(); setting
is_trusted=true bypasses the sanity checks (including the parent >=
latest root check).
2023-11-09 22:56:48 -06:00
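An illustrative sketch (not the actual Blockstore API) of how a trusted-insert flag can bypass the parent-versus-root sanity check that the test needs to get around:

```rust
struct Shred {
    slot: u64,
    parent_slot: u64,
}

fn should_insert(shred: &Shred, latest_root: u64, is_trusted: bool) -> bool {
    // Trusted inserts (e.g. shreds the test itself crafted) skip the sanity
    // checks; untrusted shreds with a parent older than the latest root are
    // discarded.
    is_trusted || shred.parent_slot >= latest_root
}

fn main() {
    let pruned = Shred { slot: 20, parent_slot: 3 };
    assert!(!should_insert(&pruned, 10, false)); // normally discarded
    assert!(should_insert(&pruned, 10, true)); // accepted when trusted
}
```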
Tyera 28e08ac141
Add Blockstore::get_rooted_block_with_entries method (#33995)
* Add helper structs to hold block and entry summaries

* Add Blockstore::get_rooted_block_with_entries and dedupe innards

* Review comments
2023-11-09 10:03:56 -07:00
steviez 230779d459
Revert " Remove redundant bounds check from getBlock and getBlockTime… (#33996)
Revert " Remove redundant bounds check from getBlock and getBlockTime (#33901)"

This reverts commit 03a456e7bb.
2023-11-08 18:16:51 -06:00
steviez 03a456e7bb
Remove redundant bounds check from getBlock and getBlockTime (#33901)
JsonRpcRequestProcessor::check_blockstore_root() contained some logic
that performed duplicate sanity checking on a Blockstore fetch result.
The checking involved creating rocksdb iterators, which has non-trivial
overhead.

This PR removes the duplicate checking, and also adds comments to help
reason about how JsonRpcRequestProcessor interprets the Blockstore
result.
2023-11-08 12:09:10 -06:00
steviez 73815aee51
Move and rename ledger services from core to ledger (#33947)
These services currently live in core/; however, they operate on the
ledger. More so, these two services operate on the blockstore only,
and not necessarily the entire ledger. So, it makes sense to move these
services out of core and into ledger. We've recently been doing similar
changes with breaking things out into individual crates in order to
reduce the scope of core.

So, this change moves the services from core/ to ledger/, and replaces
ledger with blockstore.
2023-11-08 11:58:31 -06:00
steviez ee29647f67
Remove Option<_> from Blockstore::get_rooted_block_time() return type (#33955)
Instead of returning Result<Option<UnixTimestamp>>, return
Result<UnixTimestamp> and map None to an error. This makes the return
type similar to that of Blockstore::get_rooted_block().
2023-11-06 12:56:10 -06:00
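A minimal sketch of the return-type change described above; the error variant and type definitions here are illustrative, not the crate's actual ones:

```rust
#[derive(Debug)]
enum BlockstoreError {
    SlotUnavailable,
}

type UnixTimestamp = i64;

// Before: callers must handle both an error and a separate "not found" case.
fn get_rooted_block_time_old(
    found: Option<UnixTimestamp>,
) -> Result<Option<UnixTimestamp>, BlockstoreError> {
    Ok(found)
}

// After: None is folded into the error, matching get_rooted_block()'s shape.
fn get_rooted_block_time(
    found: Option<UnixTimestamp>,
) -> Result<UnixTimestamp, BlockstoreError> {
    found.ok_or(BlockstoreError::SlotUnavailable)
}

fn main() {
    let _ = get_rooted_block_time_old(None);
    assert!(get_rooted_block_time(None).is_err());
    assert_eq!(get_rooted_block_time(Some(1_699_000_000)).unwrap(), 1_699_000_000);
}
```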
Liam Vovk e840b9759a
Remove RWLock from EntryNotifier because it causes perf degradation (#33797)
* Remove RWLock from EntryNotifier because it causes perf degradation when entry notifications are enabled on geyser

* remove unused RWLock

* Remove RWLock
2023-11-06 00:55:36 -08:00
Ryo Onodera a4a66026e1
Introduce InstalledSchedulerPool trait (#33934)
* Introduce InstalledSchedulerPool

* Use type alias

* Remove log_prefix for now...

* Simplify return_to_pool()

* Simplify InstalledScheduler's context methods

* Reorder trait methods semantically

* Simplify Arc<Bank> handling
2023-11-03 16:02:12 +09:00
Ryo Onodera 136ab21f34
Define InstalledScheduler::wait_for_termination() (#33922)
* Define InstalledScheduler::wait_for_termination()

* Rename to wait_for_scheduler_termination

* Comment wait_for_termination and WaitReason better
2023-10-31 14:33:36 +09:00
Ryo Onodera 950ca5ea86
Add InstalledScheduler for blockstore_processor (#33875)
* Add InstalledScheduler for blockstore_processor

* Reverse if clauses

* Add more comments for process_batches()

* Elaborate comment

* Simplify schedule_transaction_executions type
2023-10-27 21:42:18 +09:00
Brooks d04ad6557d
Fastboots by default (#33883) 2023-10-27 07:23:29 -04:00
Tyera 7048e72d81
Blockstore: only return block times for rooted slots (#33871)
* Add Blockstore::get_rooted_block_time method and use in RPC

* Un-pub get_block_time
2023-10-26 11:38:58 -06:00
Tyera 22503f0ae9
BigtableUploadService: increment start_slot to prevent rechecks (#33870)
Increment start_slot
2023-10-26 09:21:20 -06:00
steviez a799a90a62
Update upload_confirmed_blocks() return value when no blocks to upload (#33861)
upload_confirmed_blocks() states that it will return the passed in
ending_slot when there are no blocks to upload. This is enforced in one
early return but not the other. The result is that BigTableUploadService
could potentially get stuck in a loop of trying to upload the same slot.

While this case seems to be caused when an operator restarts their node
without --no-snapshot-fetch (which can cause a gap in blockstore), we
can still be friendly and allow them to break out of this loop.
2023-10-26 10:34:07 +02:00
Pankaj Garg 9d42cd7efe
Initialize fork graph in program cache during bank_forks creation (#33810)
* Initialize fork graph in program cache during bank_forks creation

* rename BankForks::new to BankForks::new_rw_arc

* fix compilation

* no need to set fork_graph on insert()

* fix partition tests
2023-10-23 09:32:41 -07:00
Tao Zhu af9c754690
Crates that have a build.rs identical to frozen-abi's can just be symlinks (#33787)
crates that have a build.rs identical to frozen-abi's can just be symlinks
2023-10-21 13:33:10 -05:00
steviez 56ccffdaa5
Replace get_tmp_ledger_path!() with self cleaning version (#33702)
This macro is used a lot for tests to create a ledger path in order to
open a Blockstore. Files will be left on disk unless the test remembers
to call Blockstore::destroy() on the directory. So, instead of requiring
this, use the get_tmp_ledger_path_auto_delete!() macro that creates a
TempDir (which automatically deletes itself when it goes out of scope).
2023-10-21 11:38:31 +02:00
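The macro mentioned above relies on a self-deleting temporary directory; a minimal sketch of the underlying pattern using the tempfile crate (the nested path and the Blockstore call are placeholders):

```rust
use tempfile::TempDir;

fn main() {
    // TempDir removes the directory and its contents when dropped, so a test
    // that opens a database inside it needs no explicit cleanup call.
    let ledger_path = TempDir::new().expect("create temp ledger dir");
    let db_path = ledger_path.path().join("rocksdb");
    std::fs::create_dir_all(&db_path).expect("create db dir");

    // ... a real test would open a Blockstore at ledger_path.path() here ...

    // When `ledger_path` goes out of scope, the directory is deleted, which
    // is the behavior get_tmp_ledger_path_auto_delete!() relies on.
}
```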
Ryo Onodera 5a963529a8
Add BankWithScheduler for upcoming scheduler code (#33704)
* Add BankWithScheduler for upcoming scheduler code

* Remove too confusing insert_without_scheduler()

* Add doc comment as a bonus

* Simplify BankForks::banks()

* Add derive(Debug) on BankWithScheduler
2023-10-21 15:56:43 +09:00
Alexander Meißner a5c7c999e2
Bump solana_rbpf to v0.8.0 (#33679)
* Bumps solana_rbpf to v0.8.0

* Adjustments:
Replaces declare_syscall!() with declare_builtin_function!().
Removes Config::encrypt_runtime_environment.
Simplifies error propagation.
2023-10-20 21:39:50 +02:00