Commit Graph

1333 Commits

Author SHA1 Message Date
Tyera 573ec81fbb
storage-bigtable: Upload entries (#34099)
* Add entries table to bt init

* Add entries to storage-proto

* Use new Blockstore method in bigtable_upload

* Add LedgerStorage::upload_confirmed_block_with_entries and use in bigtable_upload

* Upload entries to bigtable
2023-11-28 11:47:22 -07:00
Brooks 5c7ab5dc08
ledger-tool does *not* fastboot by default (#34228) 2023-11-27 13:48:28 -05:00
steviez 9a7b681f0c
Remove key_size() method from Column trait (#34021)
This helper simply called std::mem::size_of::<Self::Index>(). However, all
of the underlying functions that create keys manually copy fields into a
byte array. The fields are copied end-to-end, whereas size_of() might
include alignment bytes.

For example, a (u64, u32) only has 12 bytes of "data", but it would
have size 16: 4 padding bytes are added so that the tuple's size is a
multiple of the 8-byte alignment required by the u64.
2023-11-19 23:05:32 -06:00
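A minimal sketch of the alignment point above, assuming a key made of a u64 slot and a u32 index (illustrative only, not the actual Column key code):

```rust
fn main() {
    // On typical 64-bit targets, size_of::<(u64, u32)>() is 16: 12 bytes of
    // data plus 4 padding bytes so the tuple's size is a multiple of the
    // u64's 8-byte alignment.
    assert_eq!(std::mem::size_of::<(u64, u32)>(), 16);

    // Manually packing the fields end-to-end, as the key-building code does,
    // yields only 12 bytes.
    let (slot, index): (u64, u32) = (42, 7);
    let mut key = [0u8; 12];
    key[..8].copy_from_slice(&slot.to_be_bytes());
    key[8..].copy_from_slice(&index.to_be_bytes());
    assert_eq!(key.len(), 12);
}
```
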
Brooks e02f25d5a2
Removes filler accounts (#34115) 2023-11-19 20:36:57 -05:00
Ryo Onodera 07f17096eb
Correctly store ALTs themselves into minimized snapshot (#34134) 2023-11-18 13:20:37 +09:00
steviez 29947ba867
Consolidate repeated code in Rocks::open() (#34131)
The function matches the access type and calls a different RocksDB
function depending on whether we have primary or secondary access. But
lots of the code is the same for these two paths, so de-duplicate the
repeated sections.
2023-11-17 10:18:08 -06:00
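A generic sketch of the de-duplication pattern described above; the types and open functions below are placeholders, not the actual rust-rocksdb API:

```rust
enum AccessType {
    Primary,
    Secondary,
}

struct Options {
    create_if_missing: bool,
}

// Stand-ins for the two RocksDB open calls that actually differ.
fn open_primary(opts: &Options, path: &str) -> String {
    format!("primary:{path} (create_if_missing={})", opts.create_if_missing)
}
fn open_secondary(_opts: &Options, path: &str) -> String {
    format!("secondary:{path}")
}

fn open(access: AccessType, path: &str) -> String {
    // Shared setup lives outside the match instead of being repeated per arm.
    let options = Options { create_if_missing: true };
    match access {
        AccessType::Primary => open_primary(&options, path),
        AccessType::Secondary => open_secondary(&options, path),
    }
}

fn main() {
    println!("{}", open(AccessType::Primary, "/tmp/ledger"));
}
```
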
Ashwin Sekar 6a5b8e86f3
shred: expose merkle root for use in blockstore (#34063)
* shred: expose merkle root for use in blockstore

* pr feedback: sorted, keep Result return type

* convert Result<Hash> -> Option<Hash>
2023-11-15 20:13:50 +00:00
Ashwin Sekar 693d5768c8
blockstore: make merkle root Optional in MerkleRootMeta column (#34091) 2023-11-15 13:09:18 -05:00
steviez d1d4c1c654
Condense Blockstore RPC API datapoints (#34045)
Currently, the RPC API methods that touch the Blockstore emit a datapoint
for each call. For an RPC node serving many requests, these datapoints
could get quite noisy, both in logs and in traffic to the metrics
agent.

So, instead of submitting a datapoint for every call, accumulate the
number of calls in a struct and report that entire struct periodically.
2023-11-14 12:19:14 -06:00
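A hedged sketch of the accumulate-then-report pattern; the struct, fields, and reporting interval are illustrative, not the actual Blockstore RPC metrics code:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{Duration, Instant};

#[derive(Default)]
struct RpcApiMetrics {
    num_get_block: AtomicU64,
    num_get_block_time: AtomicU64,
}

impl RpcApiMetrics {
    // One datapoint summarizing many calls, instead of one datapoint per call.
    fn report(&self) {
        println!(
            "blockstore-rpc-api num_get_block={} num_get_block_time={}",
            self.num_get_block.swap(0, Ordering::Relaxed),
            self.num_get_block_time.swap(0, Ordering::Relaxed),
        );
    }
}

fn main() {
    let metrics = RpcApiMetrics::default();
    let mut last_report = Instant::now();
    for _ in 0..1_000 {
        // Each RPC call only bumps a counter...
        metrics.num_get_block.fetch_add(1, Ordering::Relaxed);
        // ...and the accumulated counts are emitted periodically.
        if last_report.elapsed() >= Duration::from_secs(10) {
            metrics.report();
            last_report = Instant::now();
        }
    }
    metrics.report();
}
```
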
Tyera 0e91e96967
Geyser: add starting entry to ReplicaEntryInfo(V2) (#33963)
* Add ReplicaEntryInfoV2

* Add starting_transaction_index field to EntryNotification

* Populate starting_transaction_index in replay stage

* Cache and populate starting_transaction_index in banking stage

* Build ReplicaEntryInfoV2
2023-11-14 09:49:26 -07:00
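A sketch of the versioning pattern the bullets above describe: a V2 struct adds starting_transaction_index while the original stays intact for older consumers. The field sets here are abbreviated placeholders, not the actual geyser plugin interface definitions.

```rust
#[allow(dead_code)]
struct ReplicaEntryInfo {
    slot: u64,
    index: usize,
}

#[allow(dead_code)]
struct ReplicaEntryInfoV2 {
    slot: u64,
    index: usize,
    starting_transaction_index: usize, // new in V2
}

// Plugins match on the version they understand.
#[allow(dead_code)]
enum ReplicaEntryInfoVersions<'a> {
    V0_0_1(&'a ReplicaEntryInfo),
    V0_0_2(&'a ReplicaEntryInfoV2),
}

fn main() {
    let entry = ReplicaEntryInfoV2 { slot: 1, index: 0, starting_transaction_index: 0 };
    let _notification = ReplicaEntryInfoVersions::V0_0_2(&entry);
}
```
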
Brooks 725ab37bf4
clippy: Replaces .get(0) with .first() (#34048) 2023-11-13 17:22:17 -05:00
Ashwin Sekar e457c02879
add merkle root meta column to blockstore (#33979)
* add merkle root meta column to blockstore

* pr feedback: remove write/reads to column

* pr feedback: u64 -> u32 + revert

* pr feedback: fec_set_index u32, use Self::Index

* pr feedback: key size 16 -> 12
2023-11-11 21:14:18 -05:00
steviez b91da2242d
Change Blockstore max_root from RwLock<Slot> to AtomicU64 (#33998)
The Blockstore currently maintains a RwLock<Slot> of the maximum root
it has seen inserted. The value is initialized during
Blockstore::open() and updated during calls to Blockstore::set_roots().
The max root is queried fairly often for several use cases, and caching
the value is cheaper than constructing an iterator to look it up every
time.

However, the access patterns of this RwLock match those of an atomic.
That is, there is no critical section of code that is run while the
lock is held. Rather, read/write locks are acquired only in order to read/
update, respectively. So, change the RwLock<u64> to an AtomicU64.
2023-11-10 17:27:43 -06:00
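A minimal sketch contrasting the two access patterns; `max_root` is a stand-in field, not the real Blockstore struct:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::RwLock;

struct Before {
    max_root: RwLock<u64>,
}
struct After {
    max_root: AtomicU64,
}

fn main() {
    // With the RwLock, the lock is acquired only long enough to read or
    // write a single value; there is no larger critical section.
    let before = Before { max_root: RwLock::new(0) };
    *before.max_root.write().unwrap() = 100;
    assert_eq!(*before.max_root.read().unwrap(), 100);

    // The same pattern expressed with an atomic, avoiding lock overhead.
    let after = After { max_root: AtomicU64::new(0) };
    after.max_root.store(100, Ordering::Relaxed);
    assert_eq!(after.max_root.load(Ordering::Relaxed), 100);
}
```
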
steviez 1057ba8406
Use is_trusted bool in insert_shreds() instead of manually adjusting root (#34010)
The test_duplicate_with_pruned_ancestor test needs to get around a
limitation where shreds with a parent older than the latest root are
discarded. The previous approach manually adjusted the root value in the
blockstore; this is not ideal in that it fiddles with the inner
workings of the Blockstore.

So, use the is_trusted argument in Blockstore::insert_shreds(); setting
is_trusted=true bypasses the sanity checks (including the parent >=
latest root check).
2023-11-09 22:56:48 -06:00
Tyera 28e08ac141
Add Blockstore::get_rooted_block_with_entries method (#33995)
* Add helper structs to hold block and entry summaries

* Add Blockstore::get_rooted_block_with_entries and dedupe innards

* Review comments
2023-11-09 10:03:56 -07:00
steviez 230779d459
Revert " Remove redundant bounds check from getBlock and getBlockTime… (#33996)
Revert " Remove redundant bounds check from getBlock and getBlockTime (#33901)"

This reverts commit 03a456e7bb.
2023-11-08 18:16:51 -06:00
steviez 03a456e7bb
Remove redundant bounds check from getBlock and getBlockTime (#33901)
JsonRpcRequestProcessor::check_blockstore_root() contained some logic
that performed duplicate sanity checking on a Blockstore fetch result.
The checking involved creating rocksdb iterators, which has non-trivial
overhead.

This PR removes the duplicate checking, and also adds comments to help
reason about how JsonRpcRequestProcessor interprets the Blockstore
result.
2023-11-08 12:09:10 -06:00
steviez 73815aee51
Move and rename ledger services from core to ledger (#33947)
These services currently live in core/; however, they operate on the
ledger. More so, these two services operate on the blockstore only,
and not necessarily the entire ledger. So, it makes sense to move these
services out of core and into ledger. We've recently been making similar
changes, breaking things out into individual crates in order to
reduce the scope of core.

So, this change moves the services from core/ to ledger/, and replaces
ledger with blockstore.
2023-11-08 11:58:31 -06:00
steviez ee29647f67
Remove Option<_> from Blockstore::get_rooted_block_time() return type (#33955)
Instead of returning Result<Option<UnixTimestamp>>, return
Result<UnixTimestamp> and map None to an error. This makes the return
type similar to that of Blockstore::get_rooted_block().
2023-11-06 12:56:10 -06:00
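A small sketch of the Option-to-Result mapping; the lookup function and error type are placeholders, not the actual Blockstore column read or BlockstoreError:

```rust
type UnixTimestamp = i64;

// Stand-in for fetching the block time of a rooted slot from the column.
fn lookup_block_time(slot: u64) -> Option<UnixTimestamp> {
    if slot == 0 { Some(1_699_000_000) } else { None }
}

// Instead of returning Result<Option<UnixTimestamp>>, map the missing case
// to an error so callers handle a single Result.
fn get_rooted_block_time(slot: u64) -> Result<UnixTimestamp, String> {
    lookup_block_time(slot).ok_or_else(|| format!("block time for slot {slot} unavailable"))
}

fn main() {
    assert!(get_rooted_block_time(0).is_ok());
    assert!(get_rooted_block_time(1).is_err());
}
```
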
Liam Vovk e840b9759a
Remove RWLock from EntryNotifier because it causes perf degradation (#33797)
* Remove RWLock from EntryNotifier because it causes perf degradation when entry notifications are enabled on geyser

* remove unused RWLock

* Remove RWLock
2023-11-06 00:55:36 -08:00
Ryo Onodera a4a66026e1
Introduce InstalledSchedulerPool trait (#33934)
* Introduce InstalledSchedulerPool

* Use type alias

* Remove log_prefix for now...

* Simplify return_to_pool()

* Simplify InstalledScheduler's context methods

* Reorder trait methods semantically

* Simplify Arc<Bank> handling
2023-11-03 16:02:12 +09:00
Ryo Onodera 136ab21f34
Define InstalledScheduler::wait_for_termination() (#33922)
* Define InstalledScheduler::wait_for_termination()

* Rename to wait_for_scheduler_termination

* Comment wait_for_termination and WaitReason better
2023-10-31 14:33:36 +09:00
Ryo Onodera 950ca5ea86
Add InstalledScheduler for blockstore_processor (#33875)
* Add InstalledScheduler for blockstore_processor

* Reverse if clauses

* Add more comments for process_batches()

* Elaborate comment

* Simplify schedule_transaction_executions type
2023-10-27 21:42:18 +09:00
Brooks d04ad6557d
Fastboots by default (#33883) 2023-10-27 07:23:29 -04:00
Tyera 7048e72d81
Blockstore: only return block times for rooted slots (#33871)
* Add Blockstore::get_rooted_block_time method and use in RPC

* Un-pub get_block_time
2023-10-26 11:38:58 -06:00
Tyera 22503f0ae9
BigtableUploadService: increment start_slot to prevent rechecks (#33870)
Increment start_slot
2023-10-26 09:21:20 -06:00
steviez a799a90a62
Update upload_confirmed_blocks() return value when no blocks to upload (#33861)
upload_confirmed_blocks() states that it will return the passed in
ending_slot when there are no blocks to upload. This is enforced in one
early return but not the other. The result is that BigTableUploadService
could potentially get stuck in a loop of trying to upload the same slot.

While this case seems to be caused when an operator restarts their node
without --no-snapshot-fetch (which can cause a gap in blockstore), we
can still be friendly and allow them to break out of this loop.
2023-10-26 10:34:07 +02:00
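A sketch of the invariant being enforced, using an illustrative function shape rather than the real upload_confirmed_blocks() signature: both early returns hand back ending_slot so the caller's start slot keeps advancing.

```rust
// `already_uploaded` stands in for whatever filtering the real uploader does
// before deciding there is nothing left to send.
fn already_uploaded(slot: u64) -> bool {
    slot < 100
}

fn upload_confirmed_blocks(blocks_in_range: &[u64], ending_slot: u64) -> u64 {
    // Early return #1: no blocks at all in the requested range.
    if blocks_in_range.is_empty() {
        return ending_slot;
    }
    // Early return #2: every block was filtered out (e.g. already uploaded).
    // Returning ending_slot here too prevents the caller from retrying the
    // same start slot forever.
    let to_upload: Vec<u64> = blocks_in_range
        .iter()
        .copied()
        .filter(|slot| !already_uploaded(*slot))
        .collect();
    match to_upload.last() {
        Some(&last) => last,
        None => ending_slot,
    }
}

fn main() {
    assert_eq!(upload_confirmed_blocks(&[], 50), 50);
    assert_eq!(upload_confirmed_blocks(&[10, 20], 50), 50);
    assert_eq!(upload_confirmed_blocks(&[10, 200], 50), 200);
}
```
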
Pankaj Garg 9d42cd7efe
Initialize fork graph in program cache during bank_forks creation (#33810)
* Initialize fork graph in program cache during bank_forks creation

* rename BankForks::new to BankForks::new_rw_arc

* fix compilation

* no need to set fork_graph on insert()

* fix partition tests
2023-10-23 09:32:41 -07:00
Tao Zhu af9c754690
Crates whose build.rs is identical to frozen-abi's can just be a symlink (#33787)
crates whose build.rs is identical to frozen-abi's can just be a symlink
2023-10-21 13:33:10 -05:00
steviez 56ccffdaa5
Replace get_tmp_ledger_path!() with self cleaning version (#33702)
This macro is used a lot for tests to create a ledger path in order to
open a Blockstore. Files will be left on disk unless the test remembers
to call Blockstore::destroy() on the directory. So, instead of requiring
this, use the get_tmp_ledger_path_auto_delete!() macro that creates a
TempDir (which automatically deletes itself when it goes out of scope).
2023-10-21 11:38:31 +02:00
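A minimal sketch of why a TempDir-based helper cleans up after itself, using the tempfile crate directly rather than the blockstore macros:

```rust
use tempfile::TempDir;

fn main() -> std::io::Result<()> {
    let path;
    {
        let ledger_dir = TempDir::new()?; // directory created on disk
        path = ledger_dir.path().to_path_buf();
        assert!(path.exists());
    } // `ledger_dir` dropped here -> directory and its contents are removed
    assert!(!path.exists());
    Ok(())
}
```
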
Ryo Onodera 5a963529a8
Add BankWithScheduler for upcoming scheduler code (#33704)
* Add BankWithScheduler for upcoming scheduler code

* Remove too confusing insert_without_scheduler()

* Add doc comment as a bonus

* Simplify BankForks::banks()

* Add derive(Debug) on BankWithScheduler
2023-10-21 15:56:43 +09:00
Alexander Meißner a5c7c999e2
Bump solana_rbpf to v0.8.0 (#33679)
* Bumps solana_rbpf to v0.8.0

* Adjustments:
Replaces declare_syscall!() with declare_builtin_function!().
Removes Config::encrypt_runtime_environment.
Simplifies error propagation.
2023-10-20 21:39:50 +02:00
Pankaj Garg 59cb3b57ee
Set a global fork graph in program cache (#33776)
* Set a global fork graph in program cache

* fix deadlock

* review feedback
2023-10-20 08:47:03 -07:00
steviez 383aef218d
Make Blockstore populate TransactionStatusIndex entries (#33756)
A previous change removed logic that populated the
TransactionStatusIndex entries at each of the legacy primary index keys
(0 and 1). While these entries will not be read or written in the
future, they are necessary for backwards compatibility. Namely,
branches <= v1.17 expect these entries to be present, and .unwrap() calls
could fail if they are not.

So, add the initialization of these entries back into Blockstore logic.
We can remove initialization of these entries once our stable and beta
branches are both versions that do not expect these entries to be
present (should be v1.18).
2023-10-19 23:28:53 +02:00
Tyera 01a3b1b52f
Blockstore: clean/save old TransactionMemos sensibly (#33678)
* Convert OldestSlot to named struct

* Add clean_slot_0 to OldestSlot

* Set AtomicBool to true when all primary-index keys returning slot 0 should be purged

* Add PurgedFilter::clean_slot_0

* Use clean_slot_0 to preserve deprecated TransactionMemos

* Also set AtomicBool to true immediately on boot, if highest_primary_index_slot.is_none

* Add test

* Fixup test
2023-10-12 22:43:27 -06:00
Brooks 452fd5d384
Adds `--no-skip-initial-accounts-db-clean` *hidden* CLI flag (#33664) 2023-10-12 13:32:40 -04:00
Tyera d286c00a30
Blockstore: track when all primary-index data has been purged (#33668)
* Fix typo

* Add Blockstore::highest_primary_index_slot

* Add getter

* Populate highest_primary_index_slot on boot

* Wipe highest_primary_index_slot when surpassed by oldest_slot

* Update highest_primary_index_slot in exact purge

* Return indexes early if highest_primary_index_slot has been cleared

* Limit read_transaction_status based on highest_primary_index_slot

* Limit read_transaction_memos based on highest_primary_index_slot

* Use highest_primary_index_slot to add early return to get_transaction_status_with_counter

* Fixup tests

* Use existing getter for highest_primary_index_slot

Co-authored-by: steviez <stevecz@umich.edu>

---------

Co-authored-by: steviez <stevecz@umich.edu>
2023-10-12 07:12:33 +00:00
steviez 6009d49cc3
Remove dummy entries in Blockstore special columns (part 2) (#33649)
* Always call initialize_transaction_status_index() at startup;
  doing so will ensure dummy entries are actually cleaned
* Rename initialize_transaction_status_index()
* Stop initializing TransactionStatusIndex column entries; these
  are no longer needed, and old software will initialize them if needed
2023-10-11 13:13:10 -05:00
steviez 33e1dd71f3
Remove dated Blockstore PurgeType::PrimaryIndex code (#33631)
* Update instances of PurgeType::PrimaryIndex to PurgeType::Exact
* Remove now unused functions
* Remove unused active_transaction_status_index field
2023-10-10 16:35:42 -05:00
Tyera 509d6acd2b
Remove primary index from Blockstore special-column keys (#33419)
* Add helper trait for column key deprecation

* Add WriteBatch::delete_raw

* Add ProtobufColumn::get_raw_protobuf_or_bincode

* Add ColumnIndexDeprecation iterator methods

* Impl ColumnIndexDeprecation for TransactionStatus (doesn't build)

* Update TransactionStatus put

* Update TransactionStatus purge_exact

* Fix read_transaction_status

* Fix get_transaction_status_with_counter

* Fix test_all_empty_or_min (builds except tests)

* Fix test_get_rooted_block

* Fix test_persist_transaction_status

* Fix test_get_transaction_status

* Fix test_get_rooted_transaction

* Fix test_get_complete_transaction

* Fix test_lowest_cleanup_slot_and_special_cfs

* Fix test_map_transactions_to_statuses

* Fix test_transaction_status_protobuf_backward_compatability

* Fix test_special_columns_empty

* Delete test_transaction_status_index

* Delete test_purge_transaction_status

* Ignore some tests until both special columns are dealt with (all build)

* Impl ColumnIndexDeprecation for AddressSignatures (doesn't build)

* Add BlockstoreError variant

* Update AddressSignatures put

* Remove unneeded active_transaction_status_index column lock

* Update AddressSignatures purge_exact

* Fix find_address_signatures_for_slot methods

* Fix get_block_signatures methods

* Fix get_confirmed_signatures_for_address2

* Remove unused method

* Fix test_all_empty_or_min moar (builds except tests)

* Fix tests (all build)

* Fix test_get_confirmed_signatures_for_address

* Fix test_lowest_cleanup_slot_and_special_cfs moar

* Unignore tests (builds except tests)

* Fix test_purge_transaction_status_exact

* Fix test_purge_front_of_ledger

* Fix test_purge_special_columns_compaction_filter (all build)

* Move some test-harness stuff around

* Add test cases for purge_special_columns_with_old_data

* Add test_read_transaction_status_with_old_data

* Add test_get_transaction_status_with_old_data

* Review comments

* Move rev of block-signatures into helper

* Improve deprecated_key impls

* iter_filtered -> iter_current_index_filtered

* Add comment to explain why use the smallest (index, Signature) to seed the iterator

* Impl ColumnIndexDeprecation for TransactionMemos (doesn't build)

* Update TransactionMemos put

* Add LedgerColumn::get_raw

* Fix read_transaction_memos

* Add TransactionMemos to purge_special_columns_exact

* Add TransactionMemos to compaction filter

* Take find_address_signatures out of service

* Remove faulty delete_new_column_key logic

* Simplify comments
2023-10-10 10:40:36 -06:00
Ryo Onodera 1704789247
Define tick related helper test methods (#33537)
* Define tick related helper methods

* dcou VoteSimulator

* blacklist ledger-tool for dcou

* fix dcou ci...

* github
2023-10-10 09:23:18 +09:00
Tyera f075867ceb
Blockstore::get_sigs_for_addr2: ensure lowest_slot >= first_available_block (#33556)
* Set empty lowest_slot to first_available_block and remove check in loop

* Ensure get_transaction_status_with_counter returns slots >= first_available_block

* Actually cleanup ledger
2023-10-06 15:12:08 -06:00
Tyera 6f1922b4fd
Add early return to Blockstore::find_address_signatures methods (#33545)
Add early return to find_address_signatures methods
2023-10-05 19:57:35 +00:00
steviez 666ce9b3be
Fix blockstore-purge delete_files_in_range_us metric (#33512)
This field was being filled with the wrong value
2023-10-05 13:34:04 -05:00
steviez fac0c3c0fc
Make Blockstore::purge_special_columns_exact() bail if columns empty (#33534)
The special columns, TransactionStatus and AddressSignatures, are only
populated if --enable-rpc-transaction-history is passed. Cleaning these
columns for a range of slots is very expensive, as the block for each
slot must be read, deserialized, and then parsed to extract all of the
transaction signatures and address pubkeys.

This change adds a simple check to see if there are any values at all in
the special columns. If there are not, then the whole process described
above can be skipped for nodes that are not storing the special columns.
2023-10-05 13:15:24 -05:00
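A sketch of the cheap guard described above; the column type and emptiness check are placeholders for the real LedgerColumn iterators:

```rust
struct Column {
    values: Vec<(u64, Vec<u8>)>,
}

impl Column {
    fn is_empty(&self) -> bool {
        self.values.is_empty()
    }
}

fn purge_special_columns_exact(tx_status: &Column, addr_sigs: &Column, from: u64, to: u64) {
    if tx_status.is_empty() && addr_sigs.is_empty() {
        // Nothing stored: skip reading, deserializing, and parsing the block
        // for every slot in [from, to].
        return;
    }
    for _slot in from..=to {
        // ... read block, extract signatures and address pubkeys, delete keys ...
    }
}

fn main() {
    let empty = Column { values: vec![] };
    // Returns immediately for nodes not running --enable-rpc-transaction-history.
    purge_special_columns_exact(&empty, &empty, 0, 1_000_000);
}
```
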
steviez 6b96a2259f
Remove unused code in Blockstore underlying impl (#33538)
* Remove LedgerColumn::delete_slot() method
* Remove primary_index() function from Trait column
2023-10-05 13:14:09 -05:00
steviez 402e9a5fff
Use copy_from_slice() over clone_from_slice() for u8 slice copies (#33536)
clone_from_slice() would hypothetically visit each item in the slice and
clone it, whereas copy_from_slice() can memcpy the whole slice in one go.

Technically, Rust does the right thing for us by making
clone_from_slice() defer to copy_from_slice() for types that implement
the Copy trait. However, we should still use the more efficient method
directly to show intent.
2023-10-05 13:13:09 -05:00
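A tiny example of the equivalence for u8 (a Copy type); copy_from_slice() simply states the memcpy intent directly:

```rust
fn main() {
    let src = [1u8, 2, 3, 4];
    let mut a = [0u8; 4];
    let mut b = [0u8; 4];
    a.clone_from_slice(&src); // defers to copy_from_slice() for Copy types
    b.copy_from_slice(&src);  // explicit memcpy-style copy
    assert_eq!(a, b);
}
```
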
kirill lykov a25bb2e303
Add error messages for BlockstoreError (#33427)
* Add error messages for BlockstoreError

* display underlying errors

* address PR comments: remove unnecessary error from msg

Co-authored-by: steviez <stevecz@umich.edu>

* fix typo

Co-authored-by: steviez <stevecz@umich.edu>

---------

Co-authored-by: steviez <stevecz@umich.edu>
2023-10-04 18:17:42 +02:00
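A sketch of attaching display messages to error variants and wrapping an underlying error with thiserror; the variant names are illustrative, not the actual BlockstoreError definition:

```rust
use thiserror::Error;

// Illustrative variants only; the real BlockstoreError has different members
// and wraps the rocksdb error type rather than std::io::Error.
#[derive(Error, Debug)]
enum ExampleBlockstoreError {
    #[error("slot {0} is not rooted")]
    SlotNotRooted(u64),
    #[error("underlying storage error: {0}")]
    Io(#[from] std::io::Error),
}

fn main() {
    let err = ExampleBlockstoreError::SlotNotRooted(42);
    // The #[error(...)] attribute supplies the human-readable message.
    assert_eq!(err.to_string(), "slot 42 is not rooted");
}
```
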
Andrew Fitzgerald 5a95e5676e
Manually add lookup table addresses instead of sanitizing (#33273) 2023-10-04 08:04:43 -07:00
Tyera 144e6d6eec
Blockstore special columns: minimize deletes in PurgeType::Exact (#33498)
* Adjust test_purge_transaction_status_exact to test slots that cross primary indexes

* Minimize deletes by checking the primary-index range

* Fix test_purge_special_columns_compaction_filter
2023-10-04 00:58:30 +00:00