This is a followup to
23991ee53 / https://github.com/bitcoin/bitcoin/pull/15600
to also use madvise(2) on FreeBSD, so that sensitive data allocated
with secure_allocator is kept out of core files in addition to being
kept out of swap.
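A minimal sketch of the idea; the helper name and the pointer/length arguments are illustrative, not the actual secure_allocator internals:
```
#include <cstddef>
#include <sys/mman.h>

// Illustrative helper: advise the kernel to keep a locked page range out of
// core dumps. MADV_DONTDUMP is the Linux flag, MADV_NOCORE the FreeBSD one.
static void exclude_from_core_dump(void* ptr, size_t len)
{
#if defined(MADV_DONTDUMP)
    madvise(ptr, len, MADV_DONTDUMP);
#elif defined(MADV_NOCORE)
    madvise(ptr, len, MADV_NOCORE);
#endif
}
```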
The test uses reinterpret_cast<void*> on unallocated memory. Using this
memory in printchunk as char* causes a segfault, so have printchunk take
void* instead.
Zcash: Includes change from bitcoin/bitcoin#13163
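A sketch of the fix (the exact signature of the real printchunk may differ): streaming a void* prints the pointer value itself, whereas a char* would be dereferenced as a NUL-terminated string and crash when the address points at unallocated memory.
```
#include <cstddef>
#include <iomanip>
#include <iostream>

// Sketch only: taking void* makes operator<< print the address rather than
// try to read a string from it.
static void printchunk(void* base, size_t sz, bool used)
{
    std::cout << "0x" << std::hex << std::setw(16) << std::setfill('0') << base
              << " 0x" << std::hex << std::setw(16) << std::setfill('0') << sz
              << " 0x" << used << std::endl;
}
```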
Changes in #12048 cause a compilation error in Arena::walk() when
ARENA_DEBUG is defined. Specifically, Arena's chunks_free map was
changed to have a different value type.
Additionally, missing includes cause other compilation errors when
ARENA_DEBUG is defined.
Reproduced with:
make CPPFLAGS=-DARENA_DEBUG
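A self-contained illustration of the assumed type change (the member names mirror, but are not necessarily identical to, the real Arena): the free-chunk map's value is now an iterator into a size-ordered multimap rather than the size itself, so debug code must print `chunk.second->first` instead of `chunk.second`, and the debug output needs `<iostream>`.
```
#include <cstddef>
#include <iostream>
#include <map>
#include <unordered_map>

int main()
{
    std::multimap<size_t, char*> size_to_free_chunk;
    std::unordered_map<char*, std::multimap<size_t, char*>::const_iterator> chunks_free;

    char buf[32];
    chunks_free.emplace(buf, size_to_free_chunk.emplace(sizeof(buf), buf));

    for (const auto& chunk : chunks_free)
        std::cout << static_cast<void*>(chunk.first) << " size "
                  << chunk.second->first << std::endl; // was: chunk.second
    return 0;
}
```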
This replaces the first-fit algorithm used in the Arena with a best-fit. According to "Dynamic Storage Allocation: A Survey and Critical Review", Wilson et al. 1995, http://www.scs.stanford.edu/14wi-cs140/sched/readings/wilson.pdf, both strategies work well in practice.
The advantage of using best-fit is that we can switch the slow O(n) algorithm to O(log(n)) operations. Additionally, some previously O(log(n)) operations are now replaced with O(1) operations by using a hash map. The end effect is that the benchmark runs about 2.5 times faster on my machine:
old: BenchLockedPool, 5, 530, 5.25749, 0.00196938, 0.00199755, 0.00198172
new: BenchLockedPool, 5, 1300, 5.11313, 0.000781493, 0.000793314, 0.00078606
I've run all unit tests and benchmarks.
Zcash: Excludes change to benchmark.
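A minimal sketch of the best-fit lookup, with assumed member names (`size_to_free_chunk`, `chunks_free`); the real Arena also splits the chosen chunk, records it as used, and merges neighbours on free, all of which is omitted here:
```
#include <cstddef>
#include <map>
#include <unordered_map>

class ArenaSketch
{
    // size -> chunk base, ordered so lower_bound() finds the best fit
    std::multimap<size_t, char*> size_to_free_chunk;
    // chunk base -> its entry above, for O(1) average lookups
    std::unordered_map<char*, std::multimap<size_t, char*>::iterator> chunks_free;

public:
    char* alloc(size_t size)
    {
        // Best fit: the smallest free chunk of at least `size` bytes,
        // found in O(log n) instead of scanning every chunk (first fit).
        auto it = size_to_free_chunk.lower_bound(size);
        if (it == size_to_free_chunk.end())
            return nullptr;
        char* base = it->second;
        chunks_free.erase(base);
        size_to_free_chunk.erase(it);
        return base;
    }
};
```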
In the case of CKey's destructor, it seems to have been an oversight in
f4d1fc259 not to remove it. As it stands, the user-declared destructor
suppresses the implicitly generated move constructors/assignment
operators for CKey, which may have a performance impact.
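A self-contained illustration of the effect (not the real CKey): a user-declared destructor suppresses the implicitly generated move constructor and move assignment operator, so moves silently degrade to copies.
```
#include <utility>
#include <vector>

struct WithDtor {
    std::vector<unsigned char> keydata = std::vector<unsigned char>(32);
    ~WithDtor() { /* e.g. wipe key material */ }
};

int main()
{
    WithDtor a;
    WithDtor b = std::move(a); // falls back to the copy constructor
    // `a.keydata` still holds its 32 bytes because it was copied, not moved.
    return a.keydata.size() == 32 ? 0 : 1;
}
```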
Check for unreasonable alloc size in LockedPool rather than lancing through new
Arenas until we improbably find one worthy of the quixotic request or the system
can support no more Arenas.
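A sketch of the check; `ARENA_SIZE` and the free-standing function are stand-ins for the real LockedPool members:
```
#include <cstddef>

// Stand-in for the real arena size constant.
static const size_t ARENA_SIZE = 256 * 1024;

// Reject impossible requests up front instead of creating arena after arena.
void* pool_alloc_sketch(size_t size)
{
    if (size == 0 || size > ARENA_SIZE)
        return nullptr;
    // ... otherwise try existing arenas, then create a new one ...
    return nullptr; // placeholder for the real allocation path
}
```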
```
getmemoryinfo
Returns an object containing information about memory usage.
Result:
{
  "locked": {           (json object) Information about locked memory manager
    "used": xxxxx,      (numeric) Number of bytes used
    "free": xxxxx,      (numeric) Number of bytes available in current arenas
    "total": xxxxxxx,   (numeric) Total number of bytes managed
    "locked": xxxxxx,   (numeric) Amount of bytes that succeeded locking. If this number is smaller than total, locking pages failed at some point and key data could be swapped to disk.
  }
}
Examples:
> bitcoin-cli getmemoryinfo
> curl --user myusername --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "getmemoryinfo", "params": [] }' -H 'content-type: text/plain;' http://127.0.0.1:8332/
```
Add a pool for locked memory chunks, replacing LockedPageManager.
This is something I've been wanting to do for a long time. The current
approach of locking objects where they happen to be on the stack or heap
in-place causes a lot of mlock/munlock system call overhead, slowing
down any handling of keys.
Also locked memory is a limited resource on many operating systems (and
using a lot of it bogs down the system), so the previous approach of
locking every page that may contain any key information (but also other
information) is wasteful.
Replace these with vectors allocated from the secure allocator.
This avoids mlock syscall churn on stack pages, as well as makes
it possible to get rid of these functions.
Please review this commit and the previous one carefully to confirm
that no `sizeof(vectortype)` remains in the memcpy and memcmp usage
(ick!), and that `.data()` or `&vec[x]` is used as appropriate instead
of `&vec`.
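A small example of the pattern being reviewed (the function and names are illustrative): with vectors, `sizeof(vec)` is the size of the vector object and `&vec` is its address, not the element storage, so `.data()` and `.size()` must be used instead.
```
#include <cstring>
#include <vector>

void copy_key(const std::vector<unsigned char>& src, std::vector<unsigned char>& dst)
{
    dst.resize(src.size());
    // Wrong: std::memcpy(&dst, &src, sizeof(dst));
    // Right: use the element storage and element count.
    std::memcpy(dst.data(), src.data(), src.size());
}
```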
Change CCrypter to use vectors with the secure allocator instead of
buffers in the object itself, which would end up on the stack. This
avoids having to call LockedPageManager to lock stack memory pages to
prevent the memory from being swapped to disk, which was wasteful.
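A hedged sketch of the shape of the change; `secure_allocator` is the allocator from `support/allocators/secure.h` in the source tree, and the class name and sizes shown are illustrative:
```
#include <vector>
#include <support/allocators/secure.h> // secure_allocator from the source tree

// Sketch only: key material lives in vectors backed by the secure allocator
// (locked in memory, wiped on free) rather than in fixed-size buffers that
// sit inside the object and may end up on the stack.
class CCrypterSketch
{
    std::vector<unsigned char, secure_allocator<unsigned char>> vchKey;
    std::vector<unsigned char, secure_allocator<unsigned char>> vchIV;

public:
    CCrypterSketch() : vchKey(32), vchIV(16) {}
};
```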
Replace OpenSSL AES with ctaes-based version
Backported from upstream PR https://github.com/bitcoin/bitcoin/pull/7689.
This is backported primarily to remove merge conflicts for a subsequent
backport, and also helps us towards removing OpenSSL. Its actual usage
in wallet encryption would be replaced by a more modern construction
before we make wallet encryption a supported feature, but for now this
does not affect anyone using the experimental feature.
flush witness cache (SetBestChain()) on clean shutdown
Closes #4596, follow-on to #4573. In addition to flushing witness data on shutdown, fix the RPC test that was preventing this change from being part of #4573.
metrics: Collect general stats before clearing screen
This prevents the metrics screen from flashing if locks are being held
by long-running processes, specifically cs_main during block validation.
We split up locking on cs_main and cs_vNodes to make obtaining the locks
easier, at the expense of potentially having slightly out-of-sync
statistics (which doesn't really matter, as all we are fetching from the
latter lock is the number of connected peers).
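A self-contained illustration of the pattern, with `std::mutex` stubs standing in for the real cs_main/cs_vNodes critical sections: take each lock only long enough to copy the statistics out, then clear and redraw the screen without holding either lock.
```
#include <cstddef>
#include <mutex>
#include <vector>

std::mutex cs_main_stub, cs_vNodes_stub;
int chain_height = 0;
std::vector<int> nodes_stub;

void draw_metrics()
{
    int height;
    size_t connections;
    {
        std::lock_guard<std::mutex> lock(cs_main_stub);
        height = chain_height;
    }
    {
        std::lock_guard<std::mutex> lock(cs_vNodes_stub);
        connections = nodes_stub.size();
    }
    // ... clear the screen and print `height` and `connections` here ...
    (void)height; (void)connections;
}
```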
Send alert to put pre-Heartwood nodes into safe mode.
The alert targets nodes running protocol version <= 170010.
Heartwood-compatible nodes run protocol version >= 170011.
Fix "--disable-mining" build regression
Closes #4634
Test by building with:
* `CONFIGURE_FLAGS="--disable-tests --disable-mining --disable-bench" zcutil/build.sh`
* `zcutil/distclean.sh`
* `CONFIGURE_FLAGS="--disable-mining" zcutil/build.sh`
After the second build, run `qa/zcash/full-test-suite.py`. Stop when it gets to the RPC tests, which will hang. The preceding parts of the test suite are all expected to pass.
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
Pass HistoryNode through Rust FFI as a C array
`std::array<T>` is guaranteed to store `T` contiguously. However, there is
no guarantee that `sizeof(std::array<unsigned char, N>) == N`, which
prevents us from interpreting `std::array<std::array<unsigned char, N>, 32>`
as `&[[u8; N]]` on the Rust side of the FFI.
Instead, we define `HistoryNode` as a struct wrapping a C array, which
(as checked by `static_assert`) contains no padding.
This is equivalent to 82fe37d22b, which
fixed this issue when passing a slice of `HistoryEntry`s from C++ to Rust;
the bug fixed here is writing `HistoryNodes` from Rust into C++ memory.
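A sketch of the shape of the fix (the length constant and field name here are illustrative): wrapping a plain C array in a struct lets us static_assert the absence of padding, so a contiguous span of `HistoryNode`s can safely be handed to Rust as `&mut [[u8; N]]`.
```
#include <cstddef>

// Illustrative length; the real constant lives in the Zcash history code.
const size_t NODE_SERIALIZED_LENGTH = 171;

struct HistoryNode {
    unsigned char bytes[NODE_SERIALIZED_LENGTH];
};

static_assert(sizeof(HistoryNode) == NODE_SERIALIZED_LENGTH,
              "HistoryNode must not contain any padding");
```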