Each subsection has to have `serde(default)` to get the behaviour we want
(allowing every field to be deleted except the ones that have been changed);
otherwise, only entire sections can be deleted.
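As a rough sketch of why, assuming a nested config like the following (the
section and field names are illustrative, not the real config):

```rust
use serde::Deserialize;

#[derive(Default, Deserialize)]
#[serde(default)]
struct Config {
    network: NetworkSection,
    tracing: TracingSection,
}

#[derive(Default, Deserialize)]
// Without this attribute, a `[network]` section in the file must spell out
// every field; with it, missing fields fall back to `Default::default()`.
#[serde(default)]
struct NetworkSection {
    listen_addr: String,
    initial_peers: Vec<String>,
}

#[derive(Default, Deserialize)]
#[serde(default)]
struct TracingSection {
    filter: Option<String>,
}
```

With the attribute on `NetworkSection`, a config containing only
`listen_addr` under `[network]` still parses; without it, deleting
`initial_peers` is an error even though deleting the whole `[network]`
section would be fine.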
This test doesn't compile, and the failure reveals a problem with the design. The
verification service takes a `Request<'msg>` parameterized by the message
lifetime, and returns a future unconstrained by the message lifetime (it hashes
upfront to avoid requiring that `'msg` outlive `call`). But the `Batch`
middleware has the verification service working on its own task, so how can we
ensure that the message lives long enough to be read by the worker task?
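A minimal sketch of the tension, with illustrative signatures (the real
`Request` and verifier are more involved):

```rust
use std::{future::Future, pin::Pin};

struct Request<'msg> {
    msg: &'msg [u8],
}

struct Verifier;

impl Verifier {
    // Hashing upfront means the returned future owns `digest` and is
    // independent of 'msg, so the caller's message only needs to live
    // until `call` returns.
    fn call<'msg>(
        &mut self,
        req: Request<'msg>,
    ) -> Pin<Box<dyn Future<Output = bool> + Send + 'static>> {
        let digest = hash(req.msg); // read the borrow now, before returning
        Box::pin(async move { verify(digest).await })
    }
}

// Stand-ins for the real hashing and verification steps.
fn hash(_msg: &[u8]) -> [u8; 32] {
    [0; 32]
}
async fn verify(_digest: [u8; 32]) -> bool {
    true
}
```

A `Batch` wrapper can't use this trick: it has to move the whole
`Request<'msg>` across a channel to its worker task, and spawned tasks
require `'static` data, so the borrow checker rightly refuses unless
`'msg: 'static`.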
The name "Buffer" is changed to "Batch" everywhere, and the worker task is rewritten.
Instead of having `Worker` implement `Future` directly, we have a consuming `async fn run()`.
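Roughly, the worker changes shape like this (details elided):

```rust
struct Worker {
    // e.g. the channel receiver for batched requests, plus the inner service
}

impl Worker {
    // Consuming `self` means the async fn's state machine owns the worker
    // outright, instead of a hand-written `impl Future for Worker` that
    // needs manual pinning and polling.
    async fn run(self) {
        // loop { receive a batch of requests; call the inner service;
        //        send responses back to the callers }
    }
}
```

The returned future is then handed to the executor with something like
`tokio::spawn(worker.run())`.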
There's a lot of functional overlap between the batch design and tower-buffer's
existing internals, so we'll just vendor its source code and modify it.
If/when we upstream it, we can deduplicate common components.
Prior to this change, we required that services that are canceled do not
have a cancel handle in the `cancel_handles` list, based on the
assumption that the handle must have been removed in the process of
canceling this service.
This doesn't hold up, though, because it is currently possible for the same
peer to connect to us multiple times: the second connect removes the cancel
handle of the original connect and inserts its own cancel handle in its
place. In this scenario, when the first service is polled for readiness, it
sees that it has been canceled and goes to clean itself up; but when it
asserts that it doesn't have a cancel handle, it finds the cancel handle of
the second connect event, which uses the same key as the first connect, and
fails its debug assertion.
This change removes that debug assert on the assumption that it is okay
for a peer to connect multiple times consecutively, and that the correct
behavior in that case is to just cancel the first connection and
continue as normal.
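A sketch of the sequence that tripped the assertion, with hypothetical types:

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

struct CancelHandle; // dropping it cancels the connection's task

fn second_connect(
    cancel_handles: &mut HashMap<SocketAddr, CancelHandle>,
    key: SocketAddr,
) {
    // The same peer connects again: cancel the first connection and
    // replace its handle. `insert` returns the previous handle for `key`.
    let old_handle = cancel_handles.insert(key, CancelHandle);
    drop(old_handle); // cancels the first connection

    // Later, when the *first* service notices it was canceled and cleans
    // itself up, the map still contains an entry for `key`: the *second*
    // connect's handle. So the old
    // `debug_assert!(!cancel_handles.contains_key(&key))` fired spuriously.
}
```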
The maximum block size is 2,000,000 bytes. This commit also limits the
maximum transaction size in parsed blocks. (See #484 for the
corresponding limit on mempool transactions.)
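A minimal sketch of the parse-time check (the constant matches the consensus
rule; the surrounding API is illustrative):

```rust
/// The maximum Zcash block size, in bytes.
const MAX_BLOCK_BYTES: u64 = 2_000_000;

fn check_block_size(data: &[u8]) -> Result<(), &'static str> {
    // Reject oversized blocks before parsing their contents. Because every
    // transaction is contained in a block, this also bounds the size of any
    // transaction in a parsed block.
    if data.len() as u64 > MAX_BLOCK_BYTES {
        return Err("block is larger than MAX_BLOCK_BYTES");
    }
    Ok(())
}
```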
The proptests might cover the maximum block size, but they are randomised,
so we also want explicit tests for large block sizes.
(See #482 for these test cases and tests.)
Part of #477.
Verify the value of the equihash solution size field in block headers.
This field isn't stored in the `BlockHeader` struct, so we need to verify
it at parse time.
Part of #477.
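Sketching the check (1344 bytes is the encoded solution size for Zcash's
n = 200, k = 9 Equihash parameters; `read_compact_size` is a stand-in for
the real length-prefix parser):

```rust
use std::io::{self, Read};

/// The encoded Equihash solution size for n = 200, k = 9.
const EQUIHASH_SOLUTION_SIZE: u64 = 1344;

fn read_equihash_solution(reader: &mut impl Read) -> io::Result<Vec<u8>> {
    // The size is a serialized compactSize length prefix, not a field of
    // the `BlockHeader` struct, so the only place to check it is here,
    // while parsing.
    let size = read_compact_size(reader)?;
    if size != EQUIHASH_SOLUTION_SIZE {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            "invalid equihash solution size",
        ));
    }
    let mut solution = vec![0u8; EQUIHASH_SOLUTION_SIZE as usize];
    reader.read_exact(&mut solution)?;
    Ok(solution)
}

// Stand-in: parse a Bitcoin-style compactSize integer.
fn read_compact_size(reader: &mut impl Read) -> io::Result<u64> {
    let mut first = [0u8; 1];
    reader.read_exact(&mut first)?;
    match first[0] {
        n @ 0..=0xfc => Ok(n as u64),
        0xfd => {
            let mut buf = [0u8; 2];
            reader.read_exact(&mut buf)?;
            Ok(u16::from_le_bytes(buf) as u64)
        }
        0xfe => {
            let mut buf = [0u8; 4];
            reader.read_exact(&mut buf)?;
            Ok(u32::from_le_bytes(buf) as u64)
        }
        0xff => {
            let mut buf = [0u8; 8];
            reader.read_exact(&mut buf)?;
            Ok(u64::from_le_bytes(buf))
        }
    }
}
```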
Only hash block headers in the lowest-level block index code.
This design has a few benefits:
- failures are obvious, because the hash is not available,
- get_tip() returns a smaller object,
- we avoid re-hashing block headers multiple times.
These efficiency changes may be needed to support chain reorganisations,
multiple tips, and heavy query loads.
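Roughly, with illustrative types (the real code uses double-SHA256 for the
hash and handles chain selection rather than blindly updating the tip):

```rust
use std::{collections::HashMap, sync::Arc};

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct BlockHeaderHash([u8; 32]);

struct Block {
    header: Vec<u8>, // stand-in for the serialized header
}

fn hash_header(_header: &[u8]) -> BlockHeaderHash {
    BlockHeaderHash([0; 32]) // stand-in for the real double-SHA256
}

struct BlockIndex {
    by_hash: HashMap<BlockHeaderHash, Arc<Block>>,
    tip: Option<BlockHeaderHash>,
}

impl BlockIndex {
    fn insert(&mut self, block: Arc<Block>) -> BlockHeaderHash {
        // The only place the header is hashed: callers reuse the result.
        let hash = hash_header(&block.header);
        self.by_hash.insert(hash, block);
        self.tip = Some(hash); // chain-selection logic elided
        hash
    }

    // get_tip() returns the 32-byte hash, not a whole block.
    fn get_tip(&self) -> Option<BlockHeaderHash> {
        self.tip
    }
}
```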
Move block header hashing from zebra-consensus to zebra-state.
Handle zebra-state AddBlock errors in zebra-consensus BlockVerifier.
Add unit tests for BlockVerifier state error handling.
Part of #428.
Placing bounds on the service's future is less than ideal, because the future is
already tied to the service by the `Service` trait, so the bounds can be expressed
more directly and simply by bounding the service itself.
If the verification service already has to have a generic parameter for the
future (the `ZSF`), it could instead be generic over `S`, the storage service.
This has the upside that it's no longer required for the verification service
to box the storage service, so we don't add any extra layers of indirection,
and the where bounds become more straightforward, since they're centered on the
requirements for the storage service itself, not the future it returns.
Finally, we can simplify the bounds by using the request / response types
directly rather than defining wrapper types.
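The resulting shape, roughly (the request/response types here are stand-ins
for zebra-state's actual types):

```rust
use std::sync::Arc;

use tower::Service;

// Stand-ins for zebra-state's actual types.
struct Block {
    // ...
}

enum StateRequest {
    AddBlock { block: Arc<Block> },
}
enum StateResponse {
    Added { hash: [u8; 32] },
}
type Error = Box<dyn std::error::Error + Send + Sync + 'static>;

// The verifier is generic over the storage service `S` itself, rather than
// boxing the storage service or taking a separate parameter for its future.
struct BlockVerifier<S>
where
    // The bounds use the request/response types directly, and are centered
    // on the storage service, not on a wrapper around its future.
    S: Service<StateRequest, Response = StateResponse, Error = Error>
        + Send
        + Clone
        + 'static,
    S::Future: Send + 'static,
{
    state_service: S,
}
```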
The reason the test failed is that the future returned by `call` on the state
service was immediately dropped, rather than being driven to completion.
Instead, we link the state update future with the verification future by
`.await`ing it in an async block.
Note that the state update future is constructed outside of the async block
returned by the verification service. Why? Because calling
`self.state_service.call` requires mutable access to `state_service`, which we
only have in the body of the verification service's `call` method, and not
during execution of the async block it returns (which could happen at some
later point, or never).
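Concretely, a fragment of the verifier's `call`, assuming the
`BlockVerifier<S>` shape sketched above and `futures::future::FutureExt`
for `.boxed()`:

```rust
fn call(&mut self, block: Arc<Block>) -> Self::Future {
    // `self.state_service.call` needs `&mut self`, so the state request
    // future has to be constructed *here*; the async block below may run
    // later (or never), and has no access to `self`.
    let add_block = self
        .state_service
        .call(StateRequest::AddBlock { block });

    async move {
        // Link the two futures: drive the state update to completion
        // instead of dropping it.
        let response = add_block.await?;
        Ok(response)
    }
    .boxed()
}
```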
In Tower's model, the `call` method itself only does the work necessary to
construct a future that represents completion of the service call, and the rest
of the work is done in the future itself. In particular, the fact that
`Service::call` takes `&mut self` means two things:
1. the service's state can be mutated while setting up the future, but not
during the future's subsequent execution,
2. any nested service calls made *by* the service *to* sub-services (e.g., the
verification service calling the state service) must either be made upfront,
while constructing the response future, or must be made to a clone of the
sub-service owned by the response future.
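Option 2 typically looks like this (a sketch; `ready()` is
`tower::ServiceExt::ready`, and `.boxed()` is again from `FutureExt`):

```rust
fn call(&mut self, req: Request) -> Self::Future {
    // Clone the sub-service so the response future owns its own handle
    // and can make the nested call whenever it actually runs.
    let mut state_service = self.state_service.clone();

    async move {
        // Wait for the clone to be ready, then call it from inside the
        // response future.
        state_service.ready().await?.call(req).await
    }
    .boxed()
}
```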