Commit Graph

13 Commits

Author SHA1 Message Date
Conrado Gouvea e8e58e37a1 fix documentation about batching 2023-03-14 15:50:28 -04:00
Conrado Gouvea c079b0e507 update curve25519-dalek to 4.0.0-pre.5; sha2 to 0.10 2023-01-17 15:59:35 -05:00
Christian Poveda 15e028616c
add `no_std` support (#57) 2022-05-05 10:40:29 -03:00
Henry de Valence 71f276e32a Add missing mul_by_cofactor in batch verification.
This should have been added as part of the ZIP 215 work but I missed it.
2020-07-30 10:18:19 -07:00
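For context (drawn from ZIP 215 itself rather than this commit's diff): ZIP 215 mandates the cofactored verification equation `[8][s]B = [8]R + [8][k]A`, and batch verification takes a random linear combination of those per-signature checks, so the whole combination must likewise be multiplied through by the cofactor 8. That multiplication is what this commit restores for the batch path.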
Henry de Valence a62038f8f9
Add batch::Item::verify_single and Item: Clone + Debug. (#27)
* Add batch::Item::verify_single and Item: Clone + Debug.

This closes a gap in the API where it was impossible to retry items in a failed
batch, because the opaque Item type could not be verified individually.
2020-07-15 12:25:46 -07:00
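A minimal sketch of the retry pattern this enables, assuming the crate's current `batch` API (`Verifier::queue`, `Verifier::verify`, `Item::verify_single`) and the `rand` crate; exact names and signatures may differ between versions.
```rust
use ed25519_zebra::{batch, SigningKey, VerificationKeyBytes};
use rand::thread_rng;

fn main() {
    // Build the items up front and keep a copy, which `Item: Clone` now permits.
    let items: Vec<batch::Item> = (0..16)
        .map(|i| {
            let sk = SigningKey::new(thread_rng());
            let msg = format!("message {}", i);
            let sig = sk.sign(msg.as_bytes());
            (VerificationKeyBytes::from(&sk), sig, msg.as_bytes()).into()
        })
        .collect();

    let mut verifier = batch::Verifier::new();
    for item in items.iter().cloned() {
        verifier.queue(item);
    }

    // If the batch as a whole fails, fall back to per-item verification to
    // learn which signatures were actually invalid.
    if verifier.verify(thread_rng()).is_err() {
        for (i, item) in items.into_iter().enumerate() {
            if item.verify_single().is_err() {
                eprintln!("signature {} failed verification", i);
            }
        }
    }
}
```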
Henry de Valence d0a430b5e4
Implement ZIP 215 validation rules. (#24)
* Implement ZIP 215 validation rules.

These have the effect that batched and singleton verification are now
equivalent.

* Add ZIP 215 conformance tests.

This test constructs signatures on the message "Zcash" using small-order
verification keys, some with canonical and some with non-canonical encodings of
points.  All of these signatures should pass verification under the ZIP 215
rules, but most of them should fail verification under legacy rules.

These tests exercise all of the special-case behaviors from the specific
version of libsodium used by Zcashd:

* the all-zero check for the verification key;

* the excluded point encodings for the signature's R value;

* the choice to test equality of the encoded bytes of the recomputed R value
  rather than of the projective coordinates of the two points.

Running
```
cargo test -- --nocapture
```
will print a hex-formatted list of the test cases, which can also be found here:

https://gist.github.com/hdevalence/93ed42d17ecab8e42138b213812c8cc7

* Update spec links.

Thanks to @ebfull for pointing this out.

* No ... there is another.

@ebfull pointed out that two test cases were duplicates.  The cause was that I
misread the RFC8032 check as checking for the non-canonical encoding of the
identity point that NCC Group apparently brought up.  Carefully analyzing all of
the cases instead of assuming reveals that there is another non-canonically
encoded point (of order 2).

* Change formatting of printed test cases.
2020-07-06 19:40:20 -07:00
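A hedged sketch of the equivalence claim using the crate's two entry points, assuming `VerificationKey: TryFrom<VerificationKeyBytes>` and `VerificationKey::verify` alongside the batch API; exact signatures may differ between versions.
```rust
use std::convert::TryFrom;

use ed25519_zebra::{batch, SigningKey, VerificationKey, VerificationKeyBytes};
use rand::thread_rng;

fn main() {
    let sk = SigningKey::new(thread_rng());
    let vk_bytes = VerificationKeyBytes::from(&sk);
    let msg = b"Zcash";
    let sig = sk.sign(&msg[..]);

    // Singleton verification.
    let single_ok = VerificationKey::try_from(vk_bytes)
        .and_then(|vk| vk.verify(&sig, &msg[..]))
        .is_ok();

    // Batch verification of the same (key, signature, message) triple.
    let mut verifier = batch::Verifier::new();
    verifier.queue((vk_bytes, sig, &msg[..]));
    let batch_ok = verifier.verify(thread_rng()).is_ok();

    // Under the ZIP 215 rules the two answers agree.
    assert_eq!(single_ok, batch_ok);
}
```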
Henry de Valence c0091419c5
Fix batch verification API to be usable in async contexts. (#22)
It's essential to be able to separate the lifetime of the batch item from the
lifetime of associated data (in this case, the message).  The previous API did
this, but it was mixed into the Tower implementation and was removed along with
that code.

Making the `queue` function take `I: Into<Item>` means that users who don't
care about lifetimes just need to wrap the function arguments in an extra
tuple.
2020-06-16 13:44:34 -07:00
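A short sketch of what that separation buys, assuming the same `batch::Verifier` API as above: the queued item does not borrow the message, so the batch can outlive the buffer the message came from, for example across an `.await`.
```rust
use ed25519_zebra::{batch, SigningKey, VerificationKeyBytes};
use rand::thread_rng;

fn main() {
    let mut verifier = batch::Verifier::new();

    {
        // e.g. a message buffer read from the network
        let msg = vec![42u8; 64];
        let sk = SigningKey::new(thread_rng());
        let sig = sk.sign(&msg);
        // The Item built from this tuple absorbs the message at queue time
        // and does not keep a borrow of `msg`.
        verifier.queue((VerificationKeyBytes::from(&sk), sig, &msg[..]));
    } // `msg` is dropped here; the queued item stays valid.

    assert!(verifier.verify(thread_rng()).is_ok());
}
```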
Henry de Valence 8bc82108f4
Change terminology to signing and verification keys. (#20)
These are better names than secret and public keys, because they concisely
describe the functional *role* of the key material, not just whether or not the
key is revealed.
2020-06-15 20:45:25 -07:00
Henry de Valence cacd50d992
Remove futures-based batch verification API. (#19)
The futures-based batch verification design is still good, but it turns out
that things like the latency bounds and control of when to flush the batch
should be common across different kinds of batchable verification.  This should
be done by the forthcoming `tower-batch` middleware currently in the Zebra
repo.
2020-06-15 18:47:49 -07:00
Henry de Valence bd5efba032 Use in-band signaling to flush batch verification requests.
This changes the `VerificationRequest` type alias (now called `batch::Request`)
to an enum containing either a verification request or a request to flush the
batch.  This allows both automatic and manual control of batch sizes, either by
setting a low batch limit on service creation or by setting a high limit and
manually sending flush commands.

To keep things ergonomic, the `Request` enum now has an `impl From` for the previous
tuple, so to send a request, all that's necessary is to assemble a
pubkey-sig-message tuple and call `.into()` on it.
2020-01-30 17:44:09 -08:00
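Illustrative sketch only: the futures-based service that carried this `batch::Request` type was later removed (see cacd50d992 above), so the names below are a hypothetical reconstruction of the in-band signaling pattern using the crate's current type names, not its current API.
```rust
use ed25519_zebra::{Signature, VerificationKeyBytes};

// In-band signaling: the same channel carries both work and flush commands.
enum Request<'msg> {
    // A (verification key, signature, message) triple to verify.
    Verify(VerificationKeyBytes, Signature, &'msg [u8]),
    // An explicit request to flush the current batch.
    Flush,
}

// The `impl From` mentioned above, so callers keep assembling tuples and
// calling `.into()` on them:
//     let req: Request = (vk_bytes, sig, &msg[..]).into();
impl<'msg> From<(VerificationKeyBytes, Signature, &'msg [u8])> for Request<'msg> {
    fn from((vk, sig, msg): (VerificationKeyBytes, Signature, &'msg [u8])) -> Self {
        Request::Verify(vk, sig, msg)
    }
}
```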
Henry de Valence 2663474cd1 Add motivational documentation to the batch module. 2020-01-30 17:44:09 -08:00
Henry de Valence 96a4ef2481 Add non-batched verification with an identical Service interface.
This will hopefully allow things like building a Tower layer with a timeout and
a retry policy that retries timed out requests (not a big enough concurrent
batch) with singleton verification, or retries a failed batch by falling back
to singleton verification to detect which element of a batch failed.

However, there are still some rough spots in the API, and it's not clear that
manually dropping the service is an adequate way to flush requests (see comment).
2020-01-30 17:44:09 -08:00
Henry de Valence d9d64fd050 Initial implementation of futures-based batch verification.
This commit incidentally includes an optimization for batch verification that
improves performance when verifying multiple signatures from the same public
key.

I'm not totally happy with a few things about this API, however.  Currently,
the actual batch computation is performed only when the inherent `finalize()`
method is called on the service.  However, this doesn't work well with `tower`
layering, because once the service is wrapped, the inherent method is no longer
available.  Another option would be for the batching service to be created with
a batch size parameter, automatically resolving the batch computation whenever
the batch size was reached.  This improves latency but does not solve the
problem of finalizing the batch, since unless there's guaranteed to be a
constant stream of verification requests, a tail of requests may be left
unresolved.  A third option would be for the service to return some kind of
control handle on creation that would allow shutdown via a channel or
something, but this is also unsatisfying because it means that the service has
to be listening for a shutdown signal.  A fourth option would be to have a
batch size parameter but customize the `Drop` impl to finalize all pending
requests on drop; this would potentially allow "flushing" pending requests by
manually dropping the service in a way that would still be possible when using
tower wrappers.
2020-01-28 21:36:37 -08:00
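A hypothetical sketch of the fourth option (flush-on-drop); the structure is illustrative only, not the crate's actual design.
```rust
// Hypothetical: a batching service that finalizes pending work when dropped.
struct BatchingService {
    pending: Vec<PendingRequest>,
}

// Placeholder for a queued verification request and its response channel.
struct PendingRequest;

impl BatchingService {
    fn finalize(&mut self) {
        // Run the batched computation over `self.pending` and resolve each
        // pending request with the result (elided in this sketch).
        self.pending.clear();
    }
}

impl Drop for BatchingService {
    fn drop(&mut self) {
        // Dropping the service flushes any pending requests; this still works
        // behind tower wrappers, since dropping the wrapper drops the inner
        // service.
        self.finalize();
    }
}
```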