<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<HTML>
<HEAD>
<TITLE> [zapps-wg] Eliminating the possibility of backdoors with high probability
</TITLE>
<LINK REL="Index" HREF="/pipermail/zapps-wg/2017/index.html" >
<LINK REL="made" HREF="mailto:zapps-wg%40lists.zfnd.org?Subject=Re%3A%20%5Bzapps-wg%5D%20Eliminating%20the%20possibility%20of%20backdoors%20with%20high%0A%20probability&In-Reply-To=%3CCAKazn3%3DamEgHi6RRKkfJobfeG6A5Y82s8X-NiREeRnkxWvkFtA%40mail.gmail.com%3E">
<META NAME="robots" CONTENT="index,nofollow">
<style type="text/css">
pre {
white-space: pre-wrap; /* css-2.1, current FF, Opera, Safari */
}
</style>
<META http-equiv="Content-Type" content="text/html; charset=us-ascii">
<LINK REL="Previous" HREF="000021.html">
<LINK REL="Next" HREF="000039.html">
</HEAD>
<BODY BGCOLOR="#ffffff">
<H1>[zapps-wg] Eliminating the possibility of backdoors with high probability</H1>
<B>Sean Bowe</B>
<A HREF="mailto:zapps-wg%40lists.zfnd.org?Subject=Re%3A%20%5Bzapps-wg%5D%20Eliminating%20the%20possibility%20of%20backdoors%20with%20high%0A%20probability&In-Reply-To=%3CCAKazn3%3DamEgHi6RRKkfJobfeG6A5Y82s8X-NiREeRnkxWvkFtA%40mail.gmail.com%3E"
TITLE="[zapps-wg] Eliminating the possibility of backdoors with high probability">sean at z.cash
</A><BR>
<I>Mon Nov 13 22:26:04 EST 2017</I>
<P><UL>
<LI>Previous message (by thread): <A HREF="000021.html">[zapps-wg] Eliminating the possibility of backdoors with high probability
</A></li>
<LI>Next message (by thread): <A HREF="000039.html">[zapps-wg] Eliminating the possibility of backdoors with high probability
</A></li>
<LI> <B>Messages sorted by:</B>
<a href="date.html#22">[ date ]</a>
<a href="thread.html#22">[ thread ]</a>
<a href="subject.html#22">[ subject ]</a>
<a href="author.html#22">[ author ]</a>
</LI>
</UL>
<HR>
<!--beginarticle-->
<PRE>><i> Q: What exactly happens if one participant fails to destroy that secret and/or
</I>><i> inputs a low-entropy secret? What about N participants?
</I>><i>
</I>><i> The paper states that "We show that security holds even if an adversary has
</I>><i> limited influence on the beacon." but it's unclear what exactly "limited
</I>><i> influence" means.
</I>
My understanding was that we lose N bits of security when an attacker
can influence N bits of the randomness beacon.

This MPC is of the "only one has to be honest" kind. It is irrelevant
if N-1 of the participants have low entropy / known secrets, so long
as just one has high entropy w.r.t. the security parameter.
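
(A toy sketch of why one honest participant suffices, using plain
modular arithmetic rather than the actual pairing-based `powersoftau`
arithmetic -- all names and numbers below are illustrative:)

```python
import secrets

# Toy model: the ceremony's effective secret is the product of every
# participant's secret modulo a prime. If even one factor is uniformly
# random and then destroyed, the product is uniform regardless of what
# the other N-1 participants contributed.
P = 2**61 - 1  # stand-in prime, not the real group order

def combine(secret_shares):
    acc = 1
    for s in secret_shares:
        acc = (acc * s) % P
    return acc

# N-1 adversarial participants with known, low-entropy secrets...
dishonest = [1, 2, 3]
# ...and one honest participant with a high-entropy secret.
honest = secrets.randbelow(P - 1) + 1
tau = combine(dishonest + [honest])
# Knowing the dishonest factors tells an attacker nothing about tau
# once the honest factor is destroyed.
```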

><i> As N increases you open up a new exfiltration route: the unused N-1 responses
</I>><i> could themselves be the exfiltration route, and thus need to be
</I>><i> deterministically verified against the N-1 unused secrets. This isn't
</I>><i> particularly user-friendly, and it's easy to imagine how this could be skipped.
</I>
Note that the compute process prints out a hash of the response file,
and so we can "cut-and-choose" in the same way to guard against these
exfiltration routes. As an example, if we use DVDs, we can burn N
response files, noting each one's hash and the entropy used to produce
it. Then we reveal the hash and entropy of N-1 of them, but destroy
those N-1 DVDs.
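
(As a sketch of that cut-and-choose audit -- `compute_response` below
is a stand-in for the real `powersoftau` computation, and the helper
names are hypothetical:)

```python
import hashlib
import secrets

def compute_response(challenge, secret):
    # Stand-in for the deterministic ceremony computation.
    return hashlib.sha256(challenge + secret).digest()

def ceremony_round(challenge, n=4):
    # Run the computation n times with fresh secrets, publish every
    # response hash, keep one run as the real contribution, and reveal
    # the secrets of the other n-1 runs for auditing.
    runs = [secrets.token_bytes(32) for _ in range(n)]
    hashes = [hashlib.sha256(compute_response(challenge, s)).hexdigest()
              for s in runs]
    keep = secrets.randbelow(n)  # the run actually contributed
    revealed = [(i, runs[i]) for i in range(n) if i != keep]
    return hashes, revealed, keep

def audit(challenge, hashes, revealed):
    # Anyone can re-run the n-1 revealed computations and check the
    # published hashes, leaving no room for hidden entropy there.
    return all(
        hashlib.sha256(compute_response(challenge, s)).hexdigest() == hashes[i]
        for i, s in revealed
    )
```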

><i> Finally, it's interesting how there's a whole class of "sham" participant
</I>><i>strategies, where someone who runs the computation and uploads an audit
</I>><i>response w/ revealed secret, but does not actually participate in that round,
</I>><i>still frustrates attackers who cannot tell in advance if that particular
</I>><i>participant will or will not actually participate. This suggests that the
</I>><i>current round's challenge should be made public.
</I>
That's very interesting. Right now the transcript is public and so the
current challenge can be computed by anyone, but it would be a little
better if I put the "current" challenge file up for download.

Sean

On Mon, Nov 13, 2017 at 6:22 PM, Peter Todd <<A HREF="/mailman/listinfo/zapps-wg">pete at petertodd.org</A>> wrote:
><i> On Mon, Nov 13, 2017 at 02:16:18PM -0700, Sean Bowe via zapps-wg wrote:
</I>>><i> There are three ways that a participant's toxic waste can be compromised:
</I>>><i>
</I>>><i> 1. the participant is dishonest and keeps the toxic waste around
</I>>><i> 2. the toxic waste is extracted from the machine, either from a side
</I>>><i> channel attack or because the toxic waste still "exists" in the
</I>>><i> machine somewhere
</I>>><i> 3. the participant's code, compiler, operating system or hardware are backdoored
</I>>><i>
</I>>><i> Our solution to #1 is to have large numbers of diverse participants,
</I>>><i> to virtually eliminate the chance that all of them are dishonest and
</I>>><i> secretly colluding with each other. I am very confident in this
</I>>><i> approach.
</I>><i>
</I>><i> Ditto
</I>><i>
</I>>><i> Many of us are solving #2 by performing the computations on hardware
</I>>><i> we have randomly plucked from a store somewhere, in an environment
</I>>><i> (like a Faraday cage, or out in a field somewhere) where side-channel
</I>>><i> attacks are unlikely. And of course, completely destroying the machine
</I>>><i> afterward. I am very confident in this approach.
</I>><i>
</I>><i> I also agree that this is easily achieved.
</I>><i>
</I>><i> While ~all the hardware we have available to us is likely backdoored, the bad
</I>><i> guys can't backdoor the laws of physics.
</I>><i>
</I>>><i> However, we don't really have a good handle on #3. Right now,
</I>>><i> participants are using the `powersoftau` code that I've written in
</I>>><i> Rust. It is possible to change the code or even make an alternative
</I>>><i> implementation, but that only gets you so far. You still have to hope
</I>>><i> your OS/compiler/hardware are not backdoored.
</I>><i>
</I>><i> So to be clear, a simple example of such a backdoor attack would be to take the
</I>><i> secret k that was supposed to be used in the computation and replace it with
</I>><i> truncate(k,n), where n is low enough to brute force, and high enough to not get
</I>><i> caught by the birthday paradox; something like 24 bits is probably feasible.
</I>><i> The attacker would then brute-force the truncated k from the transcripts,
</I>><i> recovering the toxic waste.
</I>><i>
</I>>><i> I think there's a nice solution to this problem which is inspired by
</I>>><i> an idea that Brian Warner had.
</I>>><i>
</I>>><i> Currently, the code I've written takes some randomness from the system
</I>>><i> and mixes it with some user-supplied randomness. Instead, imagine
</I>>><i> using randomness supplied by the participant exclusively. (One way the
</I>>><i> participant can obtain it is with a Boggle set.)
</I>><i>
</I>><i> Note that this is better described as a user-supplied *secret*
</I>><i>
</I>><i>
</I>><i> Q: What exactly happens if one participant fails to destroy that secret and/or
</I>><i> inputs a low-entropy secret? What about N participants?
</I>><i>
</I>><i> The paper states that "We show that security holds even if an adversary has
</I>><i> limited influence on the beacon." but it's unclear what exactly "limited
</I>><i> influence" means.
</I>><i>
</I>>><i> The trick is that the participant performs the computation N times,
</I>>><i> each time with different randomness. This produces N response files.
</I>>><i> Now, the participant randomly chooses N-1 of the response files and
</I>>><i> reveals the randomness for them, and destroys the randomness of the
</I>>><i> last response file -- which is their contribution to the ceremony. The
</I>>><i> participant (and the general public) can perform the computations
</I>>><i> again on their machines to check that the same response files are
</I>>><i> produced for the ones we've revealed randomness for.
</I>><i>
</I>><i> To be exact, what you mean to say here is that by re-doing the computations on
</I>><i> a trusted setup implementation that is *not* backdoored, you can detect the
</I>><i> existence of the backdoor because the results won't match.
</I>><i>
</I>>><i> As N increases, the probability that any backdoor in the code,
</I>>><i> compiler, hardware, operating system etc. could have tampered with the
</I>>><i> entropy approaches zero. Now there is just one remaining problem: how
</I>>><i> do we get the response files out of the machine without the backdoor
</I>>><i> potentially sneaking the entropy over the channel?
</I>><i>
</I>><i> As N increases you open up a new exfiltration route: the unused N-1 responses
</I>><i> could themselves be the exfiltration route, and thus need to be
</I>><i> deterministically verified against the N-1 unused secrets. This isn't
</I>><i> particularly user-friendly, and it's easy to imagine how this could be skipped.
</I>><i>
</I>><i>
</I>><i> I'd suggest instead that we ask participants to simply run the computation N>1
</I>><i> times, and pick at random which output to actually use. If they re-use hardware
</I>><i> for each run, ask them to do their best at wiping all non-volatile memory; if
</I>><i> they have the ability to use different hardware for each run, even better.
</I>><i>
</I>><i> Note that a variety of procedures to pick outputs at random are desirable. For
</I>><i> example, one person might decide to flip a coin after each run, and stop when
</I>><i> it comes up tails; another might do something different. Diversity is good.
</I>><i>
</I>><i> After they've completed their runs, simply upload one or more dummy runs and
</I>><i> associated initial secrets for peer auditing, as well as their official
</I>><i> contribution.
</I>><i>
</I>><i>
</I>><i> Secondly, once we do get some dummy runs, I'd suggest that future participants
</I>><i> consider testing their compute nodes against those challenges and secrets to
</I>><i> verify that they also get the same results.
</I>><i>
</I>><i>
</I>><i> Finally, it's interesting how there's a whole class of "sham" participant
</I>><i> strategies, where someone who runs the computation and uploads an audit
</I>><i> response w/ revealed secret, but does not actually participate in that round,
</I>><i> still frustrates attackers who cannot tell in advance if that particular
</I>><i> participant will or will not actually participate. This suggests that the
</I>><i> current round's challenge should be made public.
</I>><i>
</I>>><i> DVDs are a good
</I>>><i> approach if it's possible to create many of them and then analyze them
</I>>><i> for any differences or hidden information.
</I>><i>
</I>><i> The experience of the previous trusted setup is that no one bothers to audit
</I>><i> evidence collected after the fact. For example, I appear to have been the
</I>><i> *only* non-Zcash team member who ever bothered to even do the basic step of
</I>><i> recreating the deterministic builds, readily apparent by the fact that they
</I>><i> were broken about a month after the setup due to two different (still unfixed)
</I>><i> bugs in the deterministic build scripts(1). So I'd be cautious about putting
</I>><i> too much emphasis on "paranoid measures" like this when there are more
</I>><i> fundamental attack vectors to solve.
</I>><i>
</I>><i>
</I>><i> In any case for this to be effective you really need to completely fill the
</I>><i> storage medium with a known pattern. You also need to bypass standard
</I>><i> filesystems, which contain massive amounts of metadata in the form of file
</I>><i> timestamps and the like - better to write a single file to the medium, and fill
</I>><i> the rest with zeros.
</I>><i>
</I>><i> As I noted after the prior trusted setup, CDs/DVDs/etc. are *not* read-only
</I>><i> once written, as the writable medium can continue to be written to after the
</I>><i> initial data has been written to it. This means that an attacker could create a
</I>><i> DVD that, e.g., compromises the drive at a firmware level, exfiltrates the
</I>><i> secret, and then uses the laser in the DVD reader to erase the evidence.
</I>><i>
</I>><i> But as this setup is multi-participant, this can easily be defeated by using a
</I>><i> wide variety of techniques. For instance, it turns out that USB drives with
</I>><i> (allegedly) hardware write-protect switches are readily available, such as the
</I>><i> Kanguru FlashBlu series:
</I>><i>
</I>><i> <A HREF="https://store.kanguru.com/pages/flash-blu-2">https://store.kanguru.com/pages/flash-blu-2</A>
</I>><i>
</I>><i> Secondly, using a semi-trusted "firewall" machine that reads the medium and
</I>><i> then copies it to a new medium can also avoid unintended data leaks between
</I>><i> those two media.
</I>><i>
</I>><i> 1) <A HREF="https://github.com/zcash/mpc/pull/9">https://github.com/zcash/mpc/pull/9</A>
</I>><i>
</I>>><i> (The original idea that Brian and Zooko briefly considered for the
</I>>><i> Zcash ceremony last year was similar, except it involved one of the
</I>>><i> participants revealing all their entropy at the end, and the rest
</I>>><i> destroying theirs. This is because the previous protocol couldn't
</I>>><i> support participants performing multiple computations, because they
</I>>><i> had to commit to their entropy at the very beginning. The new protocol
</I>>><i> does support participants performing multiple computations with
</I>>><i> different entropy, though!)
</I>><i>
</I>><i> Looks like both myself and Saleem Rashid had similar ideas as well, so either
</I>><i> it's a good one or we're all wrong. :)
</I>><i>
</I>><i> <A HREF="https://twitter.com/spudowiar/status/919615633121300481">https://twitter.com/spudowiar/status/919615633121300481</A>
</I>><i> <A HREF="https://twitter.com/petertoddbtc/status/919615731615989760">https://twitter.com/petertoddbtc/status/919615731615989760</A>
</I>><i>
</I>><i> --
</I>><i> <A HREF="https://petertodd.org">https://petertodd.org</A> 'peter'[:-1]@petertodd.org
</I>
</PRE>

<!--endarticle-->
<HR>
<P><UL>
<!--threads-->
<LI>Previous message (by thread): <A HREF="000021.html">[zapps-wg] Eliminating the possibility of backdoors with high probability
</A></li>
<LI>Next message (by thread): <A HREF="000039.html">[zapps-wg] Eliminating the possibility of backdoors with high probability
</A></li>
<LI> <B>Messages sorted by:</B>
<a href="date.html#22">[ date ]</a>
<a href="thread.html#22">[ thread ]</a>
<a href="subject.html#22">[ subject ]</a>
<a href="author.html#22">[ author ]</a>
</LI>
</UL>

<hr>
<a href="/mailman/listinfo/zapps-wg">More information about the zapps-wg
mailing list</a><br>
</body></html>
|