[zapps-wg] Eliminating the possibility of backdoors with high probability
Sean Bowe
sean at z.cash
Mon Nov 13 22:26:04 EST 2017
> Q: What exactly happens if one participant fails to destroy that secret and/or
> inputs a low-entropy secret? What about N participants?
>
> The paper states that "We show that security holds even if an adversary has
> limited influence on the beacon." but it's unclear what exactly "limited
> influence" means.
My understanding was that we lose N bits of security when an attacker
can influence N bits of the randomness beacon.
This MPC is of the "only one has to be honest" kind. It is irrelevant
if N-1 of the participants have low-entropy or known secrets, so long
as just one has high entropy w.r.t. the security parameter.
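To make the "only one has to be honest" property concrete, here's a toy
sketch (not the actual powersoftau code, which works in a pairing-friendly
group): the combined secret is the product of every participant's secret,
so a single uniformly random factor makes the whole product unpredictable.

    // Toy model only: arithmetic mod a small Mersenne prime stands in
    // for the real group.
    const P: u128 = 2_305_843_009_213_693_951; // 2^61 - 1

    /// The combined toxic waste is the product of all contributions mod P.
    fn combine(secrets: &[u128]) -> u128 {
        secrets.iter().fold(1u128, |acc, s| acc * (s % P) % P)
    }

    fn main() {
        // N-1 participants use known or low-entropy secrets...
        let mut secrets = vec![1u128, 2, 3];
        // ...but one participant's secret is high-entropy.
        secrets.push(0x1b7f_3c2a_9e4d_5511);
        // The product is unpredictable as long as that one factor is.
        println!("combined secret: {}", combine(&secrets));
    }
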
> As N increases you open up a new exfiltration route: the unused N-1 responses
> could themselves be the exfiltration route, and thus need to be
> deterministically verified against the N-1 unused secrets. This isn't
> particularly user-friendly, and it's easy to imagine how this could be skipped.
Note that the compute process prints out a hash of the response file,
and so we can "cut-and-choose" in the same way to guard against these
exfiltration routes. As an example, if we use DVDs, we can burn N
response files and note the hash and entropy of each. Then we reveal
the hash/entropy for N-1 of them, but destroy those N-1 DVDs.
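For instance, a minimal sketch of independently recomputing a response
file's hash (assuming the Rust blake2 crate; the exact hash function the
compute process prints may differ):

    use blake2::{Blake2b512, Digest}; // [dependencies] blake2 = "0.10"
    use std::{env, fs};

    fn main() -> std::io::Result<()> {
        // Recompute the 64-byte BLAKE2b digest of a response file so it
        // can be compared against the hash printed during the ceremony.
        let path = env::args().nth(1).expect("usage: hash <response-file>");
        let bytes = fs::read(&path)?;
        let digest = Blake2b512::digest(&bytes);
        let hex: String = digest.iter().map(|b| format!("{:02x}", b)).collect();
        println!("{}  {}", hex, path);
        Ok(())
    }
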
> Finally, it's interesting how there's a whole class of "sham" participant
> strategies, where someone who runs the computation and uploads an audit
> response w/ revealed secret, but does not actually participate in that round,
> still frustrates attackers who cannot tell in advance if that particular
> participant will or will not actually participate. This suggests that the
> current round's challenge should be made public.
That's very interesting. Right now the transcript is public and so the
current challenge can be computed by anyone, but it would be a little
better if I put the "current" challenge file up for download.
Sean
On Mon, Nov 13, 2017 at 6:22 PM, Peter Todd <pete at petertodd.org> wrote:
> On Mon, Nov 13, 2017 at 02:16:18PM -0700, Sean Bowe via zapps-wg wrote:
>> There are three ways that a participant's toxic waste can be compromised:
>>
>> 1. the participant is dishonest and keeps the toxic waste around
>> 2. the toxic waste is extracted from the machine, either from a side
>> channel attack or because the toxic waste still "exists" in the
>> machine somewhere
>> 3. the participant's code, compiler, operating system or hardware are backdoored
>>
>> Our solution to #1 is to have large numbers of diverse participants,
>> to virtually eliminate the chance that all of them are dishonest and
>> secretly colluding with each other. I am very confident in this
>> approach.
>
> Ditto
>
>> Many of us are solving #2 by performing the computations on hardware
>> we have randomly plucked from a store somewhere, in an environment
>> (like a Faraday cage, or out in a field somewhere) where side-channel
>> attacks are unlikely. And of course, completely destroying the machine
>> afterward. I am very confident in this approach.
>
> I also agree that this is easily achieved.
>
> While ~all the hardware we have available to us is likely backdoored, the bad
> guys can't backdoor the laws of physics.
>
>> However, we don't really have a good handle on #3. Right now,
>> participants are using the `powersoftau` code that I've written in
>> Rust. It is possible to change the code or even make an alternative
>> implementation, but that only gets you so far. You still have to hope
>> your OS/compiler/hardware are not backdoored.
>
> So to be clear, a simple example of such a backdoor attack would be to take the
> secret k that was supposed to be used in the computation and replace it with
> truncate(k,n), where n is low enough to brute-force, and high enough to not get
> caught by the birthday paradox; something like 24 bits is probably feasible.
> The attacker would then brute-force the truncated k from the transcripts,
> recovering the toxic waste.
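> A toy sketch of the attack (hypothetical parameters; the real secret lives
> in a large prime-order group, but the brute-force logic is the same):
>
>     // The public transcript exposes g^k mod p; a backdoored client has
>     // silently replaced k with its low 24 bits.
>     const P: u64 = 4_294_967_291; // a 32-bit prime (toy group)
>     const G: u64 = 2;
>
>     fn pow_mod(mut b: u64, mut e: u64, m: u64) -> u64 {
>         let mut r = 1u64;
>         b %= m;
>         while e > 0 {
>             if e & 1 == 1 { r = (r as u128 * b as u128 % m as u128) as u64; }
>             b = (b as u128 * b as u128 % m as u128) as u64;
>             e >>= 1;
>         }
>         r
>     }
>
>     fn main() {
>         let k = 0xdead_beef_u64 & 0xff_ffff; // truncate(k, 24)
>         let public = pow_mod(G, k, P);       // visible in the transcript
>         // The attacker tries all 2^24 candidates offline.
>         let recovered = (0..1u64 << 24)
>             .find(|&c| pow_mod(G, c, P) == public)
>             .expect("secret was within brute-force range");
>         println!("planted {:#x}, recovered {:#x}", k, recovered);
>     }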
>
>> I think there's a nice solution to this problem which is inspired by
>> an idea that Brian Warner had.
>>
>> Currently, the code I've written takes some randomness from the system
>> and mixes it with some user-supplied randomness. Instead, imagine
>> using randomness supplied by the participant exclusively. (One way the
>> participant can obtain it is with a Boggle set.)
>
> Note that this is better described as a user-supplied *secret*
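>
> A rough sketch of deriving such a secret purely from what the participant
> types (e.g. letters read off shaken Boggle dice), with no OS randomness
> mixed in, assuming the Rust blake2 crate:
>
>     use blake2::{Blake2b512, Digest};
>     use std::io::{self, Read};
>
>     fn main() -> std::io::Result<()> {
>         // Hash the typed letters into a fixed-size seed for the
>         // computation; the dice, not the OS, supply the entropy.
>         let mut typed = String::new();
>         io::stdin().read_to_string(&mut typed)?;
>         let seed = Blake2b512::digest(typed.trim().as_bytes());
>         println!("seed: {:02x?}", &seed[..]);
>         Ok(())
>     }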
>
>
> Q: What exactly happens if one participant fails to destroy that secret and/or
> inputs a low-entropy secret? What about N participants?
>
> The paper states that "We show that security holds even if an adversary has
> limited influence on the beacon." but it's unclear what exactly "limited
> influence" means.
>
>> The trick is that the participant performs the computation N times,
>> each time with different randomness. This produces N response files.
>> Now, the participant randomly chooses N-1 of the response files and
>> reveals the randomness for them, and destroys the randomness of the
>> last response file -- which is their contribution to the ceremony. The
>> participant (and the general public) can perform the computations
>> again on their machines to check that the same response files are
>> produced for the ones we've revealed randomness for.
>
> To be exact, what you mean to say here is that by re-doing the computations on
> a trusted setup implementation that is *not* backdoored, you can detect the
> existence of the backdoor because the results won't match.
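>
> A rough sketch of that audit step, with the real powersoftau computation
> stood in for by a hash (the actual check re-runs the full computation):
>
>     use blake2::{Blake2b512, Digest};
>
>     // Stand-in for the deterministic computation: response = f(challenge,
>     // secret). Any honest implementation reproduces it from the same inputs.
>     fn compute_response(challenge: &[u8], secret: &[u8]) -> Vec<u8> {
>         let mut h = Blake2b512::new();
>         h.update(challenge);
>         h.update(secret);
>         h.finalize().to_vec()
>     }
>
>     // Re-run the N-1 revealed (secret, response) pairs on hardware you
>     // trust and compare against the published response files.
>     fn audit(challenge: &[u8], revealed: &[(Vec<u8>, Vec<u8>)]) -> bool {
>         revealed.iter().all(|(secret, response)| {
>             compute_response(challenge, secret) == *response
>         })
>     }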
>
>> As N increases, the probability that any backdoor in the code,
>> compiler, hardware, operating system etc. could have tampered with the
>> entropy approaches zero. Now there is just one remaining problem: how
>> do we get the response files out of the machine without the backdoor
>> potentially sneaking the entropy over the channel?
>
> As N increases you open up a new exfiltration route: the unused N-1 responses
> could themselves be the exfiltration route, and thus need to be
> deterministically verified against the N-1 unused secrets. This isn't
> particularly user-friendly, and it's easy to imagine how this could be skipped.
>
>
> I'd suggest instead that we ask participants to simply run the computation N>1
> times, and pick at random which output to actually use. If they re-use hardware
> for each run, ask them to do their best at wiping all non-volatile memory; if
> they have the ability to use different hardware for each run, even better.
>
> Note that a variety of procedures to pick outputs at random are desirable. For
> example, one person might decide to flip a coin after each run, and stop when
> it comes up tails; another might do something different. Diversity is good.
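>
> A toy sketch of one such stopping rule (assuming the rand crate):
>
>     use rand::Rng; // [dependencies] rand = "0.8"
>
>     fn main() {
>         // Do at least one run; after each run, flip a coin and stop on
>         // tails. The final run becomes the real contribution; all the
>         // earlier ones are published as auditable dummies.
>         let mut runs = 1;
>         let mut rng = rand::thread_rng();
>         while rng.gen_bool(0.5) {
>             runs += 1;
>         }
>         println!("do {} runs; use run #{} as the contribution", runs, runs);
>     }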
>
> After they've completed their runs, simply upload one or more dummy runs and
> associated initial secrets for peer auditing, as well as their official
> contribution.
>
>
> Secondly, once we do get some dummy runs, I'd suggest that future participants
> consider testing their compute nodes against those challenges and secrets to
> verify that they also get the same results.
>
>
> Finally, it's interesting how there's a whole class of "sham" participant
> strategies, where someone who runs the computation and uploads an audit
> response w/ revealed secret, but does not actually participate in that round,
> still frustrates attackers who cannot tell in advance if that particular
> participant will or will not actually participate. This suggests that the
> current round's challenge should be made public.
>
>> DVDs are a good
>> approach if it's possible to create many of them and then analyze them
>> for any differences or hidden information.
>
> The experience of the previous trusted setup is that no one bothers to audit
> evidence collected after the fact. For example, I appear to have been the
> *only* non-Zcash team member who ever bothered to even do the basic step of
> recreating the deterministic builds, as is readily apparent from the fact that
> they were broken about a month after the setup due to two different (still
> unfixed) bugs in the deterministic build scripts(1). So I'd be cautious about
> putting
> too much emphasis on "paranoid measures" like this when there are more
> fundamental attack vectors to solve.
>
>
> In any case for this to be effective you really need to completely fill the
> storage medium with a known pattern. You also need to bypass standard
> filesystems, which contain massive amounts of metadata in the form of file
> timestamps and the like - better to write a single file to the medium, and fill
> the rest with zeros.
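>
> A sketch of preparing such an image (hypothetical file names and a nominal
> single-layer DVD capacity; written to a file here rather than a raw device):
>
>     use std::fs::{self, File};
>     use std::io::{BufWriter, Write};
>
>     fn main() -> std::io::Result<()> {
>         // Raw image: the response file first, then zeros out to the full
>         // capacity of the medium -- no filesystem, so no metadata.
>         const CAPACITY: u64 = 4_700_000_000;
>         let response = fs::read("response")?;
>         let mut out = BufWriter::new(File::create("medium.img")?);
>         out.write_all(&response)?;
>         let zeros = [0u8; 64 * 1024];
>         let mut remaining = CAPACITY - response.len() as u64;
>         while remaining > 0 {
>             let n = remaining.min(zeros.len() as u64) as usize;
>             out.write_all(&zeros[..n])?;
>             remaining -= n as u64;
>         }
>         out.flush()
>     }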
>
> As I noted after the prior trusted setup, CDs/DVDs/etc. are *not* read-only
> once written, as the writable medium can continue to be written to after the
> initial data has been written to it. This means that an attacker could create a
> DVD that, e.g., compromises the drive at a firmware level, exfiltrates the
> secret, and then uses the laser in the DVD reader to erase the evidence.
>
> But as this setup is multi-participant, this can easily be defeated by using a
> wide variety of techniques. For instance, it turns out that USB drives with
> (allegedly) hardware write-protect switches are readily available, such as the
> Kanguru FlashBlu series:
>
> https://store.kanguru.com/pages/flash-blu-2
>
> Secondly, using a semi-trusted "firewall" machine that reads the medium, and
> then copies it to a new medium can also avoid unintended data leaks between
> those two mediums.
>
> 1) https://github.com/zcash/mpc/pull/9
>
>> (The original idea that Brian and Zooko briefly considered for the
>> Zcash ceremony last year was similar, except it involved one of the
>> participants revealing all their entropy at the end, and the rest
>> destroying theirs. This is because the previous protocol couldn't
>> support participants performing multiple computations, because they
>> had to commit to their entropy at the very beginning. The new protocol
>> does support participants performing multiple computations with
>> different entropy, though!)
>
> Looks like both Saleem Rashid and I had similar ideas as well, so either
> it's a good one or we're all wrong. :)
>
> https://twitter.com/spudowiar/status/919615633121300481
> https://twitter.com/petertoddbtc/status/919615731615989760
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org