Book: cosmetics and minor corrections / wording improvements.

Signed-off-by: Daira Hopwood <daira@jacaranda.org>
This commit is contained in:
Daira Hopwood 2021-02-17 17:10:11 +00:00
parent 07af9ea3e7
commit a73560c842
5 changed files with 25 additions and 25 deletions

View File

@@ -40,13 +40,13 @@ equality constraints to copy values from other cells of the circuit into that co
offset references, we not only need fewer columns; we also do not need equality constraints to
be supported for all of those columns, which improves efficiency.
-In R1CS (which may be more familiar to some readers, but don't worry if it isn't), a circuit
-consists of a "sea of gates" with no semantically significant ordering. Because of offset
-references, the order of rows in a UPA circuit, on the other hand, *is* significant. We're
-going to make some simplifying assumptions and define some abstractions to tame the resulting
-complexity: the aim will be that, [at the gadget level](gadgets.md) where we do most of our
-circuit construction, we will not have to deal with relative references or with gate layout
-explicitly.
+In R1CS (another arithmetization which may be more familiar to some readers, but don't worry
+if it isn't), a circuit consists of a "sea of gates" with no semantically significant ordering.
+Because of offset references, the order of rows in a UPA circuit, on the other hand, *is*
+significant. We're going to make some simplifying assumptions and define some abstractions to
+tame the resulting complexity: the aim will be that, [at the gadget level](gadgets.md) where
+we do most of our circuit construction, we will not have to deal with relative references or
+with gate layout explicitly.
We will partition a circuit into ***regions***, where each region contains a disjoint subset
of cells, and relative references only ever point *within* a region. Part of the responsibility
@@ -59,7 +59,7 @@ planner that implements a very general algorithm, but you can write your own flo
you need to.
Floor planning will in general leave gaps in the matrix, because the gates in a given row did
-not use all available columns. These are filled in ---as far as possible--- by gates that do
+not use all available columns. These are filled in —as far as possible— by gates that do
not require offset references, which allows them to be placed on any row.
Cores can also define lookup tables. If more than one table is defined for the same lookup

View File

@@ -79,8 +79,8 @@ precisely how the proof is generated, must be able to compute the witness.
If a proof yields no information about the witness (other than that a witness exists and was
known to the prover), then we say that the proof system is ***zero knowledge***.
-If a proof system produces short proofs ---i.e. of length polylogarithmic in the circuit
-size--- then we say that it is ***succinct***. A succinct NARK is called a ***SNARK***
+If a proof system produces short proofs —i.e. of length polylogarithmic in the circuit
+size— then we say that it is ***succinct***. A succinct NARK is called a ***SNARK***
(***Succinct Non-Interactive Argument of Knowledge***).
> By this definition, a SNARK need not have verification time polylogarithmic in the circuit

View File

@@ -35,7 +35,7 @@ lookups independent. Then, the prover commits to the permutations for each looku
follows:
- Given a lookup with input column polynomials $[A_0(X), \dots, A_{m-1}(X)]$ and table
-column polynomials $[S_0(X), \dots, S_{m-1}]$, the prover constructs two compressed
+column polynomials $[S_0(X), \dots, S_{m-1}(X)]$, the prover constructs two compressed
polynomials
$$A_\text{compressed}(X) = \theta^{m-1} A_0(X) + \theta^{m-2} A_1(X) + \dots + \theta A_{m-2}(X) + A_{m-1}(X)$$
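The compressed polynomial above is a random linear combination of the lookup columns in the challenge $\theta$. As a minimal sketch — using plain Python integers to stand in for field elements, which is an assumption for illustration only — the combination can be evaluated with Horner's rule:

```python
def compress(column_values, theta):
    """Combine m column values at one row into a single value:
    theta^(m-1)*A_0 + theta^(m-2)*A_1 + ... + theta*A_(m-2) + A_(m-1).
    Horner's rule folds in one value per step."""
    acc = 0
    for a in column_values:
        acc = acc * theta + a
    return acc

# e.g. two columns with values 3 and 5 at some row, with challenge theta = 7:
# theta*A_0 + A_1 = 7*3 + 5 = 26
assert compress([3, 5], 7) == 26
```

In the real protocol the same accumulation is done with field arithmetic over committed polynomials, not integers.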

View File

@@ -105,7 +105,7 @@ ways:
were implemented.
These generalizations are similar to those in sections 4 and 5 of the
-[Plookup paper](https://eprint.iacr.org/2020/315.pdf) That is, the differences from
+[Plookup paper](https://eprint.iacr.org/2020/315.pdf). That is, the differences from
Plookup are in the subset argument. This argument can then be used in all the same ways;
for instance, the optimized range check technique in section 5 of the Plookup paper can
also be used with this subset argument.

View File

@@ -34,18 +34,18 @@ For instance, say we want to map a 2-bit value to a "spread" version interleaved
with zeros. We first precompute the evaluations at each point:
$$
-\begin{array}{cc}
-00 &\rightarrow 0000 \implies 0 \rightarrow 0 \\
-01 &\rightarrow 0001 \implies 1 \rightarrow 1 \\
-10 &\rightarrow 0100 \implies 2 \rightarrow 4 \\
-11 &\rightarrow 0101 \implies 3 \rightarrow 5
+\begin{array}{rcl}
+00 \rightarrow 0000 &\implies& 0 \rightarrow 0 \\
+01 \rightarrow 0001 &\implies& 1 \rightarrow 1 \\
+10 \rightarrow 0100 &\implies& 2 \rightarrow 4 \\
+11 \rightarrow 0101 &\implies& 3 \rightarrow 5
\end{array}
$$
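The spread mapping in the table is just bit interleaving: bit $i$ of the input moves to bit $2i$ of the output. A minimal sketch (illustrative Python, not the book's Rust implementation):

```python
def spread(x, bits=2):
    """Interleave the bits of x with zeros: bit i of x moves to bit 2*i."""
    out = 0
    for i in range(bits):
        out |= ((x >> i) & 1) << (2 * i)
    return out

# Reproduces the table above: 0 -> 0, 1 -> 1, 2 -> 4, 3 -> 5.
assert [spread(x) for x in range(4)] == [0, 1, 4, 5]
```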
Then, we construct the Lagrange basis polynomial for each point using the
identity:
-$$\mathcal{l}_j(X) = \prod_{0 \leq m \leq k, m \neq j} \frac{x - x_m}{x_j - x_m},$$
-where $k + 1$ is the number of data points. ($k = 3$ in our example above.)
+$$\mathcal{l}_j(X) = \prod_{0 \leq m < k,\; m \neq j} \frac{x - x_m}{x_j - x_m},$$
+where $k$ is the number of data points. ($k = 4$ in our example above.)
Recall that the Lagrange basis polynomial $\mathcal{l}_j(X)$ evaluates to $1$ at
$X = x_j$ and $0$ at all other $x_i, j \neq i.$
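The defining product and the Kronecker-delta property can be checked directly with exact rational arithmetic. A small sketch (illustrative Python; names are ours, not from the book):

```python
from fractions import Fraction

def lagrange_basis(points, j):
    """Return a function evaluating l_j(X) for the given interpolation points,
    via the product formula prod_{m != j} (x - x_m) / (x_j - x_m)."""
    x_j = points[j]
    def l(x):
        val = Fraction(1)
        for m, x_m in enumerate(points):
            if m != j:
                val *= Fraction(x - x_m, x_j - x_m)
        return val
    return l

points = [0, 1, 2, 3]  # k = 4 data points, as in the example
for j in range(4):
    l = lagrange_basis(points, j)
    # l_j evaluates to 1 at x_j and 0 at every other interpolation point.
    assert [l(x) for x in points] == [1 if i == j else 0 for i in range(4)]
```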
@@ -54,9 +54,9 @@ Continuing our example, we get four Lagrange basis polynomials:
$$
\begin{array}{ccc}
-l_0(X) &=& \frac{(X - 3)(X - 2)(X - 1)}{(-3)(-2)(-1)} \\
-l_1(X) &=& \frac{(X - 3)(X - 2)(X)}{(-2)(-1)(1)} \\
-l_2(X) &=& \frac{(X - 3)(X - 1)(X)}{(-1)(1)(2)} \\
+l_0(X) &=& \frac{(X - 3)(X - 2)(X - 1)}{(-3)(-2)(-1)} \\[1ex]
+l_1(X) &=& \frac{(X - 3)(X - 2)(X)}{(-2)(-1)(1)} \\[1ex]
+l_2(X) &=& \frac{(X - 3)(X - 1)(X)}{(-1)(1)(2)} \\[1ex]
l_3(X) &=& \frac{(X - 2)(X - 1)(X)}{(1)(2)(3)}
\end{array}
$$
@@ -64,8 +64,8 @@ $$
Our polynomial constraint is then
$$
-\begin{array}{ccccccccc}
-&&f(0)l_0(X) &+& f(1)l_1(X) &+& f(2)l_2(X) &+& f(3)l_3(X) - f(X) &=& 0 \\
-&\implies& 0 \cdot l_0(X) &+& 1 \cdot l_1(X) &+& 4 \cdot l_2(X) &+& 5 \cdot l_3(X) - f(X) &=& 0. \\
+\begin{array}{cccccccccccl}
+&f(0) \cdot l_0(X) &+& f(1) \cdot l_1(X) &+& f(2) \cdot l_2(X) &+& f(3) \cdot l_3(X) &-& f(X) &=& 0 \\
+\implies& 0 \cdot l_0(X) &+& 1 \cdot l_1(X) &+& 4 \cdot l_2(X) &+& 5 \cdot l_3(X) &-& f(X) &=& 0. \\
\end{array}
$$
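The polynomial constraint can be sanity-checked numerically: summing $f(x_j) \cdot l_j(X)$ over the four points must reproduce the table values $0, 1, 4, 5$ at $X = 0, 1, 2, 3$. A quick sketch with exact rationals (illustrative Python, not the book's implementation):

```python
from fractions import Fraction

POINTS = (0, 1, 2, 3)

def l(j, x):
    """Evaluate the Lagrange basis polynomial l_j at x over POINTS."""
    x_j = POINTS[j]
    val = Fraction(1)
    for x_m in POINTS:
        if x_m != x_j:
            val *= Fraction(x - x_m, x_j - x_m)
    return val

# f interpolates the spread table: f(0)=0, f(1)=1, f(2)=4, f(3)=5.
ys = [0, 1, 4, 5]

def f(x):
    return sum(y * l(j, x) for j, y in enumerate(ys))

# f(0)*l_0(X) + f(1)*l_1(X) + f(2)*l_2(X) + f(3)*l_3(X) - f(X) = 0
# holds at every interpolation point:
assert all(f(x) == y for x, y in zip(POINTS, ys))
```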