Merge pull request #91 from zcash/book-multipoint

[book] Add multipoint opening + small set interpolation
This commit is contained in:
str4d 2020-12-22 20:42:01 +00:00 committed by GitHub
commit 3f856e3066
3 changed files with 125 additions and 0 deletions


@ -2,6 +2,7 @@
[halo2](README.md)
- [Concepts](concepts.md)
- [Multipoint opening argument](concepts/multipoint-opening.md)
- [User Documentation](user.md)
- [A simple example](user/simple-example.md)
- [Lookup tables](user/lookup-tables.md)


@ -0,0 +1,80 @@
# Multipoint opening argument
Consider the commitments $A, B, C, D$ to polynomials $a(X), b(X), c(X), d(X)$.
Let's say that $a$ and $b$ were queried at the point $x$, while $c$ and $d$
were queried at both points $x$ and $\omega x$. (Here, $\omega$ is the primitive
root of unity in the multiplicative subgroup over which we constructed the
polynomials).
We can group the commitments in terms of the sets of points at which they were
queried:
$$
\begin{array}{cccc}
&\{x\}& &\{x, \omega x\}& \\
&A& &C& \\
&B& &D&
\end{array}
$$
The multipoint opening optimisation proceeds as follows:
1. Sample random $x_3$, at which we evaluate $a(X), b(X), c(X), d(X)$.
2. The prover provides evaluations of each polynomial at each point of interest:
$a(x_3), b(x_3), c(x_3), d(x_3), c(\omega x_3), d(\omega x_3)$
3. Sample random $x_4$, to keep $a, b, c, d$ linearly independent.
4. Accumulate polynomials and their corresponding evaluations according
to the point set at which they were queried:
`q_polys`:
$$
\begin{array}{rcccl}
q_1(X) &=& a(X) &+& x_4 b(X) \\
q_2(X) &=& c(X) &+& x_4 d(X)
\end{array}
$$
`q_eval_sets`:
```math
[
[a(x_3) + x_4 b(x_3)],
[
c(x_3) + x_4 d(x_3),
c(\omega x_3) + x_4 d(\omega x_3)
]
]
```
NB: `q_eval_sets` is a vector of sets of evaluations, where the outer vector
goes over the point sets, and the inner vector goes over the points in each set.
5. Interpolate each set of values in `q_eval_sets`:
`r_polys`:
$$
\begin{array}{rcl}
r_1(X) \text{ s.t.} && \\
r_1(x_3) &=& a(x_3) + x_4 b(x_3) \\
r_2(X) \text{ s.t.} && \\
r_2(x_3) &=& c(x_3) + x_4 d(x_3) \\
r_2(\omega x_3) &=& c(\omega x_3) + x_4 d(\omega x_3)
\end{array}
$$
6. Construct `f_polys` which check the correctness of `q_polys`:
`f_polys`
$$
\begin{array}{rcl}
f_1(X) &=& \frac{ q_1(X) - r_1(X)}{X - x_3} \\
f_2(X) &=& \frac{ q_2(X) - r_2(X)}{(X - x_3)(X - \omega x_3)} \\
\end{array}
$$
If $q_1(x_3) = r_1(x_3)$, then $q_1(X) - r_1(X)$ is divisible by $(X - x_3)$, so
$f_1(X)$ is a polynomial. Similarly, if $q_2(x_3) = r_2(x_3)$ and
$q_2(\omega x_3) = r_2(\omega x_3)$, then $f_2(X)$ is a polynomial.
7. Sample random $x_5$ to keep the `f_polys` linearly independent.
8. Construct $f(X) = f_1(X) + x_5 f_2(X)$.
9. Sample random $x_6$, at which we evaluate $f(X)$:
$$
\begin{array}{rcccl}
f(x_6) &=& f_1(x_6) &+& x_5 f_2(x_6) \\
&=& \frac{q_1(x_6) - r_1(x_6)}{x_6 - x_3} &+& x_5\frac{q_2(x_6) - r_2(x_6)}{(x_6 - x_3)(x_6 - \omega x_3)}
\end{array}
$$
10. Sample random $x_7$ to keep $f(X)$ and `q_polys` linearly independent.
11. Construct `final_poly`, $$final\_poly(X) = f(X) + x_7 q_1(X) + x_7^2 q_2(X),$$
which is the polynomial we commit to in the inner product argument.
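The folding in steps 4–11 can be sketched numerically over a small prime field. Everything below is illustrative: the toy polynomials, the challenge values, and the stand-in point `wx3` for $\omega x_3$ are arbitrary choices, not values from the actual protocol (which uses a large field and genuinely random challenges).

```python
# Sketch of the multipoint opening folding over F_p with p = 101.
# All concrete values (polynomials, challenges, points) are made up.
P = 101

def ev(f, x):
    """Evaluate polynomial f (low-degree-first coefficients) at x, mod P."""
    acc = 0
    for c in reversed(f):
        acc = (acc * x + c) % P
    return acc

def add(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [(u + v) % P for u, v in zip(f, g)]

def scale(f, k):
    return [c * k % P for c in f]

def div_linear(f, r):
    """Divide f(X) by (X - r); return (quotient, remainder)."""
    n = len(f) - 1
    q = [0] * n
    q[n - 1] = f[n] % P
    for i in range(n - 1, 0, -1):
        q[i - 1] = (f[i] + r * q[i]) % P
    return q, (f[0] + r * q[0]) % P

# Toy committed polynomials (coefficients low-degree-first).
a, b = [3, 1], [7, 2]
c, d = [1, 0, 1], [2, 3, 1]
x3, wx3 = 5, 9           # the two query points: x_3 and (omega * x_3)
x4, x5, x7 = 11, 13, 17  # challenges

# Step 4: accumulate per point set.
q1 = add(a, scale(b, x4))
q2 = add(c, scale(d, x4))

# Step 5: interpolate the claimed evaluations (r1 constant, r2 linear).
r1 = [ev(q1, x3)]
y1, y2 = ev(q2, x3), ev(q2, wx3)
slope = (y1 - y2) * pow(x3 - wx3, P - 2, P) % P
r2 = [(y1 - slope * x3) % P, slope]

# Step 6: the divisions leave no remainder iff the evaluations are correct.
f1, rem1 = div_linear(add(q1, scale(r1, P - 1)), x3)
t, rem2a = div_linear(add(q2, scale(r2, P - 1)), x3)
f2, rem2b = div_linear(t, wx3)
assert rem1 == rem2a == rem2b == 0

# Steps 8 and 11: fold everything into one polynomial.
f = add(f1, scale(f2, x5))
final_poly = add(add(f, scale(q1, x7)), scale(q2, x7 * x7 % P))

# Step 9 check: the evaluation identity holds at a random x6.
x6 = 23
lhs = ev(f, x6)
rhs = (ev(q1, x6) - ev(r1, x6)) * pow(x6 - x3, P - 2, P) \
    + x5 * (ev(q2, x6) - ev(r2, x6)) * pow((x6 - x3) * (x6 - wx3), P - 2, P)
assert lhs == rhs % P
```

If the prover lied about any evaluation in step 2, one of the remainders in step 6 would be nonzero, so the corresponding $f_i$ would be a rational function rather than a polynomial.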



@ -25,3 +25,47 @@ $$(7 - c) \cdot (13 - c) = 0$$
> higher-degree constraints are more expensive to use.
Note that the roots don't have to be constants; for example $(a - x) \cdot (a - y) \cdot (a - z) = 0$ will constrain $a$ to be equal to one of $\{ x, y, z \}$ where the latter can be arbitrary polynomials, as long as the whole expression stays within the maximum degree bound.
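A quick numeric check of the product-of-roots trick, over a toy prime field with arbitrary illustrative values for $x, y, z$: the product vanishes exactly when $a$ equals one of the roots.

```python
# The constraint (a - x)(a - y)(a - z) = 0 holds iff a is in {x, y, z}.
# F_101 and the root values are illustrative choices.
P = 101
x, y, z = 7, 13, 42

def constraint(a):
    return (a - x) * (a - y) * (a - z) % P

assert all(constraint(a) == 0 for a in (x, y, z))
assert all(constraint(a) != 0 for a in range(P) if a not in (x, y, z))
```

Because we are in a field, a degree-3 polynomial has at most 3 roots, so no other value of $a$ can satisfy the constraint.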
## Small set interpolation
We can use Lagrange interpolation to create a polynomial constraint that maps
$f(X) = Y$ for small sets of $X \in \{x_i\}, Y \in \{y_i\}$.
For instance, say we want to map a 2-bit value to a "spread" version interleaved
with zeros. We first precompute the evaluations at each point:
$$
\begin{array}{cc}
00 &\rightarrow 0000 \implies 0 \rightarrow 0 \\
01 &\rightarrow 0001 \implies 1 \rightarrow 1 \\
10 &\rightarrow 0100 \implies 2 \rightarrow 4 \\
11 &\rightarrow 0101 \implies 3 \rightarrow 5
\end{array}
$$
Then, we construct the Lagrange basis polynomial for each point using the
identity:
$$\ell_j(X) = \prod_{0 \leq m \leq k,\, m \neq j} \frac{X - x_m}{x_j - x_m},$$
where $k + 1$ is the number of data points. ($k = 3$ in our example above.)
Recall that the Lagrange basis polynomial $\ell_j(X)$ evaluates to $1$ at
$X = x_j$ and $0$ at all other $x_i$, $i \neq j.$
Continuing our example, we get four Lagrange basis polynomials:
$$
\begin{array}{ccc}
\ell_0(X) &=& \frac{(X - 3)(X - 2)(X - 1)}{(-3)(-2)(-1)} \\
\ell_1(X) &=& \frac{(X - 3)(X - 2)(X)}{(-2)(-1)(1)} \\
\ell_2(X) &=& \frac{(X - 3)(X - 1)(X)}{(-1)(1)(2)} \\
\ell_3(X) &=& \frac{(X - 2)(X - 1)(X)}{(1)(2)(3)}
\end{array}
$$
Our polynomial constraint is then
$$
\begin{array}{rcl}
f(0) \ell_0(X) + f(1) \ell_1(X) + f(2) \ell_2(X) + f(3) \ell_3(X) - f(X) &=& 0 \\
\implies 0 \cdot \ell_0(X) + 1 \cdot \ell_1(X) + 4 \cdot \ell_2(X) + 5 \cdot \ell_3(X) - f(X) &=& 0.
\end{array}
$$
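The spread example above can be checked with a short sketch. It uses exact rational arithmetic rather than field arithmetic so that the Lagrange denominators need no modular inverses; the structure is otherwise the same.

```python
# Lagrange interpolation of the 2-bit "spread" table: 0->0, 1->1, 2->4, 3->5.
from fractions import Fraction

xs = [0, 1, 2, 3]  # 2-bit inputs
ys = [0, 1, 4, 5]  # spread outputs (bits interleaved with zeros)

def lagrange_basis(j, X):
    """Evaluate the basis polynomial l_j at the point X."""
    num = den = Fraction(1)
    for m in xs:
        if m != xs[j]:
            num *= X - m
            den *= xs[j] - m
    return num / den

def f(X):
    """The interpolated map, f(x_i) = y_i."""
    return sum(y * lagrange_basis(j, X) for j, y in enumerate(ys))

# The constraint sum_j f(x_j) l_j(X) - f(X) = 0 holds identically,
# and in particular the interpolation hits every table entry exactly:
assert [f(x) for x in xs] == ys
```

In the circuit, the same polynomial would be evaluated over the field, with the fixed denominators folded into precomputed constants.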