As shown in [[GT20]](https://eprint.iacr.org/2020/1351) (Theorem 1), state
restoration soundness is tightly related to soundness after applying the
Fiat-Shamir transformation.
#### Knowledge Soundness
We will show that our protocol satisfies a strengthened notion of knowledge
soundness known as _witness extended emulation_. Informally, this notion states
that for any successful prover algorithm there exists an efficient _emulator_
that can extract a witness from it by rewinding it and supplying it with fresh
randomness.
However, we must slightly adjust our definition of witness extended emulation to
account for the fact that our provers are state restoration provers and can
rewind the verifier. Further, to avoid the need for rewinding the state
restoration prover during witness extraction we study our protocol in the
algebraic group model.
> **Algebraic Group Model (AGM).** An adversary $\alg{\prover}$ is said to be
> _algebraic_ if whenever it outputs a group element $X$ it also outputs a
> _representation_ $\mathbf{x} \in \field^n$ such that $\langle \mathbf{x}, \mathbf{G} \rangle = X$, where $\mathbf{G} \in \group^n$ is the vector of group elements that $\alg{\prover}$ has seen so far.
where $a_0, a_1, ..., a_{n_a - 1}$ are (multivariate) polynomials of degree $n - 1$ in $X$ and $g$ has degree at most $n_g(n - 1)$ in any of the indeterminates $X, C_0, C_1, ...$. In order to establish zero knowledge, any of the polynomials $a_i$ that aren't known to the verifier should have $n_e + 1$ random blinding factors as evaluations over $D$.
In the following protocol, we take it for granted that each polynomial $a_i(X, \cdots)$ is defined such that $n_e + 1$ blinding factors are freshly sampled by the prover and are each present as an evaluation of $a_i(X, \cdots)$ over the domain $D$. In all of the following, the verifier's challenges cannot be zero or an element in $D$, and some additional limitations are placed on specific challenges as well.
* $\prover$ sends a hiding commitment $A_j = \innerprod{\mathbf{a'}}{\mathbf{G}} + [\cdot] W$ where $\mathbf{a'}$ are the coefficients of the univariate polynomial $a'_j(X)$ and $\cdot$ is some random, independently sampled blinding factor elided for exposition. (This elision notation is used throughout this protocol description to simplify exposition.)
3. $\prover$ sends a commitment $R = \innerprod{\mathbf{r}}{\mathbf{G}} + [\cdot] W$ where $\mathbf{r} \in \field^n$ are the coefficients of a randomly sampled univariate polynomial $r(X)$ of degree $n - 1$.
5. $\prover$ computes polynomials $h_0(X), h_1(X), ..., h_{n_g - 2}(X)$, each of degree at most $n - 1$, such that $h(X) = \sum\limits_{i=0}^{n_g - 2} X^{ni} h_i(X)$.
6. $\prover$ sends commitments $H_i = \innerprod{\mathbf{h_i}}{\mathbf{G}} + [\cdot] W$ for all $i$ where $\mathbf{h_i}$ denotes the vector of coefficients for $h_i(X)$.
7. $\verifier$ responds with challenge $x$ and computes $H' = \sum\limits_{i=0}^{n_g - 2} [x^{ni}] H_i$.
9. $\prover$ sends $r = r(x)$ and for all $i \in [0, n_a)$ sends $\mathbf{a_i}$ such that $(\mathbf{a_i})_j = a'_i(\omega^{(\mathbf{p_i})_j} x)$ for all $j \in [0, n_e)$.
10. For all $i \in [0, n_a)$ $\prover$ and $\verifier$ set $s_i(X)$ to be the lowest degree univariate polynomial defined such that $s_i(\omega^{(\mathbf{p_i})_j} x) = (\mathbf{a_i})_j$ for all $j \in [0, n_e)$.
* Finally $\prover$ and $\verifier$ set $r_0 := x_1^2 r_0 + x_1 h + r$, where $h$ is computed by $\verifier$ as $\frac{g'(x)}{t(x)}$ using the values $r, \mathbf{a}$ provided by $\prover$.
14. $\prover$ sends $Q' = \innerprod{\mathbf{q'}}{\mathbf{G}} + [\cdot] W$ where $\mathbf{q'}$ defines the coefficients of the polynomial
$$q'(X) = \sum\limits_{i=0}^{n_q - 1}
x_2^i
\left(
\frac
{q_i(X) - r_i(X)}
{\prod\limits_{j=0}^{n_e - 1}
\left(
X - \omega^{\left(
\mathbf{q_i}
\right)_j} x
\right)
}
\right)
$$
15. $\verifier$ responds with challenge $x_3$.
16. $\prover$ sends $\mathbf{u} \in \field^{n_q}$ such that $\mathbf{u}_i = q_i(x_3)$ for all $i \in [0, n_q)$.
20. $\prover$ samples a random polynomial $s(X)$ of degree $n - 1$ with a root at $x_3$ and sends a commitment $S = \innerprod{\mathbf{s}}{\mathbf{G}} + [\cdot] W$ where $\mathbf{s}$ defines the coefficients of $s(X)$.
21. $\verifier$ responds with challenges $\xi, z$.
24. Initialize $\mathbf{p'}$ as the coefficients of $p'(X)$ and $\mathbf{G'} = \mathbf{G}$ and $\mathbf{b} = (x_3^0, x_3^1, ..., x_3^{n - 1})$. $\prover$ and $\verifier$ will interact in the following $k$ rounds, where in the $j$th round starting in round $j=0$ and ending in round $j=k-1$:
* $\prover$ sends $L_j = \innerprod{\mathbf{p'}_\hi}{\mathbf{G'}_\lo} + [z \innerprod{\mathbf{p'}_\hi}{\mathbf{b}_\lo}] U + [\cdot] W$ and $R_j = \innerprod{\mathbf{p'}_\lo}{\mathbf{G'}_\hi} + [z \innerprod{\mathbf{p'}_\lo}{\mathbf{b}_\hi}] U + [\cdot] W$.
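The folding rounds above can be sanity-checked with scalars alone. The following toy sketch (hypothetical parameters, a small prime standing in for $\field$, group elements and blinding factors omitted) assumes the fold $\mathbf{p'} \mapsto \mathbf{p'}_\lo + u_j^{-1} \mathbf{p'}_\hi$ and $\mathbf{b} \mapsto \mathbf{b}_\lo + u_j \mathbf{b}_\hi$, which is consistent with the verifier's final check, and confirms that the cross terms published in $L_j, R_j$ account exactly for the mixing introduced by each fold:

```python
# Scalar-only model of the k folding rounds: group elements and blinding
# factors are omitted, and a small prime stands in for the scalar field.
# The fold direction (p' <- p'_lo + u^-1 p'_hi, b <- b_lo + u b_hi) is an
# assumption consistent with the verifier's final check.
import random

random.seed(42)
P = 10007
k = 3
n = 2 ** k

def inner(a, b):
    # Inner product modulo P.
    return sum(x * y for x, y in zip(a, b)) % P

x3 = random.randrange(1, P)
p = [random.randrange(P) for _ in range(n)]   # coefficients of p'(X)
b = [pow(x3, i, P) for i in range(n)]         # (x3^0, x3^1, ..., x3^{n-1})

claimed = inner(p, b)                         # p'(x3), the value being opened
cross = 0                                     # running sum of u^-1 L + u R terms
for _ in range(k):
    half = len(p) // 2
    p_lo, p_hi = p[:half], p[half:]
    b_lo, b_hi = b[:half], b[half:]
    L = inner(p_hi, b_lo)                     # U-component of L_j (z elided)
    R = inner(p_lo, b_hi)                     # U-component of R_j (z elided)
    u = random.randrange(1, P)                # verifier challenge, nonzero
    u_inv = pow(u, P - 2, P)
    cross = (cross + u_inv * L + u * R) % P
    p = [(lo + u_inv * hi) % P for lo, hi in zip(p_lo, p_hi)]
    b = [(lo + u * hi) % P for lo, hi in zip(b_lo, b_hi)]

c, b0 = p[0], b[0]                            # the final folded scalars
# The U-components of the final check collapse to one scalar equation.
assert (claimed + cross) % P == c * b0 % P
```

Each round halves the vectors, and at the end the verifier need only check a single scalar relation involving the folded values; this mirrors the $U$-coordinate of the check in step $26$.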
We claim that this protocol is _perfectly complete_. This can be verified by
inspection of the protocol; given a valid witness $a_i(X, \cdots) \forall i$ the
prover succeeds in convincing the verifier with probability $1$.
We claim that this protocol is _perfect special honest-verifier zero knowledge_.
We do this by showing that a simulator $\sim$ exists which can produce an
accepting transcript that is equally distributed with a valid prover's
interaction with a verifier with the same public coins. The simulator will act
as an honest prover would, with the following exceptions:
1. In step $1$ of the protocol $\sim$ chooses random degree $n - 1$ polynomials (in $X$) $a_i(X, \cdots) \forall i$.
2. In step $5$ of the protocol $\sim$ chooses random degree $n - 1$ polynomials $h_0(X), h_1(X), ..., h_{n_g - 2}(X)$.
3. In step $14$ of the protocol $\sim$ chooses a random degree $n - 1$ polynomial $q'(X)$.
4. In step $20$ of the protocol $\sim$ uses its foreknowledge of the verifier's choice of $\xi$ to produce a degree $n - 1$ polynomial $s(X)$ conditioned only such that $p(X) - v + \xi s(X)$ has a root at $x_3$.
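The simulator's trick in step $20$ can be made concrete. In the following toy sketch (hypothetical values over a small prime field standing in for $\field$), knowing $\xi$ in advance means the simulator needs only $s(x_3) = (v - p(x_3)) / \xi$ and can otherwise sample $s(X)$ at random:

```python
# Toy model of the simulator's step-20 move: all values are hypothetical and
# a small prime stands in for the scalar field. Knowing xi ahead of time,
# the simulator fixes only s(x_3) = (v - p(x_3)) / xi and leaves the rest of
# s(X) uniformly random.
import random

random.seed(3)
P = 10007
n = 8

def evaluate(coeffs, x):
    # Horner evaluation of a little-endian coefficient list modulo P.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

p = [random.randrange(P) for _ in range(n)]   # stand-in for p(X)
v = random.randrange(P)                       # claimed evaluation
x3 = random.randrange(1, P)
xi = random.randrange(1, P)

s = [random.randrange(P) for _ in range(n)]   # sample s(X) at random...
target = (v - evaluate(p, x3)) * pow(xi, P - 2, P) % P
s[0] = (s[0] + target - evaluate(s, x3)) % P  # ...then patch s(x_3) via the constant term

# p(X) - v + xi * s(X) now has a root at x_3, as step 26 requires.
assert (evaluate(p, x3) - v + xi * evaluate(s, x3)) % P == 0
```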
First, let us consider why this simulator always succeeds in producing an
_accepting_ transcript. $\sim$ lacks a valid witness and simply commits to
random polynomials whenever knowledge of a valid witness would be required by
the honest prover. The verifier places no conditions on the scalar values in the
transcript. $\sim$ must only guarantee that the check in step $26$ of the
protocol succeeds. It does so by using its knowledge of the challenge $\xi$ to
produce a polynomial which interferes with $p'(X)$ to ensure it has a root at
$x_3$. The transcript will thus always be accepting due to perfect completeness.
In order to see why $\sim$ produces transcripts distributed identically to the
honest prover, we will look at each piece of the transcript and compare the
distributions. First, note that $\sim$ (just as the honest prover) uses a
freshly random blinding factor for every group element in the transcript, and so
we need only consider the _scalars_ in the transcript. $\sim$ acts just as the
prover does except in the mentioned cases so we will analyze each case:
1. $\sim$ and an honest prover reveal $n_e$ openings of each polynomial $a_i(X, \cdots)$, and at most one additional opening of each $a_i(X, \cdots)$ in step $16$. However, the honest prover blinds their polynomials $a_i(X, \cdots)$ (in $X$) with $n_e + 1$ random evaluations over the domain $D$. Thus, the openings of $a_i(X, \cdots)$ at the challenge $x$ (which is prohibited from being $0$ or in the domain $D$ by the protocol) are distributed identically between $\sim$ and an honest prover.
2. Neither $\sim$ nor the honest prover reveal $h(x)$ as it is computed by the verifier. However, the honest prover may reveal $h'(x)$ --- which has a non-trivial relationship with $h(X)$ --- were it not for the fact that the honest prover also commits to a random degree $n - 1$ polynomial $r(X)$ in step $3$, producing a commitment $R$ and ensuring that in step $12$ when the prover sets $q_0(X) := x_1^2 q_0(X) + x_1 h'(X) + r(X)$ the distribution of $q_0(x)$ is uniformly random. Thus, $h'(x_3)$ is never revealed by the honest prover nor by $\sim$.
3. The expected value of $q'(x_3)$ is computed by the verifier (in step $18$) and so the simulator's actual choice of $q'(X)$ is irrelevant.
4. $p(X) - v + \xi s(X)$ is conditioned on having a root at $x_3$, but otherwise no conditions are placed on $s(X)$ and so the distribution of the degree $n - 1$ polynomial $p(X) - v + \xi s(X)$ is uniformly random whether or not $s(X)$ has a root at $x_3$. Thus, the distribution of $c$ produced in step $25$ is identical between $\sim$ and an honest prover. The synthetic blinding factor $f$ also revealed in step $25$ is a trivial function of the prover's other blinding factors and so is distributed identically between $\sim$ and an honest prover.
Notes:
1. In an earlier version of our protocol, the prover would open each individual commitment $H_0, H_1, ...$ at $x$ as part of the multipoint opening argument, and the verifier would confirm that a linear combination of these openings (with powers of $x^n$) agreed with the expected value of $h(x)$. This was done because it's more efficient in recursive proofs. However, it was unclear to us what the expected distribution of the openings of these commitments $H_0, H_1, ...$ was, and so proving that the argument was zero-knowledge was difficult. Instead, we changed the argument so that the _verifier_ computes a linear combination of the commitments and that linear combination is opened at $x$. This avoided leaking $h_i(x)$.
2. As mentioned, in step $3$ the prover commits to a random polynomial as a way of ensuring that $h'(x_3)$ is not revealed in the multiopen argument. This is done because it's unclear what the distribution of $h'(x_3)$ would be.
3. Technically it's also possible for us to prove zero-knowledge with a simulator that uses its foreknowledge of the challenge $x$ to commit to an $h(X)$ which agrees at $x$ to the value it will be expected to. This would obviate the need for the random polynomial $s(X)$ in the protocol. This may make the analysis of zero-knowledge for the remainder of the protocol a little bit tricky though, so we didn't go this route.
4. Group element blinding factors are _technically_ not necessary after step $23$ in which the polynomial is completely randomized. However, it's simpler in practice for us to ensure that every group element in the protocol is randomly blinded to make edge cases involving the point at infinity harder to reach.
5. It is crucial that the verifier cannot challenge the prover to open polynomials at points in $D$ as otherwise the transcript of an honest prover will be forced to contain what could be portions of the prover's witness. We therefore restrict the space of challenges to include all elements of the field except $D$ and, for simplicity, we also prohibit the challenge of $0$.
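The limb decomposition discussed in note 1 (protocol step 5) is easy to check numerically. A toy sketch with hypothetical small parameters, using a small prime in place of $\field$:

```python
# Toy check of the limb decomposition h(X) = sum_i X^{n*i} h_i(X) from step 5,
# with hypothetical small parameters and a small prime in place of the field.
import random

random.seed(1)
P = 10007
n = 4
n_g = 3

def evaluate(coeffs, x):
    # Horner evaluation of a little-endian coefficient list modulo P.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

# h(X) has degree (n_g - 1)(n - 1), the degree bound on g'(X) / t(X).
h = [random.randrange(P) for _ in range((n_g - 1) * (n - 1) + 1)]
# Chunk the coefficients into limbs h_0, ..., h_{n_g - 2}, each of degree <= n - 1.
limbs = [h[i:i + n] for i in range(0, len(h), n)]

x = 5
recombined = sum(pow(x, n * i, P) * evaluate(limb, x)
                 for i, limb in enumerate(limbs)) % P
assert recombined == evaluate(h, x)
assert len(limbs) == n_g - 1
```

The verifier's $H' = \sum_i [x^{ni}] H_i$ in step 7 is the commitment analogue of the `recombined` sum above.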
## Witness-extended Emulation
Let $\protocol = \protocol[\group]$ be the interactive argument described above for relation $\relation$ and some group $\group$ with scalar field $\field$. We can always construct an extractor $\extractor$ such that for any non-uniform algebraic prover $\alg{\prover}$ making at most $q$ queries to its oracle, there exists a non-uniform adversary $\dlreladv$ with the property that for any computationally unbounded distinguisher $\distinguisher$
where $\frac{(n_g - 1) \cdot (n - 1)}{|\ch|} \leq \epsilon$.
_Proof._ We will prove this by invoking Theorem 1 of [[GT20]](https://eprint.iacr.org/2020/1351). First, we note that the challenge space for all rounds is the same, i.e. $\forall i \ \ch = \ch_i$. Theorem 1 requires us to define:
- $\badch(\tr') \subseteq \ch$ for all partial transcripts $\tr' = (\pp, x, [a_0], c_0, \ldots, [a_i])$ such that $|\badch(\tr')| / |\ch| \leq \epsilon$.
- an extractor function $e$ that takes as input an accepting extended transcript $\tr$ and either returns a valid witness or fails.
- a function $\pfail(\protocol, \alg{\prover}, e, \relation)$ returning a probability.
We say that an accepting extended transcript $\tr$ contains "bad challenges" if and only if there exists a partial extended transcript $\tr'$, a challenge $c_i \in \badch(\tr')$, and some sequence of prover messages and challenges $([a_{i+1}], c_{i+1}, \ldots, [a_j])$ such that $\tr = \tr' \,||\, (c_i, [a_{i+1}], c_{i+1}, \ldots, [a_j])$.
Theorem 1 requires that $e$, when given an accepting extended transcript $\tr$ that does not contain "bad challenges", returns a valid witness for that transcript except with probability bounded above by $\pfail(\protocol, \alg{\prover}, e, \relation)$.
Our strategy is as follows: we will define $e$, establish an upper bound on $\pfail$ with respect to an adversary $\dlreladv$ that plays the $\dlrel_{\group,n+2}$ game, substitute these into Theorem 1, and then walk through the protocol to determine the upper bound of the size of $\badch(\tr')$. The adversary $\dlreladv$ plays the $\dlrel_{\group,n+2}$ game as follows: given the inputs $U, W \in \mathbb{G}, \mathbf{G} \in \mathbb{G}^n$, the adversary $\dlreladv$ simulates the game $\srwee_{\protocol, \relation}$ to $\alg{\prover}$ using the inputs from the $\dlrel_{\group,n+2}$ game as public parameters. If $\alg{\prover}$ manages to produce an accepting extended transcript $\tr$, $\dlreladv$ invokes a function $h$ on $\tr$ and returns its output. We shall define $h$ in such a way that for an accepting extended transcript $\tr$ that does not contain "bad challenges", $e(\tr)$ _always_ returns a valid witness whenever $h(\tr)$ does _not_ return a non-trivial discrete log relation. This means that the probability $\pfail(\protocol, \alg{\prover}, e, \relation)$ is no greater than $\adv^\dlrel_{\group,n+2}(\dlreladv, \sec)$, establishing our claim.
#### Helpful substitutions
We will perform some substitutions to aid in exposition. First, let us define the polynomial
$$
\kappa(X) = \prod_{j=0}^{k - 1} (1 + u_{k - 1 - j} X^{2^j})
$$
and let $\mathbf{s} \in \field^n$ be its coefficient vector, so that $\mathbf{s}_i = \prod\limits_{j=0}^{k - 1} u_{k - 1 - j}^{f(i, j)}$ where $f(i, j)$ returns $1$ when the $j$th bit of $i$ is set, and $0$ otherwise. We can also write $\mathbf{G'}_0 = \innerprod{\mathbf{s}}{\mathbf{G}}$.
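The structure of $\mathbf{s}$ can be sanity-checked numerically: the coefficient of $X^i$ in $\prod_{j=0}^{k - 1} (1 + u_{k - 1 - j} X^{2^j})$ (the product that appears below evaluated at $x_3$) is exactly $\prod_j u_{k - 1 - j}^{f(i, j)}$. A toy sketch over a small prime field with hypothetical challenges:

```python
# Toy check (small prime, hypothetical challenges) that the coefficient of
# X^i in prod_{j=0}^{k-1} (1 + u_{k-1-j} X^{2^j}) is prod_j u_{k-1-j}^{f(i,j)},
# where f(i, j) is the j-th bit of i.
import random

random.seed(9)
P = 10007
k = 4
u = [random.randrange(1, P) for _ in range(k)]

# Multiply out the product naively; coeffs is little-endian.
coeffs = [1]
for j in range(k):
    factor = [0] * (2 ** j + 1)
    factor[0] = 1
    factor[2 ** j] = u[k - 1 - j]
    out = [0] * (len(coeffs) + len(factor) - 1)
    for a, ca in enumerate(coeffs):
        for b, cb in enumerate(factor):
            out[a + b] = (out[a + b] + ca * cb) % P
    coeffs = out

# Rebuild the same vector from the bit pattern of each index i.
s = []
for i in range(2 ** k):
    prod = 1
    for j in range(k):
        if (i >> j) & 1:            # f(i, j)
            prod = prod * u[k - 1 - j] % P
    s.append(prod)

assert coeffs == s
```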
### Description of function $h$
Recall that an accepting transcript $\tr$ is such that
$$
\sum_{j=0}^{k - 1} [u_j^{-1}] \rep{L_j} + \rep{P'} + \sum_{j=0}^{k - 1} [u_j] \rep{R_j} = [c] \mathbf{G'}_0 + [c z \mathbf{b}_0] U + [f] W
$$
By inspection of the representations of group elements with respect to $\mathbf{G}, U, W$ (recall that $\alg{\prover}$ is algebraic and so $\dlreladv$ has them), we obtain the $n$ equalities
$$
\sum_{j=0}^{k - 1} u_j^{-1} \repv{L_j}{G}{i} + \repv{P'}{G}{i} + \sum_{j=0}^{k - 1} u_j \repv{R_j}{G}{i} = c \mathbf{s}_i \forall i \in [0, n)
$$
as well as the two further equalities
$$
\sum_{j=0}^{k - 1} u_j^{-1} \repr{L_j}{U} + \repr{P'}{U} + \sum_{j=0}^{k - 1} u_j \repr{R_j}{U} = c z \kappa(x_3)
$$
$$
\sum_{j=0}^{k - 1} u_j^{-1} \repr{L_j}{W} + \repr{P'}{W} + \sum_{j=0}^{k - 1} u_j \repr{R_j}{W} = f
$$
The function $h$ returns the linear combination of $\mathbf{G}_i$, $U$ and $W$ whose scalar coefficients are the differences between the left- and right-hand sides of these equalities,
which is always a discrete log relation. If any of the equalities above are not satisfied, then this discrete log relation is non-trivial. This is the function invoked by $\dlreladv$.
#### The extractor function $e$
The extractor function $e$ simply returns $a_i(X)$ from the representation $\rep{A_i}$ for $i \in [0, n_a)$. Due to the restrictions we will place on the space of bad challenges in each round, we are guaranteed to obtain polynomials such that $g(X, C_0, C_1, \cdots, a_0(X), a_1(X), \cdots)$ vanishes over $D$ whenever the discrete log relation returned by the adversary's function $h$ is trivial. This gives us that the extractor function $e$ fails with probability bounded above by $\pfail$, as required.
#### Defining $\badch(\tr')$
Recall from before that the following $n$ equalities hold:
$$
\sum_{j=0}^{k - 1} u_j^{-1} \repv{L_j}{G}{i} + \repv{P'}{G}{i} + \sum_{j=0}^{k - 1} u_j \repv{R_j}{G}{i} = c \mathbf{s}_i \forall i \in [0, n)
$$
as does the equality
$$
\sum_{j=0}^{k - 1} u_j^{-1} \repr{L_j}{U} + \repr{P'}{U} + \sum_{j=0}^{k - 1} u_j \repr{R_j}{U} = c z \kappa(x_3)
$$
Writing $\mv{G}{i}{r}$ and $\m{U}{r}$ for the respective left-hand sides with their sums truncated to the first $r$ rounds, we can rewrite the above (after expanding for $\kappa(x_3)$) as
$$
\mv{G}{i}{k} = c \mathbf{s}_i \forall i \in [0, n)
$$
$$
\m{U}{k} = c z \prod_{j=0}^{k - 1} (1 + u_{k - 1 - j} x_3^{2^j})
$$
We can combine these equations by multiplying both sides of each instance of the first equation by $\mathbf{s}_i^{-1}$ (because $\mathbf{s}_i$ is never zero) and substituting for $c$ in the second equation, yielding the following $n$ equalities:
$$
\m{U}{k} = \mv{G}{i}{k} \cdot \mathbf{s}_i^{-1} z \prod_{j=0}^{k - 1} (1 + u_{k - 1 - j} x_3^{2^j}) \forall i \in [0, n)
$$
> **Lemma 1.** If $\m{U}{k} = \mv{G}{i}{k} \cdot \mathbf{s}_i^{-1} z \prod_{j=0}^{k - 1} (1 + u_{k - 1 - j} x_3^{2^j}) \forall i \in [0, n)$ then it follows that $\repr{P'}{U} = z \sum\limits_{i=0}^{2^k - 1} x_3^i \repv{P'}{G}{i}$ for all transcripts that do not contain bad challenges.
>
> _Proof._ It will be useful to introduce yet another abstraction defined starting with
> $$
> \z{k}{m}{i} = \mv{G}{i}{m}
> $$
> and then recursively defined for all integers $r$ such that $0 \lt r \leq k$
> For all integers $r$ such that $0 \lt r \leq k$ we have that $\mathbf{s}_{i + 2^{r - 1}} = u_{r - 1} \mathbf{s}_i \forall i \in [0, 2^{r - 1})$ by the definition of $\mathbf{s}$. This gives us $\mathbf{s}_{i+2^{r - 1}}^{-1} = \mathbf{s}_i^{-1} u_{r - 1}^{-1} \forall i \in [0, 2^{r - 1})$ as no value in $\mathbf{s}$ nor any challenge $u_r$ are zeroes. We can use this to relate one half of the equalities with the other half as so:
> Notice that $\z{r}{r}{i}$ can be rewritten as $u_{r - 1}^{-1} \repv{L_{r - 1}}{G}{i} + \z{r}{r - 1}{i} + u_{r - 1} \repv{R_{r - 1}}{G}{i}$ for all $i \in [0, 2^{r})$. Thus we can rewrite the above as
> This gives us $2^{r - 1}$ triples of maximal degree-$4$ polynomials in $X$ that agree at $u_{r - 1}$ despite having coefficients determined prior to the choice of $u_{r - 1}$. The probability that two of these polynomials would agree at $u_{r - 1}$ and yet be distinct would be $\frac{4}{|\ch|}$ by the Schwartz-Zippel lemma and so by the union bound the probability that the three of these polynomials agree and yet any of them is distinct from another is $\frac{8}{|\ch|}$. By the union bound again the probability that any of the $2^{r - 1}$ triples have multiple distinct polynomials is $\frac{2^{r - 1}\cdot8}{|\ch|}$. By restricting the challenge space for $u_{r - 1}$ accordingly we obtain $|\badch(\trprefix{\tr'}{u_r})|/|\ch| \leq \frac{2^{r - 1}\cdot8}{|\ch|}$ for integers $0 \lt r \leq k$ and thus $|\badch(\trprefix{\tr'}{u_k})|/|\ch| \leq \frac{4n}{|\ch|} \leq \epsilon$.
>
> We can now conclude an equality of polynomials, and thus of coefficients. Consider the coefficients of the constant terms first, which gives us the $2^{r - 1}$ equalities
> No value of $\mathbf{s}$ is zero, $z$ is never chosen to be $0$ and each $u_j$ is chosen so that $1 + u_{k - 1 - j} x_3^{2^j}$ is nonzero, so we can then conclude
> An identical process can be followed with respect to the coefficients of the $X^4$ term in the equalities to establish $0 = \repv{R_{r - 1}}{G}{i} \forall i \in [0, 2^{r - 1})$ contingent on $x_3$ being nonzero, which it always is. Substituting these in our equalities yields us something simpler
> which is precisely in the form we set out to demonstrate.
>
> We now proceed by induction from the case $r = k$ (which we know holds) to reach $r = 0$, which gives us
> $$
> \m{U}{0} = \z{0}{0}{0} \cdot \mathbf{s}_0^{-1} z
> $$
>
> and because $\m{U}{0} = \repr{P'}{U}$ and $\z{0}{0}{0} = \sum_{i=0}^{2^k - 1} x_3^i \repv{P'}{G}{i}$, we obtain $\repr{P'}{U} = z \sum_{i=0}^{2^k - 1} x_3^i \repv{P'}{G}{i}$, which completes the proof.
Having established that $\repr{P'}{U} = z \sum_{i=0}^{2^k - 1} x_3^i \repv{P'}{G}{i}$, and given that $x_3$ and $\repv{P'}{G}{i}$ are fixed in advance of the choice of $z$, we have that at most one value of $z \in \ch$ (which is nonzero) exists such that $\repr{P'}{U} = z \sum_{i=0}^{2^k - 1} x_3^i \repv{P'}{G}{i}$ and yet $\repr{P'}{U} \neq 0$. By restricting $|\badch(\trprefix{\tr'}{z})|/|\ch| \leq \frac{1}{|\ch|} \leq \epsilon$ accordingly we obtain $\repr{P'}{U} = 0$ and therefore that the polynomial defined by $\repr{P'}{\mathbf{G}}$ has a root at $x_3$.
By construction $P' = P - [v] \mathbf{G}_0 + [\xi] S$, giving us that the polynomial defined by $\repr{P + [\xi] S}{\mathbf{G}}$ evaluates to $v$ at the point $x_3$. We have that $v, P, S$ are fixed prior to the choice of $\xi$, and so either the polynomial defined by $\repr{S}{\mathbf{G}}$ has a root at $x_3$ (which implies the polynomial defined by $\repr{P}{\mathbf{G}}$ evaluates to $v$ at the point $x_3$) or else $\xi$ is the single solution in $\ch$ for which $\repr{P + [\xi] S}{\mathbf{G}}$ evaluates to $v$ at the point $x_3$ while $\repr{P}{\mathbf{G}}$ itself does not. We avoid the latter case by restricting $|\badch(\trprefix{\tr'}{\xi})|/|\ch| \leq \frac{1}{|\ch|} \leq \epsilon$ accordingly and can thus conclude that the polynomial defined by $\repr{P}{\mathbf{G}}$ evaluates to $v$ at $x_3$.
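The counting argument for $\xi$ can be illustrated with a toy example over a tiny hypothetical field: once $p(x_3) \neq v$ and $s(x_3) \neq 0$ are fixed, exactly one $\xi$ satisfies $p(x_3) + \xi s(x_3) = v$, so a uniformly sampled $\xi$ lands on it with probability $1/|\ch|$:

```python
# Toy illustration (tiny hypothetical field and values): with p(x_3) != v and
# s(x_3) != 0 fixed before xi is chosen, exactly one xi in the field makes
# p(x_3) + xi * s(x_3) = v.
P = 101
px3, sx3, v = 5, 7, 20   # hypothetical fixed values with px3 != v, sx3 != 0

solutions = [xi for xi in range(P) if (px3 + xi * sx3) % P == v]
assert len(solutions) == 1
```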
The remaining work deals strictly with the representations of group elements sent previously by the prover and their relationship with $P$ as well as the challenges chosen in each round of the protocol. We will simplify things first by using $p(X)$ to represent the polynomial defined by $\repr{P}{\mathbf{G}}$, as it is the case that this $p(X)$ corresponds exactly with the like-named polynomial in the protocol itself. We will make similar substitutions for the other group elements (and their corresponding polynomials) to aid in exposition, as the remainder of this proof is mainly tedious application of the Schwartz-Zippel lemma to upper bound the bad challenge space size for each of the remaining challenges in the protocol.
Recall that $P = Q' + x_4 \sum\limits_{i=0}^{n_q - 1} [x_4^i] Q_i$, and so by substitution we have $p(X) = q'(X) + x_4 \sum\limits_{i=0}^{n_q - 1} x_4^i q_i(X)$. Recall also that
$$
v = \sum\limits_{i=0}^{n_q - 1}
\left(
x_2^i
\left(
\frac
{ \mathbf{u}_i - r_i(x_3) }
{\prod\limits_{j=0}^{n_e - 1}
\left(
x_3 - \omega^{\left(
\mathbf{q_i}
\right)_j} x
\right)
}
\right)
\right)
+
x_4 \sum\limits_{i=0}^{n_q - 1} x_4^i \mathbf{u}_i
$$
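The polynomials $r_i(X)$ above are built from the lowest-degree interpolations of the prover's claimed evaluations (cf. step 10). A toy sketch of such an interpolation over a small prime field, using Lagrange's formula and hypothetical points:

```python
# Toy sketch of step 10 (hypothetical points, small prime in place of the
# field): interpolate the lowest-degree polynomial through the claimed
# evaluations using Lagrange's formula.
P = 10007

def poly_mul_linear(poly, c):
    # Multiply a little-endian polynomial by (X - c) modulo P.
    out = [0] * (len(poly) + 1)
    for k, a in enumerate(poly):
        out[k] = (out[k] - c * a) % P
        out[k + 1] = (out[k + 1] + a) % P
    return out

def evaluate(coeffs, x):
    # Horner evaluation of a little-endian coefficient list modulo P.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def lagrange_interpolate(points):
    # Lowest-degree polynomial through `points`, as little-endian coefficients.
    m = len(points)
    coeffs = [0] * m
    for i, (xi, yi) in enumerate(points):
        basis = [1]
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul_linear(basis, xj)
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P
        for d in range(m):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % P
    return coeffs

pts = [(3, 5), (12, 7), (21, 2)]   # stand-ins for (omega^{(p_i)_j} x, (a_i)_j)
s_i = lagrange_interpolate(pts)
assert all(evaluate(s_i, x) == y for x, y in pts)
assert len(s_i) == len(pts)        # degree at most n_e - 1
```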
We have already established that $p(x_3) = v$. Notice that the coefficients in the above expressions for $v$ and $P$ are fixed prior to the choice of $x_4 \in \ch$. By the Schwartz-Zippel lemma we have that at most $n_q + 1$ possible choices of $x_4$ exist such that these expressions are satisfied and yet $q_i(x_3) \neq \mathbf{u}_i$ for some $i$ or
$$
q'(x_3) \neq \sum\limits_{i=0}^{n_q - 1}
\left(
x_2^i
\left(
\frac
{ \mathbf{u}_i - r_i(x_3) }
{\prod\limits_{j=0}^{n_e - 1}
\left(
x_3 - \omega^{\left(
\mathbf{q_i}
\right)_j} x
\right)
}
\right)
\right)
$$
By restricting $|\badch(\trprefix{\tr'}{x_4})|/|\ch| \leq \frac{n_q + 1}{|\ch|} \leq \epsilon$ we can conclude that all of the aforementioned inequalities are untrue. Now we can substitute $\mathbf{u}_i$ with $q_i(x_3)$ for all $i$ to obtain
$$
q'(x_3) = \sum\limits_{i=0}^{n_q - 1}
\left(
x_2^i
\left(
\frac
{ q_i(x_3) - r_i(x_3) }
{\prod\limits_{j=0}^{n_e - 1}
\left(
x_3 - \omega^{\left(
\mathbf{q_i}
\right)_j} x
\right)
}
\right)
\right)
$$
Suppose that $q'(X)$ (which is the polynomial defined by $\repr{Q'}{\mathbf{G}}$, and is of degree at most $n - 1$) does _not_ take the form
$$\sum\limits_{i=0}^{n_q - 1}
x_2^i
\left(
\frac
{q_i(X) - r_i(X)}
{\prod\limits_{j=0}^{n_e - 1}
\left(
X - \omega^{\left(
\mathbf{q_i}
\right)_j} x
\right)
}
\right)
$$
and yet $q'(X)$ agrees with this expression at $x_3$ as we've established above. By the Schwartz-Zippel lemma this can only happen for at most $n - 1$ choices of $x_3 \in \ch$ and so by restricting $|\badch(\trprefix{\tr'}{x_3})|/|\ch| \leq \frac{n - 1}{|\ch|} \leq \epsilon$ we obtain that
$$q'(X) = \sum\limits_{i=0}^{n_q - 1}
x_2^i
\left(
\frac
{q_i(X) - r_i(X)}
{\prod\limits_{j=0}^{n_e - 1}
\left(
X - \omega^{\left(
\mathbf{q_i}
\right)_j} x
\right)
}
\right)
$$
Next we will extract the coefficients of this polynomial in $x_2$ (which are themselves polynomials in the formal indeterminate $X$) by again applying the Schwartz-Zippel lemma with respect to $x_2$; this leads to the restriction $|\badch(\trprefix{\tr'}{x_2})|/|\ch| \leq \frac{n_q}{|\ch|} \leq \epsilon$, and we obtain the following polynomials of degree at most $n - 1$ for all $i \in [0, n_q)$
$$
\frac
{q_i(X) - r_i(X)}
{\prod\limits_{j=0}^{n_e - 1}
\left(
X - \omega^{\left(
\mathbf{q_i}
\right)_j} x
\right)
}
$$
Having established that these are each non-rational polynomials of degree at most $n - 1$, we can then say (by the factor theorem) that for each $i \in [0, n_q)$ and $j \in [0, n_e)$ we have that $q_i(X) - r_i(X)$ has a root at $\omega^{\left(\mathbf{q_i}\right)_j} x$. Note that we can interpret each $q_i(X)$ as the restriction of a _bivariate_ polynomial at the point $x_1$ whose degree with respect to $x_1$ is at most $n_a + 1$ and whose coefficients consist of various polynomials $a'_i(X)$ (from the representation $\repr{A_i}{\mathbf{G}}$) as well as $h'(X)$ (from the representation $\repr{H'}{\mathbf{G}}$) and $r(X)$ (from the representation $\repr{R}{\mathbf{G}}$). By similarly applying the Schwartz-Zippel lemma and restricting the challenge space with $|\badch(\trprefix{\tr'}{x_1})|/|\ch| \leq \frac{n_a + 1}{|\ch|} \leq \epsilon$ we obtain (by construction of each $q_i(X)$ and $r_i(X)$ in steps 12 and 13 of the protocol) that the prover's claimed value of $r$ in step 9 is equal to $r(x)$; that the value $h$ computed by the verifier in step 13 is equal to $h'(x)$; and that for all $i \in [0, n_a)$ the prover's claimed values $(\mathbf{a_i})_j = a'_i(\omega^{(\mathbf{p_i})_j} x)$ for all $j \in [0, n_e)$.
By construction of $h'(X)$ (from the representation $\repr{H'}{\mathbf{G}}$) in step 7 we know that $h'(x) = h(x)$, where by $h(X)$ we refer to the polynomial of degree at most $(n_g - 1) \cdot (n - 1)$ whose coefficients correspond to the concatenated representations of each $\repr{H_i}{\mathbf{G}}$. As before, suppose that $h(X)$ does _not_ take the form $g'(X) / t(X)$. Then because $h(X)$ is determined prior to the choice of $x$, by the Schwartz-Zippel lemma the two could agree at no more than $(n_g - 1) \cdot (n - 1)$ points if they were not equal as polynomials. By restricting again $|\badch(\trprefix{\tr'}{x})|/|\ch| \leq \frac{(n_g - 1) \cdot (n - 1)}{|\ch|} \leq \epsilon$ we obtain $h(X) = g'(X) / t(X)$, and because $h(X)$ is a non-rational polynomial, by the factor theorem we obtain that $g'(X)$ vanishes over the domain $D$.
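This divisibility step can be checked concretely. Assuming (as is typical for such protocols, though not stated in this excerpt) that $D$ is a multiplicative subgroup of order $n$ with vanishing polynomial $t(X) = X^n - 1$, a polynomial that vanishes over $D$ divides exactly by $t(X)$. A toy sketch with hypothetical small parameters:

```python
# Toy check of the factor-theorem step with hypothetical parameters: assume D
# is the order-n multiplicative subgroup so that t(X) = X^n - 1. A g'(X) that
# vanishes over D is exactly divisible by t(X).
P = 17
n = 4
w = 4                                    # an element of order 4 modulo 17
D = [pow(w, i, P) for i in range(n)]

def evaluate(coeffs, x):
    # Horner evaluation of a little-endian coefficient list modulo P.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

h = [3, 1, 4, 1, 5]                      # an arbitrary quotient polynomial
# Construct g'(X) = (X^n - 1) * h(X); it vanishes over D by construction.
g = [0] * (n + len(h))
for i, c in enumerate(h):
    g[i] = (g[i] - c) % P
    g[i + n] = (g[i + n] + c) % P
assert all(evaluate(g, d) == 0 for d in D)

# Long division of g'(X) by t(X) = X^n - 1: remainder zero, quotient h(X).
rem = g[:]
quot = [0] * (len(g) - n)
for i in range(len(g) - 1, n - 1, -1):
    quot[i - n] = rem[i]
    rem[i - n] = (rem[i - n] + rem[i]) % P   # subtracting c*(X^n - 1) adds c below
    rem[i] = 0
assert quot == h and all(r == 0 for r in rem)
```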
We now have that $g'(X)$ vanishes over $D$, but we wish to show that $g(X, C_0, C_1, \cdots)$ vanishes over $D$ at all points to complete the proof. This involves applying the same technique to each of the remaining challenges in sequence: since the polynomial $g(\cdots)$ has degree at most $n_g \cdot (n - 1)$ in any indeterminate by definition, and because each polynomial $a_i(X, C_0, C_1, ..., C_{i - 1}, \cdots)$ is determined prior to the choice of the concrete challenge $c_i$, by similarly bounding $|\badch(\trprefix{\tr'}{c_i})|/|\ch| \leq \frac{n_g \cdot (n - 1)}{|\ch|} \leq \epsilon$ we ensure that $g(X, C_0, C_1, \cdots)$ vanishes over $D$, completing the proof.
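As a closing sanity check on the tool used at every challenge above, the Schwartz-Zippel bound itself can be verified by brute force over a tiny hypothetical field: two distinct polynomials of degree at most $4$ agree at no more than $4$ points, so a uniform challenge exposes the disagreement with probability at least $1 - 4/|\ch|$:

```python
# Brute-force check of the Schwartz-Zippel bound over a tiny hypothetical
# field: two distinct polynomials of degree at most 4 agree at <= 4 points.
P = 101

def evaluate(coeffs, x):
    # Horner evaluation of a little-endian coefficient list modulo P.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

f = [3, 1, 4, 1, 5]   # degree 4
g = [2, 7, 1, 8, 2]   # a distinct degree-4 polynomial
agreements = sum(1 for x in range(P) if evaluate(f, x) == evaluate(g, x))
# f - g is a nonzero polynomial of degree <= 4, so it has at most 4 roots.
assert agreements <= 4
```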