david wong

Hey! I'm David, cofounder of zkSecurity and the author of the Real-World Cryptography book. I was previously a crypto architect at O(1) Labs (working on the Mina cryptocurrency), before that I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant for the Cryptography Services of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting.

How does PLONK work? Part 1: What's PLONK? posted August 2021

I recently got into general-purpose zero-knowledge proof systems (cryptographic primitives that allow you to prove the execution of a program without revealing some of the inputs), specifically the state-of-the-art PLONK proof system. This is a series of videos I made to explain what I understood and learned in the past few months. There might be some inaccuracies, so I apologize for them in advance. You can check all the videos via the playlist here: https://www.youtube.com/watch?v=RUZcam_jrz0&list=PLBJMt6zV1c7Gh9Utg-Vng2V6EYVidTFCC

In this first video, I simply explain what general-purpose zero-knowledge proofs are, specifically zk-SNARKs, and what PLONK is.

Part 2 is here.


What is an inner product argument? Part 1 posted August 2021

The inner product argument is the following construction: given the commitments (for now let's say the hash) of two vectors $\vec{a}$ and $\vec{b}$ of size $n$ and with entries in some field $\mathbb{F}$, prove that their inner product $\langle \vec{a}, \vec{b} \rangle$ is equal to $z$.

There exist different variants of this inner product argument. In some versions, none of the values ($\vec{a}$, $\vec{b}$, and $z$) are given, only commitments. In the variant that interests us, and which I explain here, only $\vec{a}$ is unknown.

How is that useful?

Inner product arguments are useful for several things, but what we're using them for in Mina is polynomial commitments. The rest of this post won't make too much sense if you don't know what a polynomial commitment is, but briefly: it allows you to commit to a polynomial $f$ and then later prove its evaluation at some point $s$. Check my post on Kate polynomial commitments for more on polynomial commitment schemes.

How does that translate to the inner product argument though? First, let's see our polynomial $f$ as a vector of coefficients:

$$ \vec{f} = (f_0, \cdots, f_n) \text{ such that } f(x) = f_0 + f_1 x + f_2 x^2 + \cdots + f_n x^n $$

Then notice that

$$ f(s) = \langle \vec{f}, (1, s, s^2, \cdots, s^{n}) \rangle $$

And here's our inner product again.
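Here's a quick Python sanity check of that correspondence, over a small prime field picked purely for illustration:

```python
# Sanity check: evaluating f at s equals the inner product of f's
# coefficient vector with the powers of s. Toy prime field, illustrative only.
p = 101                    # a small prime, for illustration

def inner_product(u, v):
    return sum(x * y for x, y in zip(u, v)) % p

f = [5, 3, 0, 1]           # f(x) = 5 + 3x + x^3
s = 7
powers_of_s = [pow(s, i, p) for i in range(len(f))]

# direct evaluation of f(s)
f_at_s = sum(c * pow(s, i, p) for i, c in enumerate(f)) % p
assert f_at_s == inner_product(f, powers_of_s)
print(f_at_s)              # 66
```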

The idea behind the Bootleproof type of inner product argument

The inner product argument protocol I'm about to explain was invented by Bootle et al. It was later optimized in the Bulletproof paper (hence why we unofficially call the first paper bootleproof), and then optimized some more in the Halo paper. It's this last optimization that I'll explain here.

A naive approach

So before I get into the weeds, what's the high-level? Well first, what's a naive way to prove that we know the pre-image of a hash $h$, the vector $\vec{a}$, such that $\langle\vec{a}, \vec{b}\rangle = z$? We could just reveal $\vec{a}$ and let anyone verify that indeed, hashing it gives out $h$, and that it also verifies the equation $\langle\vec{a}, \vec{b}\rangle = z$.

$$ \boxed{\begin{aligned} & \langle \vec{a}, \vec{b} \rangle = z\\ & \text{given } \vec{b} \text{, } z \text{, and a hash of } \vec{a} \end{aligned}} \; \overleftarrow{\text{open proof}} \; \boxed{\vec{a}} $$

Obviously, we have to reveal $\vec{a}$ itself, which is not great. But we'll deal with that later, trust me. What we want to tackle first here is the proof size, which is the size of the vector $\vec{a}$. Can we do better?

Reducing the problem to a smaller problem to prove

The inner product argument reduces the opening proof by using an intermediate reduction proof:

$$ \boxed{\begin{aligned} & \langle \vec{a}, \vec{b} \rangle = z\\ & \text{given } \vec{b} \text{, } z \text{, and a hash of } \vec{a} \end{aligned}} \; \overleftarrow{\text{reduction proof}} \; \boxed{\begin{aligned} & \langle \vec{a'}, \vec{b'} \rangle = z'\\ & \text{ given } \vec{b'} \text{, } z' \text{, and a hash of } \vec{a'} \end{aligned}} \; \overleftarrow{\text{open proof}} \; \boxed{\vec{a'}} $$

Where the size of $\vec{a'}$ is half the size of $\vec{a}$, and as such the final opening proof ($\vec{a'}$) is half the size of our naive approach.

The reduction proof is where most of the magic happens, and this reduction can be applied many times ($\log_2(n)$ times to be exact) to get a final opening proof of size 1. Of course the entire proof is not just the final opening proof of size 1, but all the elements involved in the reduction proofs. It can still be much smaller than the original proof of size $n$.

So most of the proof size comes from the multiple reduction subproofs that you'll end up creating on the way. Our proof is really a collection of miniproofs or subproofs.

One last thing before we get started: Pedersen hashing and commitments

To understand the protocol, you need to understand commitments. I've used hashing so far, but hashing with a hash function like SHA-3 is not great as it has no convenient mathematical structure. We need algebraic commitments, which will allow us to prove things about a committed value without revealing it. Usually what we want is some homomorphic property that will allow us to add commitments together and/or multiply them together.

For now, let's see a simple non-hiding commitment: a Pedersen hash. To commit to a single value $x$ simply compute:

$$ x G $$

where the discrete logarithm of $G$ is unknown. To open the commitment, simply reveal the value $x$.

We can also perform multi-commitments with Pedersen hashing. For a vector of values $(x_1, \cdots, x_k)$, compute:

$$ x_1 G_1 + \cdots + x_k G_k $$

where each $G_i$ is distinct and has an unknown discrete logarithm as well. I'll often shorten the last formula as the inner product $\langle \vec{x}, \vec{G} \rangle$ for $\vec{x} = (x_1, \cdots, x_k)$ and $\vec{G} = (G_1, \cdots, G_k)$. To reveal a commitment, simply reveal the values $x_i$.

Pedersen hashing allows commitments that are not hiding, but are binding: you can't open them to a different value than the one originally committed. And as you can see, adding the commitments of $x$ and $y$ gives us the commitment of $x+y$:

$$xG + yG = (x+y)G$$

which will be handy in our inner product argument protocol.
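Here's a toy Python sketch of these commitments and their homomorphic property. I write the group multiplicatively ($g^x$ instead of $xG$) and use random elements of $\mathbb{Z}_p^*$ as the basis; these parameters are illustrative stand-ins, not a secure instantiation:

```python
import random

# Toy group: Z_p^* for a Mersenne prime p. We pretend the discrete logs of
# the basis elements are unknown; a real scheme would use an elliptic curve.
p = 2**127 - 1
random.seed(0)
G = [random.randrange(2, p) for _ in range(4)]   # "points" G_1..G_4

def commit(xs):
    """Pedersen multi-commitment <xs, G>, written multiplicatively."""
    acc = 1
    for x, g in zip(xs, G):
        acc = acc * pow(g, x, p) % p
    return acc

# single-value commitments add up: g^x * g^y = g^(x+y), i.e. xG + yG = (x+y)G
x, y = 42, 58
assert pow(G[0], x, p) * pow(G[0], y, p) % p == pow(G[0], x + y, p)

# and the same homomorphism holds for multi-commitments
a = [1, 2, 3, 4]
b = [5, 6, 7, 8]
a_plus_b = [u + v for u, v in zip(a, b)]
assert commit(a) * commit(b) % p == commit(a_plus_b)
```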

The protocol

Set up

Here are the settings of our protocol. Known only to the prover, is the secret vector

$$\vec{a} = (a_1, a_2, a_3, a_4)$$

The rest is known to both:

  • $\vec{G} = (G_1, G_2, G_3, G_4)$, a basis for Pedersen hashing
  • $A = \langle \vec{a}, \vec{G} \rangle$, the commitment of $\vec{a}$
  • $\vec{b} = (b_1, b_2, b_3, b_4)$, the powers of some value $s$ such that $\vec{b} = (1, s, s^2, s^3)$
  • the result of the inner product $z = \langle \vec{a}, \vec{b} \rangle$

For the sake of simplicity, let's pretend that this is our problem, and we just want to halve the size of our secret vector $\vec{a}$ before revealing it. As such, we will only perform a single round of reduction. But you can also think of this step as being already the reduction of another problem twice as large.

We can picture the protocol as follows:

  1. The prover first sends a commitment to the polynomial $f$.
  2. The verifier sends a point $s$, asking for the value $f(s)$. To help the prover perform a proof of correct evaluation, they also send a random challenge $x$.
  3. The prover sends the result of the evaluation, $z$, as well as a proof.
Prover->Verifier: com(f)
Verifier->Prover: s, random x
Prover->Verifier: z = f(s), proof of opening

Does that make sense? Of course what's interesting to us is the proof, and how the prover uses that random $x$.

Reduced problem

First, the prover cuts everything in half. Then they use $x$ to construct linear combinations of these cuts:

  • $\vec{a'} = x^{-1} \begin{pmatrix}a_1 \\ a_2\end{pmatrix} + x \begin{pmatrix}a_3 \\ a_4\end{pmatrix}$
  • $\vec{b'} = x \begin{pmatrix}b_1 \\ b_2\end{pmatrix} + x^{-1} \begin{pmatrix}b_3 \\ b_4\end{pmatrix}$
  • $\vec{G'} = x \begin{pmatrix}G_1 \\ G_2\end{pmatrix} + x^{-1} \begin{pmatrix}G_3 \\ G_4\end{pmatrix}$

This is how the problem is reduced to $\langle \vec{a'}, \vec{b'} \rangle = z'$.

At this point, the prover can send $\vec{a'}$, $\vec{b'}$, and $z'$ and the verifier can check if indeed $\langle \vec{a'}, \vec{b'} \rangle = z'$. But that wouldn't make much sense, would it? Here we also want:

  • a proof that proving that statement is the same as proving the previous statement ($\langle \vec{a}, \vec{b} \rangle = z$)
  • a way for the verifier to compute $z'$, $\vec{b'}$, and $A'$ (the new commitment) by themselves.

The actual proof

The verifier can compute $\vec{b'}$ as they have everything they need to do so.

What about $A'$, the commitment of $\vec{a'}$ under the new basis $\vec{G'}$? It should be the following value:

$$ \begin{align} A' =& \langle \vec{a'}, \vec{G'} \rangle \\ =& (x^{-1} a_1 + x a_3)(x G_1 + x^{-1} G_3) + (x^{-1} a_2 + x a_4)(x G_2 + x^{-1}G_4) \\ =& A + x^{-2} (a_1 G_3 + a_2 G_4) + x^2 (a_3 G_1 + a_4 G_2) \\ =& A + x^{-2} L_a + x^{2} R_a \end{align} $$

So to compute this new commitment, the verifier needs:

  • the previous commitment $A$, which they already have
  • some powers of $x$, which they can compute
  • two curve points $L_a$ and $R_a$, which the prover will have to provide to them
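Continuing the toy multiplicative notation from the Pedersen section ($g^x$ instead of $xG$), here's a sketch checking that this recomputed $A'$ matches a direct commitment of $\vec{a'}$ under the folded basis $\vec{G'}$ (illustrative parameters, not a real instantiation):

```python
import random

p = 2**127 - 1
q = p - 1                      # group order; our toy "scalars" live mod q
random.seed(2)
G = [random.randrange(2, p) for _ in range(4)]
a = [10, 20, 30, 40]

def multi_commit(xs, basis):
    acc = 1
    for x, g in zip(xs, basis):
        acc = acc * pow(g, x % q, p) % p
    return acc

A = multi_commit(a, G)

x = 5                          # verifier's challenge (coprime to q here)
x_inv = pow(x, -1, q)          # x^{-1} mod the group order

# prover folds the secret vector and the basis
a_folded = [(x_inv * a[0] + x * a[2]) % q, (x_inv * a[1] + x * a[3]) % q]
G_folded = [pow(G[0], x, p) * pow(G[2], x_inv, p) % p,
            pow(G[1], x, p) * pow(G[3], x_inv, p) % p]

# cross terms L_a = a_1 G_3 + a_2 G_4 and R_a = a_3 G_1 + a_4 G_2
L_a = pow(G[2], a[0], p) * pow(G[3], a[1], p) % p
R_a = pow(G[0], a[2], p) * pow(G[1], a[3], p) % p

# verifier recomputes A' = A + x^{-2} L_a + x^2 R_a, written multiplicatively
A_new = A * pow(L_a, x_inv * x_inv % q, p) * pow(R_a, x * x % q, p) % p
assert A_new == multi_commit(a_folded, G_folded)
```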

What about $z'$? Recall:

  • $\vec{a'} = \begin{pmatrix}x^{-1} a_1 + x a_3 \\ x^{-1} a_2 + x a_4 \end{pmatrix}$
  • $\vec{b'} = \begin{pmatrix}x b_1 + x^{-1} b_3 \\ x b_2 + x^{-1} b_4 \end{pmatrix}$

So the new inner product should be:

$$ \begin{align} z' =& \langle \vec{a'}, \vec{b'} \rangle \\ =& \langle \begin{pmatrix}x^{-1} a_1 + x a_3 \\ x^{-1} a_2 + x a_4 \end{pmatrix}, \begin{pmatrix}x b_1 + x^{-1} b_3 \\ x b_2 + x^{-1} b_4 \end{pmatrix} \rangle \\ =& (a_1b_1 + a_2b_2 + a_3b_3 + a_4b_4) + x^{-2} (a_1b_3 + a_2b_4) + x^2 (a_3b_1 + a_4b_2) \\ =& z + x^{-2} L_z + x^2 R_z \end{align} $$

Similarly to $A'$, the verifier can recompute $z'$ from the previous value $z$ and two scalar values $L_z$ and $R_z$ which the prover needs to provide.
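And the same sanity check for $z'$, this time with plain field arithmetic (toy values again):

```python
p = 101

def ip(u, v):
    return sum(x * y for x, y in zip(u, v)) % p

a = [10, 20, 30, 40]
s = 2
b = [pow(s, i, p) for i in range(4)]    # (1, s, s^2, s^3)
z = ip(a, b)

x = 5                                    # verifier's random challenge
x_inv = pow(x, p - 2, p)                 # x^{-1} via Fermat's little theorem

# prover folds both vectors in half
a_folded = [(x_inv * a[0] + x * a[2]) % p, (x_inv * a[1] + x * a[3]) % p]
b_folded = [(x * b[0] + x_inv * b[2]) % p, (x * b[1] + x_inv * b[3]) % p]

# cross terms L_z = a_1 b_3 + a_2 b_4 and R_z = a_3 b_1 + a_4 b_2
L_z = (a[0] * b[2] + a[1] * b[3]) % p
R_z = (a[2] * b[0] + a[3] * b[1]) % p

# verifier recomputes z' = z + x^{-2} L_z + x^2 R_z, and it matches
z_new = (z + pow(x_inv, 2, p) * L_z + pow(x, 2, p) * R_z) % p
assert z_new == ip(a_folded, b_folded)
```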

So in the end, the proof has become:

  • the vector $\vec{a'}$ which is half the size of $\vec{a}$
  • the $L_a, R_a$ curve points (around two field elements, if compressed)
  • the $L_z, R_z$ scalar values

We can update our previous diagram:

Prover->Verifier: com(f)
Verifier->Prover: s, random x
Prover->Verifier: z = f(s)
Prover->Verifier: a', L_a, R_a, L_z, R_z

In our example, the naive proof was to reveal $\vec{a}$, which was 4 field elements. We are now revealing instead 2 + 2 + 2 = 6 field elements. This is not great, but if $\vec{a}$ were much larger (say 128), a single halving would reduce the proof to 64 + 2 + 2 = 68 field elements. Not bad, no? We can do better though... Stay tuned for the next post.


Understanding PLONK posted July 2021

PLONK is the state of the art when it comes to general-purpose proof systems. While it was released in 2019, the paper has recently received some updates, and the scheme is still evolving (with Aztec announcing an UltraPLONK version coming soon). This is the scheme that we use at Mina to compress the size of the blockchain from gigabytes to a fixed size of a dozen kilobytes.

While I don't think the core ideas are the hardest to understand, the scheme compresses a myriad of optimizations, which makes it hard to parse. In this post I hope to add some clarity to some aspects of the scheme. Note that I assume that you have some knowledge of how PLONK works.

How PLONK works, the short version

Ultimately, the idea of PLONK is to prove that some polynomial $f(x)$ vanishes on some domain $H \subset \mathbb{F}$ (and I will ignore the permutation argument, which is just another proof). To prove that, we reduce the problem to some other problem. Incrementally, it looks like this:

  • Proving the previous statement is equivalent to proving that the polynomial is divisible by $Z_H(x)$, the polynomial that has all the elements of $H$ as roots (also called vanishing polynomial).
  • Which is equivalent to proving the following identity (for some quotient polynomial $t$): $$f(x) = t(x) \cdot Z_H(x) \; \; \; \forall x \in \mathbb{F}$$
  • Which is equivalent to proving the identity on some random point $z$ (thanks to the Schwartz-Zippel lemma): $$f(z) = t(z) \cdot Z_H(z)$$

To prove the last statement, the prover makes use of a polynomial commitment scheme (specifically, the KZG scheme) to commit to the polynomials $f$ and $t$. The prover then sends the commitments to the verifier. At that point, the verifier has to check that for some random point $z$

$$ f(z) = t(z) \cdot Z_H(z) $$

This is done by sending a random point $z$ to the prover and doing an "opening" of the commitments at this point: the prover sends the values $f(z)$ and $t(z)$ as well as a proof that these are the correct evaluations.
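Here's a toy Python sketch of that chain of reductions, with a made-up domain and quotient (no commitments involved, just the polynomial arithmetic underneath):

```python
import random

p = 101                                  # toy field, illustrative only
H = [1, 2, 3]                            # toy domain

def poly_eval(c, x):
    return sum(ci * pow(x, i, p) for i, ci in enumerate(c)) % p

def poly_mul(u, v):
    out = [0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            out[i + j] = (out[i + j] + ui * vj) % p
    return out

def poly_div(num, den):
    """Euclidean division over F_p; returns (quotient, remainder)."""
    num = num[:]
    quot = [0] * (len(num) - len(den) + 1)
    inv_lead = pow(den[-1], p - 2, p)
    for i in reversed(range(len(quot))):
        quot[i] = num[i + len(den) - 1] * inv_lead % p
        for j, dj in enumerate(den):
            num[i + j] = (num[i + j] - quot[i] * dj) % p
    return quot, num[:len(den) - 1]

# vanishing polynomial Z_H(x) = (x - 1)(x - 2)(x - 3)
Z_H = [1]
for h in H:
    Z_H = poly_mul(Z_H, [-h % p, 1])

# some f that vanishes on H: here f = Z_H * (7 + 5x + x^2)
f = poly_mul(Z_H, [7, 5, 1])

t, rem = poly_div(f, Z_H)
assert all(c == 0 for c in rem)          # f is divisible by Z_H

z = random.randrange(p)                  # the verifier's random point
assert poly_eval(f, z) == poly_eval(t, z) * poly_eval(Z_H, z) % p
```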

Prover->Verifier: com(f), com(t)
Note right of Verifier: generates random z
Verifier->Prover: z
Prover->Verifier: f(z), t(z)
Prover->Verifier: proofs of opening
Note right of Verifier: checks that f(z) = t(z)Z_H(z)

This is in essence the PLONK protocol, except that this is not really what happens in the paper...

More reductions

The newer PLONK actually does one more reduction of the last statement:

  • As per the previous section: we want to prove that $$f(z) = t(z) \cdot Z_H(z)$$
  • Which is equivalent to proving that $z$ is a root of the polynomial $$f(x) - t(x) \cdot Z_H(x)$$
  • Since the verifier already knows one of the polynomials ($Z_H$), they can evaluate it in advance. So the previous statement is equivalent to proving that $z$ is a root of $$r(x) = f(x) - t(x) \cdot Z_H(z)$$

These last two steps are an optimization (called Maller's optimization) that removes the need for the prover to send $t(z)$, as the verifier can use the commitment to $t$ to produce a commitment to $r$ (to verify the opening proof).

These additional reductions moved us from a protocol in which the prover sends openings to let the verifier check an identity by themselves, to a protocol where the prover simply sends openings.

Prover->Verifier: com(f), com(t)
Note right of Verifier: generates random z
Verifier->Prover: z
Prover->Verifier: f(z), r(z) = 0
Prover->Verifier: proofs of opening
Note right of Verifier: reconstruct r(x) and validate opening proofs

To verify the opening of $r$ for $x = z$, the verifier will have to reconstruct a commitment to $r$ first. That's easy, it is:

$$com(r) = com(f) - com(t) \cdot Z_H(z)$$

which will use:

  • the commitment to $f$ received during the protocol
  • the commitment to $t$ received during the protocol
  • the evaluation of $Z_H(x)$ at $x=z$ which they can do by themselves

Not so fast... t is too large

If you've read PLONK, you've noticed that the prover actually doesn't send a commitment to $t$ directly, because $t$ is too large and polynomial commitment schemes have an upper bound fixed during the trusted setup. (By the way, $t$ is too large because the permutation argument makes it three times as large due to the three witness polynomials.) To circumvent that limitation, the polynomial $t$ is split into three smaller polynomials $t_{lo}, t_{mid}, t_{hi}$ such that:

$$ t(x) = t_{lo}(x) + x^n \cdot t_{mid}(x) + x^{2n} \cdot t_{hi}(x) $$

This means that in our previous protocol, we can't prove directly that $z$ is a root of

$$r(x) = f(x) - t(x) \cdot Z_H(z)$$

instead we have to prove the equivalent that $z$ is a root of

$$r(x) = f(x) - [t_{lo}(x) + x^n \cdot t_{mid}(x) + x^{2n} \cdot t_{hi}(x)] \cdot Z_H(z)$$

This is not great, as the verifier cannot produce a commitment to $r$ anymore. The reason is that $x^n$ and $x^{2n}$ cannot be committed to, as they're larger than the upper bound of our polynomial commitment. Instead, notice that the verifier already knows these values, so they can pre-evaluate them at $z$ and ask instead for a proof that:

$$r(x) = f(x) - [t_{lo}(x) + z^n \cdot t_{mid}(x) + z^{2n} \cdot t_{hi}(x)] \cdot Z_H(z)$$

which is a fine request, as the verifier can produce the commitment of $r$ needed to verify the opening proof:

$$ com(r) = com(f) - [com(t_{lo}) + z^n \cdot com(t_{mid}) + z^{2n} \cdot com(t_{hi})] \cdot Z_H(z) $$
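The split itself is easy to check in code; here's a quick sketch with made-up coefficients:

```python
p = 101                              # toy field
n = 4
t = list(range(1, 3 * n + 1))        # 3n made-up coefficients, low degree first

t_lo, t_mid, t_hi = t[:n], t[n:2*n], t[2*n:]

def ev(c, x):
    return sum(ci * pow(x, i, p) for i, ci in enumerate(c)) % p

# evaluating the three limbs at z and recombining with z^n, z^2n gives t(z)
z = 7
recombined = (ev(t_lo, z) + pow(z, n, p) * ev(t_mid, z)
              + pow(z, 2 * n, p) * ev(t_hi, z)) % p
assert recombined == ev(t, z)
```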

At this point, the protocol looks more like this:

Prover->Verifier: com(f)
Prover->Verifier: com(t_lo), com(t_mid), com(t_hi)
Note right of Verifier: generates random z
Verifier->Prover: z
Prover->Verifier: f(z), r(z) = 0
Prover->Verifier: proofs of opening
Note right of Verifier: reconstruct r(x) and validate opening proofs

Uh-oh, what about f?

The big proof in PLONK really boils down to two things:

  1. The permutation argument, which links the wires in our circuit. I ignore this proof in the post.
  2. The main polynomial $f$, which is our circuit.

Since the polynomial $f$ needs to be constructed such that:

  • it does not leak any non-public information to the verifier
  • it does not allow the prover to change fixed parts of the circuit

the prover and the verifier perform a "polynomial dance" to construct the polynomial together. The end product sort of looks like this:

$$ f(x) = a(x) q_L(x) + b(x) q_R(x) + q_M(x) a(x) b(x) + q_O(x) c(x) + q_C(x) $$

where $a, b, c$ are private polynomials that the prover constructs, commits, and sends to the verifier; and $q_L, q_R, q_M, q_O, q_C$ are public polynomials (the selector polynomials) that both the verifier and the prover can construct (and commit to if necessary).

So the end protocol looks more like this:

Prover->Verifier: com(a), com(b), com(c)
Prover->Verifier: com(t_lo), com(t_mid), com(t_hi)
Note right of Verifier: generates random z
Verifier->Prover: z
Prover->Verifier: a(z), b(z), c(z), r(z) = 0
Prover->Verifier: proofs of opening
Note right of Verifier: reconstruct r(x) and validate opening proofs

And as in the previous section, the verifier needs to reconstruct a commitment to $r$ before being able to ask for an opening, which is now impossible, as we're dealing with multiplications of commitments:

$$ \begin{align} r(x) = \; &a(x) q_L(x) + b(x) q_R(x) + a(x) b(x) q_M(x) + c(x) q_O(x) + q_C(x) \\ & - [t_{lo}(x) + x^n \cdot t_{mid}(x) + x^{2n} \cdot t_{hi}(x)] \cdot Z_H(z) \end{align} $$

but since the prover sends the evaluations of $a, b, c$ at $z$ (with proofs), the verifier can use that to simplify the polynomial $r$ to:

$$ \begin{align} r(x) = \; &a(z) q_L(x) + b(z) q_R(x) + a(z) b(z) q_M(x) + c(z) q_O(x) + q_C(x) \\ & - [t_{lo}(x) + x^n \cdot t_{mid}(x) + x^{2n} \cdot t_{hi}(x)] \cdot Z_H(z) \end{align} $$

Finally, the verifier can produce the commitment of $r$ as:

$$ \begin{align} com(r) = \; &a(z) com(q_L) + b(z) com(q_R) + a(z) b(z) com(q_M) + c(z) com(q_O) + com(q_C) \\ & - [com(t_{lo}) + z^n \cdot com(t_{mid}) + z^{2n} \cdot com(t_{hi})] \cdot Z_H(z) \end{align} $$

There's much more to PLONK. I've skipped the circuit part, the permutation argument, I've also ignored the big pairing equation at the end. These will be subjects for another post :)


Maller optimization to reduce proof size posted July 2021

In the PLONK paper, they make use of an optimization from Mary Maller in order to reduce the proof size. This is a note explaining this optimization. If you have no idea what these words are, you might want to skip reading this post :)

Explanation

Maller's optimization is used in the "polynomial dance" between the prover and the verifier to reduce the number of openings the prover sends.

Recall that the polynomial dance is the process where the verifier and the prover form polynomials together so that:

  1. the prover doesn't leak anything important to the verifier
  2. the verifier doesn't give the prover too much freedom

In the dance, the prover can additionally perform some steps that will keep the same properties but with reduced communication.


Let's see the protocol where Prover wants to prove to Verifier that

$$\forall x \in \mathbb{F}, \; h_1(x)h_2(x) - h_3(x) = 0$$

given commitments of $h_1, h_2, h_3$.

Note left of Prover: commits to h1, h2, h3
Prover->Verifier: com(h1), com(h2), com(h3)
Note right of Verifier: generates random point s
Verifier-->Prover: s
Note left of Prover: evaluates at point s
Prover->Verifier: h1(s), h2(s), h3(s)
Prover->Verifier: 3 proofs of openings
Note right of Verifier: verifies that h1(s)h2(s) - h3(s) = 0

A shorter proof exists. Essentially, if the verifier already has the opening $h_1(s)$, they can reduce the problem to showing that

$$ \forall x \in \mathbb{F}, \; L(x) = h_1(\mathbf{s})h_2(x) - h_3(x) = 0$$

given commitments of $h_1, h_2, h_3$ and the evaluation of $h_1$ at a point $s$.

Note left of Prover: commits to h1, h2, h3
Prover->Verifier: com(h1), com(h2), com(h3)
Note right of Verifier: generates random point s
Verifier-->Prover: s
Note left of Prover: evaluates at point s
Prover->Verifier: h1(s), L(s)
Prover->Verifier: 2 proofs of openings
Note right of Verifier: forms polynomial com(L) = h1(s)com(h2) - com(h3)
Note right of Verifier: checks that L(s) = 0
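Here's a toy run of that reconstruction in Python, reusing the multiplicative Pedersen-style coefficient commitments sketched in the inner product argument post ($g^x$ instead of $xG$; the parameters are illustrative, and the exponent arithmetic lives mod the group order rather than a proper scalar field):

```python
import random

p = 2**127 - 1
q = p - 1                            # group order; toy "scalars" live mod q
random.seed(1)
G = [random.randrange(2, p) for _ in range(4)]

def commit(coeffs):
    """Commit to a polynomial's coefficients: the product of G_i^{c_i}."""
    acc = 1
    for c, g in zip(coeffs, G):
        acc = acc * pow(g, c % q, p) % p
    return acc

def ev(c, x):
    return sum(ci * pow(x, i, q) for i, ci in enumerate(c)) % q

h1 = [3, 1, 4, 1]                    # made-up polynomials, low degree first
h2 = [2, 7, 1, 8]
h3 = [5, 0, 2, 6]

s = random.randrange(q)              # verifier's random point
h1_s = ev(h1, s)                     # the single opening the prover provides

# prover's side: L(x) = h1(s) h2(x) - h3(x), committed directly
L = [(h1_s * u - v) % q for u, v in zip(h2, h3)]

# verifier's side: com(L) = h1(s) com(h2) - com(h3), written multiplicatively
# (raising to q - 1 is inversion in the group)
com_L = pow(commit(h2), h1_s, p) * pow(commit(h3), q - 1, p) % p
assert com_L == commit(L)
```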

Notes

Why couldn't the prover open the polynomial $L'$ directly?

$$L'(x) = h_1(x)h_2(x) - h_3(x)$$

By doing

Note left of Prover: commits to h1, h2, h3
Prover->Verifier: com(h1), com(h2), com(h3)
Note right of Verifier: generates random point s
Verifier-->Prover: s
Note left of Prover: evaluates at point s
Prover->Verifier: L'(s), 1 proof of opening
Note right of Verifier: forms polynomial com(L') = com(h1)com(h2) - com(h3)
Note right of Verifier: verifies that h1(s)h2(s) - h3(s) = 0

The problem here is that you can't multiply the commitments together without using a pairing (if you're using a pairing-based polynomial commitment scheme), and you can only use that pairing once in the protocol.

If you're using an inner-product-based commitment, you can't even multiply commitments anyway.

Appendix: Original explanation from the PLONK paper

https://eprint.iacr.org/2019/953.pdf

For completeness, see lemma 4.7 of the paper.


Pairing-based polynomial commitments and Kate polynomial commitments posted June 2021

There's this thing called a Kate polynomial commitment, which is a polynomial commitment primitive that makes use of pairings. There's an excellent post from Dankrad which I would recommend reading instead of this post. I wrote this as a shorter summary of how you can commit to a polynomial, and then prove any evaluation $f(x) = y$.

Here's how it works:

You have a polynomial $f(x) = x^2 + 3x$

and some public parameters:

$$ SRS = \{[1], [s], [s^2], [s^3]\} = \{G, sG, s^2 G, s^3 G\} $$

where $[x] := xG$ for some generator $G$ of an elliptic curve group.

and $s$ is toxic waste (something that no one should know), hidden behind the elliptic curve point $G$ (some people call that "hidden in the exponent").

to commit to $f$

To commit to this polynomial, evaluate it at the unknown point $s$. You can do that by playing with the $SRS$:

$$ [f(s)] := [s^2] + 3 [s] = s^2 G + 3 sG = (s^2 + 3s)G $$

to prove that $f(\zeta) = a$

One day, the verifier asks "what's the evaluation at $\zeta$?" And the prover responds by sending the answer, $a$, and a proof ($[h(s)]$, see below).

The idea behind the proof

Notice that because $\zeta$ is a root of $f(x)-f(\zeta)$, then for some polynomial $h(x)$:

$$ f(x) - f(\zeta) = (x-\zeta) \cdot h(x) $$

Due to this, $h(x) = \frac{f(x)-f(\zeta)}{x-\zeta}$ must be a valid polynomial.

At a high-level:

  • the verifier will compute what they think $[h(s)]$ should be, for some random point $s$
  • the prover will send the actual value $[h(s)]$
  • the verifier will check if they match

This works because the Schwartz-Zippel lemma tells us that two distinct polynomials differ at most points.

The proof

Here's the protocol:

  1. the prover sends the verifier a commitment $[\frac{f(s)-f(\zeta)}{s-\zeta}]=[h(s)]$ evaluated at some random point $s$ (the toxic waste).
  2. the verifier constructs a similar $[h(s)]$ but with the expected value of $f(\zeta)$ instead: $[\frac{f(s) - a}{s-\zeta}]$. The verifier then checks if it's equal to $[h(s)]$.

Note:

  1. The prover can compute $[h(s)]$ easily, because they can just compute the polynomial $h(x)$ first, and then reconstruct it at $s$ with the $SRS$. $$ h(x) = \frac{f(x)-f(\zeta)}{x-\zeta} = a_0 + a_1x + a_2x^2 + \cdots $$ and then $$ [h(s)] := a_0[1] + a_1[s] + a_2[s^2] + \cdots $$

    For example, with our previous $f(x) = x^2 + 3x$ and $\zeta = 3$: $f(3) = 18$ (so $a = 18$), and $h(x) = \frac{x^2 + 3x - 18}{x - 3} = x + 6$.

  2. The verifier cannot compute their own $[h(s)]$ because they cannot divide by $s$ (remember, nobody knows $s$). They need a pairing. Remember, you want to check the following identity hidden in the exponent (using commitments): $$ \frac{[f(s) - a]}{[s-\zeta]} = [h(s)] $$ But since you can't divide with commitments, you can't compute what's on the left-hand side. You can multiply thanks to pairings though. So instead, you could check the following equation: $$ [f(s) - a] = [(s-\zeta)h(s)] $$ and with pairings, you can multiply $[s-\zeta]$ with the proof $[h(s)]$: $$ e([f(s)] - [a], [1]) = e([s-\zeta], [h(s)]) $$
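To make note 1 concrete, here's a small Python sketch of the synthetic division the prover performs (over the integers for readability; in a real implementation this happens in the scalar field of the curve):

```python
def quotient(f, zeta):
    """Divide f(x) - f(zeta) by (x - zeta) via synthetic division.
    f is a list of coefficients, low degree first.
    Returns (coefficients of h, remainder f(zeta))."""
    n = len(f) - 1
    h = [0] * n
    h[n - 1] = f[n]
    for i in range(n - 1, 0, -1):
        h[i - 1] = f[i] + zeta * h[i]
    return h, f[0] + zeta * h[0]

# our running example: f(x) = x^2 + 3x and zeta = 3
f = [0, 3, 1]
h, a = quotient(f, 3)
print(a)    # 18, the evaluation f(3) the prover sends
print(h)    # [6, 1], i.e. h(x) = x + 6
```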

I'm now at O(1) Labs working on Mina protocol!!! posted May 2021

Hey reader! I haven't posted in a while, but as this is my blog I'm contracted to talk about life events such as this one. I joined O(1) Labs a bit more than a month ago to work on the Mina cryptocurrency. If you don't know about Mina, check it out, it's pretty cool: it uses recursive zero-knowledge proofs to compress a blockchain into a single proof of 11KB. I hope I got you intrigued! I want to say it is one of the most ambitious projects in the space, if not the most ambitious one (but I'm biased). As I'm still relatively new there, I don't have much to say besides that, but you can imagine that my posting will switch to more zero-knowledgy type of stuff very soon!
