A note on the elliptic curve pairing checks in zero-knowledge proofs David Wong Tue, 26 Mar 2024 15:10:33 +0100 http://www.cryptologie.net/article/611/a-note-on-the-elliptic-curve-pairing-checks-in-zero-knowledge-proofs/ http://www.cryptologie.net/article/611/a-note-on-the-elliptic-curve-pairing-checks-in-zero-knowledge-proofs/#comments As I explained here a while back, checking polynomial identities (that some left-hand side is equal to some right-hand side) when the polynomials are hidden behind polynomial commitment schemes gets harder and harder as multiplications pile up. This is why we use pairings, and this is why we sometimes "linearize" our identities. If you didn't get what I just said, great! Because this is exactly what I'll explain in this post.

Using Schwartz-Zippel with no multiplication

First, let me say that there are typically two types of "nice" polynomial commitment schemes that people use with elliptic curves: Pedersen commitments and KZG commitments.

Pedersen commitments are basically hidden random linear combinations of the coefficients of a polynomial. That is, if your polynomial is $f(x) = \sum_i c_i \cdot x^i$, your commitment will look like $[\sum_i r_i \cdot c_i] G$ for some base point $G$ and unknown random values $r_i$. This is both good and bad: since we have access to the coefficients we can try to use them to evaluate a polynomial from its commitment, but since it's a random linear combination of them things can get ugly.

On the other hand, KZG commitments can be seen as hidden evaluations of your polynomials. For the same polynomial $f$ as above, a KZG commitment of $f$ would look like $[f(s)]G$ for some unknown random point $s$. Keeping $s$ unknown is a much stronger requirement than keeping the values $r_i$ of Pedersen commitments unknown, which is why KZG usually requires a trusted setup whereas Pedersen doesn't.
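To make the contrast concrete, here's a toy Python sketch where exponentiation modulo a prime stands in for scalar multiplication on the curve (none of this is real elliptic curve code, and the prime, the base, and the polynomial are all made up for illustration):

```python
import secrets

q = 2**127 - 1                      # a toy prime; the group Z_q^* stands in for the curve
g = 3                               # a toy "base point" G

coeffs = [5, 0, 2, 7]               # f(x) = 5 + 0*x + 2*x^2 + 7*x^3 (lowest degree first)

# Pedersen-style: a hidden random linear combination of the coefficients, [sum r_i c_i] G
rs = [secrets.randbelow(q) for _ in coeffs]
pedersen = pow(g, sum(r * c for r, c in zip(rs, coeffs)), q)

# KZG-style: a hidden evaluation at a secret point s from a trusted setup, [f(s)] G
s = secrets.randbelow(q)
kzg = pow(g, sum(c * pow(s, i) for i, c in enumerate(coeffs)), q)
```

The only thing that matters here is the shape of the exponent: a random linear combination of the coefficients on one side, a single evaluation at a secret point on the other.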

In the rest of this post we'll use KZG commitments to prove identities.

Let's use $[a]$ to mean "commitment of the polynomial $a(x)$", then you can easily check that $a(x) = b(x)$ knowing only the commitments to $a(x)$ and $b(x)$ by checking that $[a] = [b]$ or $[a] - [b] = [0]$. This is because of the Schwartz-Zippel (S-Z) lemma which tells us that checking this identity at a random point is convincing with high-enough probability.

When only multiplication by scalars is required, things are still fine. As you can do $i \cdot [a]$ to obtain $[i \cdot a]$, checking that $i \cdot a = j \cdot b$ is as simple as checking that $i \cdot [a] - j \cdot [b] = [0]$.

This post is about explaining how pairings help us when we want to check an identity that involves multiplying $a$ and $b$ together.

Using elliptic curve pairings for a single multiplication

It turns out that elliptic curve pairings allow us to perform a single multiplication. Meaning that once things get multiplied, they move to a different planet where things can only get added together and compared. No more multiplications.

Pairings give you this function $e$ which allows you to move things in the exponent like this: $e([a], [b]) = e([1], [1])^{ab}$. Where, remember, $ab$ is the multiplication of the two polynomials evaluated at a random point: $a(s) \cdot b(s)$.

As such, if you wanted to check something like this, for example: $a \cdot b = c + 3$, with commitments only, you could check the following pairing equation:

$$ e([a], [b]) = e([c] + 3 [1], [1]) $$ By the way, the left argument and the right argument of a pairing are often in different groups for "reasons". So we usually write things like this:

$$ e([a]_1, [b]_2) = e([c]_1 + 3 [1]_1, [1]_2) $$ And so it is important to have commitments in the right groups if you want to be able to construct your polynomial identity check.
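If you want to see such a check actually pass, here's a small sketch using the py_ecc library (assuming its bn128 helpers `G1`, `G2`, `add`, `multiply`, and `pairing` behave the way I remember); the polynomials are toy values, and we only know the setup secret $s$ because we're playing the trusted setup ourselves for the demo:

```python
from py_ecc.bn128 import G1, G2, add, multiply, pairing

s = 10                                        # toy "trusted setup" secret, known only for the demo
a_s, b_s, c_s = s + 1, s + 2, s**2 + 3*s - 1  # a = x+1, b = x+2, c = x^2+3x-1, so a*b = c + 3

A1 = multiply(G1, a_s)                        # [a]_1
B2 = multiply(G2, b_s)                        # [b]_2
C1 = multiply(G1, c_s)                        # [c]_1

lhs = pairing(B2, A1)                         # e([a]_1, [b]_2)
rhs = pairing(G2, add(C1, multiply(G1, 3)))   # e([c]_1 + 3[1]_1, [1]_2)
assert lhs == rhs
```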

Evaluations can help with more than one multiplication

But what if you want to check something like $a \cdot b \cdot c = d + 4$? Are we doomed?

We're not! One insight that plonk brought to me (which potentially came from older papers, I don't know, I'm not an academic, leave me alone), is that you can reduce the number of multiplications with "this one simple trick". Let me explain...

A typical scenario includes you wanting to check an identity like this one:

$$a(x) \cdot b(x) \cdot c(x) = d(x)$$

and you have KZG commitments to all three polynomials $[a], [b], [c]$. (So in other words, hidden evaluations of these polynomials at the same unknown random point $s$)

You can't compute the commitment of the left-hand side because you can't perform the multiplication of the three commitments.

The trick is to evaluate (using KZG) the previous identity at a different point, let's say $\zeta$, and pre-evaluate (using KZG as well) as many polynomials as you can at $\zeta$ to reduce the number of multiplications down to 0.

Note: that is, if we want to check that $a(x) - b(x) = 0$ holds, and we want to use S-Z to do that at some random point $\zeta$, then we can pre-evaluate $a$ (or $b$) at $\zeta$ and check the identity $a(\zeta) - b(x) = 0$ at the point $x = \zeta$ instead.

More precisely, we'll choose to pre-evaluate $b(\zeta) = \bar{b}$ and $c(\zeta) = \bar{c}$, for example. This means that we'll have to produce quotient polynomials $q_b$ and $q_c$ such that:

  1. $b(s) - \bar{b} = (s - \zeta) \cdot q_b(s)$
  2. $c(s) - \bar{c} = (s - \zeta) \cdot q_c(s)$

which means that the verifier will have to perform the following two pairing checks (after having been sent the evaluations $\bar{b}$ and $\bar{c}$ in the clear):

  1. $e([b]_1 - \bar{b} \cdot [1]_1, [1]_2) = e([x]_1 - \zeta \cdot [1]_1, [q_b]_2)$
  2. $e([c]_1 - \bar{c} \cdot [1]_1, [1]_2) = e([x]_1 - \zeta \cdot [1]_1, [q_c]_2)$

Then, they'll be able to check the main identity at $\zeta$ and use $\bar{b}$ and $\bar{c}$ in place of the commitments $[b]$ and $[c]$. The verifier check will look like the following pairing (after receiving a commitment $[q]$ from the prover): $$e( \bar{b} \cdot \bar{c} \cdot [a]_1 - [d]_1, [1]_2) = e([x]_1 - \zeta \cdot [1]_1, [q]_2)$$ which proves using KZG that $a(\zeta)b(\zeta)c(\zeta) - d(\zeta) = 0$ (and thus, thanks to S-Z, that the identity checks out with high probability).
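To make the linearization trick concrete, here is a small Python sketch of what the prover computes, with the pairing checks written out "in the clear" on the exponents, over a toy prime field (the polynomials, the point $\zeta$, and the setup secret $s$ are all made up, and of course in the real protocol nobody knows $s$):

```python
p = 101                                            # toy prime field

def poly_eval(coeffs, x):                          # coefficients highest-degree first (Horner)
    acc = 0
    for c in coeffs:
        acc = (acc * x + c) % p
    return acc

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

def poly_sub(f, g):                                # pad with leading zeros, then subtract
    n = max(len(f), len(g))
    f = [0] * (n - len(f)) + f
    g = [0] * (n - len(g)) + g
    return [(x - y) % p for x, y in zip(f, g)]

def divide_by_linear(f, zeta):                     # synthetic division of f(x) by (x - zeta)
    q, acc = [], 0
    for c in f:
        acc = (acc * zeta + c) % p
        q.append(acc)
    return q[:-1], q[-1]                           # (quotient, remainder = f(zeta))

a = [1, 1]                                         # a(x) = x + 1
b = [1, 0, 2]                                      # b(x) = x^2 + 2
c = [1, 3]                                         # c(x) = x + 3
d = poly_mul(poly_mul(a, b), c)                    # d(x) = a(x) b(x) c(x), so the identity holds

s, zeta = 7, 5                                     # toy setup secret and evaluation point

# the prover pre-evaluates b and c at zeta; the quotients come for free from the division
q_b, b_bar = divide_by_linear(b, zeta)             # b(x) - b_bar = (x - zeta) q_b(x)
q_c, c_bar = divide_by_linear(c, zeta)             # c(x) - c_bar = (x - zeta) q_c(x)

# what the two KZG opening pairings check, written on the exponents
assert (poly_eval(b, s) - b_bar) % p == (s - zeta) * poly_eval(q_b, s) % p
assert (poly_eval(c, s) - c_bar) % p == (s - zeta) * poly_eval(q_c, s) % p

# the linearized identity L(x) = b_bar * c_bar * a(x) - d(x) vanishes at zeta,
# so the prover can produce q(x) = L(x) / (x - zeta) and the verifier can check it
L = poly_sub([b_bar * c_bar * coeff % p for coeff in a], d)
q, rem = divide_by_linear(L, zeta)
assert rem == 0
assert poly_eval(L, s) == (s - zeta) * poly_eval(q, s) % p   # what the last pairing checks
```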

Aggregating all the KZG evaluation proofs

In the previous explanation, we actually perform 3 KZG evaluation proofs instead of one:

  • $2$ pairings that are KZG evaluation proofs that pre-evaluate different polynomials from the main check at some random point $\zeta$.
  • $1$ pairing that evaluates the main identity at $\zeta$, after it was linearized to get rid of any multiplication of commitments.

Pairings can be aggregated by simply creating a random linear combination of the pairings. That is, with some random values $r_i$ we can aggregate the checks, where the left-hand side becomes: $$ b(s) - \bar{b} + r_1 (c(s) - \bar{c}) + r_2 (\bar{b} \cdot \bar{c} \cdot a(s) - d(s)) $$ and the right-hand side becomes: $$ (s - \zeta) \cdot q_b(s) + r_1 ((s - \zeta) \cdot q_c(s)) + r_2 ((s - \zeta) \cdot q(s))$$
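Here's a tiny sketch of that aggregation idea, with made-up numbers standing in for the (already equal) left-hand and right-hand sides of the three checks:

```python
import secrets

p = 2**61 - 1                                     # toy prime field
checks = [(24, 24), (2, 2), (72, 72)]             # pretend (lhs, rhs) of the three checks above

r1, r2 = secrets.randbelow(p), secrets.randbelow(p)
agg = lambda side: (side[0] + r1 * side[1] + r2 * side[2]) % p

lhs, rhs = zip(*checks)
assert agg(lhs) == agg(rhs)                       # one aggregated check instead of three

# if any single check were wrong, the aggregate would catch it (except with negligible probability)
assert agg(lhs) != agg((24, 2, 73))
```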

Plonk's permutation, the definitive explanation David Wong Mon, 11 Mar 2024 21:56:39 +0100 http://www.cryptologie.net/article/610/plonks-permutation-the-definitive-explanation/ http://www.cryptologie.net/article/610/plonks-permutation-the-definitive-explanation/#comments I've recorded a video on how the plonk permutation works here, but I thought I would write a more incremental explanation about it for those who want MOAR! If things don't make sense in this explanation, I'm happy to dig into specifics in more detail, just ask in the comments! Don't forget your companion eprint paper.

Multiset equality check

Suppose that you have two ordered sets of values $D = \{d_1, d_2, d_3, d_4\}$ and $E = \{e_1, e_2, e_3, e_4\}$, and that you want to check that they contain the same values. That is, you want to check that there exists a permutation of the elements of $D$ (or $E$) such that the multisets (sets where some values can repeat) are the same, but you don't care about which permutation exactly gets you there. You're willing to accept ANY permutation.

$$\{d_1, d_2, d_3, d_4\} = \text{some\_permutation}(\{e_1, e_2, e_3, e_4\})$$ For example, it could be that re-ordering $E$ as $\{e_2, e_3, e_1, e_4\}$ gives us exactly $D$.

Trick 1: multiply things to reduce to a single value

One way to perform our multiset equality check is to compare the product of the elements on both sides:

$$d_1 \cdot d_2 \cdot d_3 \cdot d_4 = e_1 \cdot e_2 \cdot e_3 \cdot e_4$$

If the two sets contain the same values then our identity checks out. But the reverse is not true, and thus this scheme is not secure.

Can you see why?

For example, $D = (1, 1, 1, 15)$ and $E = (3, 5, 1, 1)$ are obviously different multisets, yet the product of their elements will match!

Trick 2: use polynomials, because maybe it will help...

What we can do to fix this issue is to encode the values of each list as roots of two polynomials:

  • $d(x) = (x - d_1)(x - d_2)(x - d_3)(x - d_4)$
  • $e(x) = (x - e_1)(x - e_2)(x - e_3)(x - e_4)$

These two polynomials are equal if they have the same roots with the same multiplicities (meaning that if a root repeats, it must repeat the same number of times).

Trick 3: optimize polynomial identities with Schwartz-Zippel

Now is time to use the Schwartz-Zippel lemma to optimize the comparison of polynomials! Our lemma tells us that if two polynomials are equal, then they are equal on all points, but if two polynomials are not equal, then they differ on MOST points.

So one easy way to check that they match with high probability is to sample a random evaluation point, let's say some random $\gamma$. Then evaluate both polynomials at that random point $\gamma$ to see if their evaluations match: $$(\gamma - d_1)(\gamma - d_2)(\gamma - d_3)(\gamma - d_4) = (\gamma - e_1)(\gamma - e_2)(\gamma - e_3)(\gamma - e_4)$$
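Here's what tricks 1 through 3 look like in a few lines of Python, reusing the counterexample from above (the field size is a toy choice):

```python
import secrets
from math import prod

p = 2**61 - 1                               # toy prime field
D = [1, 1, 1, 15]
E = [3, 5, 1, 1]                            # same product as D, but a different multiset

assert prod(D) == prod(E)                   # trick 1 alone gets fooled

gamma = secrets.randbelow(p)                # random evaluation point
lhs = prod((gamma - v) % p for v in D) % p
assert lhs != prod((gamma - v) % p for v in E) % p               # caught (with overwhelming probability)
assert lhs == prod((gamma - v) % p for v in [15, 1, 1, 1]) % p   # a true permutation of D passes
```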

Permutation check

The previous check is not useful for wiring different cells within some execution trace. There is no specific "permutation" being enforced. So we can't use it as is in Plonk to implement our copy constraints.

Trick 4: random linear combinations to encode tuples

To enforce a permutation, we can compare tuples of elements instead! For example, let's say we want to enforce that $E$ must be re-ordered using the permutation $(1 3 2) (4)$ in cycle notation. Then we would try to do the following identity check: $$ ((1, d_1), (2, d_2), (3, d_3), (4, d_4)) = ((2, e_1), (3, e_2), (1, e_3), (4, e_4)) $$ Here, we are enforcing that $d_1$ is equal to $e_3$, and that $d_2$ is equal to $e_1$, etc. This allows us to re-order the elements of $E$: $$ ((1, d_1), (2, d_2), (3, d_3), (4, d_4)) = ((1, e_3), (2, e_1), (3, e_2), (4, e_4)) $$ But how can we encode our tuples into the polynomials we've seen previously? The trick is to use a random linear combination! (And that is often the answer in a bunch of ZK protocols.)

So if we want to encode $(2, d_2)$ in an equation, for example, we write $2 + \beta \cdot d_2$ for some random value $\beta$.

Note: The rationale behind this idea is still due to Schwartz-Zippel: if you have two tuples $(a,b)$ and $(a', b')$, then the polynomial $a + x \cdot b$ is the same as the polynomial $a' + x \cdot b'$ only if $a = a'$ and $b = b'$, or if $x$ happens to be exactly $\frac{a' - a}{b - b'}$. If $x$ is chosen at random, the probability that it hits exactly that value is $\frac{1}{N}$, with $N$ the size of your sampling domain (i.e. the size of your field), which makes it highly unlikely.

So now we can encode the previous lists of tuples as these polynomials:

  • $d(x, y) = (1 + y \cdot d_1 - x)(2 + y \cdot d_2 - x)(3 + y \cdot d_3 - x)(4 + y \cdot d_4 - x)$
  • $e(x, y) = (2 + y \cdot e_1 - x)(3 + y \cdot e_2 - x)(1 + y \cdot e_3 - x)(4 + y \cdot e_4 - x)$

And then reduce both polynomials to a single value by sampling random values for $x$ and $y$. Which gives us:

  • $(1 + \beta \cdot d_1 - \gamma)(2 + \beta \cdot d_2 - \gamma)(3 + \beta \cdot d_3 - \gamma)(4 + \beta \cdot d_4 - \gamma)$
  • $(2 + \beta \cdot e_1 - \gamma)(3 + \beta \cdot e_2 - \gamma)(1 + \beta \cdot e_3 - \gamma)(4 + \beta \cdot e_4 - \gamma)$

If these two values match, with overwhelming probability we have that the two polynomials match and thus our permutation of $E$ matches $D$.
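Here's a small Python sketch of that tuple trick, with toy values wired according to the permutation $(1 3 2)(4)$ from the example above:

```python
import secrets

p = 2**61 - 1                                  # toy prime field
d = [10, 20, 30, 40]
e = [20, 30, 10, 40]                           # E re-ordered: e_1 = d_2, e_2 = d_3, e_3 = d_1, e_4 = d_4
sigma = [2, 3, 1, 4]                           # the index attached to each e_i (as in the tuples above)

beta, gamma = secrets.randbelow(p), secrets.randbelow(p)
lhs = rhs = 1
for i in range(4):
    lhs = lhs * ((i + 1) + beta * d[i] - gamma) % p
    rhs = rhs * (sigma[i] + beta * e[i] - gamma) % p
assert lhs == rhs                              # the permutation holds

# tamper with a single value and the check fails (except with negligible probability)
e[0] += 1
bad = 1
for i in range(4):
    bad = bad * (sigma[i] + beta * e[i] - gamma) % p
assert lhs != bad
```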

Wiring within a single execution trace column

Let's now see how we can use the (optimized) checks we've learned previously in Plonk. We will first learn how to wire cells of a single execution trace column, and in the next section we will expand this to three columns (as vanilla Plonk uses three columns).

Take a moment to think about how we can use the previous stuff.

The answer is to see the execution trace as your list $E$, and then see if it is equal to a fixed permutation of it ($D$). Note that this permutation is decided when you write your circuit, and precomputed into the verifier key in Plonk.

Remember that the formula we're trying to check is the following for some random $\beta$ and $\gamma$, and for some permutation function $\sigma$ that we defined:

$$ \prod_{i=1} (i + \beta \cdot d[i] - \gamma) = \prod_{i=1} (\sigma(i) + \beta \cdot e[i] - \gamma) $$

Trick 5: write a circuit for the permutation check

To enforce the previous check, we will write a mini-circuit (yes an actual circuit!) which will progressively accumulate the result of dividing the left-hand side with the right-hand side. This circuit only requires one variable/register we'll call $z$ (and so it will add a new column $z$ in our execution trace) which will start with the initial value 1 and will end with the following value:

$$ \prod_{i=1} \frac{i+\beta \cdot d[i] - \gamma}{\sigma(i) + \beta \cdot e[i] - \gamma} = 1 $$

Let's rewrite it using only the first wire/column $a$ of Plonk, and using our generator $\omega$ as index in our tuples (because this is how we handily index things in Plonk):

$$ \prod_{i=1} \frac{\omega^i+\beta \cdot a[i] - \gamma}{\sigma(\omega^i) + \beta \cdot a[i] - \gamma} = 1 $$

We can then constrain the last value to be equal to 1, which will enforce that the two polynomials encoding our list of value and its permutation are equal (with overwhelming probability).

In plonk, a gate can only access variables/registers from the same row. So we will use the following extra gate (reordering the previous equation, as we can't divide in a circuit) throughout the circuit:

$$ z[i+1] \cdot (\sigma(i) + \beta \cdot a[i] - \gamma) = z[i] \cdot (i + \beta \cdot a[i] - \gamma) $$ Now, how do we encode this gate in the circuit? The astute eye will have noticed that we are using a cell of the next row ($z[i+1]$) which we haven't done in Plonk so far.

Trick 6: you're in a multiplicative subgroup, remember?

Enforcing things across rows is actually possible in plonk because we encode our polynomials in a multiplicative subgroup of our field! Due to this, we can reach for the next value(s) by multiplying an evaluation point with the subgroup's generator.

That is, values are encoded in our polynomials at evaluation points $\omega, \omega^2, \omega^3, \cdots$, and so multiplying an evaluation point by $\omega$ (the generator) brings you to the next cell in an execution trace.

As such, the verifier will later try to enforce that the following identity checks out in the multiplicative subgroup:

$$ z(x \cdot \omega) \cdot (\sigma(x) + \beta \cdot a(x) - \gamma) = z(x) \cdot (x + \beta \cdot a(x) - \gamma) $$

Note: This concept was generalized in turboplonk, and is used extensively in the AIR arithmetization (used by STARKs). This is also the reason why in Plonk we have to evaluate the $z$ polynomial at $\zeta \omega$.

There will also be two additional gates: one that checks that the initial value is 1, and one that checks that the last value is 1, both applied only to their respective rows. One trick that Plonk uses is that the last value is actually obtained in the last row: since the row after the last one wraps around to row $0$ in our multiplicative subgroup, $z[\text{last row} + 1]$ is the same cell as $z[0]$ and is constrained automatically. As such, checking that $z[0] = 1$ is enough.
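Here's a minimal Python sketch of that accumulator column, simplified to use the plain indices $1, 2, 3, 4$ instead of powers of $\omega$, and a single wire column (the values, the permutation, and the field are all made up):

```python
import secrets

p = 2**61 - 1                       # toy prime field
# one wire column, indexed 1..4; the permutation (1 3 2)(4) wires cells 1, 2, 3 together
a     = [7, 7, 7, 9]                # satisfies the copy constraints (cells 1, 2, 3 are equal)
sigma = [3, 1, 2, 4]                # sigma(1)=3, sigma(2)=1, sigma(3)=2, sigma(4)=4
beta, gamma = secrets.randbelow(p), secrets.randbelow(p)

z = [1]                             # the accumulator column starts at 1
for i in range(4):
    num = ((i + 1) + beta * a[i] - gamma) % p
    den = (sigma[i] + beta * a[i] - gamma) % p
    z.append(z[-1] * num * pow(den, -1, p) % p)

assert z[-1] == 1                   # the final value (which wraps around to z[0]) is 1 again

# the gate enforced on every row: z[i+1] * (sigma(i) + beta a[i] - gamma) = z[i] * (i + beta a[i] - gamma)
for i in range(4):
    num = ((i + 1) + beta * a[i] - gamma) % p
    den = (sigma[i] + beta * a[i] - gamma) % p
    assert z[i + 1] * den % p == z[i] * num % p
```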

You can see these two gates added to the vanilla plonk gate in the computation of the quotient polynomial $t$ in plonk. Take a look at this screenshot of the round 3 of the protocol, and squint really hard to ignore the division by $Z_H(X)$, the powers of $\alpha$ being used to aggregate the different gate checks, and the fact that $b$ and $c$ (the other wires/columns) are used:

[Figure: round 3 of the protocol, from the plonk paper]

The first line in the computation of $t$ is the vanilla plonk gate (that allows you to do multiplication and addition); the last line constrains that the first value of $z$ is $1$; and the other lines encode the permutation gate as I described (again, if you ignore the terms involving $b$ and $c$).

Trick 7: create your execution trace in steps

There's something worthy of note: the extra execution trace column $z$ contains values that use other execution trace columns. For this reason, the other execution trace columns must be fixed BEFORE anything is done with the permutation column $z$.

In Plonk, this is done by waiting for the prover to send commitments of $a$, $b$, and $c$ to the verifier, before producing the random challenges $\beta$ and $\gamma$ that will be used by the prover to produce the values of $z$.

Wiring multiple execution trace columns

The previous check only works within the cells of a single execution trace column; how does Plonk generalize this to several execution trace columns?

Remember that we indexed our first execution trace column with the values of our circuit domain (that multiplicative subgroup); we simply have to find a way to index the other columns with distinct values.

Trick 8: use cosets

A coset is simply a set that has the same size as another set, but that is completely disjoint from it. Handily, a coset is also very easy to compute if you know a subgroup: just multiply all of its elements by some element $k$.

Since we want a similar-but-different set from the elements of our multiplicative subgroup, we can use cosets!

Plonk produces the values $k_1$ and $k_2$ (which can be the values $2$ and $3$, for example), which, when multiplied with the values of our multiplicative subgroup ($\{\omega, \omega^2, \omega^3, \cdots\}$), produce two different sets of the same size. They're not subgroups anymore, but who cares!
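Here's a tiny sketch of that, with a made-up prime field (integers mod 17) and a subgroup of order 4:

```python
p = 17
omega = 4                                     # generator of a subgroup of order 4 in F_17
H   = [pow(omega, i, p) for i in range(4)]    # {1, 4, 16, 13}
k1, k2 = 2, 3
k1H = [k1 * h % p for h in H]                 # a coset: {2, 8, 15, 9}
k2H = [k2 * h % p for h in H]                 # another coset: {3, 12, 14, 5}

# the three sets are pairwise disjoint, so they can index three separate columns
assert set(H).isdisjoint(k1H) and set(H).isdisjoint(k2H) and set(k1H).isdisjoint(k2H)
```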

We now have to create three different permutations, one for each set, and each permutation can point to the index of any of the sets.

Are you into finding bugs and learning ZK? Here's a challenge for you David Wong Mon, 26 Feb 2024 20:41:49 +0100 http://www.cryptologie.net/article/609/are-you-into-finding-bugs-and-learning-zk-heres-a-challenge-for-you/ http://www.cryptologie.net/article/609/are-you-into-finding-bugs-and-learning-zk-heres-a-challenge-for-you/#comments I spent some time writing a challenge focused on GKR (the proof system) on top of the gnark framework (which is used to write ZK circuits in Golang).

It was a lot of fun and I hope that some people are inspired to try to break it :)

We're using the challenge to hire people who are interested in doing security work in the ZK space, so if that interests you, or if you purely want a new challenge, try it out here: https://github.com/zksecurity/zkBank

And of course, since this is an active wargame, please do not release your own solution or write-up!

Want to learn more about zkBitcoin? I've made some videos David Wong Sun, 28 Jan 2024 05:59:45 +0100 http://www.cryptologie.net/article/608/want-to-learn-more-about-zkbitcoin-ive-made-some-videos/ http://www.cryptologie.net/article/608/want-to-learn-more-about-zkbitcoin-ive-made-some-videos/#comments My previous article on zkBitcoin blew up: we got 100 GitHub stars, a number of contributions to the repository, and interest from several projects, all of that in a single day!

So as requested, I made a number of videos to explain what zkBitcoin is.

Zero-knowledge proofs in stateful applications David Wong Wed, 24 Jan 2024 01:36:13 +0100 http://www.cryptologie.net/article/607/zero-knowledge-proofs-in-stateful-applications/ http://www.cryptologie.net/article/607/zero-knowledge-proofs-in-stateful-applications/#comments Something that might not be immediately obvious if you're not used to zero-knowledgifying your applications, is that the provable circuits you end up using are pure functions. They do not have access to long-lasting memory and cannot have side effects. They just take some input, and produce some output.

Note: circuits are actually not strictly pure, as they are non-deterministic. For example, you might be able to use out-of-circuit randomness in your circuit.

So when mutation of persistent state is needed, you need to provide the previous state as input, and return the new state as output. This not only produces a constraint on the previous state (time of read VS time of write issues), but it also limits the size of your state.

I've talked about the first issue here:

The problem of update conflicts comes when one designs a protocol in which multiple participants decide to update the same value, and do so using local execution. That is, instead of having a central service that executes some update logic sequentially, participants can submit the result of their updates in parallel. In this situation, each participant locally executes the logic on the current state assuming that it will not have changed. But this doesn't work as soon as someone else updates the shared value. In practice, someone's update will invalidate someone else's.

The second issue of state size is usually solved with Merkle trees, which allow you to compress your state in a verifiable way, and allow you to access or update the state without having to decompress the ENTIRE state.
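For intuition, here's a minimal Merkle tree sketch (my own toy example: 4 leaves and sha256): the whole state is compressed into a single root, and a single leaf can be checked against that root with a short path instead of shipping the entire state:

```python
import hashlib

def h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

leaves = [h(bytes([i])) for i in range(4)]           # a 4-cell "state"
l01, l23 = h(leaves[0], leaves[1]), h(leaves[2], leaves[3])
root = h(l01, l23)                                   # the whole state, compressed

# proving that leaf 2 is part of the state: send the leaf plus its merkle path (2 hashes)
path = [leaves[3], l01]                              # sibling leaf, then sibling subtree
assert h(path[1], h(leaves[2], path[0])) == root
```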

That's all.

Verifying zero-knowledge proofs on Bitcoin? David Wong Sun, 21 Jan 2024 05:46:42 +0100 http://www.cryptologie.net/article/606/verifying-zero-knowledge-proofs-on-bitcoin/ http://www.cryptologie.net/article/606/verifying-zero-knowledge-proofs-on-bitcoin/#comments

A few months ago Ivan told me "how cool would it be if we could verify zero-knowledge proofs on Bitcoin?" A week later, we had a prototype of the best solution we could come up with: a multi-party computation to manage a Bitcoin wallet, and a committee willing to unlock funds only in the presence of valid zero-knowledge proofs. A few iterations later and we had something a bit cooler: stateful apps with states that can be tracked on-chain, and committee members that don't need to know anything about Bitcoin. Someone might put it this way: a Bitcoin L2 with minimal trust assumption of a "canonical" Bitcoin blockchain.

From what we understand, a better way to verify zero-knowledge proofs on Bitcoin is not going to happen, and this is the best we can have. And we built it! And we're running it on testnet. Try it here!

What's out there for ECDSA threshold signatures David Wong Sat, 13 Jan 2024 23:26:06 +0100 http://www.cryptologie.net/article/605/whats-out-there-for-ecdsa-threshold-signatures/ http://www.cryptologie.net/article/605/whats-out-there-for-ecdsa-threshold-signatures/#comments In the realm of multi-party computation (MPC) protocols, threshold signing is the protocol that addresses how multiple participants can sign something under a "shared" private key. In other words, instead of one guy signing something with a private key, we want $N$ guys doing the same thing and obtaining the same result without any of them actually knowing the private key (each of them holds a share of the private key, revealing nothing about the private key itself).

The threshold part means that not every participant who has a share has to participate. If there are $N$ participants, then only $t < N$ of them have to participate for the protocol to succeed. The $t$ and $N$ depend on the protocol you want to design, on the overhead you're willing to eat, the security you want to attain, etc.

Threshold protocols are not just for signing, they're everywhere. The NIST has a Multi-Party Threshold Cryptography competition, in which you can see proposals for threshold signing, but also threshold decryption, threshold key exchanges, and others.

This post is about threshold signatures for ECDSA specifically, as it is the most commonly used signature scheme and so has attracted a number of researchers. In addition, I'm only going to talk about the history of it, because I haven't written an actual explainer on how these work, and because the history of threshold signing for ECDSA is really messy and confusing, and understanding what constructions exist out there is near impossible due to the naming collisions and the number of papers released without proper nicknames (unlike FROST, which is the leading threshold signing algorithm for Schnorr signatures).

So here we are, the main line of work for ECDSA threshold signatures goes something like this, and seems to mainly involve two Gs (Gennaro and Goldfeder):

  1. GG18. This paper is more officially called "Fast Multiparty Threshold ECDSA with Fast Trustless Setup" and improves on BGG: Using level-1 homomorphic encryption to improve threshold DSA signatures for bitcoin wallet security (2017) and GGN: Threshold-optimal dsa/ecdsa signatures and an application to bitcoin wallet security (2016).
  2. GG19. This has the same name as GG18, but fixes some of the issues in GG18. I think this is because GG18 was published in a journal, so they couldn't update it. But GG18 on eprint is the updated GG19 one. (Yet few people refer to it as GG19.) It fixes a number of bugs, including the ones brought by the Alpha-Rays attack, and A note about the security of GG18.
  3. GG20. This paper is officially called "One Round Threshold ECDSA with Identifiable Abort" and builds on top of GG18/GG19 to introduce the ability to identify who caused the abort. (In other words, who messed up if something was messed up during the multi-party computation.) Note that there are still some bugs in this paper.
  4. CGGMP21. This one combines GG20 with CMP20 (another work on threshold signatures). This is supposed to be the latest work in this line of work and is probably the only version that has no known issues.

Note that there's also another line of work that happened in parallel from another team, and which is similar to GG18 except that they have different bugs: Lindell-Nof: Fast secure multiparty ecdsa with practical distributed key generation and applications to cryptocurrency custody (2018).

PS: thanks to Rosario Gennaro for help figuring this out :)

The ZK update conflict issue in multi-user applications David Wong Thu, 11 Jan 2024 20:53:13 +0100 http://www.cryptologie.net/article/604/the-zk-update-conflict-issue-in-multi-user-applications/ http://www.cryptologie.net/article/604/the-zk-update-conflict-issue-in-multi-user-applications/#comments I haven't seen much ink being spewed on the ZK update conflict issue so I'll write a short note here.

Let's take a step back. Zero-knowledge proofs allow you to prove the result of the execution of some logic. Like signatures attached to data you receive, ZK proofs can be attached to a computation result. This means that with ZK, internet protocols can be rethought and redesigned. If execution of the protocol logic had to happen somewhere trusted, now some of it can be moved around and delegated to untrusted places, or for privacy-reasons some of it can be moved to places where private data should remain.

How do we design protocols using ZK? It's easy, assume that when a participant of your protocol computes something, they will do it honestly. Then, when you implement the protocol, use ZK proofs to enforce that they behave as intended.

The problem of update conflicts comes when one designs a protocol in which multiple participants decide to update the same value, and do so using local execution. That is, instead of having a central service that executes some update logic sequentially, participants can submit the result of their updates in parallel. In this situation, each participant locally executes the logic on the current state assuming that it will not have changed. But this doesn't work as soon as someone else updates the shared value. In practice, someone's update will invalidate someone else's.

This issue is not just a ZK issue; if you know anything about databases, you know that conflict resolution has been a problem for a very long time. For example, in distributed databases with more than one writer, conflicts can happen as two nodes attempt to update the same value at the same time. Conflicts can also happen in the same way in applications where multiple users want to update the same data, think Google Docs.

The solutions, as far as I know, fall into the following categories:

  1. Resolve conflicts automatically. The simplest example is the Thomas write rule which discards any outdated update. In situations where discarding updates is unacceptable, more involved algorithms can take over. For example, Google Docs uses an algorithm called Operational Transformation to figure out how to merge two independent updates.
  2. Ask the user for help if needed. For example, the git merge command that can sometimes ask for your help to resolve conflicts.
  3. Refuse to accept any conflicts. This often means that the application is written in such a way that conflicts can't arise, and in distributed databases this always means that there can only be a single node that can write (with all other nodes being read-only). Although applications can also decide to simply deny updates that lead to conflicts, which would lead to poor performance in concurrency-heavy scenarios, as well as poor user experience.

As one can see, the barrier between application and database doesn't matter too much, besides the fact that a database has poor ways of prompting a user: when conflict resolution must be done by a user it is generally the role of the application to reach out.

What about ZK though? From what I've seen, the last "avoid conflicts" solution is always chosen. Perhaps this is because my skewed view has only been within the blockchain world, which can't afford to play conflict resolution with $$$.

For example, simpler ZK protocols like Zcash will often massage their protocol such that proofs are only computed on immutable data. For example, arguments of a function cannot be the latest root of a merkle tree (as it might get updated before we can publish the result of running the function), but they can easily be the root of a merkle tree that was seen previously (we're using a previous state, not the latest state, and that's fine).

Another technique is to extract the parts of updates that occur on a shared data structure, and sequence them before running them. For example, the set of nullifiers in zcash is updated outside of a ZK execution by the network, according to some logic that only gets executed sequentially. More complicated ZK platforms like Aleo and Mina do that as well. In Aleo's case, the user can split the logic of its smart contracts by choosing what can be executed locally (provided a proof) and what has to be executed serially by the network (Ethereum-style). In Mina's case, updates that have the potential to lead to conflicts are queued up and later on a single user can decide (if authorized) to process the queued updates serially but in ZK.

Cairo's public memory David Wong Tue, 21 Nov 2023 17:53:13 +0100 http://www.cryptologie.net/article/603/cairos-public-memory/ http://www.cryptologie.net/article/603/cairos-public-memory/#comments Here are some notes on how the Cairo zkVM encodes its (public) memory in the AIR (arithmetization) of the STARK.

If you'd rather watch a 25min video of the article, here it is:

The AIR arithmetization is limited in how it can handle public inputs and outputs, as it only offers boundary constraints. These boundary constraints can only be used on a few rows, otherwise they're expensive to compute for the verifier. (A verifier would have to compute $\prod_{i \in S} (x - g^i)$ for some given $x$, so we want to keep $|S|$ small.)

For this reason Cairo introduces another way to get the program and its public inputs/outputs in: public memory. This public memory is strongly related to the memory vector of Cairo, which a program can read and write to.

In this article we'll talk about both. This is to accompany this video and section 9.7 of the Cairo paper.

Cairo's memory

Cairo's memory layout is a single vector that is indexed (each row/entry is assigned an address starting from 1) and is segmented. For example, the first $l$ rows are reserved for the program itself, some other rows are reserved for the program to write and read cells, etc.

Cairo uses a very natural "constraint-led" approach to memory, by making it write-once instead of read-write. That is, all accesses to the same address should yield the same value. Thus, there will be a constraint at some point saying that for any two accesses $(a_1, v_1)$ and $(a_2, v_2)$ such that $a_1 = a_2$, then $v_1 = v_2$.

Accesses are part of the execution trace

At the beginning of our STARK, we saw in How STARKs work if you don't care about FRI that the prover encodes, commits, and sends the columns of the execution trace to the verifier.

The memory, or memory accesses rather (as we will see), are columns of the execution trace as well.

The first two columns introduced in the paper are called $L_1.a$ and $L_1.v$. Each row in these columns represents an access to the address $a$ in memory, with value $v$. As said previously, we don't care whether that access is a write or a read, as the difference between them is blurred (any read of a specific address could as well have been the write).

These columns can be used as part of the Cairo CPU, but they don't really prevent the prover from lying about the memory accesses:

  1. First, we haven't proven that all accesses to the same addresses $a_i$ always return the same value $v_i$.
  2. Second, we haven't proven that the memory contains fixed values in specific addresses. For example, it should contain the program itself in the first $l$ cells.

Let's tackle the first question first, and we will address the second one later.

Another list to help

In order to prove that the two columns in the $L_1$ part of the execution trace are consistent, Cairo adds two more columns to the execution trace: $L_2.a'$ and $L_2.v'$. These two columns contain essentially the same thing as the $L_1$ columns, except that this time the accesses are sorted by address.

One might wonder at this point: why can't the $L_1$ memory accesses be sorted? Because these accesses represent the actual memory accesses of the program during runtime, row by row (or step by step). The program might read the next instruction at some address, then jump and read the next instruction at some other address, etc. We can't force the accesses to be sorted at this point.

We will have to prove (later) that $L_1$ and $L_2$ represent the same accesses (up to some permutation we don't care about).

So let's assume for now that $L_2$ correctly contains the same accesses as $L_1$ but sorted, what can we check on $L_2$?

The first thing we want to check is that it is indeed sorted. Or in other words:

  • each access is on the same address as previous: $a'_{i+1} = a'_i $
  • or on the next address: $a'_{i+1} = a'_i + 1$

For this, Cairo adds a continuity constraint to its AIR:

[Figure: the continuity constraint, from the Cairo paper]

The second thing we want to check is that accesses to the same addresses yield the same values. Now that things are sorted it's easy to check this! We just need to check that:

  • either the values are the same: $v'_{i+1} = v'_i$
  • or the address being accessed was bumped so it's fine to have different values: $a'_{i+1} = a'_i + 1$

For this, Cairo adds a single-valued constraint to its AIR:

[Figure: the single-valued constraint, from the Cairo paper]
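Here's a quick Python sketch, on a made-up sorted access list, of what those two constraints enforce (written as products that must vanish, in the spirit of the constraints above):

```python
# sorted accesses (address, value): addresses move by 0 or 1, equal addresses carry equal values
sorted_accesses = [(1, 42), (1, 42), (2, 7), (3, 9), (3, 9), (4, 0)]

for (a, v), (a_next, v_next) in zip(sorted_accesses, sorted_accesses[1:]):
    # continuity: (a'_{i+1} - a'_i) * (a'_{i+1} - a'_i - 1) = 0
    assert (a_next - a) * (a_next - a - 1) == 0
    # single-valued: (v'_{i+1} - v'_i) * (a'_{i+1} - a'_i - 1) = 0
    assert (v_next - v) * (a_next - a - 1) == 0
```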

And that's it! We now have proven that the $L_2$ columns represent correct memory accesses through the whole memory (although we didn't check that the first access was at address $1$, not sure if Cairo checks that somewhere), and that the accesses are correct.

That is, as long as $L_2$ contains the same list of accesses as $L_1$.

A multiset check between $L_1$ and $L_2$

To ensure that two lists of elements match, up to some permutation (meaning we don't care how they were reordered), we can use the same permutation argument that Plonk uses (except that Plonk fixes the permutation).

The check we want to perform is the following:

$$ \{ (a_i, v_i) \}_i = \{ (a'_i, v'_i) \}_i $$

But we can't check tuples like that, so let's get a random value $\alpha$ from the verifier and encode tuples as linear combinations:

$$ \{ a_i + \alpha \cdot v_i \}_i = \{ a'_i + \alpha \cdot v'_i \}_i $$

Now, let's observe that instead of checking that these two sets match, we can just check that two polynomials have the same roots (where the roots have been encoded to be the elements in our lists):

$$ \prod_i [X - (a_i + \alpha \cdot v_i)] = \prod_i [X - (a'_i + \alpha \cdot v'_i)] $$

Which is the same as checking that

$$ \frac{\prod_i [X - (a_i + \alpha \cdot v_i)]}{\prod_i [X - (a'_i + \alpha \cdot v'_i)]} = 1 $$

Finally, we observe that we can use Schwartz-Zippel to reduce this claim to evaluating the LHS at a random verifier point $z$. If the following is true at the random point $z$ then with high probability it is true in general:

$$ \frac{\prod_i [z - (a_i + \alpha \cdot v_i)]}{\prod_i [z - (a'_i + \alpha \cdot v'_i)]} = 1 $$

The next question to answer is, how do we check this thing in our STARK?

Creating a circuit for the multiset check

Recall that our AIR allows us to write a circuit using successive pairs of rows in the columns of our execution trace.

That is, while we can't access all the $a_i$ and $a'_i$ and $v_i$ and $v'_i$ in one shot, we can access them row by row.

So the idea is to write a circuit that produces the previous section's ratio row by row. To do that, we introduce a new column $p$ in our execution trace which will help us keep track of the ratio as we produce it.

$$ p_i = p_{i-1} \cdot \frac{z - (a_i + \alpha \cdot v_i)}{z - (a'_i + \alpha \cdot v'_i)} $$

This is how you compute that $p$ column of the execution trace as the prover.
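Here's a small Python sketch of the prover building that $p$ column (called `acc` below) on a made-up access list, over a toy prime field rather than the actual STARK field:

```python
import secrets

field = 2**61 - 1                        # toy prime field
L1 = [(2, 7), (1, 42), (2, 7), (1, 42), (3, 9)]     # unsorted accesses (address, value)
L2 = sorted(L1)                                     # the same accesses, sorted by address

z, alpha = secrets.randbelow(field), secrets.randbelow(field)

acc = 1                                  # the p column, built row by row
for (a, v), (a_s, v_s) in zip(L1, L2):
    num = (z - (a + alpha * v)) % field
    den = (z - (a_s + alpha * v_s)) % field
    acc = acc * num * pow(den, -1, field) % field

assert acc == 1                          # final boundary constraint: the two multisets match
```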

Note that on the verifier side, as we can't divide, we will have to create the circuit constraint by moving the denominator to the right-hand side:

$$ p(g \cdot x) \cdot [z - (a'(x) + \alpha \cdot v'(x))] = p(x) \cdot [z - (a(x) + \alpha \cdot v(x))] $$

There are two additional (boundary) constraints that the verifier needs to impose to ensure that the multiset check is coherent:

  • the initial value $p_0$ should be computed correctly ($p_0 = \frac{z - (a_0 + \alpha \cdot v_0)}{z - (a'_0 + \alpha \cdot v'_0)}$)
  • the final value $p_{-1}$ should contain $1$

Importantly, let me note that this new column $p$ of the execution trace cannot be created, encoded to a polynomial, committed, and sent to the verifier in the same round as the other columns of the execution trace. This is because it makes use of two verifier challenges $z$ and $\alpha$ which have to be revealed after the other columns of the execution trace have been sent to the verifier.

Note: a way to understand this article is that the prover is now building the execution trace interactively with the help of the verifier, and parts of the circuits (here a permutation circuit) will need to use these columns of the execution trace that are built at different stages of the proof.

Inserting the public memory in the memory

Now is time to address the second half of the problem we stated earlier:

Second, we haven't proven that the memory contains fixed values in specific addresses. For example, it should contain the program itself in the first $l$ cells.

To do this, the first $l$ accesses are replaced with accesses to $(0,0)$ in $L_1$. $L_2$, on the other hand, uses accesses to the first part of the memory and retrieves values from the public memory $m^*$ (e.g. $(1, m^*[0]), (2, m^*[1]), \cdots$).

This means two things:

  1. the numerator of $p$ will contain $z - (0 + \alpha \cdot 0) = z$ in the first $l$ iterations (so $z^l$). Furthermore, these will not be cancelled by any values in the denominator (as $L_2$ is supposedly using actual accesses to the public memory)
  2. the denominator of $p$ will contain $\prod_{i \in [[0, l]]} [z - (a'_i + \alpha \cdot m^*[i])]$, and these values won't be cancelled by values in the numerator either

As such, the final value of the accumulator should look like this if the prover followed our directions:

$$ \frac{z^l}{\prod_{i \in [[0, l]]} [z - (a'_i + \alpha \cdot m^*[i])]} $$

which we can enforce (as the verifier) with a boundary constraint.

Section 9.8 of the Cairo paper writes exactly that:

[Figure: section 9.8 of the Cairo paper]

How STARKs work if you don't care about FRI David Wong Tue, 07 Nov 2023 01:38:59 +0100 http://www.cryptologie.net/article/601/how-starks-work-if-you-dont-care-about-fri/ http://www.cryptologie.net/article/601/how-starks-work-if-you-dont-care-about-fri/#comments Here are some notes on how STARKs work, following my read of the ethSTARK Documentation (thanks Bobbin for the pointer!).

Warning: the following explanation should look surprisingly close to PlonK or SNARKs in general, to anyone familiar with these other schemes. If you know PlonK, maybe you can think of STARKs as turboplonk without preprocessing and without copy constraints/permutation. Just turboplonk with a single custom gate that updates the next row, also the commitment scheme makes everything complicated.

The execution trace table

Imagine a table with $W$ columns representing registers, which can be used as temporary values in our program/circuit. The table has $N$ rows, which represent the temporary values of each of these registers in each "step" of the program/circuit.

For example, a table of 3 registers and 3 steps:

|   | r0 | r1 | r2  |
|---|----|----|-----|
| 1 | 0  | 1  | 534 |
| 2 | 4  | 1  | 235 |
| 3 | 3  | 4  | 5   |

The constraints

There are two types of constraints which we want to enforce on this execution trace table to simulate our program:

  • boundary constraints: if I understand correctly this is for initializing the inputs of your program in the first rows of the table (e.g. the second register must be set to 1 initially) as well as the outputs (e.g. the registers in the last two rows must contain $3$, $4$, and $5$).
  • state transitions: these are constraints that apply to ALL contiguous pairs of rows (e.g. the first two registers added together in a row equal the value of the third register in the next row). The particularity of STARKs (and what makes them "scalable" and fast in practice) is that the same constraint is applied repeatedly. This is also why people like to use STARKs to implement zkVMs, as VMs do the same thing over and over.

This way of encoding a circuit as constraints is called AIR (for Algebraic Intermediate Representation).

Straw man 1: Doing things in the clear coz YOLO

Let's see an example of how a STARK could work as a naive interactive protocol between a prover and verifier:

  1. the prover constructs the execution trace table and sends it to the verifier
  2. the verifier checks the constraints on the execution trace table by themselves

This protocol works if we don't care about zero-knowledge, but it is obviously not very efficient: the prover sends a huge table to the verifier, and the verifier has to check that the table makes sense (vis-à-vis the constraints) by checking every row involved in the boundary constraints, and checking every contiguous pair of rows involved in the state transition constraints.

Straw man 2: Encoding things as polynomials for future profit

Let's try to improve on the previous protocol by using polynomials. This step will not immediately improve anything, but will set the stage for the step afterwards. Before we talk about the change to the protocol let's see two different observations:

First, let's note that one can encode a list of values as a polynomial by applying a low-degree extension (LDE). That is, if your list of values look like this: $(y_0, y_1, y_2, \cdots)$, then interpolate these values into a polynomial $f$ such that $f(0) = y_0, f(1) = y_1, f(2) = y_2, \cdots$

Usually, as we're acting in a field, a subgroup of large-enough size is chosen in place of $0, 1, 2$ as the domain. You can read why that is here. (This domain is called the "trace evaluation domain" by ethSTARK.)

Second, let's see how to represent a constraint like "the first two registers added together in a row equal the value of the third register in the next row" as a polynomial. If the three registers in our examples are encoded as the polynomials $f_1, f_2, f_3$ then we need a way to encode "the next row". If our domain is simply $(0, 1, 2, \cdots)$ then the next row for a polynomial $f_1(x)$ is simply $f_1(x + 1)$. Similarly, if we're using a subgroup generated by $g$ as domain, we can write the next row as $f_1(x \cdot g)$. So the previous example constraint can be written as the constraint polynomial $c_0(x) = f_1(x) + f_2(x) - f_3(x \cdot g)$.

If a constraint polynomial $c_0(x)$ is correctly satisfied by a given execution trace, then it should be zero on the entire domain (for state transition constraints) or on some values of the domain (for boundary constraints). This means we can write it as $c_0(x) = t(x) \cdot \prod_i (x-g^i)$ for some "quotient" polynomial $t$ and the evaluation points $g^i$ (that encode the rows) where the constraint should apply. (In other words, you can factor $c_0$ using its roots $g^i$.)
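Here's a small Python sketch (my own toy example, not from ethSTARK) of these two observations: columns are interpolated over a multiplicative subgroup, "next row" becomes evaluating at $x \cdot g$, and a satisfied constraint polynomial vanishes on the whole domain (and is therefore divisible by $\prod_i (x - g^i)$). The field, the subgroup, and the constraint $r_0[i] \cdot r_1[i] = r_2[i+1]$ are all made up:

```python
p = 97                                             # toy prime field
g = 22                                             # generator of an order-4 subgroup of F_97
domain = [pow(g, i, p) for i in range(4)]          # the trace evaluation domain

def lde_eval(values, x):
    """Evaluate at x the low-degree extension of `values` over `domain` (Lagrange)."""
    total = 0
    for i, (xi, yi) in enumerate(zip(domain, values)):
        num = den = 1
        for j, xj in enumerate(domain):
            if j != i:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

# three register columns; the transition constraint is r0[i] * r1[i] = r2[i+1] (cyclically)
r0 = [1, 2, 3, 4]
r1 = [5, 6, 7, 8]
r2 = [32, 5, 12, 21]

f0 = lambda x: lde_eval(r0, x)
f1 = lambda x: lde_eval(r1, x)
f2 = lambda x: lde_eval(r2, x)
c0 = lambda x: (f0(x) * f1(x) - f2(x * g % p)) % p   # "next row" = evaluate f2 at x * g

# satisfied constraint => zero on the whole domain (hence divisible by the vanishing
# polynomial prod_i (x - g^i) = x^4 - 1), but not the zero polynomial off the domain
assert all(c0(x) == 0 for x in domain)
assert any(c0(x) != 0 for x in range(2, 12))
```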

Note: for STARKs to be efficient, you shouldn't have too many roots. This is why boundary constraints should be limited to a few rows. But how does it work for state transition constraints that need to be applied to all the rows? The answer is that since we are in a subgroup there's a very efficient way to compute $\prod_i (x - g^i)$. You can read more about that in Efficient computation of the vanishing polynomial of the Mina book.

At this point, you should understand that a prover that wants to convince you that a constraint $c_1$ applies to an execution trace table can do so by showing you that $t$ exists. The prover can do so by sending the verifier the $t$ polynomial, and the verifier computes $c_1$ from the register polynomials and verifies that it is indeed equal to $t$ multiplied by $\prod_i (x-g^i)$. This is what is done both in Plonk and in STARKs.

Note: if the execution trace doesn't satisfy a constraint, then you won't be able to factor it with $\prod_i (x - g^i)$, as not all of the $g^i$ will be roots. For this reason you'll get something like $c_1(x) = t(x) \cdot \prod_i (x - g^i) + r(x)$ for some "remainder" polynomial $r$. TODO: at this point can we still get a $t$ but it will have a high degree? If not then why do we have to do a low-degree test later?

Now let's see our modification to the previous protocol:

  1. Instead of sending the execution trace table, the prover encodes each column of the execution trace table (of height $N$) as polynomials, and sends the polynomials to the verifier.
  2. The prover then creates the constraint polynomials $c_0, c_1, \cdots$ (as described above) for each constraint involved in the AIR.
  3. The prover then computes the associated quotient polynomials $t_0, t_1, \cdots$ (as described above) and sends them to the verifier. Note that the ethSTARK paper calls these quotient polynomials the constraint polynomials (sorry for the confusion).
  4. The verifier now has to check two things:
    • low-degree check: that these quotient polynomials are indeed low-degree. This is easy as we're doing everything in the clear for now (TODO: why do we need to check that though?)
    • correctness check: that these quotient polynomials were correctly constructed. For example, the verifier can check that for $t_0$ by computing $c_0$ themselves using the execution trace polynomials and then checking that it equals $t_0 \cdot (x - 1)$. That is, assuming that the first constraint $c_0$ only applies to the first row $g^0=1$.

Straw man 3: Let's make use of the polynomials with the Schwartz-Zippel optimization!

The verifier doesn't actually have to compute and compare large polynomials in the correctness check. Using the Schwartz-Zippel lemma one can check that two polynomials are equal by evaluating both of them at a random value and checking that the evaluations match. This is because Schwartz-Zippel tells us that two polynomials that are equal will be equal on all their evaluations, but if they differ they will differ on most of their evaluations.

So the previous protocol can be modified to:

  1. The prover sends the columns of the execution trace as polynomials $f_0, f_1, \cdots$ to the verifier.
  2. The prover produces the quotient polynomials $t_0, t_1, \cdots$ and sends them to the verifier.
  3. The verifier produces a random evaluation point $z$.
  4. The verifier checks that each quotient polynomial has been computed correctly. For example, for the first constraint, they evaluate $c_0$ at $z$, then evaluate $t_0(z) \cdot (z - 1)$, then check that both evaluations match.

Straw man 4: Using commitments to hide stuff and reduce proof size!

As the title indicates, we eventually want to use commitments in our scheme so that we can add zero-knowledge (by hiding the polynomials we're sending) and reduce the proof size (our commitments will be much smaller than what they commit).

The commitments used in STARKs are merkle trees, where the leaves contain evaluations of a polynomial. Unlike the commitments used in SNARKs (like IPA and KZG), merkle trees don't have an algebraic structure and thus are quite limited in what they allow us to do. Most of the complexity in STARKs come from the commitments. In this section we will not open that pandora box, and assume that the commitments we're using are normal polynomial commitment schemes which allow us to not only commit to polynomials, but also evaluate them and provide proofs that the evaluations are correct.

Now our protocol looks like this:

  1. The prover commits to the execution trace columns polynomials, then sends the commitments to the verifier.
  2. The prover commits to the quotient polynomials, then sends the commitments to the verifier.
  3. The verifier sends a random value $z$.
  4. The prover evaluates the execution trace column polynomials at $z$ and $z \cdot g$ (remember the verifier might want to evaluate a constraint that looks like this $c_0(x) = f_1(x) + f_2(x) - f_3(x \cdot g)$ as it also uses the next row) and sends the evaluations to the verifier.
  5. The prover evaluates the quotient polynomials at $z$ and sends the evaluations to the verifier (these evaluations are called "masks" in the paper).
  6. For each evaluation, the prover also sends evaluation proofs.
  7. The verifier verifies all evaluation proofs.
  8. The verifier then checks that each constraint is satisfied, by checking the $c = t \cdot \prod_i (x - g^i)$ equation in the clear (using the evaluations provided by the prover).

Straw man 5: a random linear combination to reduce all the checks to a single check

If you've been reading STARK papers you're probably wondering where the heck is the composition polynomial. That final polynomial is simply a way to aggregate a number of checks in order to optimize the protocol.

The idea is that instead of checking a property on a list of polynomials, you can check that property on a random linear combination of them. For example, instead of checking that $f_1(z) = 3$, $f_2(z) = 4$, and $f_3(z) = 8$, you can check that for random values $r_1, r_2, r_3$ you have:

$$r_1 \cdot f_1(z) + r_2 \cdot f_2(z) + r_3 \cdot f_3(z) = 3 r_1 + 4 r_2 + 8 r_3$$

Often we avoid generating multiple random values and instead use powers of a single random value, which is a tiny bit less secure but much more practical for a number of reasons I won't touch here. So things often look like this instead, with a random value $r$:

$$f_1(z) + r \cdot f_2(z) + r^2 \cdot f_3(z) = 3 + 4 r + 8 r^2$$

Now our protocol should look like this:

  1. The prover commits to the execution trace columns polynomials, then sends the commitments to the verifier.
  2. The verifier sends a random value $r$.
  3. The prover produces a random linear combination of the constraint polynomials.
  4. The prover produces the quotient polynomial for that random linear combination, which ethSTARK calls the composition polynomial.
  5. The prover commits to the composition polynomial, then sends it to the verifier.
  6. The protocol continues pretty much like the previous one.

Note: in the ethSTARK paper they note that the composition polynomial is likely of higher degree than the polynomials encoding the execution trace columns. (The degree of the composition polynomial is equal to the degree of the highest-degree constraint.) For this reason, there's some further optimization that split the composition polynomial into several polynomials, but we will avoid talking about it here.

We now have a protocol that looks somewhat clean, which seems contradictory with the level of complexity introduced by the various papers. Let's fix that in the next blogpost on FRI...

A journey into zero-knowledge proofs David Wong Tue, 03 Oct 2023 03:06:28 +0200 http://www.cryptologie.net/article/600/a-journey-into-zero-knowledge-proofs/ http://www.cryptologie.net/article/600/a-journey-into-zero-knowledge-proofs/#comments

My journey into zero-knowledge proofs (ZKPs) began in university, with me slouching in an amphitheater chair attending a cryptography course. "Zero-knowledge proofs", said the professor, "allow a prover to convince a verifier that they know something without revealing the something". Intriguing, I thought. At the end of the course I had learned that you could prove simple statements, like the knowledge of the discrete logarithm of a field element (e.g. that you know $x$ in $y = g^x \mod p$ for some $g$ and prime $p$) or of an isomorphism between two graphs. Things like that.

The protocols we learned about were often referred to as "sigma protocols" due to the Σ-like shape of the interaction between the prover and the verifier. The statements proven were quite limited and simple, and I remember thinking "this is interesting, but where is this used?"

It took me some time to realize that the cryptographic signature schemes used all over the place were actually zero-knowledge proofs. Non-interactive zero-knowledge proofs specifically. As in, no back-and-forth had to happen between a prover and a verifier. The prover could instead create the proof by themselves, publish it, and then anyone could act as a verifier and verify it. I wrote about this first discovery of mine in 2014 here.

Anyone familiar with these ZK proofs would tell you that the non-interactivity is usually achieved using a transformation called "Fiat-Shamir", where the messages of the verifier are replaced by hashing any messages sent previously. If the hash function acts random enough, it seems to work. Although, this technique can only be applied when the verifier messages are not structured, and are just random values. Sigma protocols that have verifiers who only send random values as part of the protocol are referred to as "public-coin".


In 2015 I moved to Chicago and joined NCC Group as the first intern in a new team called Crypto Services. It was a small team of cryptographers led by the fantastic Tom Ritter, and we mostly focused on auditing cryptographic applications. From TLS/SSL to secure messaging, we did it all. These were transformative years for me, as I got a crash course in applied cryptography, and got to work with truly amazing people (like the Thomas Pornin!)

At some point cryptocurrencies started booming and we switched to auditing cryptocurrencies almost exclusively. The second time I encountered zero-knowledge proofs was during an audit of ZCash, a fork of Bitcoin that used ZKPs to hide who sent how much to who. This was mind-blowing to me, as the statements proven were not "simple" anymore. What was proven was actual program logic.

This new kind of "general-purpose" zero-knowledge proofs seemed like a game changer to me. It took me some time to understand the interface behind these ZKPs, but let me try to give you the gist using sudokus. I believe that if you understand this simple example, you can then understand any type of application that makes use of ZKPs by abstracting the zero-knowledge proofs as black boxes.

In this scenario, Alice has a sudoku grid and Bob has a solution. Also, they both have access to some Python program on Github that they can use to verify that the solution solves Alice's grid.

Bob could send the solution to Alice, and she could then run that program, but Bob does not want to share his solution. So Bob runs the program himself and tells Alice "I ran it with your grid and my solution as inputs, and it output true". Is this enough to convince Alice?

Of course not! Why would Alice trust that Bob actually ran the software? For all she knows he could have done nothing, and just lied to her. This is what general-purpose zero-knowledge proofs solve: they allow Bob to create a proof of correct execution given some public inputs (Alice's grid) and some private inputs (Bob's solution).

More specifically, both of them can take the "verify solution" program and compile it to two asymmetric parts: a prover index and a verifier index (also called prover key and verifier key, although they're not really keys). Bob can then use a proving algorithm with the correct arguments to produce a proof, and Alice can use another algorithm (a verifier algorithm) to verify that proof. To do that she of course needs to use the verifier index (which is basically a short description of the program) as well as pass the sudoku grid as public input (otherwise who knows what sudoku grid Bob used to create the execution proof).
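
To make that interface concrete, here is a minimal Python sketch of the sudoku scenario. The checker itself is ordinary code; the compile/prove/verify calls at the bottom are hypothetical placeholders (not the API of any particular library) that only show the shape of the black box.

```python
# The program both Alice and Bob agree on: does `solution` solve `grid`?
def verify_solution(grid, solution):
    """grid/solution: 9x9 lists of ints, 0 marks an empty cell in the grid."""
    # the solution must agree with the grid's clues
    for r in range(9):
        for c in range(9):
            if grid[r][c] != 0 and grid[r][c] != solution[r][c]:
                return False
    # every row, column, and 3x3 box must contain 1..9 exactly once
    groups = [solution[r] for r in range(9)]
    groups += [[solution[r][c] for r in range(9)] for c in range(9)]
    groups += [[solution[3*i + dr][3*j + dc] for dr in range(3) for dc in range(3)]
               for i in range(3) for j in range(3)]
    return all(sorted(g) == list(range(1, 10)) for g in groups)

# With a general-purpose proof system (hypothetical API, for illustration only):
#   prover_index, verifier_index = compile(verify_solution)
#   proof = prove(prover_index, public=grid, private=solution)   # Bob
#   assert verify(verifier_index, public=grid, proof=proof)      # Alice
```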

Python programs are of course not really something that we can prove using math, and so you'll have to believe me when I say that we can translate all sorts of logic using just the multiplication and addition gates. Using just these, we form arithmetic circuits.

I'll give you some examples of constraints that come up all the time in circuits people write (a small runnable sketch checking them follows the list):

  • here's how you constrain some value $x$ to be a bit: $x ( x - 1 ) = 0$
  • here's how you unpack a value into bits: $x = x_0 + x_1 \cdot 2 + x_2 \cdot 4 + \cdots$
  • here's how you compute the XOR $r$ of two bits (this one is cute): $r = a + b - 2ab$
  • here's how you encode an if-else condition like r = if cond { a } else { b }: $r = \text{cond} \cdot a + (1 - \text{cond}) \cdot b$.
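
Here's the small sketch promised above: plain Python over a toy prime field, checking that these equations enforce what we want. Real circuits work over a large prime field tied to the curve; the modulus here is just for illustration.

```python
p = 2**31 - 1  # toy prime; real systems use the curve's scalar field

def is_bit(x):            # x(x - 1) = 0
    return x * (x - 1) % p == 0

def unpacks_to(x, bits):  # x = x0 + 2*x1 + 4*x2 + ...
    return x % p == sum(b * 2**i for i, b in enumerate(bits)) % p

def xor_bit(a, b):        # r = a + b - 2ab
    return (a + b - 2 * a * b) % p

def select(cond, a, b):   # r = cond*a + (1 - cond)*b
    return (cond * a + (1 - cond) * b) % p

assert is_bit(0) and is_bit(1) and not is_bit(2)
assert unpacks_to(13, [1, 0, 1, 1])                 # 13 = 1 + 4 + 8
assert xor_bit(1, 0) == 1 and xor_bit(1, 1) == 0
assert select(1, 42, 7) == 42 and select(0, 42, 7) == 7
```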

And so, you can imagine that what people end up proving are lists of such equations where the variables are "wired" between different equations (for example, an XOR would use variables that are also constrained to be bits).

To prove that a list of equations is correct given some values, you would then record all of the values used to fill the equations during execution (an execution trace if you will) and apply some math magic on it.

That's all I'll say about that but you can check my series of videos on plonk (a proof system) to learn more about that part.


But let's go back to the project I was looking at at the time. Zcash used a proof system called Groth16. Groth16 was, as the name indicates, created in 2016 by a Mr. Groth. 2016, in the world of zero-knowledge proofs, sounds like centuries ago. Things go really fast and new schemes and breakthroughs are published every year. Yet, one can say that Groth16 is still competitive today as it remains the proving scheme with some of the smallest proofs (hundreds of bytes). In addition it is still used all over the place. Most ZK apps that I've seen being built, or that have been built, on top of Ethereum are using Groth16.

The reason might be that some of the tooling available has dominated the space: Circom and snarkjs. Circom allows you to write circuits in a format Groth16 can understand, and snarkjs allows you to prove and verify these on your laptop, as well as verify proofs on Ethereum (via a verifier implemented in Solidity).

In the following screenshot you can see what Circom code looks like on the left, and how the code is converted down to equations (called constraints) on the right. This is obtained via circomscribe. Take some time to understand what the code does, and how the compiled constraints end up enforcing the same logic.

You might have noticed that none of the equations/constraints have a degree higher than 2. This is a limitation of the arithmetization that Groth16 supports, which is called R1CS. The term arithmetization here refers to the format that your constraints or equations have to be in for them to be provable by the proof system. This is important, as not all proof systems enforce the same "encoding" for your circuit: some allow more complex equations to be packed together, while others force you to break down your equations into smaller ones (which increases the number of equations, and thus the cost).

Briefly, the way R1CS works is that each equation/constraint can multiply two linear combinations of your witness values (the values in the execution trace) and enforce that the result is equal to a third linear combination. If that doesn't make sense, think of it this way: what you do in R1CS is encode your circuit as sets of coefficients $a_i$, $b_i$, and $c_i$ such that if your execution trace is the vector of values $\vec{w} = (w_0, w_1, \cdots)$ then each of your constraints will look like this:

$$ (a_0 w_0 + a_1 w_1 + \cdots)(b_0 w_0 + b_1 w_1 + \cdots) = c_0 w_0 + c_1 w_1 + \cdots $$

Imagine that we wanted to constrain that $w_1$ is a bit (it's equal to 0 or 1), then we'd try to hardcode a new set of values $a_i, b_i, c_i$ such that we get $w_1 (w_1 - 1) = 0$.

We could set $a_i = b_i = c_i = 0$ except for: $a_1 = 1$ and $b_1 = 1$ which would get us to $(1 \cdot w_1)(1 \cdot w_1) = 0$ which is close but not exactly what we want. How do we get the $-1$ in there?

The solution to encode constants in your circuit is to require that one of the witness values is fixed to a known value. This is done via the proof system (and not by R1CS itself), and usually we set $w_0 = 1$. This way by setting $b_0 = -1$ we get what we want. Right?

Note: additionally, we usually require the next witness values $w_1, w_2, \cdots$ to contain the public input values, and everything after that is private variables used to store the prover's execution trace.
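
To make this concrete, here's a small runnable sketch (toy modulus, plain Python) of a single R1CS constraint, encoding the bit check above with the $w_0 = 1$ convention:

```python
p = 2**31 - 1  # toy prime standing in for the real scalar field

def dot(coeffs, w):
    return sum(x * y for x, y in zip(coeffs, w)) % p

def r1cs_row_holds(a, b, c, w):
    # one constraint: (a . w) * (b . w) = (c . w)
    return dot(a, w) * dot(b, w) % p == dot(c, w)

# witness vector w = (w0, w1) with w0 hardcoded to 1
a = [0, 1]      # left factor selects w1
b = [p - 1, 1]  # right factor selects w1 - 1 (the -1 constant rides on w0 = 1)
c = [0, 0]      # the product must equal 0

assert r1cs_row_holds(a, b, c, [1, 0])      # w1 = 0 is a bit
assert r1cs_row_holds(a, b, c, [1, 1])      # w1 = 1 is a bit
assert not r1cs_row_holds(a, b, c, [1, 2])  # w1 = 2 is not
```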

Now, let me say one last thing about Groth16 before moving on. The scheme sort of sucks massively in other ways, because it requires a trusted setup. I've explained what trusted setups are here, but briefly: Alice, Bob, or whoever wants to use Groth16 has to generate a set of parameters first. This generation of parameters is unfortunately dangerous, as the person who performs it ends up in possession of some values that let them forge proofs. So-called toxic waste.

We're not crazy, so the way we deal with this is to have someone we trust run the generation of parameters and then destroy the toxic waste they produced in the process. This explains the "trusted" part in "trusted setup". Relying on the goodness of a single person is usually not enough to convince people, and so the trusted setup is normally run by many people in a multi-party computation kind of protocol (called a ceremony). This ensures that as long as one participant is honest, the whole thing is secure.

This doesn't always work out. In 2018, Ariel, an employee of Zcash, found a bad bug. Turns out Zcash's ceremony had been run incorrectly and people could have minted money out of thin air. The fix (and a subsequent new ceremony) took a year to land.

What doubly sucks with Groth16 is that you need such a ceremony per circuit. Every change you want to make to your circuit, or every new circuit you want to prove, will require you to go through that painful ceremony again. On top of that, I would say that the tooling to run such ceremonies is still in quite an experimental state.


In 2018, while still working at NCC Group, Ava brought me to the announcement party of the Coda protocol blockchain. There I learned that this project used zero-knowledge proofs in a novel way: they recursed them! Using proofs to prove other proofs, ad-infinitum.

After the meetup I went on their webpage that at the time hosted a demo that looked like that:

Zero-knowledge proofs managed to blow my mind once more. Unfortunately the demo doesn't exist anymore, but if you could experience it today you would see this: a webpage taking seconds to synchronize to a testnet's latest state (with some cool animations), and then proceeding to verify each new block being added on top of the latest one, live.

At the time this was something I had never seen. Bitcoin or Ethereum were cryptocurrencies that got longer and longer with time. And if you wanted to synchronize your client to their network, you'd have to wait for it to verify the chain block by block, which could take days or even weeks (and perhaps months today). Coda's demo would do that in mere seconds.

ZKPs gave "trust but verify" a whole new meaning.

Each new block would carry a proof, which would prove the correct validation of the entire block and its transactions, and the correct application of the transactions to the previous state resulting in the latest state. On top of that the proof contained a proof that the previous proof was correct. And the previous proof contained a proof that the previous block was correct, along with the previous previous proof. And on and on. You get the idea.

Back then I couldn't believe what I had seen. It seemed too magical, too good to be true. Later I learned how recursive proofs worked and some of the magic dissipated. Intuitively, you can think of it like this: if these zero-knowledge proofs allow you to prove any sort of programs, then why not prove a verifier? The proof verifier is just some software, like any other software, and so its execution can be proved as well!

This novel idea brought some new use cases with it: you could now prove long-running computations. Year-long computations if you wanted. It also brought some novel problems with it that are worth mentioning: recursion is expensive. The reason is that in general, while verifying is cheap (in the order of milliseconds), proving is expensive (in the order of seconds). And so, if recursing meant creating a proof at each step, it meant paying some heavy cost at each step.

In 2019, Sean from Zcash ended up coming up with some techniques called Halo to defer some of that cost to the very end. More recently, a new technique called folding was independently proposed by Srinath (via his scheme Nova) and Benedikt and Binyi (via their scheme Protostar) which allows recursion to completely bypass the expensive computation of a proof at every step. Folding allows us to directly recurse on the execution trace. I call it pre-proof recursion. Of course, folding comes with its different set of challenges and limitations but this is for another article.


In 2018 I decided to move from the service industry to work on a product: Libra (later renamed Diem). I served as the security lead there for two years, during which I ended up writing a book: Real-World Cryptography. This was the culmination of two years of work, and you can read more about why I decided to write yet another cryptography book here. The book contains a chapter on zero-knowledge proofs (and MPC and FHE), as well as a chapter on cryptocurrencies, which I believe makes it the first (and perhaps still the only) cryptography book with a chapter on cryptocurrencies. Believe it or not.

If you're enjoying my writing thus far, and you don't have a copy yet, you should buy the book and leave me a good review on Amazon. Obviously!

Not much happened ZK-wise in my life during my time at Facebook. The only thing I can remember was learning about Celo's Plumo protocol and pushing for similar ideas internally. The idea behind Plumo was similar to Coda protocol, but instead of using recursive zero-knowledge proofs to prove the whole blockchain, it relied on the consensus protocol of the Celo blockchain to do most of the work. All that was left to the zero-knowledge proof was to verify a number of signatures. Sometimes a number of signatures so large that Plumo used a very big proof, as well as beefy computers in the cloud, to compute it.

it is possible to create proofs that span half a year worth of epochs for 100 validators in about 2 hours. We evaluated the performance of our verifier on a Motorola Moto G (2nd Gen), a 2014 mobile phone with 1GB RAM and a Quad-core 1.2 GHz Cortex-A7 processor. We used an unoptimized implementation, directly cross-compiled from [GS19]. The results show it is possible to verify such a proof in about 6 seconds.

I'm not sure why the proof takes so long to verify, but the proving time says a lot about the performance of zero-knowledge proofs in 2020. Since then, a LOT has happened (including folding) and it is probable that the proving time is much more reasonable now.


Unlike AI, which has opened the way to so many applications, it is always hard for me to think of interesting applications for zero-knowledge proofs. My current mental model is the following: the privacy features of ZKPs (provided by the "ZK") seem to mostly be useful when they impact end-users directly. Privacy has been very useful in cryptocurrencies, where everything is public by design, and the word public mixed with money leads to obvious issues. But outside of cryptocurrencies, I have a hard time seeing a large adoption of ZK thanks to its privacy features.

People don't care about privacy. Users want something that's fast and that has a good user experience. If they have to choose between two applications where one is a better experience, but the other provides privacy, they will choose the one with a better experience over the privacy-enhanced one. I've seen this happen again and again (encrypted emails, encrypted messages, cryptographic algorithms, etc.)

People also generally don't care about the "verifiable computation" aspect. Every day we agree to trust many establishments and other human beings. This is how society works, and nothing would work without this kind of network of trust. As such, the high degree of assurance that ZKPs provide is often unnecessary, and when more trust is needed, signatures are often enough.

Between machines though? When no users are involved, the compression and verifiable computation aspects of zero-knowledge proofs are totally a game changer, and I believe that these are going to be the two aspects of ZKPs that are going to impact the non-blockchain world.

I said that signing is often enough, and ZKPs are overkill in a number of scenarios, but verifying signatures in a ZKP is something that actually leads to interesting and useful applications. Both Coda protocol and Plumo did exactly this (verifying consensus and transaction signatures for nodes/light clients). More recently, Sui came out with zkLogin, which allows you to prove that a platform (e.g. Google) signed over your username/mail without leaking that information, and Aayush and Sampriti came up with zkEmail, which allows you to verify that your email provider signed over the fact that you received some email, without leaking the content or your email address.


At some point in 2021, the writing was on the wall that Libra was not going to launch. I was still thinking about ZK a lot, and also about the lack of progress I had made in this field. I decided to join the project that had blown my mind a few years prior: Coda Protocol. Their name was now Mina protocol (apparently they had been sued into changing their name). I ended up joining and working there for two years on their proof system as a cryptography architect.

The proof system was called Kimchi, which was a variant of PlonK. PlonK had been released in 2019, and had been a game changer in the field, introducing an efficient universal scheme which got rid of Groth16's ugly per-circuit trusted setup. Now a single trusted setup was enough to last for a lifetime of circuits (as long as the circuits didn't reach some upper bound size).

PlonK also brought a different arithmetization, moving us from R1CS to something more flexible (that people often refer to as the "plonkish arithmetization"). No more quadratic-only constraints: now you could come up with higher-degree equations for your constraints, as long as they were limited to act on a small part of the execution trace (if that doesn't make sense don't worry). The R1CS tooling was lost though, and PlonK implementations were often incompatible with one another.

Around the same time, Marlin, a universal scheme for R1CS, was also released. PlonK won the popularity contest though, and a long line of papers started forming around the construction: turboplonk, plookup & ultraplonk, fflonk, shlonk, hyperplonk, honk, etc.


I wouldn't do the space justice without mentioning transparent schemes. Transparent schemes are proof systems that completely bypass the trusted setup: you can generate the parameters to use them without having to do a ceremony! (In other words, Alice, Bob, and anyone can generate them by themselves.)

So-called STARKs (where the T is for transparent) formed a bubble at the same time as the PlonK and Groth16 bubble. These schemes had much faster provers at the cost of larger proofs. If Groth16 had proofs of hundreds of bytes, STARK proofs looked more like hundreds of kilobytes. In addition, due to their use of hash functions, they boasted resistance against hypothetical quantum computers. More seriously, they have been criticized for using lower security parameters compared to other schemes.

Another notable line of work on transparent ZKPs, based on the discrete logarithm problem, is inner product arguments (IPAs). Most people know the Bulletproofs scheme for its use in the Monero cryptocurrency. Unlike other schemes, IPAs have slower verifiers (although still in the order of milliseconds), which can be an issue in practice, but unlike STARKs they can still claim to have small proofs.

Interestingly, people have mixed PlonK with the two previously discussed schemes. Plonky mixes PlonK with STARKs, and Zcash's Halo 2 mixes PlonK with IPAs (which is what we used at Mina).


During my time at Mina, most of the company was focused on shipping zero-knowledge smart contracts (or zkapps). The idea was pretty simple: smart contracts in Ethereum (and similar cryptocurrencies) were inefficient, in part due to everyone having to re-execute transactions over and over.

This was the perfect problem for zero-knowledge proofs to solve, as one could just execute a smart contract once and then provide an execution proof for others to verify, avoiding the cost of recomputing the same thing over and over.

One issue when smart contracts are executed locally, on people's laptops, is that they might prove conflicting state transitions. Imagine that someone's transaction takes a smart contract from state A to state B, and someone else's transaction takes the same smart contract from state A to state C, then whoever's transaction gets in first will invalidate the other one. The two transactions are racing.

The problem is that each execution proof has the precondition that the state must be A for it to be valid. In practice this means that the network verifies the proof using the state as public input, and so if the state is no longer "A" then the proof won't verify.

Preconditions can be more relaxed than "this public input MUST be equal to this specific value for the proof to be valid". You can, for example, enforce that a public input has to be higher than some value, or something like that. But this relaxation also limits the number of applications one can build.

To solve this problem of resource contention, Mina separates the logic of smart contracts into two parts: a private part that doesn't lead to conflicts (and is thus highly parallelizable), and an optional final intent. Final intents are ordered by the network (consensus specifically) and queued up for some time. Other parts of the smart contract can then be used to process that queue. That "post-ordering" execution is lagging behind, and who performs it is a choice of the application.

Aleo, another cryptocurrency similar to Mina, has a similar approach, except that their final intent is actual logic that is executed by the network. The network does that right after processing the transaction (and verifying its ZKP). While both solutions have the network order transactions, Aleo decides to provide a smoother developer experience (at a cost to the network) by taking on the burden of executing the final intent in the same fashion Ethereum does.

Note: another worthy contender is Starknet, which bypasses this issue by not allowing users to create the proofs at all, and by letting the network sequentially create proofs after some ordering/sequencing of the transactions. The whole purpose of Starknet is to create proofs of execution that Ethereum can verify (making Starknet a "zkrollup", the best kind of "layer 2").

Another issue to solve, if one wants to emulate Ethereum-style smart contracts, is to support contracts calling other contracts. Both projects do this by having each nested call be its own proof, and by having the verifier (the network) glue the calls (but really the proofs) together.

More specifically, one can imagine that at the moment of calling another smart contract's function, a circuit can simply advertise (we also say "witness publicly") what arguments it is using to call the function with, and then advertise the resulting value of the call. It is up to the verifier to verify that the call was correctly encoded by plugging the input and output values as public inputs in the verification of both the caller and callee circuits.
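
As a toy illustration (not the public-input layout of any particular network), the "gluing" boils down to the verifier checking that the caller's advertised call matches the callee's advertised inputs and output, in addition to verifying both proofs:

```python
# verify_proof() is a stand-in for the real proof verification of each circuit.
def verify_proof(verifier_index, public_inputs, proof):
    return True  # placeholder

def verify_call(caller, callee):
    # each side advertises the call's arguments and result as public values
    glue_ok = (caller["public"]["call_args"] == callee["public"]["args"]
               and caller["public"]["call_result"] == callee["public"]["result"])
    return (glue_ok
            and verify_proof(caller["vk"], caller["public"], caller["proof"])
            and verify_proof(callee["vk"], callee["public"], callee["proof"]))

caller = {"vk": ..., "proof": ..., "public": {"call_args": [3, 4], "call_result": 7}}
callee = {"vk": ..., "proof": ..., "public": {"args": [3, 4], "result": 7}}
assert verify_call(caller, callee)
```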


There's something I've sort of shied away from talking about so far: writing circuits. If proof systems can be thought of as backends, then circuit writing is the frontend.

Mina's solution is to provide a library in Typescript (called o1js). That's the "embedded-DSL (Domain Specific Language)" approach. With such a library people can keep using their favorite language, at the cost of writing code that might not produce constraints (and thus produce vulnerabilities). Here's what a Mina zkapp looks like written with o1js:

Several projects have followed this approach. For example, there's arkworks-r1cs for writing R1CS circuits in Rust, and there's gnark for writing circuits in Golang.

Another approach is to invent a new programming language. That's the approach taken by Circom which I've mentioned before. More recently Noir (by Aztec Network) and Leo (by Aleo) were introduced as Rust-like languages. Interestingly Aleo also has a VM that can be compiled into a circuit (so a more low-level-looking language in some way).

Finally, there's the approach of letting developers use their favorite language without having to write code in a specific "circuit way". By that I mean that we can compile the Rust/Golang/C/etc. code they write into a circuit directly. This is the approach of zkLLVM which supports any language that uses the LLVM toolchain/compiler, and translates the LLVM intermediate representation (IR) directly into a circuit.

But wait, I lied, I'm actually not done. There's another completely different proposition: zkVMs. If the previous approach was about "encoding your logic into a circuit", the zkVM approach is about "encoding a Virtual Machine into a circuit".

A VM is essentially just a loop that, at every iteration, loads the next instruction and executes it. This is logic that can itself be converted into a circuit, which then loads another program and runs it to completion. This is the idea behind the concept of a zkVM. It is a very attractive idea because it removes a whole class of bugs that only happen when you're writing circuits directly (as long as the VM circuit itself doesn't have bugs, of course). It also makes loops, branching, etc. much more efficient.
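
To make the idea concrete, here is a tiny interpreter loop in Python with a made-up three-instruction ISA (not the instruction set of any real zkVM). A zkVM encodes exactly this kind of fetch/decode/execute loop as a circuit:

```python
def run(program, regs):
    pc = 0
    while pc < len(program):
        op, dst, x, y = program[pc]          # fetch and decode
        if op == "add":
            regs[dst] = regs[x] + regs[y]
        elif op == "mul":
            regs[dst] = regs[x] * regs[y]
        elif op == "jmp_if_zero":            # branching: jump to y if reg x is 0
            pc = y if regs[x] == 0 else pc + 1
            continue
        pc += 1
    return regs

# r2 = r0 * r0 + r1
program = [("mul", 2, 0, 0), ("add", 2, 2, 1)]
print(run(program, {0: 3, 1: 4, 2: 0}))  # {0: 3, 1: 4, 2: 13}
```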

In zkVM-land, there are two schools of thought: targeting existing VMs, and inventing your own VM.

For example, Risc0 and Jolt both target the RISC-V instruction set. There have also been some zkWASM projects (like this one). In crypto-land, there are dozens of zkEVMs which all attempt to target the Ethereum VM.

Why would anyone invent new VMs? The idea is that the other VMs I've mentioned are not optimized for being encoded as circuits. And so there's been a line of work to invent optimal zkVMs. For example, there was Cairo (which has an amazing whitepaper) and Miden (which has amazing doc), and more recently Valida (productionized by Lita).


I recently quit my job to launch zkSecurity with two of my ex-coworkers Gregor and Brandon. The idea being that zero-knowledge proofs are going to change the world, and I'll be the first to move into the security for ZK space (hence the great choice of name).

We've been booked non-stop, and have released some interesting blogposts, interesting tools, as well as some interesting public reports. Check it out!

We're also helping organize ZPrize, the largest competition in ZK. The idea of ZPrize is to accelerate a technology that today is practical only for relatively simple applications, but that could serve a much larger number of applications if it gained some efficiency boost.

ZKPs are not FHEs (in that they're practical today), but they still have a number of speed issues to overcome. Mostly, not all operations are practical to encode in circuits. Circuits are encoded over a field (think numbers modulo a large prime number), and don't play nice with anything that requires a lot of bitwise operations (impacting basically all of symmetric cryptography) or bigint arithmetic (impacting basically all of asymmetric cryptography). This means that in general, simple things like doing XORs and checking if values are within a certain range easily blow up the size of our circuits. This leads to provers being slow and using a lot of memory.
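
As a small illustration of the blow-up, here's what a naive range check costs when done with bit decomposition, written in plain Python with booleans standing in for field constraints:

```python
def range_check_constraints(x, n_bits=12):
    """Constraints enforcing 0 <= x < 2^n_bits: n_bits bit checks + 1 recomposition."""
    bits = [(x >> i) & 1 for i in range(n_bits)]            # witnessed by the prover
    constraints = [b * (b - 1) == 0 for b in bits]          # each bit is 0 or 1
    constraints.append(x == sum(b << i for i, b in enumerate(bits)))
    return all(constraints)                                 # 13 constraints for one check

assert range_check_constraints(2023)
assert not range_check_constraints(5000)   # 5000 >= 2^12: recomposition fails
```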

There's been three ways to tackle these issues:

  1. screw it, we'll run things on beefy machines in the cloud
  2. let's figure out how to accelerate these operations in order to reduce the circuit size
  3. screw it, let's break up a circuit into multiple smaller circuits

The first solution works somewhat well for machine-to-machine applications that mostly provide compression of computation. For privacy applications targeting end-users? It's often trickier, as users don't want to share their private inputs with some untrusted machine in the cloud. For this, there have been some interesting lines of work that decentralize the prover using multi-party computation protocols, so that delegating a proof doesn't mean losing privacy. The latest work that I know of is zkSAAS from Sanjam.

Still, finding better proof systems and accelerating awkward operations has been one of the most successful ways to solve the efficiency problem. The major breakthrough was to use lookups. For example, integrating an XOR table into your proof system so that you can quickly get the XOR of 4-bit variables, or integrating a table with all the numbers from 0 to $2^{12}$ to let you quickly check if a number is within a certain range. Ariel was the first to introduce a lookup scheme (called plookup) in 2020, which was followed by a stream of improvements, up to the more recent Lasso, which basically claims that lookups are so efficient that we can now implement a zkVM using lookups only (isn't that crazy?)
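
Here's a toy sketch of the lookup idea with a 4-bit XOR table in plain Python. Real lookup arguments like plookup prove table membership inside the proof system, but the accounting is the same: one table lookup replaces a pile of bit constraints.

```python
# precomputed table of all valid (a, b, a XOR b) triples for 4-bit values
XOR4_TABLE = {(a, b, a ^ b) for a in range(16) for b in range(16)}

def xor4_constraint_holds(a, b, c):
    # the constraint "c = a XOR b" becomes "the triple (a, b, c) is in the table"
    return (a, b, c) in XOR4_TABLE

assert xor4_constraint_holds(0b1010, 0b0110, 0b1100)
assert not xor4_constraint_holds(3, 5, 7)   # 3 XOR 5 = 6, not 7
```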

Finally, breaking a circuit into smaller circuits is something I've already almost mentioned (but you might have missed that): using recursion and/or folding one can break a circuit into smaller circuits! Nova Scotia does exactly that by upgrading Circom to use the Nova folding scheme.


There's so much more I could say, but I've now reached 5,000 words and it's getting late. I hope you appreciated the wall of text, and if you have your own ZK journey I encourage you to write about it as well :)

]]>
I talked about ZK security on the first episode of Node Guardians season 2! David Wong Fri, 01 Sep 2023 02:17:53 +0200 http://www.cryptologie.net/article/598/i-talked-about-zk-security-on-the-first-episode-of-node-guardians-season-2/ http://www.cryptologie.net/article/598/i-talked-about-zk-security-on-the-first-episode-of-node-guardians-season-2/#comments I was invited by Sam to talk about diverse things, including ZK security. Check the episode here:

]]>
Mum, I was on the zkpodcast! David Wong Wed, 30 Aug 2023 19:07:47 +0200 http://www.cryptologie.net/article/597/mum-i-was-on-the-zkpodcast/ http://www.cryptologie.net/article/597/mum-i-was-on-the-zkpodcast/#comments That's it, I made it, I can finally retire now.

https://zeroknowledge.fm/290-2/

This week, Anna and Guillermo chat with David Wong, author of the Real-World Cryptography book, and a cofounder of zksecurity.xyz – an auditing firm focused on Zero Knowledge technology.

They chat about what first got him interested in cryptography, his early work as a security consultant, his work on the Facebook crypto project and the Mina project, zksecurity.xyz, auditing techniques and their efficacy in a ZK context, what common bugs are found in ZK code, and much more.

]]>
First zksecurity public report is out! David Wong Tue, 22 Aug 2023 20:52:49 +0200 http://www.cryptologie.net/article/596/first-zksecurity-public-report-is-out/ http://www.cryptologie.net/article/596/first-zksecurity-public-report-is-out/#comments My first public report (since I left NCC Group) is out. It was work I did for zksecurity, auditing the Penumbra circuits. You can read it here: https://penumbra.zone/blog/2023-audits

It should be quite interesting to read as we found double spending and double voting issues. It's also nice because there are two reports in the post, one from us (zksecurity) and one from my previous team over at NCC Group =)

]]>
The zero-knowledge attack of the year might just have happened, or how Nova got broken David Wong Thu, 13 Jul 2023 23:55:46 +0200 http://www.cryptologie.net/article/595/the-zero-knowledge-attack-of-the-year-might-just-have-happened-or-how-nova-got-broken/ http://www.cryptologie.net/article/595/the-zero-knowledge-attack-of-the-year-might-just-have-happened-or-how-nova-got-broken/#comments attack of the year

I wrote a thing that got quite the traction on the internet, which is merely a summary of something awesome that someone else found.

You can read it here: https://www.zksecurity.xyz/blog/posts/nova-attack/

the first paragraph to give you an idea:

Last week, a strange paper (by Wilson Nguyen et al.) came out: Revisiting the Nova Proof System on a Cycle of Curves. Its benign title might have escaped the attention of many, but within its pages lay one of the most impressive and devastating attacks on a zero-knowledge proof (ZKP) system that we’ve ever seen. As the purpose of a ZKP system is to create a cryptographic proof certifying the result of a computation, the paper demonstrated a false computation result accompanied by a valid proof.

]]>
What's happening in the round 5 of PlonK? David Wong Mon, 05 Jun 2023 23:09:39 +0200 http://www.cryptologie.net/article/594/whats-happening-in-the-round-5-of-plonk/ http://www.cryptologie.net/article/594/whats-happening-in-the-round-5-of-plonk/#comments Someone was asking the following question about PlonK:

question about 5th round of plonk

If you also looked at PlonK and were wondering the same, here's a short answer. If you do not care about PlonK, feel free to ignore this post. If you care about PlonK but are starting from the very beginning, then just check my series of videos on PlonK.


First, there are different things going on in this picture. The best way to understand what's going on is to understand them individually.

fiat-shamir

The first step is a random challenge from the verifier in the interactive version of the protocol. Since we're in the non-interactive version, we've replaced the messages of the verifier by calls to a random oracle. This technique to convert an interactive protocol into a non-interactive one is very famous and used all over the place, it's called Fiat-Shamir.

Because we rely on a random oracle, security proofs have to state that they hold in the random oracle model, and some people don't like that too much, because in the real world you end up instantiating these random oracles with hash functions (which are non-ideal constructions). So some people like protocols better when they don't rely on random oracles. In practice, if your hash function is thought to behave like a random oracle (e.g. SHA-3), then you're all good.

Fiat-Shamir'ing an interactive protocol only works if the protocol is a public-coin protocol, which PlonK is. Public-coin here means that the messages of the verifier are random values (coin tosses) that are public (outsiders can look at them and this won't affect the security of the protocol).

Another interesting point in that picture is that we're hashing the whole transcript, which is something you need to do in order to avoid a large class of ambiguity attacks. Protocols often forget to specify this correctly and a number of attacks have been found on PlonKish protocols due to that. See Weak Fiat-Shamir Attacks on Modern Proof Systems for more detail.
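
Here's a minimal sketch of transcript-based Fiat-Shamir in Python: every challenge is derived by hashing everything absorbed so far. The labels, encodings, and toy modulus are made up for illustration; real implementations reduce the digest into the scalar field and fix a precise encoding.

```python
import hashlib

class Transcript:
    def __init__(self):
        self.data = b""

    def absorb(self, label: str, value: bytes):
        # everything the prover sends (and the public inputs) gets absorbed
        self.data += label.encode() + value

    def challenge(self, label: str) -> int:
        # each challenge hashes the *entire* transcript so far
        digest = hashlib.sha3_256(self.data + label.encode()).digest()
        return int.from_bytes(digest, "big") % (2**61 - 1)  # toy modulus

t = Transcript()
t.absorb("public-inputs", b"<encoded public inputs>")
t.absorb("wire-commitments", b"<encoded commitments>")
zeta = t.challenge("zeta")   # evaluation point, bound to the whole transcript
```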

linearization

The second step is to compute the linearization of the composition polynomial. The composition polynomial is a term I stole from STARKs, but I like it. It's THE polynomial, the one that combines all the checks. In PlonK this means the polynomial that combines the checks for all the gates of the circuit and the wiring (permutation).

I'm not going to explain too much about the linearization because I already have a post on it here. But to recap, linearizing is when you evaluate parts of your polynomial. So anything that's evaluated at $z$ is linearized. And anything that has a bar above it (e.g. $\bar{a}$) is linearized as well. Note that the prover could evaluate everything if they wanted to, which would let the verifier compute the entire check "in the clear". But doing that means that the proof is larger, and there are more evaluation proofs to aggregate. It's a tradeoff that might pay off if you also want to implement recursive zero-knowledge proofs (which require a verifier implemented in a circuit).

We're looking at the prover side in this picture. While the verifier does symmetrical things to the prover, the prover's job here is to form the composition polynomial and to prove that it evaluates to 0 on a number of points (or that it "vanishes on some domain"). So to do that, it has to prove that it is equal to the vanishing polynomial $Z_H$ times some quotient $t(X)$. If you don't understand that, you can read this other post I have on the subject.

The final piece of the puzzle to understand that equation is that we can simplify the $f(X) = Z_H(X) \cdot t(X)$ check using Maller's optimization, which I talk about here. This is why we subtract $Z_H \cdot t$ from our composition polynomial, and this is also why the vanishing polynomial is linearized: it gets evaluated at $z$, giving the value $Z_H(z)$.

kzg

Once we have formed the polynomial which checks that the composition polynomial is equal to the vanishing polynomial times some quotient ($f = Z_H \cdot t$), we have to evaluate it at a random point. We already know the random point $z$, which we've already used to evaluate some parts of the polynomial (during the linearization). The equation you see in the picture is how you compute a KZG evaluation proof. If you don't know how that works, you can check my article on KZG.

Note also that there are many evaluation proofs that are aggregated together using a random linear combination. This is a common technique to aggregate multiple KZG evaluation proofs (and the verifier will have to compute the same random linear combination on the other side to verify the aggregation). In order to be more efficient (at the cost of a tiny amount of security loss), we use the first 6 powers of $v$ instead of using 6 independent random values.
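
To see why powers of a single challenge are enough, here's a toy sketch of the random-linear-combination trick, with plain integers standing in for the actual evaluation claims and commitments:

```python
import random

p = 2**61 - 1  # toy modulus

def aggregate(claims, v):
    # claims: list of (actual_evaluation, claimed_value) pairs; all differences
    # are combined with powers of the single challenge v
    return sum(pow(v, i, p) * (e - y) for i, (e, y) in enumerate(claims)) % p

v = random.randrange(1, p)
assert aggregate([(5, 5), (7, 7), (11, 11)], v) == 0
assert aggregate([(5, 5), (7, 8), (11, 11)], v) != 0  # one wrong claim makes it nonzero (w.h.p. over v)
```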

one more evaluation

In the polynomial above, within the composition polynomial you might have noticed the value $\bar{z_{\omega}}$. It is the permutation polynomial $Z$ evaluated at $z \omega$. The only explanation I have of why you need that is in my video on the permutation of PlonK. Since the evaluation point ($z \omega$) is different from the first evaluation proof, we need a separate evaluation proof for that one (unfortunately).

The output is the pair of evaluation proofs. That's it!

]]>
zksecurity.xyz David Wong Tue, 30 May 2023 19:39:44 +0200 http://www.cryptologie.net/article/593/zksecurityxyz/ http://www.cryptologie.net/article/593/zksecurityxyz/#comments Today, along with my two other cofounders Gregor Mitscha-Baude and Brandon Kase we are launching www.zksecurity.xyz an auditing platform for zero-knowledge applications.

Smart contracts have been at the source of billions of dollars of loss (see our previous project https://dasp.co). Nobody is sheltered from bugs. ZK smart contracts will have bugs, some devastating. Let's be proactive when it comes to zkApps!

In the coming month we'll be posting more about the kind of bugs that we have found in the space, from ZKP systems' bugs to frontend compiler bugs to application bugs. If you're looking for real experts to audit your ZK stack, you now have the ones behind zkSecurity.

We're a mix of engineers & researchers who have been working in the smart contract and ZK field before you were born (jk). On top of that, we've also been in the security consulting industry for a while, so we're professionals ;)

Stay tuned for more blogposts on http://zksecurity.xyz and reach out to me if you need an audit :)

Also, the launch blogpost is much more interesting than this one. Go read it here: Private delegated computation is here, and there will be bugs!

]]>
Two And A Half Coins episode 5: Bitcoin transactions, the Bitcoin script and UTXOs David Wong Mon, 22 May 2023 19:04:24 +0200 http://www.cryptologie.net/article/592/two-and-a-half-coins-episode-5-bitcoin-transactions-the-bitcoin-script-and-utxos/ http://www.cryptologie.net/article/592/two-and-a-half-coins-episode-5-bitcoin-transactions-the-bitcoin-script-and-utxos/#comments In the 5th episode of this series I interview Arik Sosman (our very first guest!) in order to learn more about Bitcoin transactions. Specifically, how the Bitcoin scripting language works, and what UTXOs are!

]]>
Paillier's additively homomorphic cryptosystem David Wong Fri, 31 Mar 2023 17:46:10 +0200 http://www.cryptologie.net/article/591/pailliers-additively-homomorphic-cryptosystem/ http://www.cryptologie.net/article/591/pailliers-additively-homomorphic-cryptosystem/#comments Pascal Paillier released his asymmetric encryption algorithm in 1999, which had the particularity of being homomorphic for the addition. (And unlike RSA, the homomorphism was secure.)

Homomorphic encryption, if you haven't heard of it, is the ability to operate on the ciphertext without having to decrypt it. If that still doesn't ring a bell, check my old blogpost on the subject. In this post I will just explain the intuition behind the scheme; for a less formal overview, check Lange's excellent video.

Paillier's scheme is only homomorphic for addition, which is still useful enough that it's been used in different kinds of cryptographic protocols. For example, cryptdb was using it to allow some types of updates on encrypted database rows. More recently, threshold signature schemes have been using Paillier's scheme as well.

The actual algorithm

As with any asymmetric encryption scheme, you have the good ol' key gen, encryption, and decryption algorithms:

Key generation. Same as with RSA, you end up with a public modulus $N = pq$ where $p$ and $q$ are two large primes. Knowing $p$ and $q$, the decryptor can compute $\varphi(N) = (p-1)(q-1)$, which acts as the private key.

Encryption. This is where it gets weird: encryption looks more like a Pedersen commitment (which does not allow decryption). To encrypt, sample a random $r$ (invertible modulo $N$) and produce the ciphertext as:

$$(N+1)^m \cdot r^N \mod{N^2}$$

where $m$ is the message to be encrypted. My thought at this point was "WOOT. A message in the exponent? How will we decrypt?"

Decryption. Retrieve the message from the ciphertext $c$ as

$$\frac{(c^{\varphi(N)} \bmod N^2) - 1}{N} \cdot \varphi(N)^{-1} \mod{N}$$

Wait, what? How is this recovering the message which is currently the discrete logarithm of $(N+1)^m$?

How decryption works

The trick is in expanding this exponentiation (using the Binomial expansion).

The relevant variant of the Binomial formula is the following:

$$(1+x)^n = \binom{n}{0}x^0 + \binom{n}{1}x^1 + \cdots + \binom{n}{n} x^n$$

where $\binom{a}{b} = \frac{a!}{b!(a-b)!}$

So in our case, if we only look at $(N+1)^m$ we have:

$$ \begin{align} (N+1)^m &= \binom{m}{0} + \binom{m}{1} N + \binom{m}{2} N^2 + \cdots + \binom{m}{m} N^m \\ &= \binom{m}{0} + \binom{m}{1} N \mod{N^2}\\ &= 1 + m \cdot N \mod{N^2} \end{align} $$

Tada! Our message is now back in plain sight, extracted from the exponent. Isn't this magical?

This is of course not exactly what's happening. If you really want to see the real thing, read the next section, otherwise thanks for reading!

The deets

If you understand that, you should be able to reverse the actual decryption:

$$ \begin{align} c^{\varphi(N)} &= ((N+1)^m \cdot r^N)^{\varphi(N)}\\ &= (N+1)^{m\cdot\varphi(N)} \cdot r^{N\varphi(N)} \mod{N^2} \end{align} $$

It turns out that $r^{N\varphi(N)} = 1 \mod{N^2}$, because $N\varphi(N) = \varphi(N^2)$ is exactly the order of the multiplicative group modulo $N^2$. You can visually think about why by looking at my fantastic drawing:

On the other hand, we get something similar to what I talked about before:

$$ (N+1)^{m\varphi(N)} = (1 + mN)^{\varphi(N)} = 1 + m\varphi(N)N \mod{N^2} $$

All that is left is to cancel the terms that are not interesting to us (dividing by $N$ and multiplying by $\varphi(N)^{-1}$), and we get the message back.
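
If you want to play with it, here's a small runnable Python sketch of the whole scheme following the formulas above (toy primes, no padding or validity checks, Python 3.8+ for `pow(x, -1, n)`; illustration only, not production crypto):

```python
import math
import random

def keygen():
    p, q = 10007, 10009                # toy primes; real keys use ~1024-bit primes
    N = p * q
    phi = (p - 1) * (q - 1)            # the private key
    return N, phi

def encrypt(N, m):
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:         # r must be invertible mod N
        r = random.randrange(1, N)
    return pow(N + 1, m, N**2) * pow(r, N, N**2) % N**2

def decrypt(N, phi, c):
    u = (pow(c, phi, N**2) - 1) // N   # recovers m * phi(N) mod N
    return u * pow(phi, -1, N) % N

N, phi = keygen()
c1, c2 = encrypt(N, 123), encrypt(N, 456)
assert decrypt(N, phi, c1) == 123
assert decrypt(N, phi, c1 * c2 % N**2) == 123 + 456   # additive homomorphism
```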

]]>
Learn How to Code a zkApp Hello World With Me Using TypeScript David Wong Sat, 11 Mar 2023 14:01:45 +0100 http://www.cryptologie.net/article/590/learn-how-to-code-a-zkapp-hello-world-with-me-using-typescript/ http://www.cryptologie.net/article/590/learn-how-to-code-a-zkapp-hello-world-with-me-using-typescript/#comments Recorded this video for the Mina Foundation going through the first tutorial for zkapps. If you're interested in understanding what goes into these zk smart contracts then this is for you!

]]>