david wong

Hey! I'm David, cofounder of zkSecurity and the author of the Real-World Cryptography book. I was previously a crypto architect at O(1) Labs (working on the Mina cryptocurrency), before that I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant for the Cryptography Services of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting.

Are you into finding bugs and learning ZK? Here's a challenge for you posted February 2024

I spent some time writing a challenge focused on GKR (the proof system) on top of the gnark framework (which is used to write ZK circuits in Golang).

It was a lot of fun and I hope that some people are inspired to try to break it :)

We're using the challenge to hire people who are interested in doing security work in the ZK space, so if that interests you, or if you purely want a new challenge, try it out here: https://github.com/zksecurity/zkBank

And of course, since this is an active wargame, please do not release your own solution or write-up!


Zero-knowledge proofs in stateful applications posted January 2024

Something that might not be immediately obvious if you're not used to zero-knowledgifying your applications, is that the provable circuits you end up using are pure functions. They do not have access to long-lasting memory and cannot have side effects. They just take some input, and produce some output.

Note: circuits are actually not strictly pure, as they are non-deterministic. For example, you might be able to use out-of-circuit randomness in your circuit.

So when mutation of persistent state is needed, you need to provide the previous state as input, and return the new state as output. This not only produces a constraint on the previous state (time of read VS time of write issues), but it also limits the size of your state.
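
To make that shape concrete, here is a minimal sketch (hypothetical names, not any particular framework's API): the "circuit" is a pure function from the previous state and some inputs to the new state, and the proof is about that public pair.

```python
from dataclasses import dataclass

# A minimal sketch of the pure-function shape of a provable state transition.
# Names are hypothetical; this is not any particular framework's API.
@dataclass(frozen=True)
class State:
    balance: int
    nonce: int

def transition(old: State, amount: int) -> State:
    """The 'circuit': no globals, no side effects, just input -> output."""
    assert 0 <= amount <= old.balance
    return State(balance=old.balance - amount, nonce=old.nonce + 1)

# A verifier would check a proof over the public pair (old, new) instead of
# re-running `transition` itself.
old = State(balance=100, nonce=0)
new = transition(old, amount=30)
print(old, "->", new)
```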

I've talked about the first issue here:

The problem of update conflicts comes when one designs a protocol in which multiple participants decide to update the same value, and do so using local execution. That is, instead of having a central service that executes some update logic sequentially, participants can submit the result of their updates in parallel. In this situation, each participant locally executes the logic on the current state assuming that it will not have changed. But this doesn't work as soon as someone else updates the shared value. In practice, someone's update will invalidate someone else's.

The second issue of state size is usually solved with Merkle trees, which allow you to compress your state in a verifiable way, and allow you to access or update the state without having to decompress the ENTIRE state.
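
For intuition, here's a toy Merkle sketch (SHA-256, power-of-two number of leaves, nothing production-grade): the root commits to the whole state, and verifying one leaf only takes a logarithmic number of sibling hashes instead of the entire state.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [H(l) for l in leaves]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Sibling hashes needed to recompute the root for leaves[index]."""
    level, path = [H(l) for l in leaves], []
    while len(level) > 1:
        path.append(level[index ^ 1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root, leaf, index, path):
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

state = [b"alice:10", b"bob:20", b"carol:30", b"dan:40"]  # hypothetical state
root = merkle_root(state)
proof = merkle_path(state, 1)
assert verify(root, b"bob:20", 1, proof)  # 2 hashes instead of the whole state
```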

That's all.


Verifying zero-knowledge proofs on Bitcoin? posted January 2024


A few months ago Ivan told me "how cool would it be if we could verify zero-knowledge proofs on Bitcoin?" A week later, we had a prototype of the best solution we could come up with: a multi-party computation to manage a Bitcoin wallet, and a committee willing to unlock funds only in the presence of valid zero-knowledge proofs. A few iterations later and we had something a bit cooler: stateful apps with states that can be tracked on-chain, and committee members that don't need to know anything about Bitcoin. Someone might put it this way: a Bitcoin L2 with minimal trust assumption of a "canonical" Bitcoin blockchain.

From what we understand, a better way to verify zero-knowledge proofs on Bitcoin is not going to happen, and this is the best we can have. And we built it! And we're running it on testnet. Try it here!


What's out there for ECDSA threshold signatures posted January 2024

In the realm of multi-party computation (MPC) protocols, threshold signing is the protocol that addresses how multiple participants can sign something under a "shared" private key. In other words, instead of one guy signing something with a private key, we want $N$ guys doing the same thing and obtaining the same result without any of them actually knowing the private key (each of them holds a share of the private key, revealing nothing about the private key itself).

The threshold part means that not every participant who has a share has to participate. If there are $N$ participants, then only $t < N$ of them have to participate for the protocol to succeed. The $t$ and $N$ depend on the protocol you want to design, the overhead you're willing to eat, the security you want to attain, etc.
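
The ECDSA schemes below are much more involved (the key is never reconstructed, and the signing itself is an MPC), but to illustrate what "shares of a private key" and a threshold $t$ mean, here's a toy Shamir secret sharing sketch over a prime field (an assumption for illustration, not a real curve order):

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; toy field for illustration only

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = secrets.randbelow(P)
shares = share(key, t=3, n=5)
assert reconstruct(shares[:3]) == key    # any 3 of the 5 shares are enough
assert reconstruct(shares[1:4]) == key
```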

Threshold protocols are not just for signing, they're everywhere. The NIST has a Multi-Party Threshold Cryptography competition, in which you can see proposals for threshold signing, but also threshold decryption, threshold key exchanges, and others.

This post is about threshold signatures for ECDSA specifically, as it is the most commonly used signature scheme and so has attracted a number of researchers. In addition, I'm only going to talk about the history of it, because I haven't written an actual explainer on how these work, and because the history of threshold signing for ECDSA is really messy and confusing: understanding what constructions exist out there is near impossible due to naming collisions and the number of papers released without proper nicknames (unlike FROST, which is the leading threshold signing algorithm for Schnorr signatures).

So here we are, the main line of work for ECDSA threshold signatures goes something like this, and seems to mainly involve two Gs (Gennaro and Goldfeder):

  1. GG18. This paper is more officially called "Fast Multiparty Threshold ECDSA with Fast Trustless Setup" and improves on BGG: Using level-1 homomorphic encryption to improve threshold DSA signatures for bitcoin wallet security (2017) and GGN: Threshold-optimal dsa/ecdsa signatures and an application to bitcoin wallet security (2016).
  2. GG19. This has the same name as GG18, but fixes some of the issues in GG18. I think this is because GG18 was published in a journal, so they couldn't update it. But GG18 on eprint is the updated GG19 one. (Yet few people refer to it as GG19.) It fixes a number of bugs, including the ones surfaced by the Alpha-Rays attack and by A note about the security of GG18.
  3. GG20. This paper is officially called "One Round Threshold ECDSA with Identifiable Abort" and builds on top of GG18/GG19 to introduce the ability to identify who caused the abort. (In other words, who messed up if something was messed up during the multi-party computation.) Note that there are still some bugs in this paper.
  4. CGGMP21. This one combines GG20 with CMP20 (another work on threshold signatures). This is supposed to be the latest in this line of work and is probably the only version that has no known issues.

Note that there's also another line of work that happened in parallel from another team, and which is similar to GG18 except that they have different bugs: Lindell-Nof: Fast secure multiparty ecdsa with practical distributed key generation and applications to cryptocurrency custody (2018).

PS: thanks to Rosario Gennaro for help figuring this out :)


The ZK update conflict issue in multi-user applications posted January 2024

I haven't seen much ink being spilled on the ZK update conflict issue so I'll write a short note here.

Let's take a step back. Zero-knowledge proofs allow you to prove the result of the execution of some logic. Like signatures attached to data you receive, ZK proofs can be attached to a computation result. This means that with ZK, internet protocols can be rethought and redesigned. If execution of the protocol logic had to happen somewhere trusted, now some of it can be moved around and delegated to untrusted places, or, for privacy reasons, some of it can be moved to the places where the private data has to remain.

How do we design protocols using ZK? It's easy: assume that when a participant of your protocol computes something, they will do it honestly. Then, when you implement the protocol, use ZK proofs to enforce that they behave as intended.

The problem of update conflicts comes when one designs a protocol in which multiple participants decide to update the same value, and do so using local execution. That is, instead of having a central service that executes some update logic sequentially, participants can submit the result of their updates in parallel. In this situation, each participant locally executes the logic on the current state assuming that it will not have changed. But this doesn't work as soon as someone else updates the shared value. In practice, someone's update will invalidate someone else's.

This issue is not just a ZK issue: if you know anything about databases, you know that conflict resolution has been a problem for a very long time. For example, in distributed databases with more than one writer, conflicts can happen as two nodes attempt to update the same value at the same time. Conflicts can also happen in the same way in applications where multiple users want to update the same data, think Google Docs.

As far as I know, the solutions fall into the following categories:

  1. Resolve conflicts automatically. The simplest example is the Thomas write rule, which discards any outdated update (see the sketch after this list). In situations where discarding updates is unacceptable, more involved algorithms can take over. For example, Google Docs uses an algorithm called Operational Transformation to figure out how to merge two independent updates.
  2. Ask the user for help if needed. For example, the git merge command that can sometimes ask for your help to resolve conflicts.
  3. Refuse to accept any conflicts. This often means that the application is written in such a way that conflicts can't arise, and in distributed databases this always means that there can only be a single node that can write (with all other nodes being read-only). Applications can also decide to simply deny updates that lead to conflicts, which leads to poor performance in concurrency-heavy scenarios, as well as poor user experience.
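
Here's the sketch mentioned in the first category: a toy Thomas write rule, where each key carries a timestamp and any update older than what's already stored is silently discarded.

```python
# A minimal sketch of the Thomas write rule: drop outdated updates, no error raised.
store = {}  # key -> (timestamp, value)

def write(key, value, timestamp):
    current = store.get(key)
    if current is not None and current[0] >= timestamp:
        return False          # outdated update: discard it silently
    store[key] = (timestamp, value)
    return True

write("balance", 100, timestamp=1)
write("balance", 50, timestamp=3)
assert write("balance", 80, timestamp=2) is False   # the late writer loses
assert store["balance"] == (3, 50)
```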

As one can see, the barrier between application and database doesn't matter too much, besides the fact that a database has poor ways of prompting a user: when conflict resolution must be done by a user it is generally the role of the application to reach out.

What about ZK though? From what I've seen, the last "avoid conflicts" solution is always chosen. Perhaps this is because my skewed view has only been within the blockchain world, which can't afford to play conflict resolution with $$$.

For example, simpler ZK protocols like Zcash will often massage their protocol such that proofs are only computed on immutable data. For instance, arguments of a function cannot be the latest root of a merkle tree (as it might get updated before we can publish the result of running the function), but they can easily be the root of a merkle tree that was seen previously (we're using a previous state, not the latest state, and that's fine).

Another technique is to extract the parts of updates that occur on a shared data structure, and sequence them before running them. For example, the set of nullifiers in Zcash is updated outside of a ZK execution by the network, according to some logic that only gets executed sequentially. More complicated ZK platforms like Aleo and Mina do that as well. In Aleo's case, the user can split the logic of their smart contracts by choosing what can be executed locally (with a proof provided) and what has to be executed serially by the network (Ethereum-style). In Mina's case, updates that have the potential to lead to conflicts are queued up, and later on a single user can decide (if authorized) to process the queued updates serially, but in ZK.
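
As a toy illustration of that last pattern (loosely inspired by the designs above, not any specific platform's semantics): conflicting updates get appended to a queue by anyone, and a single sequencer later folds them into the shared state in order.

```python
# A minimal sketch of the "queue updates, then sequence them" pattern.
queue = []

def submit(update):
    """Users call this concurrently; appending can't conflict."""
    queue.append(update)

def process(state):
    """A single (authorized) actor applies the queued updates in order."""
    while queue:
        state = queue.pop(0)(state)
    return state

submit(lambda s: s + 10)
submit(lambda s: s * 2)
print(process(0))  # 20: the updates were applied in submission order
```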


Cairo's public memory posted November 2023

Here are some notes on how the Cairo zkVM encodes its (public) memory in the AIR (arithmetization) of the STARK.

If you'd rather watch a 25min video of the article, here it is:

The AIR arithmetization is limited in how it can handle public inputs and outputs, as it only offers boundary constraints. These boundary constraints can only be used on a few rows, otherwise they're expensive to compute for the verifier. (A verifier would have to compute $\prod_{i \in S} (x - g^i)$ for some given $x$, so we want to keep $|S|$ small.)

For this reason Cairo introduces another way to get the program and its public inputs/outputs in: public memory. This public memory is strongly related to the memory vector of Cairo, which a program can read and write to.

In this article we'll talk about both. This is to accompany this video and section 9.7 of the Cairo paper.

Cairo's memory

Cairo's memory layout is a single vector that is indexed (each row/entry is assigned an address starting from 1) and segmented. For example, the first $l$ rows are reserved for the program itself, some other rows are reserved for the program to read and write cells, etc.

Cairo uses a very natural "constraint-led" approach to memory, by making it write-once instead of read-write. That is, all accesses to the same address should yield the same value. Thus, at some point we will need a constraint saying that for any two accesses $(a_1, v_1)$ and $(a_2, v_2)$ such that $a_1 = a_2$, we have $v_1 = v_2$.
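
Out of circuit, that check is trivial to state; the rest of the article is about enforcing the same thing with AIR constraints. Here's the naive version, just to fix ideas:

```python
# A minimal sketch of the write-once consistency check, done naively in the clear.
def check_memory(accesses):
    seen = {}
    for addr, val in accesses:
        if addr in seen and seen[addr] != val:
            raise ValueError(f"address {addr} accessed with both {seen[addr]} and {val}")
        seen[addr] = val

check_memory([(1, 5), (2, 7), (1, 5)])   # fine: same address, same value
try:
    check_memory([(1, 5), (1, 6)])       # inconsistent: same address, two values
except ValueError as e:
    print(e)
```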

Accesses are part of the execution trace

We saw in How STARKs work if you don't care about FRI that at the beginning of our STARK, the prover encodes, commits, and sends the columns of the execution trace to the verifier.

The memory, or memory accesses rather (as we will see), are columns of the execution trace as well.

The first two columns introduced in the paper are called $L_1.a$ and $L_1.v$. Each row in these columns represents an access made to the address $a$ in memory, with value $v$. As said previously, we don't care whether that access is a write or a read, as the difference between the two is blurred (any read at a specific address could just as well be the write).

These columns can be used as part of the Cairo CPU, but they don't really prevent the prover from lying about the memory accesses:

  1. First, we haven't proven that all accesses to the same addresses $a_i$ always return the same value $v_i$.
  2. Second, we haven't proven that the memory contains fixed values in specific addresses. For example, it should contain the program itself in the first $l$ cells.

Let's tackle the first question first, and we will address the second one later.

Another list to help

In order to prove that the two columns in the $L_1$ part of the execution trace are consistent, Cairo adds two columns to the execution trace: $L_2.a'$ and $L_2.v'$. These two columns contain essentially the same thing as the $L_1$ columns, except that this time the accesses are sorted by address.

One might wonder at this point: why can't the $L_1$ memory accesses be sorted? Because these accesses represent the actual memory accesses of the program during runtime, and they happen row by row (or step by step). The program might read the next instruction at some address, then jump and read the next instruction at some other address, etc. We can't force the accesses to be sorted at this point.

We will have to prove (later) that $L_1$ and $L_2$ represent the same accesses (up to some permutation we don't care about).

So let's assume for now that $L_2$ correctly contains the same accesses as $L_1$ but sorted, what can we check on $L_2$?

The first thing we want to check is that it is indeed sorted. Or in other words:

  • each access is on the same address as the previous one: $a'_{i+1} = a'_i$
  • or on the next address: $a'_{i+1} = a'_i + 1$

For this, Cairo adds a continuity constraint to its AIR:

$$ (a'_{i+1} - a'_i) \cdot (a'_{i+1} - a'_i - 1) = 0 $$

(This is just the product of the two conditions above, which is zero iff one of them holds.)

The second thing we want to check is that accesses to the same addresses yield the same values. Now that things are sorted it's easy to check this! We just need to check that:

  • either the values are the same: $v'_{i+1} = v'_i$
  • or the address being accessed was bumped so it's fine to have different values: $a'_{i+1} = a'_i + 1$

For this, Cairo adds a single-valued constraint to its AIR:

$$ (v'_{i+1} - v'_i) \cdot (a'_{i+1} - a'_i - 1) = 0 $$

(Again, the product of the two conditions above.)
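
Written out in the clear, the two checks on the sorted columns look like this (a toy sketch; the products mirror the AIR constraints, which are zero iff one of their factors is zero):

```python
# A minimal sketch of the continuity and single-valued checks on the sorted list.
def check_sorted_accesses(accesses):
    for (a, v), (a_next, v_next) in zip(accesses, accesses[1:]):
        # continuity: the next address is the same, or the same plus one
        assert (a_next - a) * (a_next - a - 1) == 0
        # single-valued: if the address didn't change, the value can't change
        assert (v_next - v) * (a_next - a - 1) == 0

check_sorted_accesses([(1, 5), (1, 5), (2, 7), (3, 0), (3, 0)])
```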

And that's it! We have now proven that the $L_2$ columns represent correct memory accesses through the whole memory (although we didn't check that the first access was at address $1$; I'm not sure if Cairo checks that somewhere), and that the accesses are consistent.

That is, as long as $L_2$ contains the same list of accesses as $L_1$.

A multiset check between $L_1$ and $L_2$

To ensure that two lists of elements match, up to some permutation (meaning we don't care how they were reordered), we can use the same permutation argument that PlonK uses (except that PlonK fixes the permutation).

The check we want to perform is the following:

$$ \{ (a_i, v_i) \}_i = \{ (a'_i, v'_i) \}_i $$

But we can't check tuples like that, so let's get a random value $\alpha$ from the verifier and encode tuples as linear combinations:

$$ \{ a_i + \alpha \cdot v_i \}_i = \{ a'_i + \alpha \cdot v'_i \}_i $$

Now, let's observe that instead of checking that these two sets match, we can just check that two polynomials have the same roots (where the roots have been encoded to be the elements in our lists):

$$ \prod_i [X - (a_i + \alpha \cdot v_i)] = \prod_i [X - (a'_i + \alpha \cdot v'_i)] $$

Which is the same as checking that

$$ \frac{\prod_i [X - (a_i + \alpha \cdot v_i)]}{\prod_i [X - (a'_i + \alpha \cdot v'_i)]} = 1 $$

Finally, we observe that we can use Schwartz-Zippel to reduce this claim to evaluating the LHS at a random verifier point $z$. If the following is true at the random point $z$ then with high probability it is true in general:

$$ \frac{\prod_i [z - (a_i + \alpha \cdot v_i)]}{\prod_i [z - (a'_i + \alpha \cdot v'_i)]} = 1 $$
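
Here's a toy version of that multiset check (a small prime field standing in for Cairo's field): encode each pair as $a + \alpha \cdot v$, then compare the two products at random $\alpha$ and $z$.

```python
import secrets

# A minimal sketch of the multiset check at random alpha and z (toy field).
P = 2**61 - 1

def accumulate(accesses, alpha, z):
    prod = 1
    for a, v in accesses:
        prod = prod * (z - (a + alpha * v)) % P
    return prod

L1 = [(3, 9), (1, 5), (2, 7), (1, 5)]   # accesses in execution order
L2 = sorted(L1)                         # the same accesses, sorted by address

alpha, z = secrets.randbelow(P), secrets.randbelow(P)
assert accumulate(L1, alpha, z) == accumulate(L2, alpha, z)
# tampering with a single access makes the two products differ w.h.p.
assert accumulate(L1, alpha, z) != accumulate(L2[:-1] + [(3, 8)], alpha, z)
```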

The next question to answer is, how do we check this thing in our STARK?

Creating a circuit for the multiset check

Recall that our AIR allows us to write a circuit using successive pairs of rows in the columns of our execution trace.

That is, while we can't access all the $a_i$ and $a'_i$ and $v_i$ and $v'_i$ in one shot, we can access them row by row.

So the idea is to write a circuit that produces the previous section's ratio row by row. To do that, we introduce a new column $p$ in our execution trace which will help us keep track of the ratio as we produce it.

$$ p_i = p_{i-1} \cdot \frac{z - (a_i + \alpha \cdot v_i)}{z - (a'_i + \alpha \cdot v'_i)} $$

This is how you compute that $p$ column of the execution trace as the prover.

Note that on the verifier side, as we can't divide, we will have to create the circuit constraint by moving the denominator to the right-hand side:

$$ p(g \cdot x) \cdot [z - (a'(x) + \alpha \cdot v'(x))] = p(x) \cdot [z - (a(x) + \alpha \cdot v(x))] $$

There are two additional (boundary) constraints that the verifier needs to impose to ensure that the multiset check is coherent:

  • the initial value $p_0$ should be computed correctly ($p_0 = \frac{z - (a_0 + \alpha \cdot v_0)}{z - (a'_0 + \alpha \cdot v'_0)}$)
  • the final value $p_{-1}$ (the last entry of the column) should be $1$
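
Here's a toy sketch of how the prover fills that $p$ column (out of circuit the prover can simply divide; in circuit the verifier enforces the cross-multiplied constraint above) and of the final boundary check:

```python
# A minimal sketch of the accumulator column p, over a toy field.
P = 2**61 - 1

def build_p_column(L1, L2, alpha, z):
    p, acc = [], 1
    for (a, v), (a_s, v_s) in zip(L1, L2):
        num = (z - (a + alpha * v)) % P
        den = (z - (a_s + alpha * v_s)) % P
        acc = acc * num * pow(den, -1, P) % P   # the prover can divide off-circuit
        p.append(acc)
    return p

L1 = [(3, 9), (1, 5), (2, 7), (1, 5)]
L2 = sorted(L1)
alpha, z = 123456789, 987654321          # stand-ins for the verifier challenges
p = build_p_column(L1, L2, alpha, z)
assert p[-1] == 1                        # boundary check: the two multisets match
```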

Importantly, let me note that this new column $p$ of the execution trace cannot be created, encoded to a polynomial, committed, and sent to the verifier in the same round as the other columns of the execution trace. This is because it makes use of two verifier challenges $z$ and $\alpha$, which have to be revealed after the other columns of the execution trace have been sent to the verifier.

Note: a way to understand this article is that the prover is now building the execution trace interactively with the help of the verifier, and parts of the circuits (here a permutation circuit) will need to use these columns of the execution trace that are built at different stages of the proof.

Inserting the public memory in the memory

Now is time to address the second half of the problem we stated earlier:

Second, we haven't proven that the memory contains fixed values in specific addresses. For example, it should contain the program itself in the first $l$ cells.

To do this, the first $l$ accesses are replaced with accesses to $(0,0)$ in $L_1$. $L_2$, on the other hand, uses accesses to the first part of the memory and retrieves values from the public memory $m^*$ (e.g. $(1, m^*[0]), (2, m^*[1]), \cdots$).

This means two things:

  1. the numerator of $p$ will contain $z - (0 + \alpha \cdot 0) = z$ in the first $l$ iterations (so $z^l$). Furthermore, these will not be cancelled by any values in the denominator (as $L_2$ supposedly contains the actual accesses to the public memory)
  2. the denominator of $p$ will contain $\prod_{i \in [[0, l]]} [z - (a'_i + \alpha \cdot m^*[i])]$, and these values won't be cancelled by values in the numerator either

As such, the final value of the accumulator should look like this if the prover followed our directions:

$$ \frac{z^l}{\prod_{i \in [[0, l]]} [z - (a'_i + \alpha \cdot m^*[i])]} $$

which we can enforce (as the verifier) with a boundary constraint.
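
Continuing the toy accumulator sketch from before (same toy field and challenge stand-ins), here's what that final value looks like once the first $l$ accesses in $L_1$ are zeroed out and $L_2$ carries the real public memory:

```python
# A minimal sketch of the public-memory trick, over a toy field.
P, alpha, z = 2**61 - 1, 123456789, 987654321     # toy field and challenge stand-ins

pub = [17, 42]                                    # toy public memory m*
l = len(pub)
L2_pub = [(i + 1, m) for i, m in enumerate(pub)]  # (1, m*[0]), (2, m*[1]), ...
L1 = [(0, 0)] * l + [(3, 9), (4, 1)]              # first l accesses replaced by (0, 0)
L2 = L2_pub + [(3, 9), (4, 1)]                    # sorted side uses the real memory

acc = 1
for (a, v), (a_s, v_s) in zip(L1, L2):
    num = (z - (a + alpha * v)) % P
    den = (z - (a_s + alpha * v_s)) % P
    acc = acc * num * pow(den, -1, P) % P

expected = pow(z, l, P)                           # z^l from the (0, 0) accesses
for a, m in L2_pub:
    expected = expected * pow((z - (a + alpha * m)) % P, -1, P) % P
assert acc == expected                            # what the boundary constraint pins down
```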

Section 9.8 of the Cairo paper writes exactly that:

(Screenshot of section 9.8 of the Cairo paper, showing this boundary constraint.)


How STARKs work if you don't care about FRI posted November 2023

Here are some notes on how STARKs work, following my read of the ethSTARK Documentation (thanks Bobbin for the pointer!).

Warning: the following explanation should look surprisingly close to PlonK or SNARKs in general, to anyone familiar with these other schemes. If you know PlonK, maybe you can think of STARKs as turboplonk without preprocessing and without copy constraints/permutation. Just turboplonk with a single custom gate that updates the next row, also the commitment scheme makes everything complicated.

The execution trace table

Imagine a table with $W$ columns representing registers, which can be used as temporary values in our program/circuit. The table has $N$ rows, which represent the temporary values of each of these registers in each "step" of the program/circuit.

For example, a table of 3 registers and 3 steps:

step   r0   r1   r2
1      0    1    534
2      4    1    235
3      3    4    5

The constraints

There are two types of constraints which we want to enforce on this execution trace table to simulate our program:

  • boundary constraints: if I understand correctly this is for initializing the inputs of your program in the first rows of the table (e.g. the second register must be set to 1 initially) as well as the outputs (e.g. the registers in the last row must contain $3$, $4$, and $5$).
  • state transitions: these are constraints that apply to ALL contiguous pairs of rows (e.g. the first two registers added together in a row equal the value of the third register in the next row). The particularity of STARKs (and what makes them "scalable" and fast in practice) is that the same constraint is applied repeatedly. This is also why people like to use STARKs to implement zkVMs, as VMs do the same thing over and over.

This way of encoding a circuit as constraints is called AIR (for Algebraic Intermediate Representation).
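
As a toy example, here's what checking such an AIR "in the clear" looks like on a small trace (this is essentially what the naive verifier of the next section does):

```python
# A minimal sketch: a 3-register toy trace, one boundary constraint and the
# example transition constraint "r0 + r1 in one row equals r2 in the next row".
trace = [
    # r0, r1, r2
    [0, 1, 7],
    [2, 3, 1],   # 1 = 0 + 1 from the previous row
    [4, 5, 5],   # 5 = 2 + 3 from the previous row
]

def boundary_ok(trace):
    return trace[0][1] == 1                      # the second register starts at 1

def transition_ok(row, next_row):
    return row[0] + row[1] == next_row[2]        # r0 + r1 == next row's r2

assert boundary_ok(trace)
assert all(transition_ok(r, r_next) for r, r_next in zip(trace, trace[1:]))
```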

Straw man 1: Doing things in the clear coz YOLO

Let's see an example of how a STARK could work as a naive interactive protocol between a prover and verifier:

  1. the prover constructs the execution trace table and sends it to the verifier
  2. the verifier checks the constraints on the execution trace table by themselves

This protocol works if we don't care about zero-knowledge, but it is obviously not very efficient: the prover sends a huge table to the verifier, and the verifier has to check that the table makes sense (vis-à-vis the constraints) by checking every row involved in the boundary constraints, and checking every contiguous pair of rows involved in the state transition constraints.

Straw man 2: Encoding things as polynomials for future profit

Let's try to improve on the previous protocol by using polynomials. This step will not immediately improve anything, but will set the stage for the step afterwards. Before we talk about the change to the protocol let's see two different observations:

First, let's note that one can encode a list of values as a polynomial by applying a low-degree extension (LDE). That is, if your list of values looks like this: $(y_0, y_1, y_2, \cdots)$, then interpolate these values into a polynomial $f$ such that $f(0) = y_0, f(1) = y_1, f(2) = y_2, \cdots$

Usually, as we're working in a field, a subgroup of large-enough size is chosen in place of $0, 1, 2, \cdots$ as the domain. You can read why that is here. (This domain is called the "trace evaluation domain" by ethSTARK.)

Second, let's see how to represent a constraint like "the first two registers added together in a row equal the value of the third register in the next row" as a polynomial. If the three registers in our examples are encoded as the polynomials $f_1, f_2, f_3$ then we need a way to encode "the next row". If our domain is simply $(0, 1, 2, \cdots)$ then the next row for a polynomial $f_1(x)$ is simply $f_1(x + 1)$. Similarly, if we're using a subgroup generated by $g$ as domain, we can write the next row as $f_1(x \cdot g)$. So the previous example constraint can be written as the constraint polynomial $c_0(x) = f_1(x) + f_2(x) - f_3(x \cdot g)$.

If a constraint polynomial $c_0(x)$ is correctly satisfied by a given execution trace, then it should be zero on the entire domain (for state transition constraints) or on some values of the domain (for boundary constraints). This means we can write it as $c_0(x) = t(x) \cdot \prod_i (x-g^i)$ for some "quotient" polynomial $t$ and the evaluation points $g^i$ (that encode the rows) where the constraint should apply. (In other words, you can factor $c_0$ using its roots $g^i$.)

Note: for STARKs to be efficient, you shouldn't have too many roots. Hence why boundary constraints should be limited to a few rows. But how does it work for state transition constraints that need to be applied to all the rows? The answer is that since we are in a subgroup, there's a very efficient way to compute $\prod_i (x - g^i)$. You can read more about that in Efficient computation of the vanishing polynomial in the Mina book.

At this point, you should understand that a prover who wants to convince you that a constraint $c_1$ applies to an execution trace table can do so by showing you that $t$ exists. The prover can do so by sending the verifier the $t$ polynomial, and the verifier computes $c_1$ from the register polynomials and verifies that it is indeed equal to $t$ multiplied by $\prod_i (x-g^i)$. This is what is done both in Plonk and in STARKs.
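
To make the factoring argument concrete, here's a small sympy sketch. It works over the rationals and uses the toy domain $\{0, 1, 2\}$ (so "next row" is $x + 1$) instead of a finite field and a multiplicative subgroup, purely for illustration: it interpolates three toy columns, builds the constraint polynomial, and checks that the vanishing polynomial of the constrained rows divides it exactly.

```python
from sympy import symbols, interpolate, div, expand

x = symbols("x")

# A toy 3-step trace satisfying "register 1 plus register 2 in one row equals
# register 3 in the next row".
reg1 = [1, 2, 0]
reg2 = [2, 3, 0]
reg3 = [0, 3, 5]        # 3 = 1 + 2 and 5 = 2 + 3

domain = [0, 1, 2]      # stand-in for the subgroup <g>; "next row" is x + 1 here
f1 = interpolate(list(zip(domain, reg1)), x)
f2 = interpolate(list(zip(domain, reg2)), x)
f3 = interpolate(list(zip(domain, reg3)), x)

# Constraint polynomial c0(x) = f1(x) + f2(x) - f3(x + 1); it must vanish on
# rows 0 and 1 (the last row has no "next row").
c0 = expand(f1 + f2 - f3.subs(x, x + 1))
vanishing = (x - 0) * (x - 1)

t, remainder = div(c0, vanishing, x)
assert remainder == 0   # c0 factors as t(x) * (x - 0) * (x - 1)
print("quotient t(x) =", t)
```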

Note: if the execution trace doesn't satisfy a constraint, then you won't be able to factor it with $\prod_i (x - g^i)$, as not all of the $g^i$ will be roots. For this reason you'll get something like $c_1(x) = t(x) \cdot \prod_i (x - g^i) + r(x)$ for some "remainder" polynomial $r$. TODO: at this point can we still get a $t$ but it will have a high degree? If not then why do we have to do a low-degree test later?

Now let's see our modification to the previous protocol:

  1. Instead of sending the execution trace table, the prover encodes each column of the execution trace table (of height $N$) as polynomials, and sends the polynomials to the verifier.
  2. The prover then creates the constraint polynomials $c_0, c_1, \cdots$ (as described above) for each constraint involved in the AIR.
  3. The prover then computes the associated quotient polynomials $t_0, t_1, \cdots$ (as described above) and sends them to the verifier. Note that the ethSTARK paper calls these quotient polynomials the constraint polynomials (sorry for the confusion).
  4. The verifier now has to check two things:
    • low-degree check: that these quotient polynomials are indeed low-degree. This is easy as we're doing everything in the clear for now (TODO: why do we need to check that though?)
    • correctness check: that these quotient polynomials were correctly constructed. For example, the verifier can check that for $t_0$ by computing $c_0$ themselves using the execution trace polynomials, and then checking that it equals $t_0 \cdot (x - 1)$. That is, assuming that the first constraint $c_0$ only applies to the first row $g^0=1$.

Straw man 3: Let's make use of the polynomials with the Schwartz-Zippel optimization!

The verifier doesn't actually have to compute and compare large polynomials in the correctness check. Using the Schwartz-Zippel lemma one can check that two polynomials are equal by evaluating both of them at a random value and checking that the evaluations match. This is because Schwartz-Zippel tells us that two polynomials that are equal will be equal on all their evaluations, but if they differ they will differ on most of their evaluations.

So the previous protocol can be modified to:

  1. The prover sends the columns of the execution trace as polynomials $f_0, f_1, \cdots$ to the verifier.
  2. The prover produces the quotient polynomials $t_0, t_1, \cdots$ and sends them to the verifier.
  3. The verifier produces a random evaluation point $z$.
  4. The verifier checks that each quotient polynomial has been computed correctly. For example, for the first constraint, they evaluate $c_0$ at $z$, then evaluate $t_0(z) \cdot (z - 1)$, then check that both evaluations match.

Straw man 4: Using commitments to hide stuff and reduce proof size!

As the title indicates, we eventually want to use commitments in our scheme so that we can add zero-knowledge (by hiding the polynomials we're sending) and reduce the proof size (our commitments will be much smaller than what they commit).

The commitments used in STARKs are merkle trees, where the leaves contain evaluations of a polynomial. Unlike the commitments used in SNARKs (like IPA and KZG), merkle trees don't have an algebraic structure and thus are quite limited in what they allow us to do. Most of the complexity in STARKs comes from the commitments. In this section we will not open that Pandora's box, and will assume that the commitments we're using are normal polynomial commitment schemes which allow us to not only commit to polynomials, but also evaluate them and provide proofs that the evaluations are correct.

Now our protocol looks like this:

  1. The prover commits to the execution trace columns polynomials, then sends the commitments to the verifier.
  2. The prover commits to the quotient polynomials, then sends them to the verifier.
  3. The verifier sends a random value $z$.
  4. The prover evaluates the execution trace column polynomials at $z$ and $z \cdot g$ (remember the verifier might want to evaluate a constraint that looks like $c_0(x) = f_1(x) + f_2(x) - f_3(x \cdot g)$, as it also uses the next row) and sends the evaluations to the verifier.
  5. The prover evaluates the quotient polynomials at $z$ and sends the evaluations to the verifier (these evaluations are called "masks" in the paper).
  6. For each evaluation, the prover also sends evaluation proofs.
  7. The verifier verifies all evaluation proofs.
  8. The verifier then checks that each constraint is satisfied, by checking the $c = t \cdot \prod_i (x - g^i)$ equation in the clear (using the evaluations provided by the prover).

Straw man 5: a random linear combination to reduce all the checks to a single check

If you've been reading STARK papers you're probably wondering where the heck is the composition polynomial. That final polynomial is simply a way to aggregate a number of checks in order to optimize the protocol.

The idea is that instead of checking a property on a list of polynomials, you can check that property on a random linear combination of them. For example, instead of checking that $f_1(z) = 3$, $f_2(z) = 4$, and $f_3(z) = 8$, you can check that for random values $r_1, r_2, r_3$ you have:

$$r_1 \cdot f_1(z) + r_2 \cdot f_2(z) + r_3 \cdot f_3(z) = 3 r_1 + 4 r_2 + 8 r_3$$

Often we avoid generating multiple random values and instead use powers of a single random value, which is a tiny bit less secure but much more practical for a number of reasons I won't touch here. So things often look like this instead, with a random value $r$:

$$f_1(z) + r \cdot f_2(z) + r^2 \cdot f_3(z) = 3 + 4 r + 8 r^2$$
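
A quick numeric sanity check of that trick (toy field, nothing more): if any single claimed evaluation is wrong, the folded equation fails except with probability roughly the degree over the field size.

```python
import secrets

# A minimal sketch of folding several evaluation claims with powers of r.
P = 2**61 - 1
claimed = [3, 4, 8]            # the claimed f1(z), f2(z), f3(z)
cheated = [3, 5, 8]            # same, but f2(z) is wrong

def fold(values, r):
    return sum(v * pow(r, i, P) for i, v in enumerate(values)) % P

r = secrets.randbelow(P)
expected = (3 + 4 * r + 8 * r * r) % P
assert fold(claimed, r) == expected    # one check instead of three
assert fold(cheated, r) != expected    # a single wrong value is caught w.h.p.
```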

Now our protocol should look like this:

  1. The prover commits to the execution trace columns polynomials, then sends the commitments to the verifier.
  2. The verifier sends a random value $r$.
  3. The prover produces a random linear combination of the constraint polynomials.
  4. The prover produces the quotient polynomial for that random linear combination, which ethSTARK calls the composition polynomial.
  5. The prover commits to the composition polynomial, then sends it to the verifier.
  6. The protocol continues pretty much like the previous one

Note: in the ethSTARK paper they note that the composition polynomial is likely of higher degree than the polynomials encoding the execution trace columns. (The degree of the composition polynomial is equal to the degree of the highest-degree constraint.) For this reason, there's some further optimization that splits the composition polynomial into several polynomials, but we will avoid talking about it here.

We now have a protocol that looks somewhat clean, which seems contradictory with the level of complexity introduced by the various papers. Let's fix that in the next blogpost on FRI...
