David Wong

cryptologie.net

cryptography, security, and random thoughts

Hey! I'm David, cofounder of zkSecurity, research advisor at Archetype, and author of the Real-World Cryptography book. I was previously a cryptography architect of Mina at O(1) Labs, the security lead for Libra/Diem at Facebook, and a security engineer at the Cryptography Services of NCC Group. Welcome to my blog about cryptography, security, and other related topics.


What is an inner product argument? Part 1


The inner product argument is the following construction: given the commitments (for now, let's say the hashes) of two vectors $a$ and $b$ of size $n$ and with entries in some field $\mathbb{F}$, prove that their inner product $\langle a, b \rangle$ is equal to $z$.

There exist different variants of this inner product argument. In some versions, none of the values ($a$, $b$, and $z$) are given, only commitments. In another version, which is the one of interest to us and that I will explain here, only $a$ is unknown.

How is that useful?

Inner product arguments are useful for several things, but what we're using them for in Mina is polynomial commitments. The rest of this post won't make too much sense if you don't know what a polynomial commitment is, but briefly: it allows you to commit to a polynomial $f$ and then later prove its evaluation at some point $s$. Check my post on Kate polynomial commitments for more on polynomial commitment schemes.

How does that translate to the inner product argument though? First, let's view our polynomial $f$ as a vector of coefficients:

$$f = (f_0, f_1, \ldots, f_n) \quad \text{such that} \quad f(x) = f_0 + f_1 x + f_2 x^2 + \cdots + f_n x^n$$

Then notice that

$$f(s) = \langle f, (1, s, s^2, \ldots, s^n) \rangle$$

And here’s our inner product again.
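To make this concrete, here's a quick sketch (in Python, with illustrative values) checking that evaluating $f$ at $s$ is the same as taking the inner product of its coefficient vector with the powers of $s$:

```python
# Evaluating a polynomial f at s, seen as an inner product of the
# coefficient vector with (1, s, s^2, ..., s^n). Values are illustrative.
def inner_product(xs, ys):
    assert len(xs) == len(ys)
    return sum(x * y for x, y in zip(xs, ys))

f = [5, 3, 0, 2]                        # f(x) = 5 + 3x + 2x^3
s = 7
powers = [s**i for i in range(len(f))]  # (1, s, s^2, s^3)

direct = sum(c * s**i for i, c in enumerate(f))  # evaluate f(s) directly
via_ip = inner_product(f, powers)                # evaluate via inner product
assert direct == via_ip == 712
```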

The idea behind the Bootleproof-type inner product argument

The inner product argument protocol I'm about to explain was invented by Bootle et al. It was later optimized in the Bulletproofs paper (hence why we unofficially call the first paper Bootleproof), and then optimized some more in the Halo paper. It's the latter optimization that I'll explain here.

A naive approach

So before I get into the weeds, what's the high-level? Well first, what's a naive way to prove that we know the pre-image of a hash $h$, the vector $a$, such that $\langle a, b \rangle = z$? We could just reveal $a$ and let anyone verify that hashing it indeed gives out $h$, and that it also verifies the equation $\langle a, b \rangle = z$.

$$\langle a, b \rangle = z \quad \text{given } b, z, \text{ and a hash of } a \quad \xrightarrow{\text{open proof}} \quad a$$

Obviously, we have to reveal $a$ itself, which is not great. But we'll deal with that later, trust me. What we want to tackle first here is the proof size, which is the size of the vector $a$. Can we do better?

Reducing the problem to a smaller problem to prove

The inner product argument reduces the opening proof by using an intermediate reduction proof:

$$\langle a, b \rangle = z \quad \text{given } b, z, \text{ and a hash of } a \quad \xrightarrow{\text{reduction proof}} \quad \langle a', b' \rangle = z' \quad \text{given } b', z', \text{ and a hash of } a' \quad \xrightarrow{\text{open proof}} \quad a'$$

Where the size of $a'$ is half the size of $a$, and as such the final opening proof ($a'$) is half the size of our naive approach's.

The reduction proof is where most of the magic happens, and this reduction can be applied many times ($\log_2(n)$ times, to be exact) to get a final opening proof of size 1. Of course, the entire proof is not just the final opening proof of size 1, but also all the elements involved in the reduction proofs. It can still be much smaller than the original proof of size $n$.

So most of the proof size comes from the multiple reduction subproofs that you'll end up creating along the way. Our proof is really a collection of miniproofs, or subproofs.
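As a back-of-the-envelope check of that accounting (assuming, as in the rest of this post, that each reduction round contributes two curve points and two scalars, and that a compressed curve point costs about one field element):

```python
# Rough proof-size count, in field elements, for a secret vector of size n.
# Assumption (matching the post's accounting): each of the log2(n) reduction
# rounds adds 2 curve points (~1 field element each, compressed) + 2 scalars.
import math

def naive_size(n):
    return n  # naive proof: reveal all of a

def recursive_size(n):
    rounds = int(math.log2(n))
    return 1 + 4 * rounds  # final a of size 1, plus 4 elements per round

assert naive_size(128) == 128
assert recursive_size(128) == 29  # much smaller than 128
```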

One last thing before we get started: Pedersen hashing and commitments

To understand the protocol, you need to understand commitments. I've used hashing so far, but hashing with a hash function like SHA-3 is not great, as it has no convenient mathematical structure. We need algebraic commitments, which will allow us to prove things about the committed value without revealing it. Usually what we want is some homomorphic property that will allow us to add commitments together and/or multiply them together.

For now, let's see a simple non-hiding commitment: a Pedersen hash. To commit to a single value $x$, simply compute:

$$xG$$

where $G$ is a group element whose discrete logarithm is unknown. To open the commitment, simply reveal the value $x$.

We can also perform multi-commitments with Pedersen hashing. For a vector of values $(x_1, \ldots, x_k)$, compute:

$$x_1 G_1 + \cdots + x_k G_k$$

where each $G_i$ is distinct and has an unknown discrete logarithm as well. I'll often shorten the last formula as the inner product $\langle x, G \rangle$ for $x = (x_1, \ldots, x_k)$ and $G = (G_1, \ldots, G_k)$. To reveal a commitment, simply reveal the values $x_i$.

Pedersen hashing allows commitments that are non-hiding, but binding: you can't open them to a different value than the one originally committed. And as you can see, adding the commitment of $x$ and the commitment of $y$ gives us a commitment of $x+y$:

$$xG + yG = (x+y)G$$

which will be handy in our inner product argument protocol.
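Here's a toy sketch of that homomorphism, using the multiplicative group modulo a prime instead of an elliptic curve. In multiplicative notation the commitment becomes $\text{com}(x) = g^x$, and the identity $xG + yG = (x+y)G$ reads $\text{com}(x) \cdot \text{com}(y) = \text{com}(x+y)$. Parameters here are illustrative only; a real scheme uses a curve group where the discrete log of the generator is unknown:

```python
# Toy non-hiding Pedersen-style commitment in the multiplicative group mod p.
# Written multiplicatively: com(x) = g^x mod p, so the additive homomorphism
# xG + yG = (x+y)G becomes com(x) * com(y) = com(x+y). Toy parameters only.
p = 2**127 - 1  # a Mersenne prime (illustrative modulus)
g = 3           # illustrative generator

def commit(x):
    return pow(g, x, p)

x, y = 1234, 5678
# The product of two commitments is a commitment to the sum.
assert commit(x) * commit(y) % p == commit(x + y)
```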

The protocol

Set up

Here are the settings of our protocol. Known only to the prover is the secret vector

$$a = (a_1, a_2, a_3, a_4)$$

The rest is known to both:

  • $G = (G_1, G_2, G_3, G_4)$, a basis for Pedersen hashing
  • $A = \langle a, G \rangle$, the commitment of $a$
  • $b = (b_1, b_2, b_3, b_4)$, the powers of some value $s$, such that $b = (1, s, s^2, s^3)$
  • the result of the inner product $z = \langle a, b \rangle$

For the sake of simplicity, let’s pretend that this is our problem, and we just want to halve the size of our secret vector a before revealing it. As such, we will only perform a single round of reduction. But you can also think of this step as being already the reduction of another problem twice as large.

We can picture the protocol as follows:

  1. The prover first sends a commitment to the polynomial $f$.
  2. The verifier sends a point $s$, asking for the value $f(s)$. To help the prover perform a proof of correct evaluation, they also send a random challenge $x$.
  3. The prover sends the result of the evaluation, $z$, as well as a proof.
Prover->Verifier: com(f) Verifier->Prover: s, random x Prover->Verifier: z = f(s), proof of opening

Does that make sense? Of course what’s interesting to us is the proof, and how the prover uses that random x.

Reduced problem

First, the prover cuts everything in half. Then they use $x$ to construct linear combinations of these halves:

  • $a' = x^{-1} (a_1, a_2) + x (a_3, a_4)$
  • $b' = x (b_1, b_2) + x^{-1} (b_3, b_4)$
  • $G' = x (G_1, G_2) + x^{-1} (G_3, G_4)$

This is how the problem is reduced to $\langle a', b' \rangle = z'$.

At this point, the prover can send $a'$, $b'$, and $z'$, and the verifier can check if indeed $\langle a', b' \rangle = z'$. But that wouldn't make much sense, would it? Here we also want:

  • a proof that proving that statement is the same as proving the previous statement ($\langle a, b \rangle = z$)
  • a way for the verifier to compute $z'$ and $b'$ and $A'$ (the new commitment) by themselves

The actual proof

The verifier can compute $b'$ as they have everything they need to do so.

What about $A'$, the commitment of $a'$ which uses the new $G'$ basis? It should be the following value:

$$\begin{aligned}
A' &= \langle a', G' \rangle \\
&= (x^{-1} a_1 + x a_3)(x G_1 + x^{-1} G_3) + (x^{-1} a_2 + x a_4)(x G_2 + x^{-1} G_4) \\
&= A + x^{-2} (a_1 G_3 + a_2 G_4) + x^{2} (a_3 G_1 + a_4 G_2) \\
&= A + x^{-2} L_a + x^{2} R_a
\end{aligned}$$

So to compute this new commitment, the verifier needs:

  • the previous commitment $A$, which they already have
  • some powers of $x$, which they can compute
  • two curve points $L_a$ and $R_a$, which the prover will have to provide to them
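As a sanity check of that derivation, here's a sketch that models each point $G_i$ as a secret field scalar (the algebraic identities are exactly the same; a real implementation would use elliptic-curve points). All values are illustrative:

```python
# Sanity check of A' = A + x^{-2} L_a + x^2 R_a over a toy prime field,
# with each "curve point" G_i modeled as a random scalar. Toy values only.
import random

p = 2**61 - 1  # a Mersenne prime standing in for the scalar field
random.seed(0)
a = [random.randrange(p) for _ in range(4)]      # secret vector
G = [random.randrange(p) for _ in range(4)]      # stand-ins for the basis
x = random.randrange(1, p)                       # verifier's challenge
xinv = pow(x, -1, p)

ip = lambda u, v: sum(ui * vi for ui, vi in zip(u, v)) % p

A = ip(a, G)  # "commitment" of a in this scalar model
# Halved, recombined vectors (matching the reduction formulas):
a_prime = [(xinv * a[0] + x * a[2]) % p, (xinv * a[1] + x * a[3]) % p]
G_prime = [(x * G[0] + xinv * G[2]) % p, (x * G[1] + xinv * G[3]) % p]

L_a = (a[0] * G[2] + a[1] * G[3]) % p  # cross term scaled by x^{-2}
R_a = (a[2] * G[0] + a[3] * G[1]) % p  # cross term scaled by x^2

lhs = ip(a_prime, G_prime)                                    # A' directly
rhs = (A + pow(xinv, 2, p) * L_a + pow(x, 2, p) * R_a) % p    # A' recomputed
assert lhs == rhs
```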

What about $z'$? Recall:

  • $a' = (x^{-1} a_1 + x a_3,\; x^{-1} a_2 + x a_4)$
  • $b' = (x b_1 + x^{-1} b_3,\; x b_2 + x^{-1} b_4)$

So the new inner product should be:

$$\begin{aligned}
z' &= \langle a', b' \rangle \\
&= \langle (x^{-1} a_1 + x a_3,\; x^{-1} a_2 + x a_4), (x b_1 + x^{-1} b_3,\; x b_2 + x^{-1} b_4) \rangle \\
&= (a_1 b_1 + a_2 b_2 + a_3 b_3 + a_4 b_4) + x^{-2} (a_1 b_3 + a_2 b_4) + x^{2} (a_3 b_1 + a_4 b_2) \\
&= z + x^{-2} L_z + x^{2} R_z
\end{aligned}$$

Similarly to $A'$, the verifier can recompute $z'$ from the previous value $z$ and two scalar values $L_z$ and $R_z$, which the prover needs to provide.
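A quick sketch of that check over a toy prime field (all values illustrative): the verifier recomputes $z'$ from $z$, $L_z$, $R_z$, and the challenge $x$, and it matches $\langle a', b' \rangle$ as computed directly.

```python
# Checking z' = z + x^{-2} L_z + x^2 R_z over a toy prime field.
p = 2**61 - 1  # illustrative prime field

a = [3, 1, 4, 1]                       # secret vector (toy values)
s = 5
b = [pow(s, i, p) for i in range(4)]   # b = (1, s, s^2, s^3)
x = 7                                  # verifier's challenge
xinv = pow(x, -1, p)

ip = lambda u, v: sum(ui * vi for ui, vi in zip(u, v)) % p
z = ip(a, b)

# Halved, recombined vectors (matching the reduction formulas):
a_prime = [(xinv * a[0] + x * a[2]) % p, (xinv * a[1] + x * a[3]) % p]
b_prime = [(x * b[0] + xinv * b[2]) % p, (x * b[1] + xinv * b[3]) % p]

L_z = (a[0] * b[2] + a[1] * b[3]) % p  # cross term scaled by x^{-2}
R_z = (a[2] * b[0] + a[3] * b[1]) % p  # cross term scaled by x^2

# Verifier-side recomputation matches the direct inner product:
z_prime = (z + pow(xinv, 2, p) * L_z + pow(x, 2, p) * R_z) % p
assert z_prime == ip(a_prime, b_prime)
```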

So in the end, the proof has become:

  • the vector $a'$, which is half the size of $a$
  • the $L_a, R_a$ curve points (around two field elements, if compressed)
  • the $L_z, R_z$ scalar values

We can update our previous diagram:

Prover->Verifier: com(f) Verifier->Prover: s, random x Prover->Verifier: z = f(s) Prover->Verifier: a', L_a, R_a, L_z, R_z

In our example, the naive proof was to reveal $a$, which was 4 field elements. We are now revealing instead $2 + 2 + 2 = 6$ field elements. This is not great, but if $a$ were much larger (let's say 128), the reduction in half would bring us to $64 + 2 + 2 = 68$ field elements. Not bad, no? We can do better though… Stay tuned for the next post.

blog • 2021-08-04