Take a look at the following program that you can run in Golang's playground.

```
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

func main() {
	// generate a key pair
	privateKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// sign a message (here, a pre-hashed digest)
	hash, _ := hex.DecodeString("ffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632552")
	r, s, err := ecdsa.Sign(rand.Reader, privateKey, hash)
	if err != nil {
		panic(err)
	}
	// print the signature
	signature := r.Bytes()
	signature = append(signature, s.Bytes()...)
	fmt.Println("signature:", hex.EncodeToString(signature))
	// verify the signature
	if !ecdsa.Verify(&privateKey.PublicKey, hash, r, s) {
		panic("wrong signature")
	}
	fmt.Println("signature valid for", hex.EncodeToString(hash))
	// I modify the message, this should invalidate the signature
	var hash2 [32]byte
	hash2[31] = 1
	if !ecdsa.Verify(&privateKey.PublicKey, hash2[:], r, s) {
		panic("wrong signature")
	}
	fmt.Println("signature valid for", hex.EncodeToString(hash2[:]))
}
```

This should print something like this (the exact signature will differ from run to run, since the key and the ECDSA nonce are random):

```
signature: 4f3e60dc53ab470d23e82567909f01557f01d521a0b2ae96a111d107741d8ebb885332d790f0691bdc900661bf40c595a07750fa21946ed6b88c61c43fbfc1f3
signature valid for ffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632552
signature valid for 0000000000000000000000000000000000000000000000000000000000000001
```

Can you tell what the problem is? Is ECDSA broken? Is Golang's standard library broken? Is everything fine?

On August 11th, 2015, Andrew Ayer sent an email to the IETF mailing list starting with the following words:

I recently reviewed draft-barnes-acme-04 and found vulnerabilities in the DNS, DVSNI, and Simple HTTP challenges that would allow an attacker to fraudulently complete these challenges.

The draft-barnes-acme-04 mentioned by Andrew Ayer is a document specifying **ACME**, one of the protocols behind the *Let's Encrypt* certificate authority.
A *certificate authority* is the thing that your browser trusts and that signs the public keys of websites you visit.
It is called a "certificate" authority because it does not sign bare public keys, but *certificates*.
A certificate is just a blob of data bundling a website's public key, its domain name, and some other relevant metadata.

The attack was found merely 6 weeks before major browsers were supposed to start trusting Let's Encrypt's public key. The draft has since become RFC 8555: Automatic Certificate Management Environment (ACME),
mitigating the issues.
Since then, no cryptographic attacks on the protocol are known.

This blog post will go over the accident, and explain why it happened, why it was a surprising bug, and what you should watch for when using signatures in cryptography.

## How Let's Encrypt used signatures

Let's Encrypt is a pretty big deal. Created in 2014, it is a certificate authority run as a nonprofit, providing trust to hundreds of millions of websites.

The keys to Let's Encrypt's success are twofold:

- It is *free*. Before Let's Encrypt, most certificate authorities charged webmasters fees to obtain certificates.
- It is *automated*. If you follow their standardized protocol, you can request, renew, and even revoke certificates via a web interface. Contrast that with other certificate authorities, which did most processing manually and took time to issue certificates.

If a webmaster wants her website `example.com` to provide a secure connection to her users (via HTTPS), she can request a certificate from Let's Encrypt (essentially a signature over its domain name and public key), and after proving that she owns the domain `example.com` and getting her certificate issued, she will be able to use it to negotiate a secure connection with any browser trusting Let's Encrypt.

That's the theory.

In practice the flow goes like this:

- Alice registers on Let's Encrypt with an RSA public key.
- Alice asks Let's Encrypt for a certificate for `example.com`.
- Let's Encrypt asks Alice to prove that she owns `example.com`; for this she has to sign some data and upload it to `example.com/.well-known/acme-challenge/some_file`.
- Once Alice has signed and uploaded the signature, she asks Let's Encrypt to go check it.
- Let's Encrypt checks if it can access the file on `example.com`; if it successfully downloads the signature and the signature is valid, then Let's Encrypt issues a certificate to Alice.

In 2015, Alice could request a signed certificate from Let's Encrypt by uploading a signature (from the key she registered with) on her domain. The certificate authority verifies that Alice owns the domain by downloading the signature from the domain and verifying it. If it is valid, the authority signs a certificate (which contains the domain's public key, the domain name example.com, and some other metadata) and sends it to Alice who can then use it to secure her website in a protocol called TLS.

Let's see next how the attack worked.

## How did the Let's Encrypt attack work?

In the attack that Andrew Ayer found in 2015, Andrew proposes a way to gain control of a Let's Encrypt account that has already validated a domain (let's pick `example.com` as an example).

The attack goes something like this (keep in mind that I'm simplifying):

- Alice registers and goes through the process of verifying her domain `example.com` by uploading some signature over some data on `example.com/.well-known/acme-challenge/some_file`. She then successfully manages to obtain a certificate from Let's Encrypt.
- Later, Eve signs up to Let's Encrypt with a new account and an RSA public key, and requests to recover the `example.com` domain.
- Let's Encrypt asks Eve to sign some new data and upload it to `example.com/.well-known/acme-challenge/some_file` (note that the file is still lingering there from Alice's previous domain validation).
- Eve crafts a new malicious key pair and updates her public key on Let's Encrypt. She then asks Let's Encrypt to check the signature.
- Let's Encrypt obtains the signature file from `example.com`; the signature matches, so Eve is granted ownership of the domain `example.com`. She can then ask Let's Encrypt to issue valid certificates for this domain and any public key.

The 2015 Let's Encrypt attack allowed an attacker (here Eve) to successfully recover an already approved account on the certificate authority. To do this, she simply forges a new keypair that can validate the already existing signature and data from the previous valid flow.

Take a few minutes to understand the attack.
It should be quite surprising to you.
Next, let's see how Eve could craft a new keypair that worked like the original one did.

## Key substitution attacks on RSA

In the previously discussed attack, Eve managed to craft a public key that validates a given signature and message.
This is quite a surprising property of RSA, so let's see how this works.

A digital signature does not uniquely identify a key or a message. -- Andrew Ayer, Duplicate Signature Key Selection Attack in Let's Encrypt (2015)

Here is the problem given to the attacker:
for a fixed signature and (PKCS#1 v1.5 padded) message, a public key $(e, N)$ must satisfy the following equation for the signature to validate:

$$\text{signature}^e = \text{message} \pmod{N}$$

One can easily craft a key pair that will (most of the time) satisfy the equation:

- a public exponent $e = 1$
- a private exponent $d = 1$
- a public modulus $N = \text{signature} - \text{message}$

You can easily verify that the validation works with this key pair:

$$\begin{align}
&\text{signature}^e = \text{message} \pmod{N} \\
\iff &\text{signature} = \text{message} \pmod{\text{signature} - \text{message}} \\
\iff &\text{signature} - \text{message} = 0 \pmod{\text{signature} - \text{message}}
\end{align}$$

Is this issue surprising?

It should be.

This property, called "key substitution", comes from the fact that there exists a gap between the theoretical cryptography world and the applied cryptography world, between the security proofs and the implemented protocols.

Signatures in cryptography are usually analyzed with the *EUF-CMA model*, which stands for *Existential Unforgeability under Adaptive Chosen Message Attack*.

In this model YOU generate a key pair, and then I request YOU to sign a number of arbitrary messages.
While I observe the signatures you produce, I win if I can at some point in time produce a valid signature over a message I hadn't requested.

Unfortunately, even though our modern signature schemes seem to pass the EUF-CMA test fine, they tend to exhibit some *surprising properties* like the key substitution one.

To learn more about key substitution attacks and other signature shenanigans, take a look at my book Real-World Cryptography.

I've now spent 2 years writing my introduction to applied cryptography: **Real-World Cryptography**, which you can already read online here.
(If you're wondering why I'm writing another book on cryptography check this post.)

I've written all the chapters, but there's still a lot of work to be done to make sure that it's good (collecting feedback), that it's consistent (unification of diagrams, of style, etc.), and that it's well structured.

For the latter point, I thought I would leverage the fact that I'm an engineer and use a tool that's commonly used to measure performance: a flamegraph!

It looks like this, and you can click around to zoom on different chapters and sections:

## How does this work?

The bottom layer shows all the chapters in order, and the width of each box shows how lengthy the chapter is.
The more you go up, the more you "nest" yourself into a section.
For example, clicking on chapter 9, Secure transport, you can see that it is composed of several sections, the longest being "How does TLS work", which is itself composed of several subsections, the longest being "The TLS handshake".

## What is it good for?

Using this flamegraph, I can now analyze how consistent the book is.

### Distribution

The good news is that the chapters all seem pretty evenly distributed, with the exception of the shorter chapters 3 (MACs), 6 (asymmetric encryption), and 16 (final remarks).
This is also expected, as these chapters are much more straightforward than the rest of the book.

### Too lengthy

Looks like the biggest chapters are, in order: post-quantum crypto, authenticated encryption, hardware cryptography, user authentication, and secure transport.
This is not great, as post-quantum crypto is supposed to be a chapter for the curious people who get to the end of the book, not a chapter to make the book bigger...
The other chapters are also unnecessarily long.
My goal is going to be to reduce these chapters' length in the coming weeks.

### Too nested

This flamegraph is also useful to quickly see if there are sections that are way too nested. For example, Chapter 9 on secure transport has a lot of mini sections on TLS.
Also, look at some of the sections in chapter 5: Key exchanges > Key exchange standards > ECDH > ECDH standard. That's too much.

### Not nested enough

Some chapters have almost no nested sections at all. For example, chapter 8 (randomness) and 16 (conclusion) are just successions of depth-1 sections. Is this a bad thing? Not necessarily, but if a section becomes too large it makes sense to either split it into several sections, or have subsections in it.

I've noticed, for example, that the first section of chapter 3 on MACs titled "What is a MAC?" is quite long, and doesn't have subsections.

(Same for section 6.2 asymmetric encryption in practice and section 8.2 what is a PRNG)

### Errors

I also managed to spot some errors in nested sections by doing this! So that was pretty cool as well :)

EDIT: If you're interested in doing something like this with your own project, I published the script here.

Schnorr just released a new paper Fast Factoring Integers by SVP Algorithms with the words "This destroyes the RSA cryptosystem." (spelling included) in the abstract.

What does this really mean? The paper is honestly quite dense to read and there's no conclusion in there.

**UPDATE**: Several people have pointed out that "This destroyes the RSA cryptosystem" is not present in the paper itself--that is, until the paper was updated to include the sentence, without the typo.

**UPDATE**: There was some discussion about a potential fake, but several people in the industry are confirming that this is from Schnorr himself:

**UPDATE**: Sweis is calling for a proof of concept:

According to the claims in Schnorr’s paper, it should be practical to set significant new factoring records. There is a convenient 862-bit RSA challenge that has not been factored yet. Posting its factors, as done for the CADO-NFS team’s records, would lend credence to Schnorr’s paper and encourage more review of the methodology.

**UPDATE**: Léo Ducas has been trying to implement the claim, without success.

**UPDATE**: Geoffroy Couteau thinks the claim is wrong:

several top experts on SVP and CVP algorithms have looked at the paper and concluded that it is incorrect (I cannot provide names, since it was in the context of anonymous reviews).

**UPDATE**: Daniel Shiu pointed out an error in the paper.

**UPDATE**: Pedro Fortuny Ayuso is very skeptical of the claim. Will he end up eating his shirt?

Schnorr is 78 years old. I am not gerontophobic (being 50 I am approaching that age) but: Atiyah claimed the Riemann Hypothesis, Hironaka has claimed full resolution of singularities in any characteristic... And I am speaking of Fields medalists.
So: you do really need peer-review for strong arguments.

You might wonder how people find bugs. Low-hanging fruit can be found via code review, static analysis, dynamic analysis (like fuzzing), and other techniques. But what about deep logic bugs? Those you can't find easily. Perhaps the protocol implemented is quite complicated, or correctness is hard to define and edge cases hard to detect. One thing I've noticed is that revisiting protocols is an excellent way to find logic bugs.

Ian Miers once said something like this: "you need time, expertise, and meaningful engagement". I like that sentence; although one can point out that these traits are closely linked--you can't have meaningful engagement without time and expertise--it does show that finding bugs takes "effort".

OK. Meaningful engagement can lead to meaningful bugs, and meaningful bugs can be found at different levels.
So you're here, sitting in your undies in the dark, with a beer by your side and some Uber Eats lying on the floor.
Your computer is staring back at you, blinking at a frequency you can't notice, and waiting for you to find a bug in this protocol.
What do you do?
Perhaps the protocol doesn't have a proof, and this leads you to wonder if you can write one for it...

It worked for Ariel Gabizon, who in 2018 found a subtle error in a 2013 zk-SNARK paper used by the Zcash cryptocurrency he was working on.
He found it by trying to write a proof for the paper he was reading, realizing that the authors had winged it.
While protocols back in the day could afford to wing it, these days people are more difficult--they demand proofs.
The bug Ariel found could have allowed anyone to forge an unlimited amount of money undetected.
It was silently fixed months later in an upgrade to the network.

Ariel Gabizon, a cryptographer employed by the Zcash Company at the time of discovery, uncovered a soundness vulnerability. The key generation procedure of [BCTV14], in step 3, produces various elements that are the result of evaluating polynomials related to the statement being proven. Some of these elements are unused by the prover and were included by mistake; but their presence allows a cheating prover to circumvent a consistency check, and thereby transform the proof of one statement into a valid-looking proof of a different statement. This breaks the soundness of the proving system.

What if the protocol already had a proof though?
Well that doesn't mean much, people enjoy writing unintelligible proofs, and people make errors in proofs all the time.
So the second idea is that reading and trying to understand a proof might lead to a bug in the proof.
Here's some meaningful engagement for you.

In 2001, Shoup revisited some proofs and found some glaring gaps in the proofs for RSA-OAEP, leading to a newer scheme, OAEP+, which was never adopted in practice.
That's because back then, as I said, we really didn't care about proofs.

[BR94] contains a valid proof that OAEP satisfies a certain technical property which they call “plaintext awareness.” Let us call this property PA1. However, it is claimed without proof that PA1 implies security against chosen ciphertext attack and non-malleability. Moreover, it is not even clear if the authors mean adaptive chosen ciphertext attack (as in [RS91]) or indifferent (a.k.a. lunchtime) chosen ciphertext attack (as in [NY90]).

Later in 2018, a series of discoveries on the proofs for the OCB2 block cipher quickly led to practical attacks breaking the cipher.

We have presented practical forgery and decryption attacks against OCB2, a high-profile ISO-standard authenticated encryption scheme. This was possible due to the discrepancy between the proof of OCB2 and the actual construction, in particular the interpretation of OCB2 as a mode of a TBC which combines XEX and XE.

We comment that, due to errors in proofs, ‘provably-secure schemes’ sometimes still can be broken, or schemes remain secure but nevertheless the proofs need to be fixed. Even if we limit our focus to AE, we have many examples for this, such as NSA’s Dual CTR [37,11], EAX-prime [28], GCM [22], and some of the CAESAR submissions [30,10,40]. We believe our work emphasizes the need for quality of security proofs, and their active verification.

Now, reading and verifying a proof is always a good idea, but it's slow, it's not flexible (if you change the protocol, good luck changing the proof), and it's limited (you might want to prove different things re-using parts of the proofs, which is not straightforward).
Today, we are starting to bridge the gap between pen-and-paper proofs and computer science: it is called formal verification.
And indeed, formal verification is booming, with a number of papers in the recent years finding issues here and there just by describing protocols in a formal language and verifying that they withstand different types of attacks.

Prime, Order Please! Revisiting Small Subgroup and Invalid Curve Attacks on Protocols using Diffie-Hellman:

We implement our improved models in the Tamarin prover. We find a new attack on the Secure Scuttlebutt Gossip protocol, independently discover a recent attack on Tendermint’s secure handshake, and evaluate the effectiveness of the proposed mitigations for recent Bluetooth attacks.

Seems Legit: Automated Analysis of Subtle Attacks on Protocols that Use Signatures:

We implement our models in the Tamarin Prover, yielding the first way to perform these analyses automatically, and validate them on several case studies. In the process, we find new attacks on DRKey and SOAP’s WS-Security, both protocols which were previously proven secure in traditional symbolic models.

But even these kinds of techniques have limitations! (OMG David, when will you stop?)

In 2017 Matthew Green wrote:

I don’t want to spend much time talking about KRACK itself, because the vulnerability is pretty straightforward. Instead, I want to talk about why this vulnerability continues to exist so many years after WPA was standardized. And separately, to answer a question: how did this attack slip through, despite the fact that the 802.11i handshake was formally proven secure?

He later writes:

The critical problem is that while people looked closely at the two components — handshake and encryption protocol — in isolation, apparently nobody looked closely at the two components as they were connected together. I’m pretty sure there’s an entire geek meme about this.

pointing to the "2 unit tests. 0 integration tests." joke.

He then recognizes that it’s a hard problem:

Of course, the reason nobody looked closely at this stuff is that doing so is just plain hard. Protocols have an exponential number of possible cases to analyze, and we’re just about at the limit of the complexity of protocols that human beings can truly reason about, or that peer-reviewers can verify. The more pieces you add to the mix, the worse this problem gets.
In the end we all know that the answer is for humans to stop doing this work. We need machine-assisted verification of protocols, preferably tied to the actual source code that implements them. This would ensure that the protocol actually does what it says, and that implementers don’t further screw it up, thus invalidating the security proof.

Well, Matthew, we do have formally verified code! HACL* and fiat-crypto are two examples.
Has anybody heard of those failing? I'd be interested…

In any case, what's left for us? A lot! Formally verified code is hard to produce and generally covers only small parts of your protocol (e.g. field arithmetic for elliptic curves).
So what else can we do?
Implementing the protocol, if it hasn’t been implemented before, is a no-brainer.
In 2016, Taylor Hornby, an engineer at Zcash, wrote about a bug he found while implementing the Zerocash paper in the Zcash cryptocurrency:

In this blog post, we report on the security issues we’ve found in the Zcash protocol while preparing to deploy it as an open, permissionless financial system.
Had we launched Zcash without finding and fixing the InternalH Collision vulnerability, it could have been exploited to counterfeit currency. Someone with enough computing power to find 128-bit hash collisions would have been able to double-spend money to themselves, creating Zcash out of thin air.

Perhaps re-implementing the protocol in a different language might work as well?

One last thing: most of the code out there is not formally verified.
Reviewing code works, of course, but you need time, expertise, money, etc.
So instead, what about testing?
This is what Wycheproof does by implementing a number of test vectors that are known to cause issues:

These observations have prompted us to develop Project Wycheproof, a collection of unit tests that detect known weaknesses or check for expected behaviors of some cryptographic algorithm. Project Wycheproof provides tests for most cryptographic algorithms, including RSA, elliptic curve crypto and authenticated encryption. Our cryptographers have systematically surveyed the literature and implemented most known attacks. We have over 80 test cases which have uncovered more than 40 bugs. For example, we found that we could recover the private key of widely-used DSA and ECDHC implementations.

In all of that, I didn't even talk about the benefits of writing a specification... that's for another day.