Hey! I'm David, the author of the Real-World Cryptography book. I'm a crypto engineer at O(1) Labs on the Mina cryptocurrency. Previously I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant in the Cryptography Services practice of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting.


# Hardware Solutions To Highly-Adversarial Environments Part 3: Trusted Execution Environment (TEE), SGX, TrustZone and Hardware Security Tokens posted April 2020

This is the end of my blog post series on cryptography with hardware. I've written about smart cards and secure elements in part 1, and about HSMs and TPMs in part 2.

## Trusted Execution Environment (TEE)

So far, all of the hardware we've talked about has been standalone secure hardware (with the exception of smart cards, which can be seen as tiny computers). Secure elements, HSMs, and TPMs can all be seen as additional computers.

(picture taken from The right secure hardware for your IoT deployment)

Let’s now talk about integrated secure hardware!

A Trusted Execution Environment (TEE) is a concept that extends the instruction set of a processor to allow programs to run in a separate, secure environment. The separation between this secure environment and the one we are used to dealing with (often called the "rich" execution environment) is enforced by hardware. What ends up happening is that modern CPUs run both a normal OS and a secure OS simultaneously. Both have their own set of registers, but share most of the rest of the CPU architecture (and of course the rest of the system). By using clever CPU-enforced logic, data from the secure world cannot be accessed from the normal world. Because a TEE is implemented directly on the main processor, not only is it faster and cheaper than a TPM or a secure element, it also comes for free with a lot of modern CPUs.

Like all the other hardware solutions, the TEE is a concept that was developed independently by different vendors, with a standard (from GlobalPlatform) trying to play catch-up. The best-known TEEs are Intel's Software Guard Extensions (SGX) and ARM's TrustZone, but there are many more, like AMD PSP, RISC-V MultiZone, and IBM Secure Service Container.

By design, since a TEE runs on the main CPU and can run any code given to it (in a separate environment called an "enclave"), it offers more functionality than secure elements, HSMs, and TPMs (and TPM-like chips). For this reason TEEs are used in a wider range of applications: in the cloud when clients don't trust the server with their data, in multi-party computation (see CCF), and to run smart contracts.

A TEE's goal is first and foremost to thwart software attacks. While the claimed software security seems really attractive, in practice it is hard to segregate execution on the same chip, as the many published software attacks against SGX can attest.

TrustZone is not much better: Quarkslab has a list of papers successfully attacking it as well.

In theory a TPM can be re-implemented in software only, via a TEE (which was done by Microsoft), but one must be careful: again, the TEE as a concept provides no resistance against hardware attacks, besides the fact that things at this microscopic level are way too tiny and tightly packaged to analyze without expensive equipment. And by default, a TEE does not come with secure internal storage (you need a fused key, one that can't be read, to encrypt what you want to store), a hardware random number generator, or other wished-for hardware features. Every manufacturer has different offerings, with different levels of physical security and tamper resistance, when it comes to chips that support a TEE.

## Hardware Security Tokens

Finally, hardware security tokens are keys that you can usually plug into your machine and that can do some cryptographic operations. For example, yubikeys are small dongles that you can plug into the USB port of a laptop, and that will perform some cryptographic operations when you touch their yellow ring.

The word "token" in hardware security token comes from the fact that using one produces a "token" per authentication request, instead of sending the same credentials over and over again.

Yubikeys started as a way to provide second-factor authentication, usually in addition to a password, that an attacker can't exploit in a phishing attack. The idea is that if an attacker calls your grandmother and asks her to spell out the yubikey output, she won't be able to: there is no output. Furthermore, modern yubikeys implement the FIDO2 protocol, which will not produce the correct response unless you are on the right webpage (if we are talking about usage for the web). The reason is that the protocol signs metadata that is linked to what's in the URL bar of your browser.
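
To illustrate, here is a rough Python sketch of the payload a FIDO2/WebAuthn authenticator signs during authentication; the field names follow the WebAuthn spec, but all the values are placeholders:

```python
import hashlib
import json

# the browser (not the token) builds clientDataJSON, including the origin
client_data = json.dumps({
    "type": "webauthn.get",
    "challenge": "base64url-challenge-from-the-server",
    "origin": "https://example.com",  # whatever is in the URL bar
}).encode()

# authenticatorData = rpIdHash (32 bytes) || flags (1 byte) || signCount (4 bytes)
authenticator_data = (
    hashlib.sha256(b"example.com").digest() + b"\x01" + (42).to_bytes(4, "big")
)

# the credential's private key signs this payload, so an assertion produced
# for a phishing origin won't verify for the real one
signed_payload = authenticator_data + hashlib.sha256(client_data).digest()
```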

More recently laptops and mobile devices have started offering other ways to provide the same value as a hardware security token via their own secure module. For example Apple provides a biometric-protected (Touch ID or Face ID) authenticator via the secure enclave.

It's not clear how much protection against hardware attacks your typical hardware security token has to implement, since the compromise of one is not enough to authenticate as a user in most cases (unless you use one as a single factor of authentication). Yubikeys are nonetheless known to have secure elements inside. Still, that doesn't rule out software attacks if the token is badly programmed: in 2013, for example, a low-cost and non-intrusive side-channel attack managed to extract keys from a yubikey.

The cryptocurrency world has similar dongles that will sign transactions for a user, but the threat model is different: they usually have to authenticate the user in some way and provide tamper resistance. Here is a picture of a Ledger Nano.

As with any hardware solution, attacks have been found there as well (for example, one on the Trezor).

## Conclusion

As a summary, this 3-part blog series surveys different techniques that exist to deal with physical attacks:

• Smart cards are microcomputers that need to be turned on by an external device like a payment terminal. They can run arbitrary Java applets. Bank cards, for example, are smart cards.
• Secure elements are a generalization of smart cards, which rely on a set of Global Platform standards. SIM Cards are secure elements for example.
• TPMs are re-packaged secure elements plugged into personal and enterprise computers' motherboards. They follow a standardized API (from the Trusted Computing Group) that is used in a multitude of ways, from measured/secure boot with FDE to remote attestation.
• HSMs can be seen as external and big secure elements for servers. They’re faster and more flexible. Seen mostly in data centers to store keys.
• TEEs like TrustZone and SGX can be thought of as secure elements implemented within the CPU. They are faster and cheaper, but mostly provide resistance against software attacks unless augmented to be tamper-resistant. Most modern CPUs ship with TEEs, with various levels of defense against hardware attacks.
• Hardware Security Tokens are dongles like yubikeys that often repackage secure elements to provide a second factor, by implementing some authentication protocol (usually TOTP or FIDO2; see the TOTP sketch after this list).
• There are many more that I haven't talked about. In reality, vendors can do whatever they want, and we've seen a lot of TPM-like chips: Apple has the secure enclave, Google has Titan, Microsoft has Pluton, and Atmel sells "crypto elements".
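
To make the per-authentication "token" idea concrete, here is a minimal TOTP (RFC 6238) sketch using only Python's standard library; in a hardware token, the shared secret would live inside the secure element:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # HOTP (RFC 4226) applied to the current 30-second window (RFC 6238)
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# the token and the server share the secret and derive the same short-lived code
print(totp(b"secret provisioned at enrollment"))
```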

Keep in mind that no hardware solution is a panacea: you're only increasing the cost of an attack. Against a sophisticated attacker, all of this is pretty much useless. For this reason, design your system so that the compromise of one device doesn't imply the compromise of all devices. Even against normal adversaries, compromising the main operating system often means being able to make arbitrary calls to the secure element. Design your protocol so that the secure element doesn't have to trust the caller: by verifying queries, by relying on an external trusted component or a trusted remote party, by being self-contained, etc. And after all of that, you still have to worry about side-channel attacks :)

PS: thanks to Gabe Pike for the many discussions around TEE!


# Hardware Solutions To Highly-Adversarial Environments Part 2: HSM vs TPM vs Secure Enclave posted April 2020

In the previous post (part 1) you learned about:

• The threat today is not just an attacker intercepting messages over the wire, but an attacker stealing or tampering with the device that runs your cryptography. So-called Internet of Things (IoT) devices often run into this type of threat, and are by default unprotected against sophisticated attackers.
• Hardware can help protect cryptographic applications in highly-adversarial environments. One of the ideas is to provide a device with a tamper-resistant chip to store secrets and perform crypto operations, so that if the device falls into the hands of an attacker, extracting keys or modifying the behavior of the chip will be hard. But hardware-protected crypto is not a panacea: it is merely defense-in-depth, effectively slowing down and increasing the cost of an attack.
• Smart cards were among the first such secure microcontrollers: tiny computers that store secrets and perform cryptographic operations with them. They are supposed to use a number of techniques to discourage physical attackers.
• The concept of a smart card was generalized as the secure element, a term employed differently in different domains, but which boils down to a smart card that can be used as a coprocessor in a greater system that already has a main processor.
• Google, having trouble dealing with the telecoms to host credit card information on SIM cards (which are secure elements), gave birth to the concept of a secure element in the cloud. In the payment space this is called host card emulation (HCE). It works simply by storing the credit card information (which is a 3DES symmetric key shared with the bank) in a secure element in the cloud, and only giving single-use tokens to the user: if the phone is compromised, the attacker can only use it to pay once.

All good?

In this part 2 of our blog series you will learn about more hardware that supports cryptographic operations! These are all secure elements in concept, all doing sort of the same thing in different contexts. Let's get started!

## Hardware Security Module (HSM)

If you understood what a secure element is, well, a hardware security module (HSM) is pretty much a bigger secure element. Not only does the form factor of secure elements require specific ports, they are also slow and low on memory. (Note that being low on memory is sometimes OK, as you can encrypt keys with a secure element's master key and then store the encrypted keys outside of the secure element.) An HSM is thus a more portable, more efficient, more multi-purpose secure element. And like some secure elements, some HSMs can run arbitrary code as well.

HSMs are also subject to their own set of standards and security levels. One of the most widely accepted standards is FIPS 140-2: Security Requirements for Cryptographic Modules, which defines security levels from 1 to 4, where level 1 HSMs provide no protection against physical attacks and level 4 HSMs will wipe their whole memory if they detect any intrusion!

Typically, you find an HSM as an external device with its own shelf on a rack (see the picture of a Luna HSM below), plugged into an enterprise server in a data center.

(To go full circle, some of these HSMs can be administered using smart cards.)

Sometimes you can also find an HSM as a PCIe card plugged into a server’s motherboard, like the IBM Crypto Express in the picture below.

Or even as small dongles that you can plug via USB (if you don’t care about performance), see the picture of a YubiHSM below.

HSMs are heavily used in some industries. Every time you enter your PIN in an ATM or a payment terminal, the PIN ends up being verified by an HSM somewhere. Whenever you connect to a website via HTTPS, the root of trust comes from a Certificate Authority (CA) that stores its private key in an HSM, and the TLS connection is possibly terminated by an HSM. Do you have an Android phone or an iPhone? Chances are Google or Apple are keeping a backup of your phone safe with a fleet of HSMs. This last case is interesting because the threat model is reversed: the user does not trust the cloud with their data, and thus the cloud service provider claims that its service can't see the user's encrypted backup nor access the keys used to encrypt it.

HSMs don't really have a standard, but most of them will at least implement the Public-Key Cryptography Standard 11 (PKCS#11), one of those old standards started by the RSA company and progressively moved to the OASIS organization (in 2012) in order to facilitate adoption.

While PKCS#11's latest version (2.40) was released in 2015, it is merely an update of a standard that originally started in 1994. For this reason it specifies a number of old cryptographic algorithms, and old ways of doing things. Nevertheless, it is good enough for many uses, and it specifies an interface that allows different systems to easily interoperate with each other.
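
As an example, here is what talking to a PKCS#11 device can look like with the python-pkcs11 library; the library path, token label, and PIN are assumptions (SoftHSMv2 acting as a software stand-in for a real HSM):

```python
import pkcs11

# load the vendor's PKCS#11 shared library (path is an assumption;
# SoftHSMv2 is shown here as a software stand-in for a real HSM)
lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")
token = lib.get_token(token_label="my-token")

with token.open(user_pin="1234") as session:
    # generate a 256-bit AES key inside the token: the key material
    # never leaves the device
    key = session.generate_key(pkcs11.KeyType.AES, 256, label="my-key")
    iv = session.generate_random(128)  # bits of randomness from the token
    ciphertext = key.encrypt(b"attack at dawn", mechanism_param=iv)
```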

While an HSM's whole purpose is to make sure nobody can extract key material from it, the security track record of these devices is not always shining. A lot of their security really relies on their high price, on the protection techniques used not being disclosed, and on the certifications (like FIPS and Common Criteria) mostly focusing on the hardware side of things. In practice, devastating software bugs have been found, and it is not always straightforward to know if the HSM you use is vulnerable to any of them (Cryptosense has a good summary of known attacks against HSMs).

By the way, not only is the price of one HSM high (it can easily be tens of thousands of dollars, depending on the security level), but in addition to that HSM you often have another one for testing, and another one for backup (in case your first HSM dies with its keys in it). It can add up!

Furthermore, I still haven’t touched on the elephant in the room with all of these solutions: while you might prevent most attackers from reaching your secret keys, you can't prevent attackers from compromising the system and making their own calls to the secure hardware module (be it a secure element or an HSM). Again, these hardware solutions are not a panacea and depending on the scenario they provide more or less defense-in-depth.

By the way, if it applies to your situation, modern cryptography can offer better ways of reducing the consequences of key material compromise and misuse. For example, using multi-signatures! Check my blog post on the subject.

## Trusted Platform Module (TPM)

A Trusted Platform Module (TPM) is first and foremost a standard (unlike HSMs) developed in the open by the non-profit Trusted Computing Group (TCG). The latest version is TPM 2.0, published with the ISO/IEC (International Organization for Standardization and the International Electrotechnical Commission).

A TPM complying with the TPM 2.0 standard is a secure microcontroller that carries a hardware random number generator (also called a true random number generator, or TRNG) and secure memory for storing secrets, that performs cryptographic operations, and that is tamper-resistant as a whole. If this description reminds you of smart cards, secure elements, and HSMs, well... I told you that everything we were going to talk about in this series is a secure element of some form. (And indeed, it's common to see TPMs implemented as repackaged secure elements.)

You usually find a TPM directly soldered to the motherboard of many enterprise servers, laptops, and desktop computers (see picture below).

Unlike solutions that we’ve seen previously though, a TPM does not run arbitrary code. It offers a well-defined interface that a greater system can take advantage of. Due to these limitations, a TPM is usually pretty cheap (even cheap enough that some IoT devices will ship with one!).

Here is a non-exhaustive list of interesting applications that a TPM can enable:

• User authentication. Ever heard of the FBI iPhone fiasco? TPMs can be used to require a user PIN or password. To prevent low-entropy credentials from being easily bruteforced, a TPM can rate-limit or even count the number of failed attempts.
• Secure boot. Secure boot is about starting a system in a known trusted state, in order to avoid tampering with the OS by malware or physical intrusion. This can be done by using a platform's TPM and the Unified Extensible Firmware Interface (UEFI), the piece of code that launches the operating system. Whenever the image of a new boot loader, OS, or driver is loaded, the TPM can store the associated expected hash and compare it before running the code, failing if the hash of the image is different. If you hold a public key, you can also verify that a piece of code has been signed before running it. This is a gross oversimplification of how secure boot works in practice, but the crypto is pretty straightforward (see the sketch after this list).
• Full disk encryption (FDE). A TPM can store the key (or encrypt the key) that encrypts all the data on the device at rest. If the device is proven to be in a known good state (via secure boot) and the user authenticates correctly, the key can be released to decrypt the data. When the device is locked or shut down, the key vanishes from memory and has to be released by the TPM again. This is a must-have feature in case you lose your device or get it stolen.
• Remote attestation. This allows a device to authenticate itself or prove that it is running specific software. In other words, a TPM can sign a random challenge and/or metadata with a key that can be tied to a unique per-TPM key (and is signed by the TPM vendor). Every TPM comes with such a unique key (called an endorsement key), along with the vendor's certificate authority signature on the public key part. For example, during employee onboarding, a company can add a new employee's laptop's TPM endorsement key to a whitelist of approved devices. Later, if the user wants to access one of the company's services, the service can request the TPM to sign a random challenge along with hashes of what OS was booted, to authenticate the user and prove the good state of the user's device.
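
To give an idea of the crypto behind measured boot and remote attestation, here is a toy Python sketch of the one operation allowed on a TPM's Platform Configuration Registers (PCRs); the hash choice and the measured components are illustrative assumptions:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # a PCR can only be extended, never set directly, so the final
    # value commits to the whole ordered chain of measurements
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start zeroed on reset
for component in [b"bootloader image", b"kernel image", b"driver blob"]:
    pcr = extend(pcr, component)

# for remote attestation, the TPM would sign ("quote") the PCR values
# together with a server-provided random challenge, using a key chained
# to its endorsement key
print(pcr.hex())
```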

There are more functionalities that a TPM can enable (there are, after all, hundreds of commands that a TPM implements), which might even benefit user applications (which should be able to call the TPM).

Note that having a standard is great for interoperability, and for us to understand what is going on, but unfortunately not everyone uses TPMs. Apple has the secure enclave, Microsoft has Pluton, and Google has Titan.

On a darker note, TPMs have their own controversies and have also been subject to devastating vulnerabilities. For example, the ROCA attack found that an estimated one million TPMs (and even smart cards) from the popular vendor Infineon had been wrongly generating RSA private keys for years (the prime generation was flawed).

To recap:

• HSMs. They are external, bigger, and faster secure elements. They do not follow any standard interface, but usually implement the PKCS#11 standard for cryptographic operations. HSMs can be certified at different security levels via some NIST standard (FIPS 140-2).
• TPMs. They are chips that follow the TPM standard; more specifically, they are a type of secure element with a specified interface. A TPM is usually a secure chip directly attached to the motherboard, perhaps implemented using a secure element. While it does not run arbitrary programs like some secure elements, smart cards, and HSMs do, it enables a number of interesting applications for devices as well as user applications.

That’s it for now, check this blog again to read part 3 which will be about TEEs!

Many thanks to Jeremy O'Donoghue, Thomas Duboucher, Charles Guillemet, and Ryan Sleevi who provided help and reviews!

# Hardware Solutions To Highly-Adversarial Environments Part 1: Whitebox Crypto vs Smart Cards vs Secure Elements vs Host-Card Emulation (HCE) posted March 2020

If at some point you realize that doing cryptography means managing long-term keys, you're standing in the world of key management.

Makes sense right?

In these lands, you are going to run into scenarios where attackers can be quite close to your applications. I call these highly-adversarial environments.

Imagine using your credit card on an ATM skimmer (a doodad that a thief can place on top of the card reader of an ATM in order to copy the content of your credit card, see the picture below); downloading an application on your mobile phone that compromises the OS; hosting a web application on a colocated server shared with a malicious customer; managing highly-sensitive secrets in a data center that gets breached; and so on.

These scenarios suck, and are very counterintuitive to most cryptographers. This is because cryptography has come a long way since the historical “Alice wants to encrypt a message to Bob without Eve intercepting it”. Nowadays, it’s often more like “Alice wants to encrypt a message to Bob, but Alice is also Eve”.

The key here is that in these scenarios, there’s not much that can be done cryptographically (unless you believe in whitebox crypto) and hardware can go a long way to help.

OK, so now we have a whole world of new doohickeys to learn about, and there's a lot of thingamabobs, believe me (hence the dense title). It can be quite confusing, so here we go: my promise is that by the end of this blog post series you'll have a better understanding of what all these different hardware solutions are.

Keep in mind that none of these solutions are pure cryptographic solutions: they are all defense-in-depth (and sometimes dubious) solutions that serve to hide secrets and their associated sensitive cryptographic operations. They also all have a given cost, meaning that if a sophisticated attacker decides to break the bank, there’s not much we can do (besides raising the cost of an attack).

OK let's get started.

## Obfuscation

By definition, obfuscation has nothing to do with security: it is the act of scrambling something so that it still works but is hard to understand. So, for laughs, let's first mention whitebox cryptography, which attempts to "cryptographically" obfuscate the key inside of an algorithm. That's right: you have the source code of some AES-based encryption algorithm with a fixed key, and it encrypts and decrypts fine, but the key is mixed so well with the implementation that it is too confusing for anyone to extract it. That's the theory. Unfortunately, in practice, no published whitebox crypto algorithm has been found to be secure, and most commercial solutions are closed-source because of this (security through obscurity kinda works in the real world). Again, it's all about raising the cost and making it harder for attackers.

All in all, whitebox crypto is a big industry that sells dubious products to businesses in need of DRM solutions. On the more serious side, there is a branch of cryptography called Indistinguishability obfuscation (iO) that attempts to do this cryptographically (so for realz). iO is a very theoretical, impractical, and so far not-really-proven field of research. We’ll see how that one goes.

(Timeline of whitebox cryptography, taken from Matthieu Rivain’s slides)

## Smart Cards

OK, whitebox crypto is not great. Worse: even if you can't extract the key, you can still copy the program instead (and use it to do whatever cryptographic operation it features). It would be great if we could prevent people from copying secrets out of sensitive devices, or even prevent them from seeing what's going on when the device performs cryptographic operations. A smart card is exactly this. It's what you commonly find in credit cards, and it is activated either by inserting it into a payment terminal (also called a Point of Sale, or PoS, terminal) or by getting it close enough to the terminal to use Near-field Communication (NFC).

Smart cards are pretty old, and started as a practical way to give everyone a pocket computer. Indeed, a smart card packs a CPU, memory (RAM, ROM, and EEPROM), input/output, a hardware random number generator (a so-called TRNG), etc., unlike the not-so-smart cards that only had data stored on a magnetic stripe (which in turn can be easily copied via the skimmers I talked about previously). Today, it seems like those same people all have a much more powerful computer in their pockets, so smart cards are probably going to die. (Rob Wood points out to me that more than a quarter of the US still doesn't have a smartphone, so there's still some time before this prophecy comes to fruition.)

Smart cards mix a number of physical and logical techniques to prevent observation, extraction, and modification of their execution environment and some of their memory (where secrets are stored). But as I said earlier, it's all about how much money you want an attacker to have to spend, and many techniques exist that attempt to break these cards:

• Non-invasive attacks such as differential power analysis (DPA) analyze the power consumption of the smart card while it is doing cryptographic operations in order to extract the associated keys.
• Semi-invasive attacks require access to the chip’s surface to mount attacks such as differential fault analysis (DFA) which use heat, lasers, and other techniques to modify the execution of a program running on the smart card in order to leak the key via cryptographic attacks (see my post on RSA signature fault attacks for an example).
• Finally, invasive silicon attacks can modify the circuitry in the silicon itself to alter its function and reveal secrets.

## Secure Elements

Smart cards got really popular really fast, and it became obvious that having such a secure blackbox in other devices could be useful. The concept of a secure element was born: a tamper-resistant microcontroller that can be found in a pluggable form factor like UICCs (the SIM cards required by carriers to access their 3G/4G/5G networks), or directly bonded onto chips and motherboards, like the embedded SE (eSE) attached to an iPhone's NFC chip. Really, it is just a small separate piece of hardware meant to protect your secrets and their usage in cryptographic operations.

SEs are an evolution of the traditional chip that resides in smart cards, which have been adapted to suit the needs of an increasingly digitalized world, such as smartphones, tablets, set top boxes, wearables, connected cars, and other internet of things (IoT) devices. (GlobalPlatform)

Secure elements are a key concept for protecting cryptographic operations in the Internet of Things (IoT), a colloquial (and overloaded) term for devices that can communicate with other devices (think smart cards in credit cards, SIM cards in phones, biometric data in passports, garage keys, smart home sensors, and so on).

Thus, you can see all of the solutions that will follow in this blogpost series as secure elements implemented in different form factors, using different techniques, and providing different levels of defense-in-depth.

If you are required to use a secure element (to store credit card data, for example), you most likely also have to get it certified. The main definitions and standards around secure elements come from GlobalPlatform, but there exist more standards, like Common Criteria (CC), NIST's FIPS, EMV (for Europay, Mastercard, and Visa), and so on. If you're in the market for secure microcontrollers, you will often see claims like "FIPS 140-2 certified" and "certified CC EAL 5+" next to them. Claims that can be obtained after spending some quality time, and a lot of money, with licensed certification labs.

## Host Card Emulation (HCE)

It's 2020, and most people have a computer in their pocket: a smartphone. What's the point of a credit card anymore? Well, not much. Nowadays more and more payment terminals support contactless payment via the Near-field Communication (NFC) protocol, and more and more smartphones ship with an NFC chip that can potentially act as a credit card.

NFC for payment is specified as Card Emulation. Literally: it emulates a bank card. Banks allow you to do this only if you have a secure element.

Since Apple has full control over its hardware, it can easily add a secure element to its new iPhones to support payment, and this is what Apple did (with an embedded SE bonded to the NFC chip since the iPhone 6). iPhone users can register a bank card with the Apple wallet application, Apple can then obtain the card’s secrets from the issuing bank, and the card secrets can finally be stored in the eSE. The secure element communicates directly with the NFC chip and then to NFC readers, thus a compromise of the phone OS does not impact the secure element.

Google, on the other hand, had quite a hard time introducing payment to Android-based mobile phones due to phone vendors all doing different things. The saving technology for Google ended up being a cloud-based secure element called Host Card Emulation (HCE) introduced in 2013 in Android 4.4.

Wikipedia has some cool diagram to show what’s going on whenever you pay with Android Pay or Apple Pay:

Although Apple theoretically doesn't have to use tokenization, since iPhones have secure elements that can store the real PAN (primary account number), they do use it in order to gain more privacy (it's, after all, their new bread and butter).

In part 2 of this blog series I’ll cover HSMs, TPMs, and much more :)

(I would like to thank Rob Wood, Thomas Duboucher, and Lionel Rivière for answering my many questions!)

PS: I'm writing a book which will contain this and much more, check it out!

# Coronavirus and cryptography posted March 2020

The coronavirus is shaking the world, on its multiple layers, and cryptography hasn't been spared.

The IACR announced on March 14th that multiple conferences were postponed:

FSE 2020, which was supposed to be held in Athens, Greece, during 22-26 March 2020, has been postponed to 8-12 November 2020.

PKC 2020, which was supposed to be held in Edinburgh, Scotland, during 4-7 May 2020, has been postponed.

EUROCRYPT 2020, which was supposed to be held in Zagreb, Croatia, during 10-14 May 2020, has been postponed.

While some others were not:

No changes have been made at this time to the schedule of CRYPTO 2020, CHES 2020, TCC 2020, and ASIACRYPT 2020, but we will continue to closely monitor the situation and will inform members if changes are needed.

While many workplaces (including mine) are moving to a WFH (work from home) model, will conferences follow?

It seems to be the case at least for Consensus 2020, a cryptocurrency conference organized by coindesk, which is moving to an online model:

Consensus 2020 will now be a completely virtual experience, where attendees from all over the world can participate online at no charge.

On a more dramatic note, it seems like several participants of EthCC, which was held in Paris almost a week ago, have contracted the virus. A Google spreadsheet has been circulating so that people can self-report and figure out who else could have potentially contracted the virus. Even Vitalik Buterin is rumored to have had mild COVID-19 symptoms. Nobody is out of reach.

On a lighter note, my coworker Kostas presented on proofs of solvency at the lightning talks of Real World Crypto 2020. With his Merkle tree-like construction, he hopes to make governments accountable when they count the number of people who tested positive for the virus.


# EdDSA, Ed25519, Ed25519-IETF, Ed25519ph, Ed25519ctx, HashEdDSA, PureEdDSA, WTF? posted March 2020

## The Edwards-curve Digital Signature Algorithm (EdDSA)

You've heard of EdDSA right? The shiny and new signature scheme (well new, it's been here since 2008, wake up).

Since its inception, EdDSA has evolved quite a lot, and some amount of standardization process has happened to it. It's even doomed to be adopted by the NIST in FIPS 186-5!

First, some definition:

• EdDSA stands for Edwards-curve Digital Signature Algorithm. As its name indicates, it is supposed to be used with twisted Edwards curves (a type of elliptic curve). Its name can be deceiving though, as it is not based on the Digital Signature Algorithm (DSA) but on Schnorr signatures!
• Ed25519 is the name given to the algorithm combining EdDSA and the Edwards25519 curve (a curve somewhat equivalent to Curve25519 but discovered later, and much more performant).

EdDSA, Ed25519, and the more secure Ed448 are all specified in RFC 8032.

## RFC 8032: Edwards-Curve Digital Signature Algorithm (EdDSA)

RFC 8032 takes some new direction from the original paper:

• It specifies a malleability check during verification, which prevents ill-intentioned people from forging an additional valid signature from an existing signature of yours. Whenever someone talks about Ed25519-IETF, they probably mean "the algorithm with the malleability check" (see the sketch after this list).
• It specifies a number of Ed25519 variants, which are the reason for this post.
• Maybe some other stuff I'm missing.
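
Here is a minimal sketch of that check; the constant L is the order of Edwards25519's prime-order subgroup, as specified in RFC 8032:

```python
# order of the prime-order subgroup of Edwards25519 (RFC 8032's L)
L = 2**252 + 27742317777372353535851937790883648493

def s_is_canonical(S: int) -> bool:
    # reject any S not fully reduced mod L; otherwise (R, S + L) would be
    # a second valid signature for the same message and public key
    return 0 <= S < L
```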

To sign with Ed25519, the original algorithm defined in the paper, here is what you're supposed to do:

1. compute the nonce as HASH(nonce_key || message)
2. compute the commitment R = [nonce]G with G the generator of the group.
3. compute the challenge as HASH(commitment || public_key || message)
4. compute the proof S = nonce + challenge × signing_key
5. the signature is (R, S)

where HASH is just the SHA-512 hash function.
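
To make these five steps concrete, here is a toy sketch of the same flow. It swaps the Edwards25519 curve for a multiplicative group modulo a prime so the group operations fit in a few lines; the parameters and byte lengths are illustrative assumptions, so do not use this for anything real:

```python
import hashlib

# Toy parameters (assumptions for illustration): we use the multiplicative
# group of integers modulo the Mersenne prime 2^521 - 1 instead of the
# Edwards25519 curve, so "[k]G" becomes "G^k mod p".
p = 2**521 - 1
G = 3          # toy generator
n = p - 1      # modulus for scalar (exponent) arithmetic

def H(*parts: bytes) -> int:
    """HASH is just SHA-512, interpreted as a (little-endian) integer."""
    return int.from_bytes(hashlib.sha512(b"".join(parts)).digest(), "little")

def sign(signing_key: int, nonce_key: bytes, message: bytes):
    public_key = pow(G, signing_key, p)
    # 1. compute the nonce as HASH(nonce_key || message)
    nonce = H(nonce_key, message) % n
    # 2. compute the commitment R = [nonce]G
    R = pow(G, nonce, p)
    # 3. compute the challenge as HASH(commitment || public_key || message)
    challenge = H(R.to_bytes(66, "little"), public_key.to_bytes(66, "little"), message) % n
    # 4. compute the proof S = nonce + challenge * signing_key
    S = (nonce + challenge * signing_key) % n
    # 5. the signature is (R, S)
    return R, S

def verify(public_key: int, message: bytes, signature) -> bool:
    R, S = signature
    challenge = H(R.to_bytes(66, "little"), public_key.to_bytes(66, "little"), message) % n
    # check [S]G == R + [challenge]public_key (written multiplicatively here)
    return pow(G, S, p) == R * pow(public_key, challenge, p) % p

signing_key = 0xC0FFEE
nonce_key = hashlib.sha512(b"derived from the private key").digest()
signature = sign(signing_key, nonce_key, b"hello")
assert verify(pow(G, signing_key, p), b"hello", signature)
```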

At a high-level this is similar to Schnorr signatures, except for the following differences:

• The nonce is generated deterministically (as opposed to probabilistically) from a fixed nonce_key (derived from your private key) and the message M. This is one of the cool features of Ed25519: it prevents you from reusing the same nonce twice.
• The challenge is computed not only with the commitment and the message to sign, but with the public key of the signer as well. Do you know why?

Important: notice that the message here does not need to be hashed before being passed to the algorithm, as it is already hashed as part of the algorithm.

Anyway, we still don't know WTF all the variants specified are.

## PureEdDSA, ContextEdDSA and HashEdDSA

Here are the variants that the RFC actually specifies:

• PureEdDSA, shortened as Ed25519 when coupled with Edwards25519.
• HashEdDSA, shortened as Ed25519ph when coupled with Edwards25519 (and where ph stands for "prehash").
• Something with no name we'll call ContextEdDSA, defined as Ed25519ctx when coupled with Edwards25519.

All three variants can share the same keys. They differ only in their signing and verification algorithms.

By the way Ed448 is a bit different, so from now on I'll focus on EdDSA with the Edwards25519 curve.

Ed25519 (or pureEd25519) is the algorithm I described in the previous section.

Easy!

Ed25519ctx (or ContextEd25519) is pureEd25519 with one modification: the HASH(.) function used in the signing protocol I described above is re-defined as HASH(x) = SHA-512(some_encoding(flag, context) || x) where:

• flag is set to 0
• context is a context string (mandatory only for Ed25519ctx)

In other words, the two instances of hashing in the signing algorithm now include some prefix. (Intuitively, you can also see that these variants are totally incompatible with each other.)

Right off the bat, you can see that ContextEd25519's big difference is just that it adds some mandatory domain separation to Ed25519.

Ed25519ph (or HashEd25519), finally, builds on top of ContextEd25519 with the following modifications (see the sketch after this list):

• flag is set to 1
• context is now optional, but advised
• the message is replaced with a hash of the message (the specification says that the hash has to be SHA-512, but I'm guessing that it can be anything in reality)
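
Concretely, this prefix is the dom2 function from RFC 8032; here is a minimal sketch of the redefined hash:

```python
import hashlib

def dom2(flag: int, context: bytes) -> bytes:
    # the dom2(x, y) prefix from RFC 8032
    assert len(context) <= 255
    return (b"SigEd25519 no Ed25519 collisions"
            + bytes([flag, len(context)])
            + context)

def HASH(x: bytes, flag: int, context: bytes) -> bytes:
    return hashlib.sha512(dom2(flag, context) + x).digest()

# Ed25519ctx: flag = 0, context mandatory
# Ed25519ph:  flag = 1, and the message itself is replaced by SHA-512(message)
```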

OK. So the big difference now seems that we are doubly-hashing.

## Why HashEdDSA and why double hashing?

First, pre-hashing sucks because it kills the collision resistance of the signature algorithm. In PureEdDSA we assume that the algorithm takes the original message, not a hash. (Although this is not always true: the caller of the function can do whatever they want.) Then a collision on the hash function wouldn't matter (for creating a signature that validates two different messages), because the attacker would have to find a collision on the nonce, which is computed using a secret (the nonce key).

But if you pre-hash the message, then finding a collision there is enough to obtain a signature that validates two messages.

Thus, you should use PureEdDSA if possible. And use it correctly (pass it the correct message.)

Why is HashEdDSA a thing then?

The EdDSA for more curves paper, which introduced HashEdDSA, has this to say:

The main motivation for HashEdDSA is the following storage issue (which is irrelevant to most well-designed signature applications). Computing the PureEdDSA signature of M requires reading through M twice from a buffer as long as M, and therefore does not support a small-memory "Init-Update-Final" interface for long messages. Every common hash function H0 supports a small-memory "Init-Update-Final" interface for long messages, so H0-EdDSA signing also supports a small-memory "Init-Update-Final" interface for long messages. Beware, however, that analogous streaming of verification for long messages means that verifiers pass along forged packets from attackers, so it is safest for protocol designers to split long messages into short messages to be signed; this splitting also eliminates the storage issue.

## Why am I even looking at this rabbit hole?

Because I'm writing a book, and it'd be nice to explain what the hell is going on with Ed25519.

# What's a key exchange? posted March 2020

I've been writing about cryptography for a book for a year now, and it has brought me some interesting challenges. One of them is that I constantly have to throw away what I've learned a long time ago, and imagine what it feels like not to know about a concept.

For example what are key exchanges?

The most intuitive explanation that I knew of (up until recently) was the one given by the wikipedia page on key exchanges. You might already know about it (unless you're reading this post to learn about key exchanges). It's a picture that involves paint. Take a look at it, but don't try to understand what is going on if you don't know about key exchanges yet. You can come back to it later.

I thought this was great. At least until I tried to explain key exchanges to my friends using this analogy. Nobody got it.

Nobody.

The other problem was that I couldn't use colors to explain anything in my book, as it'll be printed in black & white.

So I sat on the sad realization that I didn't have a great explanation for key exchanges, and this for a number of months, until a more intuitive idea came to mind.

The idea goes like this. Imagine that Alice and Bob want to share a secret, but are afraid that someone is intercepting their communications. What they do is go to the store and buy the same bottle of generic soda.

Once home, they both start a random timer and shake their respective bottles until their timer ends.

What they obtain are shaken, pressurized, ready-to-gush-out bottles of soda. Each bottle will release a different amount of pressure.

After that, they swap bottles. Now Alice has the bottle of Bob, and Bob has Alice's bottle.

What do they do now? They restart their timers and shake the other person's bottle for the same amount of time.

Shake shake shake!

What do they finally obtain? Try to guess.

If I did my job correctly, then I gave you an intuition of how key exchanges work. Alice and Bob should now each have a bottle of soda that will release the same amount of pressure once opened. And that's the secret!

And even if I steal the two bottles, I can't get a bottle that combines both bottles' pressure.
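
If you prefer code to soda, the same dance is a Diffie-Hellman key exchange. Here is a minimal sketch with toy parameters (real deployments use vetted groups or elliptic curves):

```python
import secrets

p = 2**521 - 1   # the "same bottle of generic soda" both sides start from
g = 3

a = secrets.randbelow(p - 2) + 1   # Alice's secret random "timer"
b = secrets.randbelow(p - 2) + 1   # Bob's secret random "timer"

A = pow(g, a, p)   # Alice's shaken bottle
B = pow(g, b, p)   # Bob's shaken bottle

# they swap bottles, then each shakes the other's bottle with their own timer
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)
assert alice_secret == bob_secret   # same pressure!
```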

I recap the whole flow in the picture below:

Did you know about key exchanges before? Did you get it? Or did you think the painting example made more sense?

Please tell me in the comments!

This is probably what I'll include in my book as an introduction of what key exchanges are, unless I find a better way to explain it :)

# Cryptographic Signatures, Surprising Pitfalls, and LetsEncrypt posted March 2020

## The Let's Encrypt Accident

On August 11th, 2015, Andrew Ayer posted the following email to the IETF mailing list:

I recently reviewed draft-barnes-acme-04 and found vulnerabilities in the DNS, DVSNI, and Simple HTTP challenges that would allow an attacker to fraudulently complete these challenges.

(The author has since then written a more complete explanation of the attack.)

The draft-barnes-acme-04 mentioned by Andrew Ayer is a document specifying ACME, one of the protocols behind the Let's Encrypt Certificate Authority: the thing that your browser trusts and that signs the public keys of the websites you visit.

The attack was found merely 6 weeks before major browsers were supposed to ship with Let's Encrypt's public keys in their trust store. The draft has since become RFC 8555: Automatic Certificate Management Environment (ACME), mitigating the issues. Since then no cryptographic attacks are known on the protocol.

But how did we get there? What's the deal with signature schemes these days? And are all of our protocols doomed? This is what this blog post will answer.

## Let's Encrypt Use Of Signatures

Let's Encrypt is a pretty big deal. Created in 2014, it is a certificate authority run as a non-profit, currently providing trust to ~200 million websites. (You can read my article Let's Encrypt Overview to learn more about it.)

The key to Let's Encrypt's success is twofold:

• It is free. Before Let's Encrypt most certificate authorities charged fees from webmasters who wanted to obtain certificates.
• It is automated. If you follow their standardized protocol, you can request, renew, and even revoke certificates via an API. Contrast that with other certificate authorities that did most of their processing manually and took time to issue certificates.

If a webmaster wants her website example.com to provide a secure connection to her users (via HTTPS), she can request a certificate from Let's Encrypt, and after proving that she owns the domain example.com and getting her certificate issued, she will be able to use it to negotiate a secure connection with any browser trusting Let's Encrypt.

That's the theory.

In practice the flow is the following:

1. Alice registers on Let's Encrypt with an RSA public key.
2. Alice asks Let's Encrypt for a certificate for example.com.
3. Let's Encrypt asks Alice to prove that she owns example.com, for this she has to sign some data and upload it to example.com/.well-known/acme-challenge/some_file.
4. Once Alice has signed and uploaded the signature, she asks Let's Encrypt to go check it.
5. Let's Encrypt checks that it can access the file on example.com; if it successfully downloads the signature, and the signature is valid, then Let's Encrypt issues a certificate to Alice.

I recapitulate some of this flow in the following figure:

Now, you might be wondering: what if Alice does not own example.com and manages to man-in-the-middle Let's Encrypt in step 5? That's a real issue that's been bothering me ever since Let's Encrypt launched, and it turns out a team of researchers at Princeton demonstrated exactly this in Bamboozling Certificate Authorities with BGP:

We perform the first real-world demonstration of BGP attacks to obtain bogus certificates from top CAs in an ethical manner. To assess the vulnerability of the PKI, we collect a dataset of 1.8 million certificates and find that an adversary would be capable of gaining a bogus certificate for the vast majority of domains

The paper continues and proposes two countermeasures to remediate, or at least reduce the risk of, these attacks:

Finally, we propose and evaluate two countermeasures to secure the PKI: 1) CAs verifying domains from multiple vantage points to make it harder to launch a successful attack, and 2) a BGP monitoring system for CAs to detect suspicious BGP routes and delay certificate issuance to give network operators time to react to BGP attacks.

Recently, Let's Encrypt implemented the first solution, multi-perspective domain validation, which changes the way step 5 of the above flow is performed: Let's Encrypt now downloads the proof from example.com from multiple vantage points.

## How Did The Let's Encrypt Attack Work

But let's get back to what I was talking about, the attack that Andrew Ayer found in 2015.

In it, Andrew proposes a way to gain control of a Let's Encrypt account that has already validated a domain (let's say example.com).

The attack goes like this:

1. Alice registers and goes through the process of verifying her domain example.com by uploading some signature over some data on example.com/.well-known/acme-challenge/some_file. She then successfully manages to obtain a certificate from Let's Encrypt.
2. Later, Eve signs up to Let's Encrypt with a new account and a new RSA public key, and requests ownership of the example.com domain.
3. Let's Encrypt asks Eve to sign some new data and upload it to example.com/.well-known/acme-challenge/some_file (note that the file is still lingering there from Alice's previous domain validation).
4. Eve crafts a new malicious keypair and updates her public key on Let's Encrypt. She then asks Let's Encrypt to check the signature.
5. Let's Encrypt obtains the signature file from example.com; the signature validates under Eve's new public key, and Eve is granted ownership of the domain example.com.

I recapitulate the attack in the following figure:

Wait what?

What happened there?

## Key Substitution Attack With RSA

In the above attack Eve managed to create a valid public key that validates a given signature and message.

This is because, as Andrew Ayer wrote:

A digital signature does not uniquely identify a key or a message

If you remember how RSA works, this is actually not too hard to understand.

For a fixed signature and (PKCS#1 v1.5 padded) message, a public key (e, N) must satisfy the following equation for the signature to validate:

$\text{signature}^e = \text{message} \pmod{N}$

One can easily craft a public key that will (most of the time) satisfy the equation:

• $e = 1$
• $N = \text{signature} - \text{message}$

You can easily verify that the validation works:

\begin{align}
&\text{signature}^e = \text{message} \pmod{N}\\
\iff &\text{signature} = \text{message} \pmod{\text{signature} - \text{message}}\\
\iff &\text{signature} - \text{message} = 0 \pmod{\text{signature} - \text{message}}
\end{align}

By definition the last line is true.
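
Here is the trick in a few lines of Python, with toy integers standing in for the (padded) message and the signature:

```python
def craft_public_key(message: int, signature: int):
    # e = 1 and N = signature - message satisfy signature^e = message (mod N)
    return 1, signature - message

message, signature = 0x1337, 0xCAFE   # toy integers, signature > message
e, N = craft_public_key(message, signature)
assert (pow(signature, e, N) - message) % N == 0   # the signature "validates"
```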

## Security of Cryptographic Signatures

Is this issue surprising?

It should be.

And if so why?

This is because of the gap that exists between the theoretical world and the applied world, between the security proofs and the implemented protocol.

Signatures in cryptography are usually analyzed with the EUF-CMA model, which stands for Existential Unforgeability under Adaptive Chosen Message Attack.

In this model, YOU generate a key pair, and I then request YOU to sign a number of arbitrary messages of my choosing. Observing the signatures you produce, I win if I can at some point produce a valid signature over a message I hadn't requested.

Unfortunately, even though our modern signature schemes seem to pass the EUF-CMA test fine, they tend to exhibit some surprising properties.

## Subtle Behaviors of Signature Schemes

The excellent paper Seems Legit: Automated Analysis of Subtle Attacks on Protocols that Use Signatures by Dennis Jackson, Cas Cremers, Katriel Cohn-Gordon, and Ralf Sasse attempts to list these surprising properties and the signature schemes affected by them (and then finds a bunch of these in real protocols using formal verification; it's a cool paper, read it).

Let me briefly describe each property:

Conservative Exclusive Ownership (CEO)/Destructive Exclusive Ownership (DEO). This refers to what Koblitz and Menezes used to call Duplicate Signature Key Selection (DSKS). In total honesty, I don't think any of these terms are self-explanatory. I find these attacks easier to remember if thought of as the following two variants:

1. key substitution attacks (CEO), where a different keypair or public key is used to validate a given signature over a given message.
2. message key substitution attacks (DEO), where a different keypair or public key is used to validate a given signature over a new message.

To recap: the first attack fixes both the message and the signature, the second one only fixes the signature.

Malleability. Most signature schemes are malleable, meaning that if you give me a valid signature I can tamper with it so that it becomes a different but still valid signature. Note that the signer can usually create different signatures for the same message, but malleability here refers to the fact that someone with zero knowledge of the private key can also create a new valid signature for the same signed message. It is not clear if this has any impact on any real-world protocol, even though the Bitcoin exchange MtGox blamed its loss of funds on this one. From the paper Bitcoin Transaction Malleability and MtGox:

In February 2014 MtGox, once the largest Bitcoin exchange, closed and filed for bankruptcy claiming that attackers used malleability attacks to drain its accounts.

Note that a newer security model called SUF-CMA (for strong EUF-CMA) attempts to include this behavior in the security definition of signature schemes, and some recent standards (like RFC 8032 that specifies Ed25519) are mitigating malleability attacks on their signature schemes.

Re-signability. This one is simple to explain. To validate a signature over a message, you often don't need the message itself, just its digest. This allows anyone to re-sign the message with their own keys without knowing the message itself. How impactful is this in real-world protocols? Not sure, but you never know.

Collidability. This is another one where it's not clear if it'll bite you one day: some schemes allow you to craft signatures that will validate under several messages. Worse, Ed25519 as originally designed allows one to craft a public key and a signature that will validate any message with high probability. (This has been fixed in some implementations, like libsodium.)

I recapitulate the substitution attacks in the diagram below:

What to do with all of this information?

Well, for one, signature schemes are definitely not broken, and you probably shouldn't worry if your use of them is mainstream.

But if you're designing cryptographic protocols, or if you're implementing something that's more complicated than the every day use of cryptography you might want to keep these in the back of your mind.

Did you like this content? This is part of a book about how to apply modern cryptography in real world applications. Check it out!

# Authentication What The Fuck: Part II posted February 2020

Writing about real world cryptography, it seems like what I end up writing a lot about is protocols and how they solve origin/identity authentication.

Don't get me wrong, confidentiality has interesting problems too (e.g. how to bring confidentiality to a blockchain), but authentication is most of what applied cryptography is about, for realz.

Do I need to convince you?

If you think about it, most protocols are about finding ways to provide authentication to different scenarios. And that's why they can get complicated!

I'll take my life as an example; here are the authentication problems and solutions that I run into:

• insecure → one-side authenticated. Every day I use HTTPS, which uses the web public-key infrastructure (web PKI) to allow my browser to authenticate any websites on the web. It's a mess, but that's how you scale machine-to-machine authentication nowadays.
• one-side authenticated → mutually-authenticated. Whenever I log into a website over a secure HTTPS connection, this is what happens. A machine asks me to present some password (in the clear, or obliviously via an asymmetric password-authenticated key exchange), or maybe a one-time password (via TOTP), or maybe I'll have to press my thumb on a yubikey (FIDO2), or maybe I'll have to do a combination of several things (MFA). These are usually machines-authenticating-humans types of flows.
• insecure → mutually-authenticated. Whenever I talk to someone on Signal, or connect to a new WiFi, or pair a bluetooth device (like my phone with a car), I go from an insecure connection to a mutually-authenticated connection. There is a bit more nuance here, as sometimes I'll authenticate a machine (a WiFi access point for example) and sometimes I'll authenticate a human (end-to-end encryption). So different techniques work best depending on the type of peer you're trying to talk to.

In the end, I think these are the main three big categories of origin authentication. Can you think of a better classification?
