Hey! I'm David, the author of the Real-World Cryptography book. I'm a crypto engineer at O(1) Labs on the Mina cryptocurrency, previously I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant for the Cryptography Services of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting.


# Crypto training at Black Hat US 2016 posted May 2016

I'll be in Vegas for the next Black Hat USA giving a crypto training with my coworkers Javed Samuel and Alex Balducci.

There's a nice summary on the official training page.

It's a 2-day crash course on our knowledge as crypto consultants. There will be a lot of general background (protocols, side-channels, random numbers, ...) as well as deeper segments and exercises on common crypto attacks.

If you're not interested, you can still hit me up for drinks around Black Hat or Defcon, I'll be in Vegas. Otherwise I'll be in Santa Barbara a few weeks later for CRYPTO and CHES. If you're in need of a crypto conference, there is also SAC happening in Canada between Defcon and CRYPTO (I won't be there).


# Referencing eprint papers posted May 2016

Just realized that every ePrint paper has a BibTeX snippet to reference the paper in your own work. If you don't know what I'm talking about, I'm talking about this:

Go to any paper's page and click on the BibTeX Citation link. The snippet needs to be included in a .bib file and referenced from your main .tex file.

@misc{cryptoeprint:2015:767,
author = {Daniel J. Bernstein and Tanja Lange and Ruben Niederhagen},
title = {Dual EC: A Standardized Back Door},
howpublished = {Cryptology ePrint Archive, Report 2015/767},
year = {2015},
note = {\url{http://eprint.iacr.org/}},
}
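To use it, a minimal main.tex could look like the sketch below (the file name references.bib is an assumption; name it whatever you like, as long as \bibliography points at it):

```latex
\documentclass{article}
\begin{document}
Dual EC was shown to be a standardized back door~\cite{cryptoeprint:2015:767}.

\bibliographystyle{plain}
\bibliography{references} % references.bib contains the snippet above
\end{document}
```

Then run pdflatex, bibtex, and pdflatex twice more to resolve the citation.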

# How Gavin Andresen was duped into believing Wright is Satoshi posted May 2016

I won't be talking about how weird Wright is, how he was debunked a multitude of times, and how you shouldn't believe the press.

What I will be talking about, is how Gavin Andresen, a main bitcoin developer, could have been duped into thinking Wright was Satoshi:

I believe Craig Steven Wright is the person who invented Bitcoin.

First. How weird is it that instead of just signing something with Satoshi's private key and releasing it on the internet, Mr. Wright decides to demo a signature verification behind closed doors to TV channels, magazines and a bitcoin dev on a trip to London? On reddit, Gavin wrote:

Craig signed a message that I chose ("Gavin's favorite number is eleven. CSW" if I recall correctly) using the private key from block number 1. That signature was copied on to a clean usb stick I brought with me to London, and then validated on a brand-new laptop with a freshly downloaded copy of electrum. I was not allowed to keep the message or laptop (fear it would leak before Official Announcement).

That last sentence... is the rule number one in a magic trick or scam.

People are now pointing out this intentional mistake in the script from Wright's blogpost that supposedly verifies a signature:

Looks like the signature is also taken from an old transaction, so Wright is re-using a signature to prove he verified "something". Except he swapped the something.

Another way he might have done it: on his website he also has a one-liner:

>> base64 –decode signature > sig.asn1 & openssl dgst -verify sn-pub.pem -signature sig.asn1 sn7-message.txt

But the & sign is a single one instead of &&, so the first command runs in the background while the second runs immediately, instead of the second command running only if the first one succeeds.

Someone else points at other tricks, like using doppelganger letters from the Unicode character set to fool someone during a demo.

myvar = "foo"
myvаr = "bar"  # This is a *different* variable.

print("first one:", myvar)
print("second one:", myvаr)
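A quick way to spot such homoglyphs is to ask Python for each character's Unicode name; a small sketch (not from the original post):

```python
import unicodedata

# The two identifiers render identically, but the second contains
# a Cyrillic letter instead of a Latin "a".
for ident in ["myvar", "myv\u0430r"]:
    print(ident, "->", [unicodedata.name(c) for c in ident])
# the second list contains 'CYRILLIC SMALL LETTER A'
```

Because the identifiers differ at the byte level, Python happily treats them as two distinct variables while your eyes see only one.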

The amount of sleight of hand that could have been done here is actually really interesting. We have cryptographic signatures so that we don't need to believe in human lies, but if a human stands between you and the verification of the signature, then good luck!

Also that python script makes me think of using this as a proof of concept backdoor.


# The Noise Protocol Framework posted April 2016

WhatsApp just announced their integration of the Signal protocol (formerly known as the Axolotl protocol). An interesting aspect of it is the use of a TLS-like protocol called Noise Pipes, a protocol based on the Noise protocol framework, a one-man effort led by Trevor Perrin with only a few implementations and a moderately long specification available here. I thought it would be interesting to understand how protocols are made from this framework, and to condense it into a 25-minute video. Here it is.


# The Juniper paper is out! posted April 2016

The Juniper paper is going to be a big deal

That's what I wrote at the end of Real World Crypto, 2 days after Hovav Shacham talked about the subject at the same conference.

If you have no idea what Juniper is or what happened here, go check the blogpost I wrote on the subject.

The paper is available on ePrint. Titled A Systematic Analysis of the Juniper Dual EC Incident, it contains some background on Dual EC and IKE, and a timeline of events that should read like a nice story. This is the paper you're going to read next week! So go print it now =)


# CVE-2016-3959 posted April 2016

CVE-2016-3959, also called the golang infinite loop, just triggered a new release of the Go language. The cherry on the cake is that it was discovered by me!

The issue was announced here on github, and in a bunch of different places. Here's a short Q/A to explain what's up:

Q: What's vulnerable?

A: Anything that uses the big.Exp function concurrently, where the modulus parameter can be controlled by an attacker. For crypto this means mostly DSA and RSA, which are used for example in TLS and SSH, as well as many other cryptographic applications, like recently in the Let's Encrypt client (hope they patched). ECDSA also uses the big.Exp function, but this elliptic curve version of DSA usually does not use custom curves, so an attacker usually has no known way to make someone compute calculations over his curve parameters if they are non-standard.

Q: What's big.Exp?

A: It's the exponentiation function of Go, declared in Go's big number library, math/big:

func (z *Int) Exp(x, y, m *Int) *Int

Q: What's a big number library?

A: Programming languages have int types that can usually hold integers up to 64 bits (for example, in C this type is called uint64_t). But crypto needs numbers that are much bigger than that, up to 4000 bits sometimes. So big number libraries are the libraries used to play with numbers of such sizes without getting a headache =)
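For comparison, Python's built-in int is already arbitrary precision, so it can play the role of a big number library on its own; a quick illustration (the numbers here are arbitrary, not from any real key):

```python
# A 2048-bit modular exponentiation with plain Python ints.
m = 2**2048 - 1           # arbitrary 2048-bit modulus (not claimed prime)
x = pow(3, 65537, m)      # 3^65537 mod m, far beyond any native int type
print(x.bit_length())     # at most 2048 bits
```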

Q: What was the problem with big.Exp?

A: It was a pretty simple one: the lack of a zero check for the modulus sent the calculation into an infinite loop. Interestingly, the developers of Go did not patch the vulnerability by adding the zero check inside big.Exp, but instead did it inside the implementations of DSA, RSA and ECDSA. This means that other cryptographic or non-cryptographic functions that call big.Exp concurrently, with the third parameter controlled by the user, might still be subject to DoS attacks.
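As a sketch of the missing guard (in Python rather than Go, and note that Python's own pow already rejects a zero modulus):

```python
def safe_mod_exp(x: int, y: int, m: int) -> int:
    # The check big.Exp was missing: reject a zero modulus up front
    # instead of letting the computation spin forever.
    if m == 0:
        raise ValueError("modulus must be non-zero")
    return pow(x, y, m)

print(safe_mod_exp(2, 10, 1000))  # 24, i.e. 1024 mod 1000
```

The Go fix placed an equivalent check in the DSA/RSA/ECDSA callers rather than in the exponentiation routine itself.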

If you have any other questions feel free to ask! The comment section is here for that ;)

# Some Q/A on phone security posted March 2016

There is an excellent blogpost here on how you would go about getting into an encrypted phone.

I asked some questions of one of the co-authors, Adwait Nadkarni; I thought it might be interesting to others:

But offline attacks can also be much harder, because they require either trying every single possible encryption key, or figuring out the user’s passcode and the device-specific key (the unique ID on Apple, and the hardware-bound key on newer versions of Android).

Q: I don't understand why you would have to guess that HBK, can't you just access it on the phone?

A: That's a good question. To prevent the security from depending only on the user-supplied passcode, the HBK is supposed to be inaccessible from most software, similar to the iOS device-specific UID. That is, the HBK is supposed to be accessible only via the trusted execution environment (TEE), which is isolated on a different microprocessor. Untrusted software (even after a compromise of the main kernel) is not supposed to be able to directly access the HBK.

That said, the reality is manufacturer-dependent. The TEE on different devices has been repeatedly compromised (Qualcomm TEE implementation compromised in 2014, HTC in 2015, etc.). Thus, there is threat of software compromise that may allow the attacker to retrieve or misuse the HBK.

Q: Is the secure enclave on the recent iPhones that TEE? Does that mean that most phones are vulnerable to offline brute-force attacks? (since most phones don't have a TEE (I'm not sure about that) and use a 4-6 digit PIN instead of an alphanumeric password)

A: The TEE is for Android; iOS uses the Secure Enclave similarly. Most phones that do not use a device-specific key (i.e., old Android devices that run <Android 4.4, very old iOS devices) are vulnerable to offline brute-force attacks. Most phones (even old ones like our Nexus 4) do have a TEE.

We built our own MDM application for our Android phone, and verified that the passcode can be reset without the user’s explicit consent; this also updated the phone’s encryption keys. We could then use the new passcode to unlock the phone from the lock screen and at boot time.

Q: If you do that, the content on the phone would have to be decrypted with the real passcode and re-encrypted with your new passcode. I would imagine that the real passcode is not stored anywhere (there should be a password hash stored instead, for verification only), so how does that technique work for the decryption phase? (don't know if the question is clear enough)

A: The phone is not directly encrypted with the passcode, but with a randomly generated DEK. The DEK is then encrypted with the KEK, which in turn is generated from the passcode and the HBK. Now, when the passcode is changed, the phone decrypts the DEK with the old KEK, then recreates the new KEK with the new passcode and HBK, and re-encrypts the DEK with the new KEK. Data is not touched when the passcode changes.
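That key hierarchy can be sketched in a few lines of Python. This is a toy model: XOR stands in for real authenticated key wrapping, and the PBKDF2 parameters are made up, not what any phone actually uses.

```python
import os
import hashlib

def derive_kek(passcode: bytes, hbk: bytes) -> bytes:
    # KEK derived from the user passcode and the hardware-bound key (HBK).
    return hashlib.pbkdf2_hmac("sha256", passcode, hbk, 100_000)

def xor_wrap(key: bytes, blob: bytes) -> bytes:
    # Toy stand-in for real key wrapping (never use bare XOR in practice).
    return bytes(a ^ b for a, b in zip(key, blob))

hbk = os.urandom(32)   # device-specific key, TEE-protected in reality
dek = os.urandom(32)   # random disk encryption key; this encrypts the data

# Passcode change: unwrap the DEK with the old KEK, rewrap with the new one.
wrapped   = xor_wrap(derive_kek(b"1234", hbk), dek)
recovered = xor_wrap(derive_kek(b"1234", hbk), wrapped)
rewrapped = xor_wrap(derive_kek(b"9999", hbk), recovered)

assert recovered == dek       # the data key itself never changes
assert rewrapped != wrapped   # only its wrapping does
```

This is why changing the passcode is cheap: only the wrapped DEK is rewritten, never the encrypted data.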

Q: So, specifically then, how does the phone use the old KEK if it doesn't have the old passcode?

A: The old KEK is not created on the fly, but stored in the hardware-backed keystore (accessible only via the TEE). The TEE can retrieve it to decrypt the encrypted DEK.

Generally, operating system software is signed with a digital code that proves it is genuine, and which the phone requires before actually installing it.

This part made me wonder: what if we used lasers? Fault attacks are a big thing in the smart card industry, so why is no one talking about them for cellphones? This prompted me to ask Frederico Menarini from Riscure:

Q: In iOS or Android, if you want to update the phone, the update needs to be signed with Apple's or Google's or Samsung's (etc.) update key. But what you could do, if you could mount a fault attack (lasers?), would be to target the point where the cellphone rejects the update because of an invalid signature.

A: In principle, fault attacks are possible on phones – nothing prevents it and the scenario you described is valid. Laser attacks might be challenging though because certain chips use package-on-package or chip stacking, which means that you might not be able to directly affect the CPU using light.

In general attacking mobile phone chips will be complicated because they run at extremely high frequencies compared to smartcards (smartcards rarely run faster than 50 MHz) and because the feature size is much smaller (state of the art in smartcards is 90nm, which makes targeting the right area of the chip with a laser easier).


# Demo of the Diffie-Hellman backdoor posted March 2016

Here's a little demo of my work in progress research =)

The top right screen is the client, the bottom right screen is the server. I modified two numbers in some Socat file (hopefully it will be one number soon) and the backdoor is there. It's a public value and both the server and the client can generate their own certificates and use them in the TLS connection. For simplicity I don't do that, but just know that it would change nothing.

To get a man-in-the-middle position I took the simplest approach I could think of: the screen on the left is a proxy, and the client connects to the server through it.

You will see that the proxy on the left will start parsing the server and the client packets as soon as it sees a TLS handshake. It then collects the server and the client Randoms, the server and the client DH public keys, and the DH parameters of the server to check if the backdoor is there. You will see a red message displaying that indeed, the backdoor is present.

For simplicity again (this is a proof of concept) I only use TLS 1.2 with AES128-CBC as the symmetric cipher and SHA-256 as the hash function used in the PRF/MAC/etc...

In a few seconds the premaster secret, then the master secret, then the MAC and encryption keys are computed, and the traffic is decrypted live.
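Those derivation steps can be sketched with the TLS 1.2 PRF from RFC 5246 (the inputs below are placeholders, not the demo's actual values):

```python
import hmac
import hashlib

def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
    # P_SHA256 from RFC 5246: HMAC-based expansion of a secret and seed.
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()          # A(i)
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def prf(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
    return p_sha256(secret, label + seed, length)

premaster = b"\x03\x03" + b"\x00" * 46          # placeholder premaster secret
client_random, server_random = b"C" * 32, b"S" * 32

# premaster secret -> master secret -> key block, as in the demo
master = prf(premaster, b"master secret", client_random + server_random, 48)
# For AES128-CBC with SHA-256: 2 x 32-byte MAC keys + 2 x 16-byte AES keys.
key_block = prf(master, b"key expansion", server_random + client_random, 96)
```

With the backdoored DH private value recovered, the proxy can compute the same premaster secret as the endpoints, and everything downstream follows deterministically from the two Randoms it already captured.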
