Hey! I'm David, a security engineer on the Blockchain team at Facebook, and previously a security consultant for NCC Group's Cryptography Services practice. I'm also the author of the Real World Cryptography book. This is my blog about cryptography, security, and other related topics that I find interesting.

# tl;dr of the sweet32 attack (On the Practical (In-)Security of 64-bit Block Ciphers) posted September 2016

I wrote about that attack three weeks ago on NCC Group's blog, but I had a bit more to say about it, so I made a tl;dr video =) enjoy!


# Key Compromise Impersonation attacks (KCI) posted September 2016

You might have heard of KCI attacks on TLS: an attacker gets to install a client certificate on your device and can then impersonate websites to you. I had thought the attack was a highly impractical one, but the video of the attack (done against facebook.com at the time) seemed to contradict my first instinct.

I skimmed the paper and it answered some of my questions and doubts. So here's a tl;dr of it.

The issue is pretty easy to comprehend once you understand how a Diffie-Hellman key exchange works.

Imagine that you navigate to myCompany.com and you do a key exchange with both of your public keys.

• your public key: $g^a \pmod{n}$

• your company's public key: $g^b \pmod{n}$

where $g$ and $n$ are public parameters that you agreed on. Let's ignore $n$ from now on: here, both ends can create the shared secret by doing either $(g^b)^a$ or $(g^a)^b$.

In other words, $\text{shared secret} = (\text{other dude's public key})^{\text{my private key}}$

If you know either party's private key, you can observe the other's public key (it's public!) and compute the shared secret. You then have enough to replace one of the ends in the TLS communication.
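A toy sketch of the situation with tiny numbers (real TLS uses 2048-bit groups; all values here are made up for illustration):

```python
# toy Diffie-Hellman with tiny numbers (real TLS uses 2048-bit groups)
g, n = 2, 1019    # public parameters (1019 is prime)

a = 123           # your private key
b = 456           # the company's private key
A = pow(g, a, n)  # your public key g^a mod n
B = pow(g, b, n)  # the company's public key g^b mod n

# both ends derive the same shared secret
assert pow(B, a, n) == pow(A, b, n)

# the KCI scenario: an attacker who stole your private key `a`
# computes the shared secret from the *public* key B alone
attacker_secret = pow(B, a, n)
assert attacker_secret == pow(A, b, n)
```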

That's the theory.

If you replace the client's certificate with your own keypair, or know the private key of the certificate, you can break whatever key exchange they try to do with that certificate. What I mean by "break": from a key exchange you can compute the session keys and replace any party during the following communications.

My doubts had originally come from the fact that most implementations rarely use plain Diffie-Hellman: they usually offer ephemeral DH or RSA-based key exchanges (which are not vulnerable to this attack). The paper brought me back to reality:

Support for fixed DH client authentication has been very recently added to the OpenSSL 1.0.2 branch.

But then, how would you make the client do such an unused key exchange? The paper cleared my doubts once again:

the attacker, under the hood, interferes with the connection initialization to facebook, and forces the client to use an insecure handshake with client authentication, requesting the previously installed certificate from the system.

Now a couple of questions come to my mind.

Why does facebook support these non-ephemeral DH ciphersuites?

→ from the paper, the server doesn't even need to use a static DH ciphersuite. If it has an ECDSA certificate and didn't specify that it's only to be used for signing, then you can use it for the attack (the keyUsage field in the certificate is apparently never used correctly; the few values listed in this RFC tell you how to correctly limit the authorized usages of your public key)

How can you trigger the client to use this ciphersuite?

→ just reply with a serverHello only displaying static ECDH in its ciphersuite list (contrary to ECDHE, notice the last E for ephemeral). Then show the real ECDSA certificate (the client will interpret it as an ECDH cert because of the fake ciphersuite), and then ask the client for the specific cert you know the private key of.

In addition to the ECDSA → ECDH trumpery, this is all possible because none of these messages are signed at that point in the handshake. They are later authenticated via the shared secret, but it is too late =) I don't know if TLS 1.3 is doing a better job in this regard.

It also seems to me that in the attack, instead of installing a client cert, you could just install a CA cert and MITM ALL TLS connections (except the pinned ones). But this attack is sneakier, and it's another way of achieving exactly that. So I appreciate it.


# Fault attacks on RSA's signatures posted September 2016

Facebook was organizing a CTF last week and needed some crypto challenges. I obliged, missed a connecting flight in Phoenix while building one, and eventually provided them with an idea I had wanted to try for quite some time. Unfortunately, as with the last challenge I wrote for a CTF, someone solved it with a tool instead of doing it by hand (like in the good ol' days). You can read the quick write up there, or you can read a more involved one here.

## The challenge

The challenge was just a file named capture.pcap. Opening it with Wireshark would reveal hundreds of TLS handshakes. One clever way to find a clue here would be to filter them with ssl.alert_message.

From that we could observe a fatal alert being sent from the client to the server, right after the server Hello Done.

Mmmm

Several hypotheses exist. One way of guessing what went wrong would be to replay these packets through openssl s_client with the -debug option and see why the client decides to terminate the connection at this point of the handshake. Or, if you had good intuition, you could have directly verified the signature :)

After realizing the signature was incorrect, and that it was done with RSA, one obvious attack here is the RSA-CRT fault attack! Faults happen in RSA, sometimes for malicious reasons (lasers!) and sometimes just because of random errors in different parts of the hardware. One random bit flip and you have a fault. If it happens at the wrong place, at the wrong time, you have a cryptographic failure!

## RSA-CRT

RSA is slow-ish, as in not as fast as symmetric crypto: I can still do 414 signatures per second and verify 15775 signatures per second (according to openssl speed rsa2048).

Let's remember what an RSA signature looks like. It's basically the inverse of an encryption with RSA: you "decrypt" your message with the private exponent and use the result as the signature, $s = m^d \pmod{N}$.

To verify a signature over a message, you do the same kind of computation on the signature using the public exponent, which gives you back the message: $s^e = m^{ed} = m \pmod{N}$.

We remember here that $N$ is the public modulus used in both the signing and verifying operations. Let $N=pq$ with $p, q$ two large primes.

This is the basis of RSA. Its security relies on the hardness to factor $N$.
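With toy numbers (a textbook sketch: tiny primes, no hashing or padding), the sign/verify round trip looks like this:

```python
p, q = 61, 53  # toy primes, never use sizes like these in real life
N = p * q
e = 17         # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

m = 42
s = pow(m, d, N)          # signing: "decrypt" the message
assert pow(s, e, N) == m  # verifying brings back the message
```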

I won't talk more about RSA here, so check Wikipedia if you need a recap =)

It's also obvious for a lot of you reading this that you do not sign the message directly: you first hash it (which is good, especially for large files) and pad it according to some specification. I will talk about that in a later section, as this distinction is not immediately important to us. We now have enough background to talk about the Chinese Remainder Theorem (CRT), a theorem we use to speed up the above computation.

So what if we could do the calculation mod $p$ and mod $q$ instead of mod this huge number $N$ (usually 2048 bits)? Let's stop with the what-ifs, because this is exactly what we will do:

$s_1 = m^{d_p} \pmod{p}$ and $s_2 = m^{d_q} \pmod{q}$

Here we compute two partial signatures, one mod $p$ and one mod $q$, with $d_p = d \pmod{p-1}$ and $d_q = d \pmod{q-1}$. After that, we can use the CRT to stitch these partial signatures together and obtain the complete one.

I won't go further; if you want to know how the CRT works you can check the explanation in my latest paper.
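A minimal sketch of the speed-up with toy numbers (the recombination below is Garner's formula, one common way to apply the CRT):

```python
# RSA-CRT signing with toy parameters (illustration only)
p, q = 61, 53
N = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)  # precomputed CRT coefficient

m = 42
s1 = pow(m, dp, p)  # partial signature mod p
s2 = pow(m, dq, q)  # partial signature mod q

# Garner's recombination stitches the two halves together
h = (q_inv * (s1 - s2)) % p
s = s2 + h * q

assert s == pow(m, d, N)  # matches the straight computation mod N
```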

## RSA-CRT fault attack

Now imagine that a fault happens in one of the computations mod $p$ or mod $q$: the first partial signature $s_1$ computes correctly, but the second one comes out wrong as some $\widetilde{s_2} \neq m^{d_q} \pmod{q}$.

Here, because one of the operations failed ($\widetilde{s_2}$), we obtain a faulty signature $\widetilde{s}$. What can we do with a faulty signature, you may ask? We first observe the following fact about it:

$\widetilde{s}^e = m \pmod{p}$ whereas $\widetilde{s}^e \neq m \pmod{q}$

See that? $p$ divides $\widetilde{s}^e - m$ (a value we can calculate, since we know the faulty signature, the public exponent $e$, and the message), but $q$ does not divide it. This means that $\widetilde{s}^e - m$ is of the form $p \cdot k$ for some integer $k$. What follows naturally is that $\gcd(\widetilde{s}^e - m, N)$ will give out $p$! And as I said earlier, if you know the factorization of the public modulus $N$, then it is game over.
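Simulating the fault with the same kind of toy numbers shows the gcd trick in action (a sketch; here the "fault" is just adding 1 to the mod-$q$ half):

```python
from math import gcd

p, q = 61, 53
N = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

m = 42
s1 = pow(m, dp, p)            # correct half
s2 = (pow(m, dq, q) + 1) % q  # faulty half: simulated fault

# recombine as usual: the result is a faulty signature
s_faulty = (s2 + ((q_inv * (s1 - s2)) % p) * q) % N

# s_faulty^e = m mod p but not mod q, so the gcd leaks p
recovered_p = gcd(pow(s_faulty, e, N) - m, N)
assert recovered_p == p
```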

## Applying the RSA-CRT attack

Now that we got that out of the way, how do we apply the attack on TLS?

TLS has different kinds of key exchanges: some basic ones and some ephemeral (forward-secure) ones. The basic RSA key exchange we've used a lot in the past is pretty straightforward: the client uses the RSA public key found in the server's certificate to encrypt the shared secret. Then both parties derive the session keys out of that shared secret.
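A drastically simplified sketch of that idea (not the actual TLS key derivation; the toy numbers and the session-key line are purely illustrative):

```python
import hashlib

# hypothetical toy numbers; a real premaster secret is 48 random bytes
p, q = 61, 53
N, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

premaster = 1337                  # client-chosen shared secret, < N here
encrypted = pow(premaster, e, N)  # sent to the server

# the server decrypts with its private key; both sides can now
# derive session keys from the shared secret (derivation simplified)
assert pow(encrypted, d, N) == premaster
session_key = hashlib.sha256(str(premaster).encode()).digest()
```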

Now, if the client and the server agree to do a forward-secure key exchange, they will use something like Diffie-Hellman or Elliptic Curve Diffie-Hellman, and the server will sign its ephemeral (EC)DH public key with its long-term key. In our case, that long-term key is an RSA key, and the fault happens during that particular signature.

Now what's the message being signed? We need to check TLS 1.2's RFC:

  struct {
      select (KeyExchangeAlgorithm) {
          ...
          case dhe_rsa:
              ServerDHParams params;
              digitally-signed struct {
                  opaque client_random[32];
                  opaque server_random[32];
                  ServerDHParams params;
              } signed_params;
          ...
      };
  } ServerKeyExchange;
• the client_random can be found in the client hello message
• the server_random in the server hello message
• the ServerDHParams are the different parameters of the server's ephemeral public key, from the same TLS 1.2 RFC:
  struct {
      opaque dh_p<1..2^16-1>;
      opaque dh_g<1..2^16-1>;
      opaque dh_Ys<1..2^16-1>;
  } ServerDHParams;     /* Ephemeral DH parameters */

dh_p
The prime modulus used for the Diffie-Hellman operation.

dh_g
The generator used for the Diffie-Hellman operation.

dh_Ys
The server's Diffie-Hellman public value (g^X mod p).

TLS is old: it signs with a scheme that is not provably secure, PKCS#1 v1.5. It should instead be using RSA-PSS, but that's a whole different story :)

PKCS#1 v1.5's padding is pretty straightforward: the padded message looks like 0001 ffff...ff 00 || hash prefix || hashed message, where:

• The ff part must be long enough to make the bit size of the padded message match the bit size of $N$
• The hash prefix part is a hexstring (a DER-encoded identifier) indicating which hash function was used to sign

And here's the sage code!

  import hashlib

  # inputs recovered from the pcap (hex strings): client_nonce,
  # server_nonce, server_params, signature, as well as the server's
  # modulus and public_exponent

  # hash the signed_params
  h = hashlib.sha384()
  h.update(client_nonce.decode('hex'))
  h.update(server_nonce.decode('hex'))
  h.update(server_params.decode('hex'))
  hashed_m = h.hexdigest()

  # re-create the PKCS#1 v1.5 padded message
  prefix_sha384 = "3041300d060960864801650304020205000430"
  pad_len = len(hex(modulus))//2 - 3 - len(hashed_m)//2 - len(prefix_sha384)//2
  padded_m = "0001" + "ff" * pad_len + "00" + prefix_sha384 + hashed_m

  # attack to recover p (gcd is a Sage builtin)
  p = gcd(signature^public_exponent - int(padded_m, 16), modulus)

  # recover the rest of the private key
  q = modulus // p
  phi = (p-1) * (q-1)
  privkey = inverse_mod(public_exponent, phi)

## What now?

Now what? You have a private key, but that's not the flag we're looking for... After a bit of inspection you realize that the last handshake in our capture.pcap file uses a different key exchange: an RSA key exchange!!!

What follows is pretty simple: Wireshark can decrypt conversations for you, you just need a private key file. We retrieved the private key in the previous section; to make a .pem file out of it (see this article to learn what a .pem is) you can use rsatool.

Tada! Hope you enjoyed the write up =)

# 64-bit ciphers attack in 75 hours => AES-GCM attack in 75 hours? posted August 2016

There is a new attack/paper from INRIA (Matthew Green has a good explanation of the attack) that continues the trend of long attacks introduced by rc4nomore. The paper is branded "Sweet32": it is a collision attack playing on the birthday paradox (hence the cake in the logo) to break 64-bit block ciphers like 3DES or Blowfish in TLS.

Rc4nomore was showing off with numbers like 52 hours to decrypt a cookie. This new attack needs more queries ($2^{32}$, hence the 32 in the name) and so it took longer in practice: 75 hours. And if the numbers are correct, this should answer the question I raised in one of my blogposts a few weeks ago:

the nonce is the part that should be different for every different message you encrypt. Some increment it like a counter, some others generate them at random. This is interesting to us because the birthday paradox tells us that we'll have more than 50% chance of seeing a nonce repeat after $2^{32}$ messages. Isn't that pretty low?
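The birthday-paradox number in that quote can be sanity-checked with the usual approximation $1 - e^{-q(q-1)/2N}$ (a quick sketch, assuming 64-bit values; all names here are mine):

```python
import math

def collision_probability(q, space_bits):
    # birthday approximation: P ≈ 1 - exp(-q(q-1) / 2N)
    n = 2.0 ** space_bits
    return 1 - math.exp(-q * (q - 1) / (2 * n))

# chance of a repeat among 2^32 random 64-bit values
p_collision = collision_probability(2**32, 64)
```

For $2^{32}$ draws from a 64-bit space this gives roughly 39%, i.e. of the order of one half.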

The number of queries here is the same for these 64-bit ciphers as for AES-GCM. As the AES-GCM paper pointed out:

we discovered over 70,000 HTTPS servers using random nonces

Does this mean that 70,000 HTTPS servers are vulnerable to a 75-hour BEAST-style attack on AES-GCM?

Fortunately not. As the sweet32 paper points out, not every server will allow a connection to be kept alive for that long. (It also seems like no browser places a limit on the amount of time a connection can be kept alive.) An interesting project would be to figure out how many of these 70,000 HTTPS servers allow long-lived connections.

It's good to note that these two attacks have different outcomes as well. The sweet32 attack targets 64-bit ciphers like 3DES and Blowfish, which almost nobody uses; the attack is completely impractical in that sense. Whereas AES-GCM is THE cipher commonly negotiated. On the other hand, while both of them are BEAST-style attacks, the sweet32 attack results in decrypting a cookie, whereas the AES-GCM attack only allows you to redirect the user to a valid (but tampered) https page of the misbehaving website. In this sense the sweet32 attack has directly heavier consequences.

PS: Both Sean Devlin and Hanno Böck told me that the attacks are quite different, in that the 64-bit one needs a collision on the IV or on a previous ciphertext, whereas AES-GCM needs a collision on the nonce. This means that in the 64-bit case you can request longer messages instead of querying for more messages, and you also get more information from a minimum-sized query than in an AES-GCM attack. This should make the AES-GCM attack even less practical, especially with re-keying happening around the birthday bound.


# Crypto and CHES Debriefing posted August 2016

While Blackhat and Defcon are complete mayhem (especially Defcon), Crypto and CHES look like cute conferences in comparison. I've been twice to Blackhat and Defcon, and I still can't seem to be productive there or really find ways to take advantage of them. Besides meeting nice people, it's more of a hassle to walk around and figure out what to listen to or do. On the other hand, although it's only my first time at Crypto and CHES, I already have an idea of how to make the most of them. The small size makes it easy to talk to great people (or at least be 2 meters away from them :D), and the small number of tracks (2 for Crypto and 1 for CHES) makes it easy not to miss anything and to avoid getting exhausted from walking and waiting between talks. The campus is really nice and cozy; it's a cute university with plenty of rooms with pool tables and foosball, couches and lounges.

The beach is right in front of the campus, and most of the lunches and dinners are held outside on the campus' lawns, or sometimes on the beach. The talks are sometimes quite difficult to get into; the format is 25 minutes, and the speakers are not always trying to pass along knowledge, but rather marketing their papers or checking off their PhD todo lists. On the bright side, the workshops are all awesome, the vendors are actually knowledgeable and will answer all your stupid questions, and posters are displayed in front of the conference, usually with someone knowledgeable nearby.

The rump session is a lot of fun as well, but prepare for a lot of private jokes unless you've been here for many years.

• Here are my notes on the whitebox and indistinguishability obfuscation workshop. The tl;dr: iO is completely impractical, and will probably continue to be for at least a good decade. Cryptographically secure whiteboxes currently do not exist, at least not in public publications, and even the companies involved seemed comparable to hardware vendors trying to find side-channel countermeasures: mostly playing with heuristics that raise the bar high enough (in dollars) for potential attackers to break their systems.

• Here are my notes on the smartcards and cache attacks workshop. The tl;dr is that when you build hardware that does crypto, you need to go through some evaluations. Certification centers have labs for that, with equipment to do EM analysis (more practical than power analysis) and fault attacks. FIBs (Focused Ion Beams) are too expensive and these labs will never have that kind of equipment, but more general facilities have them and you can ask to borrow the equipment for a few hours (although they never do that). Governments have their own labs as well. Getting a certification is expensive, so you will probably want to do a pre-evaluation internally or with a third party first. For smartcards the consensus is to get a Common Criteria evaluation; it gives you a score calculated according to the possible attacks, and how much time, money and expertise they need to happen. Cache attacks in the cloud, or in embedded devices where you corrupt a core/process, are practical. And as with fault attacks, there is a lot you can do there.

• Here are my notes on the PROOFS workshop. No real take away here other than we've been ignoring hardware when trying to implement secure software, and we should probably continue to do so because it's a big clusterfuck down there.

I'll try to keep an up-to-date list of blogposts on the talks that were given at both Crypto and CHES. If you have any blogpost on a talk, please share it in the comments and I will update this page! Also, if you don't have a blog but would be happy to write an invited blogpost on one of the talks here, contact me!

Besides that, we had a panel at CHES that I did not pay too close attention to, but Alex Gantman from Qualcomm said a few interesting things: 10 years ago, in 2006, was already a year before the first iPhone, and two years before the first Android phone. And only 20 years ago were side-channel attacks discovered and Smashing the Stack for Fun and Profit released. Because of that, it is highly questionable how predictable the next 10 years will be. (Although I'm sure car hacking will be one of the new nests of vulns.) He also said something along these lines:

I've never seen, in 15 years of experience, someone saying "you need to add this security thingy so that we can reach regulation". It is usually the other way around: "you don't need to add more security measures, we've already reached regulation". Regulations are seen as a ceiling.

And because I asked a bunch of questions to laser people (thanks to AlphaNov, eShard and Riscure!), here's my idea on how you would go to do a fault attack:

• You need to get a laser machine. Riscure and a bunch of others are buying lasers from AlphaNov, a company from Bordeaux!
• You need to power the chip you're dealing with while you do your analysis
• Look at the logic: it's usually the smallest component on the chip, while the biggest is the memory.
• You need a trigger for your laser, which is usually made with an FPGA because it needs to be fast to fire the laser beam. A pulse usually takes 15 nanoseconds, plus or minus 8 picoseconds.
• With an infrared camera you can also see through the silicon where your laser is hitting. This way, if you know where to hit, you can set the hitting point precisely.
• The laser goes through the objective of the microscope you're using to observe the chip/IC/SoC.
• Usually, since you're targeting the logic and it's small, you can't see the gates/circuit, so you blindly probe it with short bursts of photons, mapping it until you find the right XOR or whatever operation you are trying to target.
• It seems like you would also need test vectors for that: known plaintext/ciphertext pairs where you know what the fault is supposed to look like on the ciphertext (if you're targeting an encryption).
• Once you have the right position, you're good to try your attack.

Anyway! Enough for this blogpost. Crypto will be held in Santa Barbara next year (comme toujours), and CHES will be in Taiwan!


# Security Proofs for Embedded Systems (PROOFS) posted August 2016

Crypto and CHES are now over. As always, a tiny conference takes place the day after CHES: the Security Proofs for Embedded Systems or PROOFS.

## Yuval Yarom - Thwarting cache-based side-channel attacks

Microarchitectures implement a specification called the instruction set architecture (ISA), but what actually happens under the hood involves a lot more than what the ISA specifies!

There is contention in one place: the cache! Since it's shared (because it's smaller than memory), if you know where the victim's memory is mapped in the cache, you have an attack!

(slides from hardware secrets)

Yarom explained the Prime+Probe attack as well, where you load a cache-sized buffer from memory and watch what the victim replaces in it. This attack is possible on the L1 cache as well as on the LLC. For more information on these attacks, check my notes on the CHES tutorial. He also talked about Flush+Reload. The whole point was to explain that constant-time programming is one way to circumvent the problem.

You can look at these attacks in code with Yarom's tool Mastik. Prime+Probe is implemented on the L1 cache just by filling the whole cache with a calloc (check L1-capture.c). It fills the whole cache because the L1 is what we're sharing with the victim process on a single core; the LLC attack is quite different (check L3-capture.c).

Note that I don't agree here, because constant-time doesn't have to mean the same operations are executed and the same addresses accessed. For example, see the "padding time" techniques by Boneh, Braun and Jana.

But Yarom's 3-rule definition of constant-time programming is quite different from mine:

• no instructions with data-dependent execution time
• no secret-dependent flow control
• no secret-dependent memory access
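These rules aren't specific to hardware; a classic software illustration of the second rule is string comparison (my own sketch, not from the talk):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # returns as soon as a byte differs: the running time leaks
    # the length of the matching prefix (secret-dependent flow control)
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # accumulate differences with XOR/OR: the same instructions run
    # whatever the secret bytes are
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

# the Python stdlib equivalent is hmac.compare_digest
assert constant_time_equal(b"secret", b"secret")
assert not constant_time_equal(b"secret", b"secreT")
assert hmac.compare_digest(b"secret", b"secret")
```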

To test for constant-time behavior dynamically, we can use ctgrind, a modified valgrind by Adam Langley. We can also do some static source analysis with FlowTracker, but it doesn't always work since the compiler might completely change the expected results. The last way of doing it is binary analysis with CacheAudit, which is not perfect either.

Interestingly, there were also some questions from Michael Hamburg about fooling the branch predictor and the branch target predictor.

## Stjepan Picek - Template Attack vs. Bayes Classifier

Profiled attacks are among the most powerful side-channel attacks we can do. One of them is the template attack, the most powerful attack from an information-theoretic point of view. Machine Learning (ML) is used heavily for these attacks.

What's a template attack? I found a good definition in Practical Template Attacks:

The method used in the template attack is as follows: First a strategy to build a model of all possible operations is carried out. The term “all possible operations” usually refers to the cryptographic algorithm (or a part of it) being executed using all possible key values. The model of captured side-channel information for one operation is called template and consists of information about the typical signal to expect as well as a noise-characterization for that particular case. After that, the attack is started and the trace of a single operation (e.g. the key-scheduling of an algorithm) is captured. Using the templates created before which represent all key-values, the side-channel information of the attacked device is classified and assigned to one or more of the templates. The goal is to significantly reduce the number of possible keys or even come up with the used key

Someone else explained it to me with an example: you encrypt many known plaintexts with AES, with many different keys (you only change the first byte, so 256 possible ones), then you obtain traces and statistics on that. Then you do the real attack, obtain the traces, and compare to find the first byte of the key.
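Here's a toy simulation in the spirit of that example (everything here is made up: a Hamming-weight leakage model with Gaussian noise instead of real traces, and the noise-free model playing the role of the templates):

```python
import random

def hamming_weight(x):
    return bin(x).count("1")

def leakage(pt, key_byte, sigma=0.5):
    # toy power model: the device leaks HW(pt XOR key) plus Gaussian noise
    return hamming_weight(pt ^ key_byte) + random.gauss(0, sigma)

random.seed(42)
secret = 0x3C  # the key byte we pretend not to know

# capture traces for known random plaintexts
plaintexts = [random.randrange(256) for _ in range(200)]
captures = [(pt, leakage(pt, secret)) for pt in plaintexts]

# matching phase: score every key candidate against its "template"
# (here the template is just the noise-free model itself)
def score(k):
    return sum((trace - hamming_weight(pt ^ k)) ** 2
               for pt, trace in captures)

recovered = min(range(256), key=score)
assert recovered == secret
```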

The simplest ML attack you can do is Naive Bayes, where there is no dependency between features (so the opposite of a template attack). Between Naive Bayes and template attacks, right in the middle, you have Averaged One-Dependence Estimators (AODE).

Stjepan also talked about Probably Approximately Correct learning (PAC learning), but said that although it's great, you can't use it all the time.

## Sylvain Guilley - Optimal Side-Channel Attacks for Multivariate Leakages and Multiple Models

In real life, leakages are multi-variate (we have traces with many points) as well as multi-model (we can use the hamming weight and other ways of finding bias).

You acquire traces (usually called an acquisition campaign), and then make a guess on the bytes of the key: it will give you the output of the first Sbox. If we look at traces, we want to get all the information out of them, so we use matrices to do the template attack.

You find the $\alpha$ by doing many measurements (profiling); the matrix will be well-conditioned and not ill-conditioned (unless you have redundant data, which never happens in practice).

In the field of numerical analysis, the condition number of a function with respect to an argument measures how much the output value of the function can change for a small change in the input argument. This is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input.
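The quoted definition can be illustrated with a small pure-Python sketch (hypothetical numbers; the 2-norm condition number is computed from the eigenvalues of $A^T A$):

```python
import math

def cond_2x2(a, b, c, d):
    # condition number (2-norm) of [[a, b], [c, d]],
    # computed from the eigenvalues of A^T A
    p = a*a + c*c
    q = a*b + c*d
    r = b*b + d*d
    # eigenvalues of the symmetric matrix [[p, q], [q, r]]
    half_trace = (p + r) / 2
    delta = math.sqrt(((p - r) / 2) ** 2 + q * q)
    lam_max, lam_min = half_trace + delta, half_trace - delta
    return math.sqrt(lam_max / lam_min)

well = cond_2x2(2.0, 0.0, 0.0, 1.0)        # ~2: well-conditioned
ill = cond_2x2(1.0, 1.0, 1.0, 1.0 + 1e-6)  # huge: nearly redundant rows
```

A small condition number means small input errors stay small; the nearly redundant rows of the second matrix blow up its condition number, which is why redundant profiling data would be a problem.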

## Jakub Breier - Mistakes Are Proof That You Are Trying: On Verifying Software Encoding Schemes' Resistance to Fault Injection Attacks

The first software information hiding scheme was published in 2001: dual-rail precharge logic (DPL), which reduces the dependence of the power consumption on the data. It can also be done in software, and that's what they looked at.

Static-DPL XOR uses look-up tables to implement all the logical gates. Bit slicing is used as well; 1 is encoded as 01, and 0 is encoded as 10.

Bit slicing is a technique for constructing a processor from modules of smaller bit width. Each of these components processes one bit field or "slice" of an operand. The grouped processing components would then have the capability to process the chosen full word-length of a particular software design.

For example, bitslicing AES means converting all operations to bit-level operations like AND, OR, XOR, NOT that take two bits at most as input, and parallelizing them all.

The Static-DPL XOR's advantage is that any intermediate value at any point in time has a Hamming weight of 4. It is apparently explained further in the Prince cipher paper.

Device-specific encoding XOR is minimizing the variance of the encoded intermediate values.

They wrote a fault simulator in Java, doing bit flips, random byte faults, instruction skips and stuck-at faults. It seems that according to their tests, only the static-encoding XOR would propagate faults.

## Margaux Dugardin - Using Modular Extension to Provably Protect Edwards Curves Against Fault Attacks

Fault attacks can be

• safe-error attacks if it's done on a dummy operation
• cryptosystems parameters alteration
• aiming for Differential Fault Analysis (DFA) (Bellcore attack, sign-change attacks)

If you look at double and add for elliptic curves

A countermeasure is to verify the input/output to see if they're on the right curve. But it's ineffective against sign-change faults (Blömer et al.).

A countermeasure for that is to use Shamir's countermeasure: you do another computation in a different ring (an integer ring $\mathbb{Z}_{pr}$, actually).

BOS countermeasure: you can also do the other computation on another, smaller curve (Blömer et al.).

But this countermeasure does not work for Weierstrass curves. To correct BOS' countermeasure, they defined the curve to be an Edwards curve or a twisted Edwards curve.

## Ryan Kastner - Moving Hardware from “Security through Obscurity” to “Secure by Design”

For Ryan, the bigger iceberg (see Yarom's talk) is the hardware we don't think about when we want to design secure software.

System-on-Chip designs are extremely complex: we take years to develop them and we release them "forever". After that, no patch. We need to get it right the first time.

Everything is becoming more and more complex. We used to be able to verify stuff "easily" with Verilog

Mike Keating estimates that there is one bug per 10 lines of code in SoCs (from The Art of Good Design). RedHat Linux went through a Best Effort Safety evaluation (EAL 4+), which costs $30-$40 per line of code. Integrity RTOS' Design for Formal Evaluation (EAL 6+) costs around $10,000 per line of code. For big systems we can't really use these evaluations...

A lot of constructors do security through obscurity (look at ARM, Intel, ...): it's too hard to get back the RTL, by appeal to authority (e.g. "we're Intel"), by hand waving (e.g. "we do design reviews")...

Ryan's team looks directly at gates: if one of the input bits is untrusted, the output is untrusted. Unless! If we have a trusted 0 input AND'ed with an untrusted input, the untrusted input won't influence the output, so the output will be trusted (i.e. 0 & 1 = 0 and 0 & 0 = 0). It looks like they're using GLIFT to do that, and they're also looking to commercialize these ideas with Tortuga Logic. Because they look at things from such a low perspective, they can see timing leaks, key leaks, and so on. It took them about 3 years to transform the theoretical research into a commercialized product.

## Matthias Hiller - Algebraic Security Analysis of Key Generation with Physical Unclonable Functions

Physical Unclonable Functions (PUFs) allow you to derive a unique fingerprint; if you want to get a key out of that, you need to derive it into the right format. Static RAM (SRAM) is one way to build PUFs: if uninitialized, the bits are set "randomly", which you can use to generate your first randomness.

Apparently some PUFs have 15% of errors, others might have 1%, so a lot of work needs to be done in error correction. But error correcting codes (ECC) can leak data about the key! Their work is to establish an upper bound on what is leaked by the ECC.

A PUF works by giving you a response. More from Wikipedia:

Rather than embodying a single cryptographic key, PUFs implement challenge–response authentication to evaluate this microstructure.
When a physical stimulus is applied to the structure, it reacts in an unpredictable (but repeatable) way due to the complex interaction of the stimulus with the physical microstructure of the device. This exact microstructure depends on physical factors introduced during manufacture which are unpredictable (like a fair coin). The applied stimulus is called the challenge, and the reaction of the PUF is called the response. A specific challenge and its corresponding response together form a challenge–response pair or CRP. The device's identity is established by the properties of the microstructure itself. As this structure is not directly revealed by the challenge-response mechanism, such a device is resistant to spoofing attacks.

They algebraically transformed the properties of a PUF, using a matrix, to facilitate their proofs. Sometimes a PUF response is directly used as a key; that's how they get the algebraic representation of key generation for PUFs.

# CHES 2016 tutorial part 2: Micro-Architectural Side-Channel Attacks posted August 2016

Yuval Yarom starts the afternoon of the workshop with a few slides on timing attacks, showing us that they are becoming more and more common in hardware. Even though OpenSSL classifies timing attacks as low severity because they are hard to perform, Yarom thinks they are easy to perform.

He has a tool called Mastik, for Micro-Architectural Side-channel Toolkit. It currently implements Prime+Probe on L1-D, L1-I and L3, as well as Flush+Reload. There is no documentation, and he's hoping we'll give him feedback in the future. The slides and the Mastik tool are here.

Yarom starts by classifying these kinds of attacks: core-shared, package-shared, system-shared... With time, the number of processors is increasing, so our odds of running on the same core as the victim are decreasing. He then looks at other ways to classify attacks: persistent-state attacks and transient-state attacks.
The second one requires the attacker and the victim to run at the same time, it seems. Other classifications he talks about are:

• time-driven, trace-driven, access-driven. This classification makes more sense if you look at how you perform the attack, which is not great according to Yarom.
• internal vs external contention: if the contention is internal then you can do a remote attack (djb's AES cache-timing attack on OpenSSL)
• degree of concurrency (multicore, hyperthreading, time slicing)

Yarom then switches topic to memory. Memory is slower than the processor, and that's why caches were built: a more expensive technology, much closer to the CPU, so it is faster than a memory access. The CPU uses the cache to store recently read lines of memory. If you have many cores, the other cores can use what has been cached by one of them, and so you save time. Inconsistencies between memory and cache can happen, so sometimes you want to flush the data in the cache to make sure that the next load is served directly from memory (where it is ensured to be correct). That's the basis of the Flush+Reload attack!

A bit of recap on memory: processes execute within a virtual address space (addresses are "fake"). These fake addresses are then mapped to physical memory addresses. This way processes don't collide in memory, and don't need to use crazy addresses (they can start at zero). We use sharing of cache to save physical memory. Think shared libraries: if two programs use the same shared library, we can map it to the same physical location. The operating system can be more aggressive and do page deduplication: it detects duplicated memory pages, removes one of them, and maps the two programs to the same region in memory. If one of the processes then wants to modify some of that shared memory, the system first copies it somewhere else so that it can do so.
It was very popular with hypervisors back then, but since cache attacks became a thing we don't do that anymore.

The Flush+Reload attack is straightforward: you flush a cache line, wait a bit, then read that memory (which caches it again) and time the read. Then you do that again and again and again. If a read takes a long time, that's normal: it was served from memory. If it doesn't, it means someone else accessed that line in the meantime and it was already cached. The assembly of the measurement looks like this:

```
mfence
rdtscp
mov %eax, %esi
mov (%ebx), %eax
rdtscp
sub %esi, %eax
clflush 0(%ebx)
```

clflush is the instruction that flushes the cache line, and rdtscp reads the time (rather, the number of cycles since boot). So you can compute the delta of the two rdtscp readings to get the runtime of the mov in between.

He demoed the whole thing via his tool FR-1-file-access.c. The program is short and concise: it monitored a file continuously by reading and flushing, until it detected the use of the unix tool cat on the monitored file, triggered by Yarom in another terminal. The code decides that a slow read happened if it takes more than 100 cycles; it's not always 100, so you have the FR-threshold.c program that you can use to see where the boundary between a short read and a long read lies. If you want to play with it, do a make in src/, then a make in demo/.

After that he showed how to monitor calls to a function, in particular the standard C function puts. He used nm /usr/lib64/libc-2.17.so | grep puts to get the offset of the function puts in memory, then FR-function-call to monitor that position in memory. One problem arises if the victim accesses our monitored position right after we've reloaded the data from memory into the cache and right before we had time to flush it: the victim then benefits from our load, gets a short read, and we miss his access. That's a probe miss.

He then demoed an attack on GnuPG via the tool FR-gnupg-1.4.13.c, targeting in particular the square-and-multiply algorithm inside GnuPG.
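As a reminder, here is a minimal square-and-multiply sketch of my own (not Yarom's code): the extra multiply only happens when the current exponent bit is 1, which is exactly the key-dependent behavior the cache probes detect.

```python
def square_and_multiply(base, exponent, modulus):
    """Left-to-right square-and-multiply: the 'multiply' step only
    runs when the current bit of the exponent is 1, so observing the
    sequence of operations reveals the exponent bit by bit."""
    result = 1
    for bit in bin(exponent)[2:]:               # most significant bit first
        result = (result * result) % modulus    # square + reduce, always
        if bit == "1":
            result = (result * base) % modulus  # multiply + reduce, key-dependent
    return result

assert square_and_multiply(7, 13, 33) == pow(7, 13, 33)
```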
The goal here, as usual, is to distinguish square-reduce sequences from square-reduce-multiply-reduce sequences: this allows you to deduce the private exponent. He looked at the code of GnuPG and noticed which source lines performed these operations (lines 85, 281, etc.), then used gdb to find the addresses in memory for these lines (you can do the same using debuginfo.sh).

Yarom ran GnuPG to sign a file while he ran FR-gnupg-1.4.13.c, then gnuplotted the results of his monitoring to look for patterns: around the 50-cycle line are the quick cache accesses, around the 200-cycle line are the slow reads from memory. If we look only at the quick cache accesses, which are quick probably because the victim had just accessed the line and loaded it into the cache before us, we can see that red is the square operation and blue is the multiply operation, and we can read out the key directly, like a good ol' Simple Power Analysis. There will be errors/noise of course, so run it several times! But even if you don't recover all the bits, you can probably already break RSA.

There are some problems, though. There are also ways to improve the Flush+Reload attack: it seems like the first trick slows down the victim by running a lot of processes in parallel; the second improvement flushes commonly used cache lines, also to slow down the victim.

Variants of this Flush+Reload attack exist:

• Evict+Reload uses cache contention instead of clflush: you evict the victim's line by loading something else instead of flushing it (naturally, flushes rarely happen; eviction is what really happens).
• Flush+Flush (same people as Flush+Reload). They realized that flushing data that is already in the cache is slower, while flushing data that is not cached is faster. But the error rate is increased, so...
• Invalidate+Transfer, don't ask me about this one.

Now a new attack: Prime+Probe!
The attacker starts by choosing a block of memory big enough to cover the whole cache, then accesses that memory to fill the entire cache with the attacker's data. You might want to fill the parts of the cache that the victim's process will use; to do that, you can compute which physical addresses map to which cache sets. After you've filled the cache, you let the victim execute and access some places in memory, loading them into the cache and thereby evicting some of the data the attacker already loaded. Now the attacker reads the same block of memory again and can see which parts have been evicted, because they take longer to load. By looking for patterns in there, he can mount an attack.

One problem: our own code uses the cache as well! (here's the observer effect for you). Another problem: you can't just write something useless to fill the cache; the compiler removes what it considers dead/useless code (optimization). Here you need to know your compiler or... use assembly. Thrashing is another problem that makes us miss the victim's accesses; the solution is to zig-zag on the data. If someone knows more about what that means... Another problem is the hardware prefetcher: the CPU brings data into the cache before it is accessed (to optimize). The solution to that is pointer chasing: each access depends on the previous one, which kills any optimization of that sort.

Yarom demoed this. This time the tool was bigger: an L1 cache attack with L1-capture.c and L1-rattle.c. In L1, contrarily to Flush+Reload, we can monitor everything, so the default is to monitor everything. To do this attack you need to share a core with the victim process, which doesn't happen a lot. I'm not sure if you can do this attack on a shared cache; I don't see why not though, if someone can clarify...
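The pointer-chasing trick can be sketched in a few lines (my own Python illustration; a real attack would do this in C over an eviction set's addresses): each step's "address" comes from the value just loaded, so a prefetcher cannot guess ahead.

```python
import random

def build_chase(n, seed=0):
    """Build a random cyclic permutation: nxt[i] is the slot to visit
    after slot i. A prefetcher can't predict the access order because
    each access depends on the value just loaded."""
    order = list(range(n))
    random.Random(seed).shuffle(order)
    nxt = [0] * n
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b
    return nxt

def chase(nxt, start=0):
    """Traverse the chain once; in the real attack each step would be
    a timed memory load touching one cache set."""
    visited = []
    i = start
    for _ in range(len(nxt)):
        visited.append(i)
        i = nxt[i]
    return visited

walk = chase(build_chase(8))
assert sorted(walk) == list(range(8))  # every slot visited exactly once
```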
Finally, Yarom presented some countermeasures to these kinds of cache-timing attacks:

• don't share your hardware
• use hardware cache partitioning (Intel Cache Allocation Technology); this way you can decide how processes get separated portions of the cache
• use cache randomisation instead of directly mirroring the memory

At the system level, we can reduce how much memory we share. Of course, if you don't share cores you avoid a lot of attacks, but you will need a lot of cores... Cache coloring is the idea of protecting certain places of the cache. There are also CATalyst and CacheBar... On the software side, the big countermeasure is constant-time programming. Some blinding helps as well.

1 comment

# CHES 2016 tutorial part 1: Common Criteria Certification of a Smartcard: A Technical Overview posted August 2016

CHES started with a tutorial on smartcards, their certifications and their vulnerabilities. The morning was mostly a talk by Victor Lomné (ANSSI) about the Common Criteria (CC) for smartcards, a framework through which smartcard vendors can make claims of security and labs can get them certified. These certifications have different levels of assurance, from trivially testing the functionalities to formally verifying them with tools like Coq.

What Gemalto, Oberthur and others do is buy a new secure IC from a manufacturer, develop a semi-open platform with an OS (often a Javacard OS) on top, and get that thing certified. After that their clients (banks?) buy it, develop applications on it via SDKs, and re-apply for a certification.

What I got from both the whitebox workshop and this one is that we actually don't know how to fully protect against side-channel attacks, and so we need to protect in hardware via anti-tampering and anti-observation measures.
To check the security of these physical thingies you need a lab with really powerful microscopes (atomic-force microscopy (AFM) or scanning electron microscopes (SEM)), because chips are crazy small nowadays! Plus chemical tools to unsolder components, etc. All these tools cost a huge amount, and maintenance of the equipment is another huge cost. You don't just open a lab like that :)

There are different goals for your attack; first you need to bypass detectors, like canaries in "protected" binaries. The labs run all these kinds of attacks on every chip. For laser attacks, they look at the power consumption, and when they detect a certain pattern it triggers the laser. These kinds of tools are not really known by the academic community, but in the certification world this happens all the time. In whitebox-type evaluations (needed because they save a lot of time), the evaluator knows the key already, so that he can correctly debug his fault attacks and verify the result of an attack.

Side-channel attacks use power and electromagnetic stations: basic techniques where they record traces, align them, apply known attacks... Like other hardware thingies, chips have "test features" (think JTAG). You can try to re-enter these test modes with faults and focused ion beams (FIB), and then you can dump the non-volatile memory (NVM). Attacks on RNGs exist as well; basically, once you can do fault attacks you can have fun =)

Attacks on the protocols exist too, and the language used (Javacard) also introduces new vulnerabilities. You can try to inject applets into the system to recover secrets via a smartcard reader, or try to escape from Javacard via type confusion and other common Javacard vulnerabilities. It gets crazier than that when you mix in fault attacks to modify the behavior of existing applets. Another thing is that labs need a large number of devices, because fault attacks (especially lasers and FIBs) will often break the device.
Your certification is basically a score, calculated with a table. If the attack took more time, you get more points. If it took many experts instead of one, more points. If you needed restricted documents to help you, more points. If you needed many devices/samples to succeed at an attack, more points. Same for the cost of the equipment.

Victor gives an example of a fault attack on RSA: you do a fault attack on one part of the CRT computation, but a verification is done at the end, so you need a fault attack on the verification as well. Now you need to find the correct timing for these two laser shots; this spatial and temporal identification takes time, and they need open samples for it. Once they have the attack in place, they try to perform it on a closed sample.

He also told us to watch this video:

You can read part 2 here

comment on this story

# WhibOx part 4: Whitebox crypto, vendors' perspective posted August 2016

Mike Wiener, an employee at Irdeto (the company that acquired Cloakware, the pioneering company of whitebox crypto), started this part. The security of whitebox implementations comes from the fact that their details are kept secret: most working whiteboxes are proprietary.

Imagine that you want to obfuscate AES. The idea is that AES is essentially a huge permutation: you could replace all the code with a giant lookup table that maps every input to its output. You could give that to someone, and he would not be able to recover the key... BUT it's completely impractical (for a 128-bit block you'd need a table with $2^{128}$ entries). Wiener sees whitebox crypto as not entirely "crypto": more at the intersection of software security and cryptography.

Wiener said that Chow et al. brought us the 1st generation of WBC. Then Billet et al. found the first attack. They then made a 2nd generation of whitebox crypto, but they can't talk about it. Barak et al. proved that there exist programs that cannot be obfuscated.
Some people concluded that WBC can't work, which is not true: we don't need to protect ALL programs! Their 2nd generation of WBC resisted the attacks of Billet et al. but was still vulnerable to DFA and DCA (Differential Computation Analysis). So they made a 3rd generation, resistant to DFA and DCA. Details of this 3rd generation were never published (proprietary); they even debated about showing this slide. I won't comment on this, I'm just glad that someone from the industry came and tried to share a bit about what they do.

Wiener said they had some success countering DCA. "SOME SUCCESS"

And NOW THEY HAVE A 4TH GENERATION!!! With improved resistance. They have new attacks (that they don't want to talk about) and they're looking for ways to protect against those. They think iO might be an answer in the future, but they need to ship whitebox solutions RIGHT AWAY for their clients: they don't have the luxury of waiting for better solutions.

They have designs for each of the public-key crypto algorithms! But they don't want to talk about those either. Remember: no asymmetric whitebox design is publicly known. Also, not a lot of people talk about variable-key designs; we always talk about whitebox in the context of a fixed key. They have variable-key designs. Oh oh oh! They're now trying to get rid of tables. See picture above.

They also think secure elements in hardware are not the full answer: in a software-dominated world, the need for software protection is unavoidable.

## Brecht Wyseur - Let's get real! We need WBC and iO

DRM vs CAS:

• DRM: restricting access
• CAS: granting access
• DRM: Microsoft PlayReady, Apple FairPlay, Marlin (Intertrust)
• CAS: Nagra (Kudelski), Cisco, Irdeto, Verimatrix, China Digital TV, Widevine (a Google company)

How to do fast WBC? Why not ditch AES and standardized algorithms and use new algorithms that also take advantage of the platform (SIMD, CPU, ...)? But often, you need to comply with the standards.
People also tend to forget modes of operation; we need to whitebox them as well. A lot of companies are using whitebox crypto! They want evaluations or certifications, and that might be a reason why we have a whitebox workshop :) There are many other vendors in the area (Arxan, whiteCryption, Metaforic (Inside Secure), Irdeto, ...), but we don't really know how secure their products are. It's mostly security through obscurity. 10% of banks do this in-house, 90% use third-party tools.

No certifications exist, no certifications exist, no certifications exist

## Panel

• Mike Wiener: the doors of our homes are insecure and yet the world works. I think this is where we are with whitebox crypto
• someone else: you need to look at the amount of money bad guys can put into this to know how secure your whitebox crypto should be
• Marc Witteman: when hardware security in mobile phones becomes more common, WBC will become less relevant
• Pascal Paillier: outside of hardware, there is no pure software solution
• someone else: we had the same problems years ago with side-channel attacks on hardware, and we ended up implementing standard algorithms like AES with side-channel countermeasures (that work more or less). Non-standard algorithms are a bad idea
comment on this story

# WhibOx part 3: Attacks on Whitebox crypto posted August 2016

You can read part 1 here

Joppe Bos from NXP gave the same talk I had seen a couple of months ago at SSTIC.

To recap the white box model:

• you can inject fault
• you can access the source code
• you can alter the memory, observe it
• ...

If a secure whitebox solution exists, then by definition it is secure against all current and future side-channel and fault attacks.

In practice we only know how to do it for symmetric crypto (although that's not what Mike Wiener said later, but his designs are proprietary).

Chow et al. construction:

• convert every operation (MixColumns, XOR, etc.) into look-up tables (LUTs)
• apply linear encodings to the inputs and outputs of the look-up tables (input/output encodings)
• add non-linear encodings to the inputs and outputs as well

And this is the reason why it only works for symmetric crypto. For richer math problems like RSA we can't do all these things.

In practice you do not ship whitebox crypto like that though:

• you add more obfuscation on top, just to make the attacker's work harder
• you glue the environment to the whitebox, so that you can't just lift the whitebox and expect it to work elsewhere
• you add support for traitor tracing: if the program is copied to other people, you can trace where it came from
• you create a mechanism for frequent updates, so that a copy of the whitebox becomes useless after some time, and if someone does break it, it might already be too late
• ...

In practice to attack:

• you reverse-engineer the code (time-consuming)
• identify which WB scheme is used
• target the correct look up tables
• apply an algebraic attack

Bos' approach:

• is it AES or DES?
• launch the attack (that's the only thing you need, see how they pwned the CHES challenges)

The idea is to create software traces, like power traces, using dynamic binary instrumentation tools (Pin, Valgrind); they created plugins for both of them. They then record every memory access made by the whitebox and apply a DPA-style attack. If you don't know what that is, check my video on the subject here.
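To give an idea of the DPA-style statistics involved, here is a toy sketch of my own (not their tool): I use a 4-bit S-box (PRESENT's, standing in for AES's) and simulate noiseless "software traces" as the hamming weight of the S-box output, then correlate every key guess against them. The correct nibble stands out with correlation 1.

```python
# Toy 4-bit S-box (PRESENT's), standing in for the AES S-box:
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

def hw(x):
    """Hamming weight: number of set bits."""
    return bin(x).count("1")

SECRET_KEY = 0x9
plaintexts = list(range(16))
# Noiseless software "traces": one leakage value per known plaintext.
traces = [hw(SBOX[p ^ SECRET_KEY]) for p in plaintexts]

def score(guess):
    """Pearson correlation between predicted and observed leakage."""
    pred = [hw(SBOX[p ^ guess]) for p in plaintexts]
    n = len(traces)
    mp, mt = sum(pred) / n, sum(traces) / n
    cov = sum((a - mp) * (b - mt) for a, b in zip(pred, traces))
    vp = sum((a - mp) ** 2 for a in pred) ** 0.5
    vt = sum((b - mt) ** 2 for b in traces) ** 0.5
    return cov / (vp * vt)

best = max(range(16), key=score)
assert best == SECRET_KEY  # the correct key guess correlates perfectly
```

A real DCA targets one byte of the first AES round key at a time, using memory-access traces instead of simulated hamming weights; the statistics are the same.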

The idea here is that input/output encodings cannot remove all correlation.

• they can visually tell what's the cipher by looking at the memory trace

picture of AES, the 10th round is different so you don't see it

And if memory access is made linear, you can still look at the stack access and it will reveal the algorithm as well

They can do better than a CPA (Correlation Power Analysis) attack, with what they call DCA (Differential Computation Analysis). The hamming-weight model makes a lot of sense in the hardware setting, but here, without the noise and the errors, it matters much less. They also need far fewer traces. All in all, these kinds of DPA attacks are much more powerful in a software setting than in a hardware setting.

By the definition of the WB attack model, implementations shouldn't leak any side-channel data. tl;dr: they all do. Well, except when there is this external encoding thing (a transformation added at the beginning and the end of AES).

intuition why this works: Encodings do not sufficiently hide correlations when the correct key is used.

Countermeasures?

• most of the countermeasures against DPA/CPA rely on random data (but in the WB model we can't rely on random data...)
• if you take large encodings, this attack shouldn't work anymore (why? dunno)

The code is on github!

## Marc Witteman - Practical white-box topics: design and attacks II

The hype of WB crypto comes from mobile payment and streaming video. Today if you look in your pocket you most probably have a whitebox implementation.

In practice for an attack you need to:

• root the phone
• decompile/disassemble code
• use a debugger to step through the code
• instrument the binary to change the behavior of the code

They applied the Differential Fault Attack (DFA) to a bunch of whitebox implementations and broke almost all of them, except the ones using external encodings (apparently in banking there are no external encodings because of "constraints").

But overall, if you're designing whitebox crypto, you don't need to make the implementation perfect, you just need to make it good enough

comment on this story

# Whibox part 2: Whitebox Crypto posted August 2016

You can read part 1 here

Matthieu Rivain from Crypto Experts did the transition to Whitebox Crypto.

After a bunch of funny slides about obfuscation in general, he reminded us of the real definitions of these new concepts.

iO is equivalent to BPO (best possible obfuscation): it's the best possible way to obfuscate a program. It was Chow et al. who introduced the first whitebox crypto construction, in 2002 at the DRM workshop. The main goal of whitebox crypto was to make key extraction difficult, not to stop the attacker from using the program! It forces the attacker to use the software, and that is where the value lies for DRM system providers.

What is WBC AES? A key extraction must be "difficult".

What are the practical uses of such a scheme?

Making AES one-way and whiteboxing it turns it into a public-key cryptosystem (this is the general pattern: whiteboxing a symmetric scheme turns it into an asymmetric one).

There are some security notions for whitebox symmetric ciphers, the usual CPA, CCA, ... but also RCA!

RCA stands for recompilation attack: the attacker can get a new compilation of the program.

Here we have different goals for attacks:

• extract the key
• compress a whitebox implementation to make it smaller
• invert the whitebox implementation (if it only encrypts, make it decrypt)
• be untraceable (if you just copy the program and send it to someone else, it is true that they will be able to use it, but the copy might be tied to you)

whitebox crypto is about designing a compiler for an existing encryption scheme

A "one-way" compiler to be exact.

## Andrey Bogdanov - Towards secure whitebox cryptography

Unlike the black-box setting (think apps in the cloud dealing with millions of requests), the whitebox setting (an app on your phone) probably doesn't deal with that many, so we don't mind it being a bit slower.

Bogdanov introduced the new Host Card Emulation (HCE) trend. There is a bunch of insecure hardware out there (think old Android phones), and banks want it to be able to do secure payments and all that fuss with phones and Near Field Communication (NFC). Apple did it with the Secure Enclave, but we can't wait for all phones to have a secure element, right? So banks are now doing it in software instead, with an emulated smart card in the phone. That's what HCE is.

• In 2014, Visa and Mastercard started supporting HCE.
• In 2017, 86% of North American points of sale, and 78% of European points of sale, will support NFC.
• 2/3 of shipped phones will support NFC in 2018

Bogdanov remarks that so far, since the first construction of Chow et al. in 2002, all Whitebox crypto schemes have been broken:

Even the recent ASASA construction [BBK13] from 2014 seems insecure.

Bogdanov also adds to the security notions of whitebox crypto: space hardness. This is the same notion M. Rivain called "compression of the whitebox": you should not be able to decompose the internal components, and without all the code/tables, you shouldn't be able to compute encryptions or decryptions with good probability.

You can read part 3 here

comment on this story

# Whibox part 1: Indistinguishability Obfuscation posted August 2016

I landed in Santa Barbara in a tiny airplane, at a tiny airport, so cute that I could have mistakenly taken it for a small university building collecting toy aircraft. Outside the weather was warm and I could already smell the ocean.

A shuttle dropped me at the registration building and I had to walk through a large patio filled with people sitting in silence, looking at their notes or their laptops, in what seemed like comfortable couches arranged in a Moroccan way. I thought to myself "they all seem so comfortable, it looks like they've been here many times". As it turns out, Crypto is taking place in Santa Barbara every year and people have got used to it.

I registered at the desk, then went to my assigned room in these summer-vacant student residences, passing by the numerous faces I will probably see again this coming week. I went to sleep early and woke up the next day for the first workshop: WhibOx.

First, let me say that in theory, workshops are cool. Cooler than conferences, because they are there to teach you something. Crypto conferences are filled with PhD students trying to raise their publication count (the rule of thumb is 3 good papers accepted at 3 big conferences before defending) and researchers trying to market their papers. (Well, some of them are actually good speakers and care about teaching you something in their 45-minute time frame.)

Back to our workshop: it is the first ever workshop on whitebox crypto and indistinguishability obfuscation (iO). Pascal Paillier (the author of the eponymous homomorphic cryptosystem) starts the course with a few words: whitebox crypto is both relatively old and relatively new, but it's starting to rise. There is a clear problem that real companies are trying to solve, and whitebox seems to be the answer. Paillier notes that it's not a concept that is well understood, and the speakers will prove his point again and again later, each trying to come up with a different definition. In this blogpost I will summarize the day; if you're interested in seeing the talks yourself, the slides should shortly be uploaded, as well as the videos on the ECRYPT channel.

## Amit Sahai - Indistinguishability Obfuscation

Indistinguishability Obfuscation was the first topic to be discussed about. Amit reminded us that iO was quite recent, and aimed towards our modern world attack models: programs are routinely leaked to the adversary, often they are even given directly to the adversary.

For a while, they thought about multi-party computation (MPC), but it was not the answer. This is because it requires some portion of the computation to be completely hidden from the adversary.

In the idea of general purpose obfuscation, every part is known by the adversary, but the adversary can still use the program as he wishes! Amit gives an example where a program gives you a different output depending on whether you feed it an integer greater or lesser than some fixed value. By just querying the program, an attacker could learn that value... So for this to work, you first need a function/program that is un-learnable from queries.
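Amit's point can be made concrete with a toy sketch of my own: no matter how well you obfuscate this thresholding program's code, a handful of queries recovers its "secret".

```python
def make_threshold_program(t):
    """A toy 'obfuscated' program: outputs 1 iff the input is >= t.
    The threshold t is the secret we'd like obfuscation to hide."""
    return lambda x: 1 if x >= t else 0

def learn_threshold(prog, lo=0, hi=2**32):
    """Recover the hidden threshold with ~32 queries (binary search),
    treating the program as a pure black box."""
    while lo < hi:
        mid = (lo + hi) // 2
        if prog(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

secret = 123_456_789
assert learn_threshold(make_threshold_program(secret)) == secret
```

Since queries alone leak the secret, obfuscating such a program is pointless; obfuscation can only hope to protect functions that are un-learnable in the first place.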

BGIRSVY2001 showed that the concept of a virtual black box (VBB) is impossible for all circuits: you cannot always produce a program that the adversary entirely controls and yet is indistinguishable from a black box (where you can only see inputs and outputs).

But then... there are some contrived "self-eating programs" for which this idea of black-box obfuscation is not ruled out. The idea of these self-eating programs is that you have a program P that takes an input x and checks whether x is a program equivalent to P itself; if so, it outputs a secret s, otherwise 0. But even these have "explicit attacks" against them. So they defined an even weaker notion, iO: a polynomial slowdown + indistinguishability (obfuscations of two equivalent programs should be indistinguishable).

The idea of iO is to give the best possible obfuscation. The first construction was made in 2013, and surprisingly, it was useful! It was used to resolve long-standing open problems like:

• functional encryption
• deniable encryption
• multi-input FE
• patchable iO
• ...

It's kind of a central hub of crypto, anything you want to do in cryptography (except for efficiency), you can use iO to do it.

But first, if you want to obfuscate something, you need a language! They use Matrix Branching Programs (MBPs), which are not a straightforward language at all...

It's a sequence of pairs of invertible matrices over $Z_n$, and each input bit tells you which matrix of the corresponding pair to choose. You repeat the same pattern over and over until you reach the last pair.

In the image below you can see the pairs on the right. Imagine that you take an input of 3 bits: if the input is 011 you would choose $M_{1,0}$, then $M_{2,1}$, and then $M_{3,1}$, repeating this pattern until you reach the last matrix. The output of your program is the product of all the matrices your input chose.
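Here's a minimal sketch of my own of how such a branching program is evaluated (toy 2x2 integer matrices, not a real obfuscated program): the input bits select one matrix per pair, and the output is read off the final product.

```python
def mat_mul(a, b):
    """2x2 integer matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def evaluate_mbp(pairs, input_bits):
    """Multiply, in order, the matrix selected by each input bit."""
    result = [[1, 0], [0, 1]]  # identity
    for (m0, m1), bit in zip(pairs, input_bits):
        result = mat_mul(result, m1 if bit else m0)
    return result

# Toy program: each pair holds the identity (for bit 0) and a "swap"
# matrix (for bit 1). The product is the identity iff an even number of
# swaps occurred, so this branching program computes the XOR (parity)
# of its input bits.
I = [[1, 0], [0, 1]]
S = [[0, 1], [1, 0]]
pairs = [(I, S), (I, S), (I, S)]

assert evaluate_mbp(pairs, [0, 1, 1]) == I  # parity 0
assert evaluate_mbp(pairs, [0, 1, 0]) == S  # parity 1
```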

You can implement a bunch of stuff like that it seems, and here's a XOR:

But this is not enough to obfuscate (it's a start). To obfuscate you want to use Kilian randomization: multiply each matrix $M$ by some random invertible $R$ and its inverse, like so: $RMR^{-1}$.
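A quick sanity check (my own toy sketch, continuing with 2x2 integer matrices) that Kilian's trick preserves the program's output: each matrix in the chain gets sandwiched between one blinding matrix and the inverse of the next, so the blinding telescopes away in the product.

```python
def mat_mul(a, b):
    """2x2 integer matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Original program: a chain of matrices whose product is the output.
M1, M2, M3 = [[1, 2], [0, 1]], [[0, 1], [1, 0]], [[1, 0], [3, 1]]

# "Random" invertible blinding matrices (unimodular, so integer inverses).
R1, R1_inv = [[1, 1], [0, 1]], [[1, -1], [0, 1]]
R2, R2_inv = [[1, 0], [2, 1]], [[1, 0], [-2, 1]]

# Randomized program: M1·R1⁻¹, R1·M2·R2⁻¹, R2·M3 (identity at both ends).
N1 = mat_mul(M1, R1_inv)
N2 = mat_mul(R1, mat_mul(M2, R2_inv))
N3 = mat_mul(R2, M3)

# The blinding cancels pairwise, leaving the original product intact,
# while each individual published matrix now looks unrelated to M_i.
original = mat_mul(M1, mat_mul(M2, M3))
randomized = mat_mul(N1, mat_mul(N2, N3))
assert randomized == original
```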

(slide from a crypto conf)

Now, to hide everything, you encode the matrices in some way that still allows matrix multiplications. For this you use the now famous Multilinear Maps (MMaps), aka Graded Encodings. There are still attacks on those (annihilation attacks), but new ways to counter them as well.

But it is still not enough: there are some input-mixing attacks (what if you do not respect the repeating pattern and multiply the wrong matrix?), and then they can't efficiently bound the information learned by these tricks.

There are of course ways to avoid that... Finally they add some more randomness to the mix... A final construction is made (Miles-Sahai-Zhandry, TCC 2016):

They end up using $3\times3$ matrices instead of the initial matrices to finish the obfuscation. It seems like this technique is making the thing look random, so that the adversary now has to distinguish it from randomness as well.

It's of course confusing and it seems like reading the paper might help more. But the idea is here. A circuit, that is obfuscable according to some definition, is transformed into this MBP language, then obfuscated using an encoding (MMaps) and some other techniques.

It's an incredibly powerful construction, which is still in its infancy (they are still exploring many other constructions). Its biggest problem right now is efficiency: Amit talks about a $2^{100}$ overhead for programs like AES (almost as large as the complexity of an attack on AES).

So yeah. Completely theoretical at the moment. No way anyone is going to use that. But they're quickly improving the constructions and they're really happy that they have planted the seeds for a new generation of crypto primitives.

## Mariana Raykova - 5Gen: a framework for prototyping applications using multilinear maps and matrix branching programs

The workshop had a lot of speakers, and so a lot of repetition, which is not necessarily a bad thing: it's always good to see new concepts defined by different people.

The point of obfuscation for Raykova is to protect proprietary algorithms, software patches and avoid malware detection.

Now, the talk is about Functional Encryption (FE): a construction that allows you to generate decryption keys that reveal only functions of the encrypted message.

They implemented a bunch of stuff. For that they use a compiler (called cryfsm) that transforms Cryptol code ("Cryptol is used mostly in the intelligence community") into a minimal finite state machine and then into Matrix Branching Programs (MBPs). It's called 5Gen and it's on GitHub!

Like many speakers at this workshop, she made fun of the efficiency of iO

And then explained again how iO worked. Here you can see the same kind of explanation as Amit did.

Barrington has a construction to convert circuits into MBPs, but the bad thing is that the size of the MBP is exponential in the circuit depth. Urg... That's why they transform the Cryptol code into a finite state automaton first: it's better to transform a finite state automaton into an MBP directly:

Raykova explained that they used a bunch of optimizations: sometimes the second matrix wouldn't change anything (or wouldn't change a part of the state), she said, so they could pre-multiply the matrices...

Now, to obfuscate, you use Kilian88 to build randomized branching programs (RBPs), as Amit already explained.

And as Amit said, this is not enough for obfuscation; you need to apply encoding techniques (MMaps). MMaps give you a way to encode an element (which hides the element), along with some nice properties on the encodings (so you can still do operations on them).

But you still have these mix-and-match attacks. In the iO model the adversary can use the whole program and feed it all sorts of inputs, but in FE we don't want that.

You can read part 2 here


# Breaking https' AES-GCM (or a part of it) posted August 2016

The coolest talk at this year's Black Hat must have been the one by Sean Devlin and Hanno Böck. The talk summarized the paper published earlier this year, in a very cool way: Sean walked on stage and announced that he didn't have his slides. He then said that it didn't matter, because he had a good idea of how to retrieve them.

He proceeded to open his browser and navigate to a malicious webpage. Some javascript there seemed to send various requests to a website in his place, until some MITM attacker found what they came for. The page refreshed and the address bar now showed https://careers.mi5.gov.uk along with a shiny green lock. But instead of the content we would have expected, the white title of their talk was blazing on a dark background.

What happened is that a MITM attacker tampered with mi5's webpage and injected the slides into it as HTML. They then went ahead and gave the whole presentation from that same mi5 webpage.

## How did it work?

The idea is that repeating a nonce in AES-GCM is... BAD. Here's a diagram from Wikipedia. You can't see it, but the counter has a unique nonce prepended to it. It's supposed to change for every different message you're encrypting.

AES-GCM is THE AEAD mode. We've been using it mostly because it's a nice all-in-one function that does encryption and authentication for you. So instead of shooting yourself in the foot trying to MAC then-and-or Encrypt, an AEAD mode does all of that for you securely. We're not too happy with it though, and we're looking for alternatives in the CAESAR competition (there is also AES-SIV).

The presentation had an interesting slide on some opinions:

"Do not use GCM. Consider using one of the other authenticated encryption modes, such as CWC, OCB, or CCM." (Niels Ferguson)

"We conclude that common implementations of GCM are potentially vulnerable to authentication key recovery via cache timing attacks." (Emilia Käsper, Peter Schwabe, 2009)

"AES-GCM so easily leads to timing side-channels that I'd like to put it into Room 101." (Adam Langley, 2013)

"The fragility of AES-GCM authentication algorithm" (Shay Gueron, Vlad Krasnov, 2013)

"GCM is extremely fragile" (Kenny Paterson, 2015)

One of the bad things is that if you ever repeat a nonce, and someone malicious sees it, that person will be able to recover the authentication key. It's the H in the AES-GCM diagram, and it is derived from the encryption key K (by encrypting an all-zero block with it). If you want to know how the recovery works, check Antoine Joux's comment on AES-GCM.

Now, with this attack, we lose integrity/authentication as soon as a nonce repeats. This means an attacker can modify the ciphertext in transit and no one will realize it. But modifying the ciphertext doesn't mean we can modify the plaintext, right? Wait for it...

Since AES-GCM is basically counter mode (CTR mode) with a GMAC, the attacker can pull off the same kind of bitflip attacks as on CTR mode. This is pretty bad. In the context of TLS, you often (almost always) know what the website will send to a victim, and that's how you can replace the page with anything you want.

Look, this is CTR mode, if you don't remember it. If you know the nonce and the plaintext, the whole thing fundamentally behaves like a stream cipher: you can XOR the known plaintext against the ciphertext to recover the keystream. After that, you can encrypt something else by XOR'ing that something else with the keystream :)
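Here's a tiny sketch of that XOR trick. A SHA-256-based toy stream cipher stands in for AES-CTR (this is NOT real AES-GCM), and the keys, nonce, and messages are all made up; the point is only that a fixed key and nonce means a fixed keystream.

```python
import hashlib

# Toy stream cipher: keystream blocks are SHA-256(key || nonce || counter).
def keystream(key, nonce, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, nonce, plaintext):
    # XOR with the keystream; decryption is the same operation.
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key, nonce = b"secret key", b"repeated"
known_pt = b"welcome to mi5!!"
ct = encrypt(key, nonce, known_pt)

# Attacker: XOR the known plaintext out of the ciphertext to recover the
# keystream, then XOR in a plaintext of their choosing.
recovered_ks = bytes(c ^ p for c, p in zip(ct, known_pt))
forged_ct = bytes(p ^ k for p, k in zip(b"talk slides here", recovered_ks))

print(encrypt(key, nonce, forged_ct))  # the victim decrypts b'talk slides here'
```

Without the GMAC (which nonce reuse breaks), nothing stops this substitution from being accepted.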

They found a pretty big number of vulnerable targets just by sending dozens of messages across the whole IPv4 space.

## My thoughts

Now, here's how the TLS 1.2 specification describes the structure of the nonce + counter in AES-GCM: [salt (4) + nonce (8) + counter (4)].
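That layout can be sketched like this (the salt and counter values below are made up for illustration):

```python
import struct

# TLS 1.2 AES-GCM counter block layout:
# 4-byte implicit salt (derived from the handshake), 8-byte explicit nonce
# (sent on the wire with each record), 4-byte per-block counter.
salt = bytes.fromhex("deadbeef")        # fixed for the connection
explicit_nonce = struct.pack(">Q", 1)   # changes per record
counter = struct.pack(">I", 2)          # changes per block within a record

block = salt + explicit_nonce + counter
print(len(block))  # 16: exactly one AES block
```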

The whole thing is a block size in AES (16 bytes) and the salt is a fixed part of 4 bytes derived from the key exchange happening during TLS' handshake. The only two purposes of the salt I could think of are:

Pick the reason you prefer.

Now if you picked the second reason, let's recap: the nonce is the part that should be different for every different message you encrypt. Some implementations increment it like a counter, others generate it at random. This is interesting to us because the birthday paradox tells us we'll have close to a 50% chance of seeing a nonce repeat after about $2^{32}$ messages. Isn't that pretty low?
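You can check that estimate with the usual birthday-bound approximation $p \approx 1 - e^{-k(k-1)/2n}$ for $k$ random nonces drawn from $n = 2^{64}$ possibilities:

```python
import math

# Probability that at least two of k random 64-bit nonces collide,
# using the standard birthday-bound approximation.
def collision_probability(k, n=2**64):
    return 1 - math.exp(-k * (k - 1) / (2 * n))

print(round(collision_probability(2**32), 3))               # ~0.393
print(round(collision_probability(int(1.177 * 2**32)), 3))  # ~0.5
```

So strictly speaking, $2^{32}$ messages gives you about a 39% chance of a repeat; you cross 50% at roughly $1.18 \times 2^{32}$ messages. Either way, it's uncomfortably low for a busy server.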

# How to Backdoor Diffie-Hellman: quick explanation posted August 2016

I've noticed that since publishing the How to Backdoor Diffie-Hellman paper, I haven't posted any explanation of it on this blog. I just gave a presentation at DEF CON 24 and the recording should be online in a few months. In the meantime, let me try a dumbed-down explanation of the outline of the paper:

I found many ways to implement a backdoor; some are Nobody-But-Us (NOBUS) backdoors, while some are not (I also give some "security" numbers for the NOBUS ones in the paper).

The idea is to look at a natural way of injecting a backdoor into DH with Pohlig-Hellman:

Here the modulus $p$ is prime, so we can naturally compute the number of elements (public keys) in our group: $p-1$. By factoring this number you can also get the possible subgroup orders. If there are enough subgroups of small order $p_i$, you can compute a partial private key in each of them and then use the Chinese Remainder Theorem to stitch these partial private keys together into the real private key.
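Here's a toy sketch of that with made-up tiny parameters: $p = 211$ is prime and $p - 1 = 2 \times 3 \times 5 \times 7$ is smooth, so the discrete log splits into easy logs in each small subgroup, stitched back together with the CRT.

```python
p = 211
g = 2                  # a generator of the full group mod 211
secret = 123           # the private key we will recover
y = pow(g, secret, p)  # the public key

residues = []
for q in [2, 3, 5, 7]:  # the prime factors of p - 1
    # Project into the subgroup of order q and brute-force the log there.
    gq = pow(g, (p - 1) // q, p)
    yq = pow(y, (p - 1) // q, p)
    d = next(d for d in range(q) if pow(gq, d, p) == yq)
    residues.append((d, q))  # secret ≡ d (mod q)

# Chinese Remainder Theorem: combine the partial private keys.
x, modulus = 0, 1
for d, q in residues:
    while x % q != d:
        x += modulus
    modulus *= q

print(x)  # prints 123: the private key, recovered mod 210
```

Each subgroup search costs at most $q$ steps instead of up to $p - 1$ for a brute-force log in the full group; that's the whole point of Pohlig-Hellman.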

The problem here is that, if we can do Pohlig-Hellman, it means the subgroup orders $p_i$ are small enough for anyone else to find them by factoring $p-1$.

The next idea is to hide these small subgroups so that only we can use this Pohlig-Hellman attack.

Here the modulus $n$ is not a prime anymore: we instead use an RSA modulus $n = p \times q$. Since $n$ is not prime, to compute the number of possible public keys in our new DH group we need to compute $(p-1)(q-1)$ (the number of elements co-prime to $n$). This is the trick: only we, with knowledge of $p$ and $q$, can compute that. So, under the assumptions of RSA, we know that no one else will be able to compute (let alone factor) the group order $(p-1)(q-1)$ to find out what subgroups there are. And now our small subgroups are well hidden, for us and only us to perform Pohlig-Hellman.
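To see the counting argument on toy numbers (made up, and far too small for a real backdoor, where brute force like this would be infeasible):

```python
from math import gcd

# With n = p * q, the invertible elements mod n are exactly those
# co-prime to n, and there are (p-1)*(q-1) of them (Euler's totient).
p, q = 101, 103
n = p * q

# At this size anyone can count the group order by brute force...
order = sum(1 for a in range(1, n) if gcd(a, n) == 1)
print(order == (p - 1) * (q - 1))  # prints True
```

At real sizes, computing $(p-1)(q-1)$ without knowing $p$ and $q$ is as hard as factoring $n$, which is what keeps the subgroup structure hidden from everyone else.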

There is of course more to it. Read the paper :)

If you want to generate good randomness, but are iffy about /dev/urandom because your machine has just booted, and you don't know how long to wait before /dev/urandom has enough entropy, then maybe you should consider using getrandom (thanks rwg!). From the manpage: