david wong

Hey! I'm David, cofounder of zkSecurity and the author of the Real-World Cryptography book. I was previously a crypto architect at O(1) Labs (working on the Mina cryptocurrency), before that I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant for the Cryptography Services of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting.


Socat new DH modulus posted February 2016

On February 1st 2016, a security advisory was posted to Openwall by a Socat developer: Socat security advisory 7 - Created new 2048bit DH modulus

In the OpenSSL address implementation the hard coded 1024 bit DH p parameter was not prime. The effective cryptographic strength of a key exchange using these parameters was weaker than the one one could get by using a prime p. Moreover, since there is no indication of how these parameters were chosen, the existence of a trapdoor that makes possible for an eavesdropper to recover the shared secret from a key exchange that uses them cannot be ruled out.
A new prime modulus p parameter has been generated by Socat developer using OpenSSL dhparam command.
In addition the new parameter is 2048 bit long.

This is a pretty weird message with a Juniper feeling to it.

Socat's README tells us that you can use their free software to setup an encrypted tunnel for data transfer between two peers.

Looking at the commit logs, you can see that they used a 512-bit Diffie-Hellman modulus until January 2015, when it was replaced with a 1024-bit one.

Socat did not work in FIPS mode because 1024 instead of 512 bit DH prime is required. Thanks to Zhigang Wang for reporting and sending a patch.

The person who pushed the commit is Gerhard Rieger, the same person who fixed it a year later. In the comment he refers to Zhigang Wang, an Oracle employee at the time, who has yet to comment on his mistake.

The new DH modulus

There are a lot of interesting things to dig into now. One of them is to check if the new parameter was generated properly.

prime

It is a prime. Hooray! But is that enough?

It usually isn't. The developer claims to have generated the new prime with openssl's dhparam command (openssl dhparam 2048 -C), but is that enough? Or even, is it true?

To get the order of the DH group, computing \(p - 1\) suffices (\(p\) is the new modulus here). This works because \(p\) is prime; if it were not, you would need to know its factorization. This is why the research on the previous non-prime modulus is slow... See Thai Duong's blogpost here, the stackexchange question here, or reddit's thread.

Now the order is important, because if it's smooth (factorable into "small" primes) then active attacks (small subgroup attacks) and passive attacks (Pohlig-Hellman) become possible.

So what we can do is try to factor \(p - 1\), the order of the group defined by this new prime.

Here's a small script I wrote that tries all the primes until... you stop it:

# the old "fake prime" dh params

dh1024_p = 0xCC17F2DC96DF59A446C53E0EB826550CE388C1CEA7BCB3BF1694D8A945A2CEA95B22255F9259941C22BFCBC8C857CBBFBC0EE840F98703BF609B08C68E99C605FC00D66D90A8F5F8D38D43C88F7ABDBB28AC04694A0B867337F06D4F04F6F5AFBFAB8ECE75534D7F7D17780E12464AAF9599EFBCA6C54177437AB9EC8E073C6D
dh1024_g = 2

# the new dh params

dh2048_p = 0x00dc216456bd9cb2acbec998ef953e26fab557bcd9e675c043a21c7a85df34ab57a8f6bcf6847d056904834cd556d385090a08ffb537a1a38a370446d2933196f4e40d9fbd3e7f9e4daf08e2e8039473c4dc0687bb6dae662d181fd847065ccf8ab50051579bea1ed8db8e3c1fd32fba1f5f3d15c13b2c8242c88c87795b38863aebfd81a9baf7265b93c53e03304b005cb6233eea94c3b471c76e643bf89265ad606cd47ba9672604a80ab206ebe07d90ddddf5cfb4117cabc1a384be2777c7de20576647a735fe0d6a1c52b858bf2633815eb7a9c0ee581174861908891c370d524770758ba88b3011713662f07341ee349d0a2b674e6aa3e299921bf5327363
dh2048_g = 2

# is_prime(dh2048_p) -> True

order = dh2048_p - 1

factors = [2]
print "2 divides the order"

# let's try to factorize the order by trial divisions
def find_factors(number):
    factors = []
    # use different techniques to get primes, dunno which is faster
    index = 0
    for prime in Primes():
        if Mod(number, prime) == 0:
            print prime, "divides the order"
            factors.append(prime)
        if index == 10000:
            print "tested up to prime", prime, "so far"
            index = 0
        else:
            index += 1

    return factors

factors += find_factors(order / 2)

It has been running for a while now (up to 82018837, a 27-bit number) and nothing has been found so far...

The thing is, a Pohlig-Hellman attack is doable as long as you can compute the discrete log modulo each factor. There is no notion of "small enough factor" without a threat model. This backdoor is obviously not going to be usable by small players, but by bigger players? By state-sized attackers? Who knows...
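To make that concrete, here's a toy Pohlig-Hellman run (Python 3, with miniature made-up parameters that have nothing to do with the actual Socat modulus). The group order is chosen completely smooth, so the discrete log splits into tiny subproblems that are recombined with the CRT:

```python
# Toy Pohlig-Hellman demo: p = 31, so the group order p - 1 = 30 = 2*3*5
# is completely smooth. All parameters are made up for illustration.

def dlog_bruteforce(g, h, p, order):
    # brute-force the discrete log in a subgroup of small order
    acc = 1
    for x in range(order):
        if acc == h:
            return x
        acc = (acc * g) % p
    raise ValueError("no discrete log found")

def crt(residues, moduli):
    # combine x = r_i mod m_i into x mod (m_1 * ... * m_k)
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

p = 31                 # tiny prime modulus
g = 3                  # a generator of the group mod 31
secret = 17            # the exponent we pretend not to know
h = pow(g, secret, p)  # what an eavesdropper observes

order = p - 1          # 30, smooth: 2 * 3 * 5
residues, moduli = [], []
for q in [2, 3, 5]:
    # project g and h into the subgroup of order q, where the
    # discrete log is trivial to brute-force
    gq = pow(g, order // q, p)
    hq = pow(h, order // q, p)
    residues.append(dlog_bruteforce(gq, hq, p, q))
    moduli.append(q)

recovered = crt(residues, moduli)
print(recovered)  # 17
```

With a 2048-bit modulus the same attack works factor by factor; the cost is dominated by the largest prime factor of the order, which is why a smooth order would be fatal.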

EDIT: I forgot order/2 could be a prime as well. But nope.

nope


Another look on Public key crypto posted January 2016

I was watching this excellent video on the birth of elliptic curves by Dan Boneh, and I felt like the explanation of Diffie-Hellman (DH) fell short. In the video, Dan goes on to explain the typical DH key exchange:

Alice and Bob agree on a public element \(g\) and a public modulus \(N\).

By the way: if you ever heard of "DH-1024" or some big number associated with Diffie-Hellman, that was probably the bitsize of this public modulus \(N\).

The exchange then goes like this:

  1. Alice generates her private key \(a\) and sends her public key \(g^a\pmod{N}\) to Bob.

  2. Bob generates his own private key \(b\) and sends his public key \(g^b\pmod{N}\) to Alice.

  3. They can both generate their shared key by doing either \((g^b)^a \pmod{N}\) for Alice, or \((g^a)^b \pmod{N}\) for Bob.
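The three steps above can be sketched in a few lines (a toy run with a small made-up prime; real deployments use a 2048-bit modulus or elliptic curves):

```python
# Toy Diffie-Hellman exchange; N and g are made-up demo parameters.
import secrets

N = 2**127 - 1                     # a Mersenne prime, fine for a demo
g = 3

a = secrets.randbelow(N - 2) + 1   # Alice's private key
b = secrets.randbelow(N - 2) + 1   # Bob's private key

A = pow(g, a, N)                   # Alice sends g^a mod N to Bob
B = pow(g, b, N)                   # Bob sends g^b mod N to Alice

alice_shared = pow(B, a, N)        # (g^b)^a mod N
bob_shared   = pow(A, b, N)        # (g^a)^b mod N
assert alice_shared == bob_shared  # same shared key on both sides
```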

Dan then explains why this is secure: because given \((g, g^a, g^b)\) (the only values an eavesdropper can observe from this exchange) it's very hard to compute \(g^{ab}\), and this is called the Computational Diffie-Hellman problem (CDH).

But this doesn't really explain how the scheme works. You could wonder: why doesn't the attacker do the exact same thing Alice and Bob just did? He could just iterate the powers of \(g\) until \(g^a\) or \(g^b\) is found, right?

A key exchange with hash functions

Let's replace the exponentiation by a hash function. Don't worry I'll explain:

\(g\) will be our public input and \(h\) will be our hash function (e.g. sha256). One more thing: \(h^{3}(g)\) translates to \(h(h(h(g)))\).

So now our key exchange looks like this:

  1. Alice generates a large enough integer \(a\), computes \(a\) iterations of the hash function \(h\) over \(g\), and sends the result \(h^a(g)\) to Bob.

  2. Bob does the same with an integer \(b\) and sends \(h^b(g)\) to Alice (exactly what Alice did, with different phrasing.)

  3. They both compute the shared private key by doing either \(h^a(h^b(g))\) for Alice, or \(h^b(h^a(g))\) for Bob.

So if you understood the last part: Alice and Bob both iterated the hash function on the starting input \(g\) a total of \(a+b\) times. If Alice's public key was \(h(h(g))\) and Bob's public key was \(h(h(h(g)))\), then they both end up computing \(h(h(h(h(h(g)))))\).
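Written out with sha256 (my own toy sketch; as explained next, this scheme is not secure):

```python
# Iterated-hash "key exchange": both sides end up hashing g a+b times.
import hashlib

def h_iter(x, n):
    # apply sha256 to x, n times
    for _ in range(n):
        x = hashlib.sha256(x).digest()
    return x

g = b"public starting input"
a, b = 57, 124                  # Alice's and Bob's secret iteration counts

A = h_iter(g, a)                # Alice's public value h^a(g)
B = h_iter(g, b)                # Bob's public value h^b(g)

alice_shared = h_iter(B, a)     # h^a(h^b(g))
bob_shared   = h_iter(A, b)     # h^b(h^a(g))
assert alice_shared == bob_shared == h_iter(g, a + b)
```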

That seems to work. But how is this scheme secure?

You're right, it is not. The attacker can just hash \(g\) over and over until he finds either Alice's or Bob's public key.

So let's ask ourselves this question: how could we make it secure?

If Bob or Alice had a way to compute \(h^c(x)\) without doing every single hash computation (\(c\) of them), then he or she would take way less time to compute their public key than an attacker would take to retrieve it.

Back to our discrete logarithm in Finite groups

This makes it easier to understand how the normal DH exchange in finite groups is secure.

The usual assumptions we want for DH to work were nicely summed up in Boneh's talk:

dh

The point of view here is that discrete log must be difficult AND CDH must hold.

Another way to see this is that we have algorithms to quickly compute \(g^c \pmod{n}\) without having to iterate through every integer up to \(c\).

To be more accurate: the algorithms we have to quickly exponentiate numbers in finite groups are way faster than the ones we have to compute discrete logarithms in those same groups. Thanks to these shortcuts, the good folks can quickly compute their public keys while the bad folks have to do all the work.
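A minimal sketch of such a shortcut, square-and-multiply, which computes \(g^c \pmod{n}\) in about \(\log_2 c\) multiplications instead of \(c\):

```python
# Square-and-multiply: walk the bits of the exponent, squaring as we go.
def fast_pow(g, c, n):
    result = 1
    base = g % n
    while c > 0:
        if c & 1:                     # current bit of c is set:
            result = (result * base) % n
        base = (base * base) % n      # square for the next bit
        c >>= 1
    return result

# ~30 squarings instead of a billion multiplications:
assert fast_pow(5, 1_000_000_007, 2**61 - 1) == pow(5, 1_000_000_007, 2**61 - 1)
```

No comparably fast trick is known for the inverse direction (the discrete logarithm), and that asymmetry is exactly what DH relies on.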


complexities of attacks on MD5 and SHA-1 posted January 2016

Taken from the SLOTH paper, the current estimated complexities of the best known attacks against MD5 and SHA-1:

            Common-prefix collision   Chosen-prefix collision
MD5         2^16                      2^39
SHA-1       2^61                      2^77
MD5|SHA-1   2^67                      2^77

MD5|SHA-1 is the concatenation of the outputs of both hashes on the same input. It is a technique aimed at reducing the efficiency of these attacks but, as you can see, it is not that effective.
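Concretely, the construction is just both digests side by side (a quick illustration with hashlib; this concatenation is what TLS versions before 1.2 sign, which is the setting SLOTH attacks):

```python
# MD5|SHA-1: concatenate both digests of the same input (16 + 20 = 36 bytes).
import hashlib

def md5_sha1(data):
    return hashlib.md5(data).digest() + hashlib.sha1(data).digest()

tag = md5_sha1(b"hello")
print(len(tag))  # 36
```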


Firefox Add-on: Unencrypted Warnings posted January 2016

I never understood why Firefox doesn't display a warning when visiting non-https websites. Maybe it's too soon, there are too many no-TLS servers out there, and users would learn to ignore the warning after a while?

I don't know, so I wrote a few lines and made the add-on here

unencrypted warnings

Just drag and drop the .xpi in your firefox. You can also review the ultra-minimal code in index.js and build the xpi yourself with Mozilla's JDK


Looking for a Boneh and Durfee attack in the wild posted January 2016

A few weeks ago I wrote about testing RSA public keys from the most recent Alexa's top 1 million domains handshake log that you can get on scans.io.

Most public exponents \(e\) were small, so no small private key attack (Boneh and Durfee) should have been possible. But I didn't explain why.

Why

The private exponent \(d\) is the inverse of \(e\) modulo \(\varphi(N)\), that is \(e \cdot d = 1 \pmod{\varphi(N)}\).

\(\varphi(N)\) is a number almost as big as \(N\), since \(\varphi(N) = (p-1)(q-1)\) in our case. For our public exponent \(e\) multiplied by \(d\) to be equal to \(1\), the product has to wrap around the modulus \(\varphi(N)\) at least once.

Put differently: since \(1 < e < \varphi(N)\), the product \(e \cdot d\) has to exceed \(\varphi(N)\) for the reduction to yield a \(1\).

l = 1024
p = random_prime(2^(l/2), lbound= 2^(l/2 - 1))
q = random_prime(2^(l/2), lbound= 2^(l/2 - 1))
N = p * q
phiN = (p-1) * (q-1)
print len(bin(int(phiN / 3))) - 2        # ~1022
print len(bin(int(phiN / 10000000))) - 2 # ~1000

This quick test in Sage shows that with a small public exponent (like 3, or even 10,000,000), you need to multiply it by a number of more than 1,000 bits to reach the end of the group and possibly end up with a \(1\).

All of this is interesting because in 2000, Boneh and Durfee found out that if the private exponent \(d\) was smaller than a fraction of the modulus \(N\) (the exact bound is \(d < N^{0.292}\)), then the private exponent could be recovered in polynomial time via a lattice attack. What does it mean for the private exponent to be "small" compared to the modulus? Let's get some numbers to get an idea:

print len(bin(N)) - 2 # 1024
print len(bin(int(N^(0.292)))) - 2 # 299

That's right: for a 1024-bit modulus, the private exponent \(d\) has to be smaller than a 300-bit number. This is never going to happen if the public exponent used is too small (note that this doesn't necessarily mean you should use a small public exponent).
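To see the relation between \(e\), \(d\) and the bound on toy numbers (made-up small primes; a real key would use primes of around 512 bits each):

```python
# e*d = 1 mod phi(N) with tiny made-up primes, plus the Boneh-Durfee bound.
p, q = 10007, 10009
N = p * q
phi = (p - 1) * (q - 1)

e = 65537
d = pow(e, -1, phi)              # the private exponent
assert (e * d) % phi == 1

# Boneh-Durfee applies only when d < N^0.292; with a small public e,
# d comes out about as large as N itself, so the attack is irrelevant:
print("d bits:", d.bit_length(), "| bound bits:", int(0.292 * N.bit_length()))
```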

Moar testing

So after testing the University of Michigan · Alexa Top 1 Million HTTPS Handshakes, I decided to tackle a much, much larger logfile: the University of Michigan · Full IPv4 HTTPS Handshakes. The first one is 6.3GB uncompressed, the second is 279.93GB. Quite a difference! So the first thing to do was to parse all the public keys in search of exponents greater than 1,000,000 (an arbitrary bound that I could have set higher but, as the results showed, was enough).

I only got 10 public exponents with higher values than this bound! And they were all still relatively small (633951833, 16777259, 1065315695, 2102467769, 41777459, 1073741953, 4294967297, 297612713, 603394037, 171529867).

Here's the code I used to parse the log file:

import sys, json, base64

with open(sys.argv[1]) as ff:
    for line in ff:
        lined = json.loads(line)
        if 'tls' not in lined["data"] or 'server_certificates' not in lined["data"]["tls"].keys() or 'parsed' not in lined["data"]["tls"]["server_certificates"]["certificate"]:
            continue
        server_certificate = lined["data"]["tls"]["server_certificates"]["certificate"]["parsed"]
        public_key = server_certificate["subject_key_info"]
        signature_algorithm = public_key["key_algorithm"]["name"]
        if signature_algorithm == "RSA":
            modulus = base64.b64decode(public_key["rsa_public_key"]["modulus"])
            e = public_key["rsa_public_key"]["exponent"]
            # ignoring small exponent
            if e < 1000000:
                continue
            N = int(modulus.encode('hex'), 16)
            print "[",N,",", e,"]"

Real World Crypto: debriefing posted January 2016

There is no day 4, this is over... And I've got a ton to work on/read about/catch up with.

But first! I'm spending the week end in San Francisco before flying to Austin, if anyone wants to hang out in SF feel free to contact me on twitter =)

(and if you work for Dropbox, feel free to invite me to eat at your one-Michelin-star cafeteria)

Take-home message

  • Tor's security seems a bit shaky to me
  • QUIC crypto will die. Just look at tls 1.3
  • TLS 1.3 is still a clusterfuck
  • Lots of stuff to break in SSE and PPE
  • Intel is doing something really cool with SGX
  • The Juniper paper is going to be a big deal
  • The BREACH improvement is going to be a big deal

Papers to read

First, a bunch of slides are already available through the real world crypto webpage. And I've been taking notes every day: day1, day2, day3.

Now here's my to read list from the important talks:

And as a bonus, here are some papers that have nothing to do with RWC but that I still want to read right now:

Next conventions to attend

I actually have no idea about that. You?


Real World Crypto: Day 3 posted January 2016

This is the 3rd post of a series of blogpost on RWC2016. Find the notes from day 1 here.

I'm a bit washed out after three long days of talk. But I'm also sad that this comes to an end :( It was amazing seeing and meeting so many of these huge stars in cryptography. I definitely felt like I was part of something big. Dan Boneh seems like a genuine good guy and the organization was top notch (and the sandwiches amazing).

SGX morning

The morning was filled with talks on SGX, the new Intel technology that could allow for secure VMMs. I didn't really understand these talks as I didn't really know what SGX was. White papers, manuals, blogposts and everything else are here.

10:20am - Practical Attacks on Real World Cryptographic Implementations

tl;dw: bleichenbacher pkcs1 v1.5 attack, invalid curve attack

If you know both attacks, don't expect anything new.

  • many attacks nowadays are based on really old papers
    • BEAST in 2011 is from a 2004 paper
    • 2013/14 POODLE and lucky13 come from a 2002 paper
    • 2012 xml encryption attack is from a 1998 bleichenbacher paper
  • bleichenbacher attack
    • rsa-pkcs#1 v1.5 is used to encrypt symmetric keys, it's vulnerable to CCA
    • 2 countermeasures:
      • OAEP (pkcs#1 v2)
      • if padding is incorrect return random
    • padding fail in RWC: in apache WSS4J XML Encryption they generated 128 bytes instead of 128 bits of random
    • practical attacks found as well in TLS on JSSE, Bouncy Castle, ...
      • exception occurs if padding is wrong, it's caught and the program generates a random. But exception consumes about 20 microseconds! -> timing attacks (case JSSE CVE-2014-411)
  • invalid curve attack
    • send invalid point to the server (of small order)
    • server doesn't check if the point is on the EC
    • attacker gets information on the discrete log modulo the small order
    • repeat until you have enough to do a large CRT
    • they analyzed 8 libraries, found 2 vulnerable
    • pretty serious attack -> allows you to extract server private keys really easily
    • works on ECDH, not on ECDHE (but in practice, it depends how long they keep the ephemeral key)
  • HSM scenarios: keys never leave the HSM
    • they are good candidates for these kind of "oracle" attacks
    • they tested and broke Ultimaco HSMs (CVE-2015-6924)
    • <100 queries to get a key

11:10am - On Deploying Property-Preserving Encryption

tl;dw: how it is to deploy SSE or PPE, and why it's not dead

  • lots of "proxy" companies that translate your queries to do EDB without re-teaching stuff to people (there was a good slide on that that I missed, if someone has it)
  • searchable symmetric encryption (SSE): you just replace words by token
    • threat model is different, clients don't care if they hold both the indexes and the keys
  • two kinds of order preserving encryption (OPE):
    • stateless OPE (deterministic -> unclear security)
    • interactive OPE (stateful)
    • talks about how hard it is to deploy a stateful scheme
  • many leakage-abused attacks on PPE
  • crypto researcher on PPE: "it's over!", but the cost and legacy are so that PPE will still be used in the future

I think the point is that there is nothing practical that is better than PPE, so rather than using non-encrypted DB... PPE will still hold.

11:30am - Inference Attacks on Property-Preserving Encrypted Databases

tl;dw: PPE is dead, read the paper

approach to EDB over time

implemented EDB

  • analyses have been done, and it is known what leaks; cryptanalysis has been done from this information
  • real data tends to be "non-uniform" and "low entropy", not like assumptions of security proofs
  • inference attacks:
    • frequency analysis
    • sorting attack
    • Lp-optimization
    • cumulative attacks
  • frequency analysis: come on we all know what that is
    • Lp-optimization: better way of mapping the frequency of auxiliary data and the ciphertexts
  • sorting attacks: just sort the ciphertexts and your auxiliary data, then map them
    • this fails if there are missing items in the ciphertext set
    • cumulative attacks improve on this
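The sorting attack is simple enough to sketch in a few lines (the "scheme" and data here are made up; any strictly increasing function is order-preserving, which is all the attack needs):

```python
# Sorting attack on a toy deterministic order-preserving "encryption".

def toy_ope_encrypt(x, key):
    # any strictly increasing function preserves order
    return key * x + (key // 2)

key = 1337
plaintexts = [21, 34, 55, 67, 78, 89]    # e.g. an "age" column
ciphertexts = [toy_ope_encrypt(x, key) for x in plaintexts]

# The attacker sees only ciphertexts, plus auxiliary data drawn from
# the same distribution (here, for simplicity, the exact same values).
auxiliary = sorted(plaintexts)
mapping = dict(zip(sorted(ciphertexts), auxiliary))

recovered = [mapping[c] for c in ciphertexts]
assert recovered == plaintexts           # full plaintext recovery, no key
```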

check page 6 of the paper for explanations on these attacks. All I was expecting from this talk was an explanation of the improvements (Lp and cumulative) but they just flew through them (fortunately they seem to be pretty easy to understand in the paper). Other than that, nothing new that you can't read in their paper.

2:00pm - Cache Attacks on the Cloud

tl;dw: cache attacks can work, maybe

  • hypervisor (VMM) ensures isolation through virtualization
  • VMs might feel each other's load on some low-level resources -> potential side channels
  • covert channel in the cloud?
    • LLC is cross core (L3 cache)
  • cache attacks
    • prime+probe
      • priming: find eviction set: memory line that when loaded to cache L3 will occupy a line we want to monitor
      • probing: when trying to access the memory line again, if it's fast that means no one has used the L3 cache line

primeprobe

  • to get crypto keys from that you need to detect key-dependent cache accesses
    • for RSA check timing and number of times the cache is accessed -> multiplications
    • for AES detect the lookup table access in the last round (??)
  • cross-VM cache attacks are realistic?
    • attack 1 (can't remember) (hu)
    • co-location: detect if they are on the same machine (dropbox) [RTS09]
      • they tried the same on AWS EC2, too hard now (hu)
      • new technique: LLC Cache accesses (hu)
      • new technique: memory bus contention [xww15, vzrs15]
  • once they knew they were on the same machine through colocation what to target?
  • libgcrypt's RSA use CRT, sliding window exponentiation and message blinding (see end of my paper to see explanation of message blinding)

conclusion:

  • cache attacks in public cloud work
    • but still noise and colocation problem
  • open problem: countermeasures?
  • what about non-crypto code?

Why didn't they talk of flush+reload and others?

2:30pm - Practicing Oblivious Access on Cloud Storage: the Gap, the Fallacy, and the New Way Forward

tl;dw: ORAM, does it work? Is it practical?

paper is here

  • Oblivious RAM, he doesn't want to explain how it works
  • how close is ORAM to practice?
  • implemented 4 different ORAM systems from the literature and got some results from them
  • CURIOUS, what they made from these research, is open-source. It's made in Java... such sadness.

Didn't get much from this talk. I know this is "real world" crypto, but a better intro on ORAM would have been nicer, as well as where ORAM stands among the solutions we already have (fortunately the previous talk had a slide on that already). Also, I had only read about ORAM in FHE papers/presentations, but there was no mention of FHE in this talk :( well... no mention of FHE at all in this convention. Such sadness.

From their paper:

An Oblivious RAM scheme is a trusted mechanism on a client, which helps an application or the user access the untrusted cloud storage. For each read or write operation the user wants to perform on her cloud-side data, the mechanism converts it into a sequence of operations executed by the storage server. The design of the ORAM ensures that for any two sequences of requests (of the same length), the distributions of the resulting sequences of operations are indistinguishable to the cloud storage. Existing ORAM schemes typically fall into one of the following categories: (1) layered (also called hierarchical), (2) partition-based, (3) tree-based; and (4) large-message ORAMs.

2:50pm Replacing Weary Crypto: Upgrading the I2P network with stronger primitives

tl;dw: the i2p protocol

  • i2p is like Tor? both started around 2003, both use onion routing, both are vulnerable to traffic confirmation attacks, etc...
    • but Tor is ~centralized, i2p is ~decentralized
    • Tor uses an asymmetric design, i2p is symmetric (woot?)
    • in i2p traffic works in circles (responses come from another path)
      • so twice as many nodes are exposed
      • but you can only see one direction
      • this difference with Tor hasn't really been researched
    • ...

4:20pm - New developments in BREACH

tl;dw: BREACH is back

But first, what is BREACH/CRIME?

This talk was a surprise talk, apparently to replace a canceled one?

  • original BREACH attack introduced at blackhat USA 2013
    • compression/encryption attack (similar to CRIME)
    • CRIME was attacking the request, BREACH attacks the response
    • based on the fact that TLS leaks lengths
    • the https server compresses responses with gzip
    • inject content in the victim when they use http
      • the content injected is a script that queries the https server
    • attack is still not mitigated but now we use block cipher so it's OK
  • extending the BREACH attack:
    • attack noisy endpoints
    • attack block ciphers
    • optimized
    • no papers?
  • aes-128 is vulnerable
  • mitigation proposed:
    • google is introducing some randomness in their responses (not really working)
    • facebook is trying to generate a mask XORed to the CSRF token (but CSRF tokens are not the only secrets)
  • they will demo that at blackhat asia 2016 in Singapore
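The length leak that BREACH/CRIME exploit is easy to reproduce (a toy sketch; the page and secret are made up, with zlib standing in for the server's gzip):

```python
# A guess that repeats the secret compresses better than a wrong guess.
import zlib

page = b"GET /account HTTP/1.1\r\nCookie: csrf_token=8djfk29ax\r\n\r\n"

def response_length(injected):
    # attacker-controlled content reflected next to the secret
    return len(zlib.compress(page + injected))

right = response_length(b"csrf_token=8djfk29ax")  # repeats the secret
wrong = response_length(b"csrf_token=qmwnebrvc")  # same length, no repeat
assert right < wrong   # the correct guess yields a shorter response
```

Byte by byte, this comparison lets an attacker reconstruct the token from response lengths alone.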

4:40pm - Lucky Microseconds: A Timing Attack on Amazon's s2n Implementation of TLS

tl;dw: read the paper, attack is impractical

a debriefing of the convention can be found here


Real World Crypto: Day 2 posted January 2016

This is the 2nd post of a series of blogpost on RWC2016. Find the notes from day 1 here.

disclaimer: I realize that I am writing notes about talks from people who are currently surrounding me. I don't want to alienate anyone but I also want to write what I thought about the talks, so please don't feel offended and feel free to buy me a beer if you don't like what I'm writing.

And here's another day of RWC! This one was a particularly long one, with a morning full of blockchain talks that I avoided and an afternoon of extremely good talks, followed by a suicidal TLS marathon.

stanford

09:30 - TLS 1.3: Real-World Design Constraints

tl;dw: hello tls 1.3

DJB recently said at the last CCC:

"With all the current crypto talks out there you get the idea that crypto has problems. crypto has massive usability problems, has performance problems, has pitfalls for implementers, has crazy complexity in implementation, stupid standards, millions of lines of unauditable code, and then all of these problems are combined into a grand unified clusterfuck called Transport Layer Security.

For such a complex protocol I was expecting the RWC speakers to make some effort. But that first talk was not clear (neither were the other TLS talks), slides were tiny, the speaker spoke too fast for my non-native ears, etc... Also, nothing you can't learn if you already read this blogpost.

10:00 - Hawk: Privacy-Preserving Blockchain and Smart Contracts

tl;dw: how to build smart contracts using the blockchain

  • first slide is a picture of the market cap of bitcoin...
  • lots of companies are doing this block chain stuff:

blockchain

  • DAPS. No idea what this is, but he's talking about it.

Dapps are based on a token-economy utilizing a block chain to incentivize development and adoption.

  • bitcoin privacy guarantees are abysmal because of the consensus on the block chain.
  • contracts done through bitcoin are completely public
    • their solution: Hawk (between zerocash and ethereum)
    • uses zero knowledge proofs to prove that functions are computed correctly
    • blablabla, lots of cool tech, cool crypto keywords, etc.

if you're really interested, they have a tech report here (pdf)

As for me, this tweet sums up my interest in the subject.

blockchain

So instead of playing games on my mac (see below (who plays games on a mac anyway?)), I took off to visit the Stanford campus and sat in one of their beautiful libraries.

guy

12:00 - Lightning talks.

lightning

I'm back after successfully avoiding the blockchain morning. Lightning talks are mini talks of 1 to 3 minutes where slides are forbidden. Most were just people hiring or saying random stuff. Not much to see here, but it seems like a good way to get into the talking thing.

In the middle of them was Tancrede Lepoint asking for comments on his recent Million Dollar Curve paper. Some people quickly commented without really understanding what it was.

tanja

(Sorry Tanja :D). Overall, the idea of the paper is how to generate a safe curve that the public can trust. They use the Blum Blum Shub PRNG to generate the parameters of the curve, iterating the process until it passes a list of checks (taken from SafeCurves), and seeding with several drawings from lotteries around the world in a particular timeframe (I think they use a commitment for the timeframe), so that people can see that these numbers were not chosen in a certain way (and would thus be NUMS: nothing-up-my-sleeve numbers).

14:00 - An Update on the Backdoor in Juniper's ScreenOS

tl;dw: Juniper

Slides are here. The talk was entertaining and really well communicated. But there was nothing majorly new that you can't already read in my blogpost here.

  • it happened around Christmas; lots of security people have nothing to do around this period of the year, and so the Juniper code was reversed really quickly (haha).
  • the password that looks like a format string was an idea taken straight from a 2009 Phrack issue (0x42)

Developing a Trojaned Firmware for Juniper ScreenOS Platforms

  • unfiltered Dual EC outputs (the 30 bytes of output and 2 other bytes of a following Dual EC output) leak through an IKE nonce
    • but isn't the key exchange done before generating the nonce? They're still working on verifying this on real hardware (they will publish a paper later)
    • in earlier versions of ScreenOS the nonces used to be 20 bytes, and the RNG would output only 20 bytes

timeline

  • When they introduced Dual EC in their code (Juniper), they also changed the nonce length from 20 bytes to 32 bytes (which is perfect for easy use of the Dual EC backdoor). Juniper did that! Not the hackers.
  • they are aware, through their disclosure, that it is "exploitable"
  • the new patch (17 dec 2015) removed the SSH backdoor and restored the Dual EC point.

A really good question from Tom Ritter: "how many bytes do you need to do the attack?" Answer: the truncated output of Dual EC is 30 bytes (instead of 32), so you need to bruteforce the remaining 2 bytes. To narrow the search space, 2 bytes from the next output are practical and enough. So ideally, 30 bytes plus 2 bytes from a following output allow for easy use of the Dual EC backdoor.

(which is something I forgot to mention in my own explanation of Dual EC)

14:20 - Pass: Strengthening and Democratizing Enterprise Password Hardening

tl;dw: use an external PRF

  • Ashley Madison and other recent breaches taught us that hashing was not enough to protect passwords
  • smash and grab attacks

A smash and grab raid or smash and grab attack (or simply a smash and grab) is a particular form of burglary. The distinctive characteristics of a smash and grab are the elements of speed and surprise. A smash and grab involves smashing a barrier, usually a display window in a shop or a showcase, grabbing valuables, and then making a quick getaway, without concern for setting off alarms or creating noise.

  • The Ashley Madison breach is interesting because they used bcrypt and salting with high cost parameter, which is better than industry norms to protect passwords.
  • he cracked 4000 passwords from the leaks anyway

cracked

  • millions of passwords were cracked a few weeks later
  • He has done some research and has come up with a response: PASS, password hardening and typo correctors
  • facebook password onion from last year's RWC looks like an "archeological record"

facebook

  • the hmac with the private key transforms an offline attack into an online attack, because the attacker now needs to query the PRF service repeatedly.
  • "the facebook approach" is to use a queriable "PRF service" for the hmac, it makes it easier to detect attacks.
  • but several drawbacks:
      • 1) online attackers can instead record the hashes (mostly because of this legacy code)
      • 2) the PRF is not called with per-user granularity (it's the same for all users) -> hard to implement fine-grained rate limiting (throttling attempts; you are only able to detect global attacks)
      • 3) no support for periodic key rotation -> if they detect an attack, they now need to add new lines to their key hashing onion
  • PASS uses a PRF Service, same as facebook but also:
    • 1) blinding (PRF can't see the password)
    • 2) graceful key rotation
    • 3) per-user monitoring

po-prf

  • the blinding is a hash raised to a power; unblinding is done by taking the square root of that power (but maybe he simplified an inverse modulo something?)
  • a tweak t is sent as well, basically the user id; it doesn't have to be blinded, and so they invented a new concept of "partially oblivious PRF" (PO-PRF)
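Here's my own toy sketch of that blind/unblind dance (tiny made-up parameters; the real scheme uses pairings to also bind the unblinded tweak, which this sketch ignores):

```python
# Blinded PRF evaluation: the service computes on H(password)^r without
# ever seeing H(password). Toy group: the order-509 subgroup mod 1019.
import hashlib, secrets

p = 1019                  # toy safe prime: p = 2*509 + 1
q = 509                   # order of the quadratic-residue subgroup

def hash_to_group(pw):
    n = int.from_bytes(hashlib.sha256(pw).digest(), "big")
    return pow(n % p, 2, p)       # squaring lands in the order-q subgroup

k = secrets.randbelow(q - 1) + 1  # the PRF service's secret key

# client: blind the hashed password with a random exponent r
x = hash_to_group(b"correct horse battery staple")
r = secrets.randbelow(q - 1) + 1
blinded = pow(x, r, p)            # the service never sees x

# service: apply its key to the blinded element
answer = pow(blinded, k, p)

# client: strip r off by raising to r^-1 mod the group order
unblinded = pow(answer, pow(r, -1, q), p)
assert unblinded == pow(x, k, p)  # same result as querying x directly
```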

existing crypto primitives insufficient

  • the tweak and the blinded password are sent to the PRF, which uses a bilinear pairing construction to do the PO-PRF thingy (this is a new use case for bilinear pairings, apparently).

  • it's easy to implement, completely transparent to users, highly scalable.

  • typo correctors: the idea of a password corrector for common typos (ex: a capitalized first letter)
    • facebook does this, vanguard does this...
  • intuition tells you it's bad: an attacker tries a password, and you help them find it if it's almost correct.
  • they instrumented dropbox for a period of 24 hours (for all users) to evaluate this idea
  • they took problems[:3] = [accidental caps lock key, not hitting the shift key to capitalize the first letter, extra unwanted character]
    • they corrected 9% of failed password submissions
    • minimal security impact, according to their research "virtually no security loss"
  • paper seems interesting
  • there is some open source code somewhere
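The three correctors above are simple enough to sketch (the function names are mine, not from the paper):

```python
def candidate_corrections(submitted: str):
    """Yield corrections for the three typo classes from the talk."""
    yield submitted.swapcase()                         # caps lock was on
    if submitted:
        yield submitted[0].swapcase() + submitted[1:]  # first-letter capitalization flipped
        yield submitted[:-1]                           # extra unwanted trailing character

def check_with_corrections(submitted: str, verify) -> bool:
    """verify() is the normal check (e.g. a bcrypt comparison against the stored hash)."""
    if verify(submitted):
        return True
    return any(verify(c) for c in candidate_corrections(submitted))
```

For example, `check_with_corrections("PASSWORD123", lambda s: s == "password123")` returns True via the caps-lock correction. Note the corrections are applied server-side against the stored hash, so only a constant number of extra guesses is ever "given away".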

Question from Dmitry Khovratovich: Makwa does something like this, exactly like this (ouch!). Answer: "I'm not familiar with that"

14:50 - Argon2 and Egalitarian Computing

tl;dw: Argon2, a password hashing function designed to be inefficient on ASICs

  • passwords are not long (a PIN, something a human has to remember) -> brute-force attacks are possible
  • password cracking is easier with GPU or FPGAs or even ASICs
  • ASICs? -> ex: bitcoin, they switched to ASICs (2^32 hashes/joule on ASIC, 2^17 hashes/joule on GPU)
  • Argon2 created for the password hashing competition
  • memory-intensive computation: make a password hashing function that needs a lot of memory to evaluate -> the ASIC advantage vanishes (the intuition, I believe: memory costs about the same on an ASIC as on commodity hardware, so the specialized-hardware speedup disappears).
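Argon2 itself isn't in Python's standard library, but `hashlib.scrypt` (another memory-hard function) shows the same knob: the cost parameters below force roughly 128*n*r bytes of memory per evaluation, which is what kills the ASIC advantage.

```python
import hashlib
import os

salt = os.urandom(16)
# n=2**14, r=8 -> about 16 MiB of memory needed per hash evaluation
key = hashlib.scrypt(b"correct horse battery staple", salt=salt,
                     n=2**14, r=8, p=1, dklen=32)
```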

password competition

  • winner: Argon2
  • they wanted the function to be as simple as possible (to simplify analysis)
  • you need the previous block to do the next computation (hard to parallelize) and a reference block (forces you to keep memory around)
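That previous-block + reference-block structure can be sketched like this (a toy reduction of the idea, using blake2b in place of Argon2's real compression function and indexing scheme):

```python
import hashlib

def fill_memory(seed: bytes, num_blocks: int) -> bytes:
    """Each new block mixes the previous block (so the chain is sequential)
    with a pseudo-randomly chosen earlier block (so all blocks must be kept
    in memory)."""
    memory = [hashlib.blake2b(seed).digest()]
    for i in range(1, num_blocks):
        prev = memory[i - 1]
        ref = memory[int.from_bytes(prev[:8], "big") % i]  # the reference block
        memory.append(hashlib.blake2b(prev + ref).digest())
    return memory[-1]   # the tag is derived from the last block
```

Because the reference index depends on the previous block's contents, an attacker can't predict which blocks to discard, so they pay the full memory cost.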

design of Argon2

  • add some parallelism... there was another slide but I have no image and no comment for it :(
  • this concept of slowing down attackers has other applications -> egalitarian computing
  • for ex: in bitcoin they wanted every user to be able to mine on his laptop, but now there are pools taking up more than 50% (danger: 51% attack)
  • can use it for client puzzles for denial of service protection.
  • egalitarian computing -> ensures that attacker and defender are on an equal footing (no advantage from using special hardware)

samuel colt

15:10 - Cryptographic pitfalls

tl;dw: 5 stories about cute and subtle crypto fails

  • the speaker is explicit about his non-involvement with Juniper (haha)
  • he's narrating the tales of previously disclosed vulns, 5 case studies, mostly caused by a "following best practices" attitude (not that it's bad, but it's usually not enough).

  • 1)
    • concept of zeroisation
    • HSM manufacturer had a sandbox for user code, always zeroed memory when it was freed
    • problem is, sometimes memory doesn't get freed, like when you pull the power out.
    • (reminds me of the cold boot attack of the other day).

zeroisation

  • 2)

    • concept of "reusing components rather than designing new ones"
    • vpn uses dsa/dh for ipsec
    • over prime fields
    • pkcs#3 defines something
    • PKIX says something else, a subtle difference

pkcs3

using dsa keys

  • 3)
    • concept of "using external events to seed your randomness pool", gotta get your entropy from somewhere!
    • entropy was really bad from the start because they would boot the device right after production line, nothing to build entropy from (the same thing happened in the carjacking talk at blackhat us2015)
    • so the key was almost always the same because of that; juniper fixed it after a customer report (the customer had changed his device but didn't get an error saying that the key had changed)

customer report

  • 4)
    • concept of "randomness in RSA factors"
    • the government of some country used smartcards
    • then they wanted to use cheaper smartcards, but re-used the same code
    • the new RNG was bad

rng fail

  • 5)
    • everything is blanked out (he can't really talk about it)
    • they used CRC for integrity (instead of a MAC/signature)

lessons

from the audience, someone from Netscape speaks up: "yup, we forgot that things have to be random as well" (cf the predictable Netscape seed)

16:00 - TLS at the scale of Facebook

tl;dw: how they deployed https, wasn't easy

Timeline of the https deployment:

  • In 2010: facebook uses https almost only for login and payments
  • during a hackathon they tried to change every http url to https. It turns out it's not that simple.
  • not too long after, firesheep happened, then Tunisia's only ISP started injecting scripts into non-https traffic. They had to do something.
  • In 2011, they tried mixing secure and insecure. Then tried to make ALL apps support https (ouch!)
  • In 2012, they wanted https-only (no more https opt-in)
  • In 2013, https is the default. At the end of the year they finally succeed: https-only
    • (And thinking that not so long ago it was normal to login without a secure connection... damn things have changed)

present

  • Edge Networks: use of CDNs like Akamai or cloudflare, or spreading your own servers around the world
  • Proxygen, open source c++ http framework
  • they have a client-side TLS implementation (are they talking about mobile?) built on top of proxygen. This way they can ship improvements to TLS before the platform does, blablabla, there was a nice slide on that.
  • they really want 0-RTT, but tls 1.3 isn't here yet, so they modified QUIC crypto to make it happen on top of TCP: it's called Zero.

zero

Server Name Indication (SNI) is an extension to the TLS computer networking protocol by which a client indicates which hostname it is attempting to connect to at the start of the handshaking process. This allows a server to present multiple certificates on the same IP address and TCP port number and hence allows multiple secure (HTTPS) websites (or any other service over TLS) to be served off the same IP address without requiring all those sites to use the same certificate.

  • stats:
    • lots of session resumption by ticket -> this is good
    • low number of handshakes -> that means they store a lot of session tickets!
    • very low resumption by session ID (why is this a good thing?)
    • they haven't turned off RC4 yet!
      • someone in the audience tells him about downgrade attacks, ouch!
  • the referrer field in the http header is empty when you go on another website from a https page! Is that important... no?
  • it's easy for a simple website to go https (let's encrypt, ...), but for a big company, phew, it's hard!
  • there are still new feature phones being sold that can't do tls (do they care? meh)

16:30 - No More Downgrades: Protecting TLS from Legacy Crypto

tl;dw: SLOTH

downgrade

  • brainstorming: "how do we fix that in tls 1.3?"
  • explanation of Logjam (see my blogpost here)
  • at the end of the protocol there is a finished message that includes a mac over the whole negotiation:
    • but this is already too late: the attacker can forge the mac as well at this point
    • this is because the downgrade protection mechanism (this mac at the end) itself depends on downgradeable parameters (the idea behind logjam)
  • in tls 1.3 they use a signature instead of the mac
    • but you sign a hash of the transcript! -> SLOTH (which was released yesterday)

Didn't understand much, but I know that all the answers are in this paper. So stay tuned for a blogpost on the subject, or just read the freaking paper!

sloth

  • sloth is a transcript collision attack
  • he talks about sigma protocol for some reason (proof of knowledge)

primer on collision

  • tls 1.3 includes a version downgrade resilience system:
    • the server chooses the version
    • the server has to choose the highest common version
    • ...
    • only solution they came up with: put all the versions supported in the server nonce. This nonce value (server.random to be exact) is in all tls versions and is signed before the key exchange happens.
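A toy sketch of that idea (in the real TLS 1.3 mechanism the server embeds a fixed sentinel in the last bytes of server.random when it negotiates an older version; the encoding below is made up for illustration):

```python
import os

def make_server_random(supported_versions) -> bytes:
    """Fold the server's supported-version list into the tail of the 32-byte
    server.random. Since server.random is signed before the key exchange,
    a downgrading attacker can't rewrite it."""
    sentinel = b"".join(v.to_bytes(2, "big") for v in sorted(supported_versions))
    return os.urandom(32 - len(sentinel)) + sentinel

def client_check(server_random: bytes, expected_versions) -> bool:
    """The client recomputes the sentinel and aborts the handshake on mismatch."""
    sentinel = b"".join(v.to_bytes(2, "big") for v in sorted(expected_versions))
    return server_random.endswith(sentinel)
```

If a man-in-the-middle strips TLS 1.3 from the ClientHello, the server's nonce still advertises 1.3, and the signature over it can't be forged.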

16:50 - The OPTLS Protocol and TLS 1.3

tl;dw: how OPTLS works

  • paper is here
  • tls 1.3 improved RTT and PFS
  • agreement + confidentiality are the fundamental requirements for a key exchange protocol
  • OPTLS is a key exchange that they want tls 1.3 to use

The OPTLS design provides the basis for the handshake modes specified in the current TLS 1.3 draft including 0-RTT, 1-RTT variants, and PSK modes

I have to admit I was way too tired at that point to follow anything. Everything looked like David Chaum's presentation. So we'll skip the last talk in this blogpost.

day 3 notes are here

comment on this story