Paul Rösler and Christian Mainka and Jörg Schwenk released More is Less: On the End-to-End Security of Group Chats in Signal, WhatsApp, and Threema in July 2017.

Today Paul Rösler came to **Real World Crypto** to talk about the results, which is a good thing.
Interestingly, in the middle of the talk, Wired released a worrying article titled WhatsApp Security Flaws Could Allow Snoops to Slide Into Group Chats.

Interestingly as well, at some point during the day Matthew Green also wrote about it in Attack of the Week: Group Messaging in WhatsApp and Signal.

They make it seem really worrisome, but should we really be scared about the findings?

**Traceable delivery** is the first thing that came up in the presentation. What is it? It’s the check marks that appear when your recipient receives a message you sent. It's mostly a UI feature, but the fact that no security is tied to it allows a server to fake them while dropping messages, making you wrongly believe that your recipient has received the message. This was never a security feature to begin with, and nobody ever claimed it was one.

**Closeness** is the fact that the WhatsApp servers can add a new participant to your private group chat without your consent (even if you’re the admin). This could lead people to share messages with the group, including with a rogue participant. The caveats are that:

- every member of the group is notified when a new participant is added (although they may not pay attention to it);
- the newly added participant cannot read messages sent before they joined.

Again, I do not see this as a security vulnerability. Maybe because I’ve understood how group chats can work (or miswork) from growing up with shady websites and applications. But I see this more as a UI/UX problem.

The paper is not bad though, and I think they’re right to point out these issues. Actually, they do something very interesting in it: they open with a nice **security model** that they use to analyse several messaging applications:

> Intuitively, a secure group communication protocol should provide a level of security comparable to when a group of people communicates in an isolated room: everyone in the room hears the communication (**traceable delivery**), everyone knows who spoke (**authenticity**) and how often words have been said (**no duplication**), nobody outside the room can either speak into the room (**no creation**) or hear the communication inside (**confidentiality**), and the door to the room is only opened for invited persons (**closeness**).

Following this security model, you could rightfully think that we haven’t reached the best state in secure messaging. But the fuss about it could also wrongfully make you think that these are worrisome attacks that need to be dealt with.

The facts are here though: this paper has been blown out of proportion. Moxie (one of the creators of Signal) reacted on Hacker News:

> To me, this article reads as a better example of the problems with the security industry and the way security research is done today, because I think the lesson to anyone watching is clear: don't build security into your products, because that makes you a target for researchers, even if you make the right decisions, and regardless of whether their research is practically important or not.

I'd say the problem is in the reaction, not in the published analysis. But it's a sad reaction indeed.

Good night.

Early in 2016, I published a whitepaper (here on eprint)
on how to backdoor the Diffie-Hellman key agreement algorithm. Inside the whitepaper,
I discussed three different ways to construct such a backdoor; two of these were considered nobody-but-us (NOBUS) backdoors.

A NOBUS backdoor is a backdoor accessible only to those who have knowledge of some secret (a number, a passphrase, ...). This makes a NOBUS backdoor irreversible for anyone without knowledge of the secret.

In October 2016, Dorey et al. from Western University (Canada) published a white paper called Indiscreet Logs: Persistent Diffie-Hellman Backdoors in TLS. The research pointed out that one of my NOBUS constructions was **reversible**, while the other was **more dangerous** than expected.

I wrote this blogpost summarizing their discoveries a long time ago, but never took the time to publish it here. In the rest of this post, I'll assume you have an understanding of the two NOBUS backdoors introduced in my paper.
You can find a summary of the ideas here as well.

## Reversing the first NOBUS construction

For those who attended my talk at Defcon, Toorcon, or a meetup: rest assured that I did not talk about the first (now known to be reversible) NOBUS construction. It was left out of the story because it was not such a nice backdoor in the first place. Its security margins were weaker (at the time) compared to the second construction, and it was also harder to implement.

### Baby-Step Giant-Step

The attack Dorey et al. wrote about comes from a 2005 white paper, where
Coron et al. published an attack on a
construction based on Diffie-Hellman. But before I can tell you about
the attack, I need to refresh your memory on how the **baby-step giant-step** (BSGS) algorithm works.

Imagine that a generator \(g\) generates a group \(G\) in
\(\mathbb{Z}_p\), and that we want to find the order of that group
\(|G| = p_1\).

Now, if we have a good idea of the size of that order \(p_1\), what we can do is split its length right down the middle: \(p_1 = a + b \cdot 2^{\lceil \frac{l}{2} \rceil}\), where \( l \) is the bitlength of \(p_1\).

This allows us to write two different lists:

\[ \begin{cases}
L = \{\, g^i \mod{p} \mid 0 < i < 2^{\lceil \frac{l}{2} \rceil} \,\} \\
L' = \{\, g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil} } \mod{p} \mid 0 \leq j < 2^{\lceil \frac{l}{2} \rceil} \,\}
\end{cases}
\]

Now imagine that you compute these two lists, and that you then stumble
upon a collision between elements from these two sets. This would entail
that for some \(i\) and \(j\) you have:

\[ \begin{align} &g^i = g^{-j \cdot 2^{\lceil \frac{l}{2}
\rceil}} \pmod{p}\\ \Leftrightarrow &g^{i + j \cdot 2^{\lceil
\frac{l}{2} \rceil}} = 1 \pmod{p}\\ \Rightarrow &i + j \cdot
2^{\lceil \frac{l}{2} \rceil} = a + b \cdot 2^{\lceil
\frac{l}{2} \rceil} = p_1 \end{align} \]

We found \(p_1\) in time quasi-linear (via sorting, searching trees,
etc...) in \(\sqrt{p_1}\)!
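Putting the pieces together, here is a toy Python sketch of the procedure. The parameters are hypothetical and tiny (a real subgroup order would be hundreds of bits long); the point is only to show the collision mechanics:

```python
# Toy baby-step giant-step sketch: recover the order p1 of g mod p,
# assuming p1 fits in l bits. The parameters below are tiny and
# hypothetical, chosen just so the collision is easy to follow.

def bsgs_order(g, p, l):
    """Find p1 = i + j * 2^ceil(l/2) such that g^p1 = 1 mod p."""
    m = 2 ** ((l + 1) // 2)                    # 2^ceil(l/2)
    # baby steps: g^i mod p (i = 0 included so exact multiples of m are found)
    baby = {pow(g, i, p): i for i in range(m)}
    g_inv_m = pow(g, -m, p)                    # g^(-m) mod p (Python 3.8+)
    giant = 1                                  # holds g^(-j*m) mod p
    for j in range(m):
        if giant in baby:
            order = baby[giant] + j * m        # collision: g^i = g^(-j*m) mod p
            if order > 0:                      # skip the trivial i = j = 0 hit
                return order
        giant = (giant * g_inv_m) % p
    return None

# 4 generates the subgroup of order 5003 inside the multiplicative group mod 10007
print(bsgs_order(4, 10007, 13))  # → 5003
```

The search stays quasi-linear in the list size because each giant step is a single (amortized constant-time) dictionary lookup.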

### The Construction

Now let's review our first NOBUS construction, detailed in section 4 of my paper.

Here \(p - 1 = 2 p_1 p_2 \), where \( p_1 \) is the order of the small-enough subgroup generated by \(g\) in \(\mathbb{Z}_p\), and \(p_2\) is the order of a big-enough subgroup that makes factoring our modulus near-impossible. The prime \(q\) is generated in the same way.

### Using BSGS on our construction

At this point, we could try to reverse the construction using BSGS by
creating these two lists and hoping for a collision:

\[ \begin{cases}
L = \{\, g^i \mod{p} \mid 0 < i < 2^{\lceil \frac{l}{2} \rceil} \,\} \\
L' = \{\, g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil} } \mod{p} \mid 0 \leq j < 2^{\lceil \frac{l}{2} \rceil} \,\}
\end{cases}
\]

Unfortunately, remember that \(p\) is hidden inside of \( n = p q
\). We have no knowledge of that factor. Instead, we could calculate
these two lists:

\[ \begin{cases}
L = \{\, g^i \mod{n} \mid 0 < i < 2^{\lceil \frac{l}{2} \rceil} \,\} \\
L' = \{\, g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil} } \mod{n} \mid 0 \leq j < 2^{\lceil \frac{l}{2} \rceil} \,\}
\end{cases}
\]

And this time, we can test for a collision between two elements of these
lists "mod \(p\)" via the \(gcd\) function:

\[ gcd(n, g^i - g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil}})
\]

Hopefully this will yield \(p\), one of the factors of \(n\). To see why
this works: if \(g^i\) and \(g^{-j
\cdot 2^{\lceil \frac{l}{2} \rceil}}\) collide "mod \(p\)", then
we have:

\[ p | g^i - g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil}} \]

Since we also know that \( p | n \), the \(gcd\) of
the two returns our hidden \(p\)!

Unfortunately at this point, the persnickety reader will have noticed
that this cannot be done in the same complexity as the original BSGS
attack. Indeed, we need to compute the \(gcd\) for all pairs and this
increases our complexity to \(\mathcal{O}(p_1)\), the same
complexity as the attack I pointed out in my paper.
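Here is a toy Python sketch of this pairwise-\(gcd\) variant. The numbers are hypothetical and tiny: \(p\) and \(q\) are safe primes chosen so that \(g = 4\) has a small, known order modulo each, whereas a real backdoored modulus would hide a huge subgroup on the \(q\) side:

```python
from math import gcd

# Toy sketch of the pairwise-gcd attack. n = p*q hides p = 10007, and
# g = 4 has order 5003 modulo p. All parameters are hypothetical and
# far too small to be a real backdoored modulus.

def naive_gcd_attack(n, g, l):
    m = 2 ** ((l + 1) // 2)                     # 2^ceil(l/2)
    baby = [pow(g, i, n) for i in range(1, m)]  # g^i mod n
    g_inv_m = pow(g, -m, n)                     # g^(-m) mod n (Python 3.8+)
    giant = 1                                   # holds g^(-j*m) mod n
    for j in range(m):
        for gi in baby:                         # O(p1) gcd calls in total
            d = gcd(n, (gi - giant) % n)
            if 1 < d < n:
                return d                        # a nontrivial factor of n
        giant = (giant * g_inv_m) % n
    return None

p, q = 10007, 10079                             # toy safe primes: 2*5003+1, 2*5039+1
print(naive_gcd_attack(p * q, 4, 13))           # → 10007, the hidden p
```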

### The Attack

Now here is the trick Coron et al. found. They could optimize the
calls to \(gcd\) down to \(\mathcal{O}(\sqrt{p_1})\), which makes
reversing the backdoor as easy as using it. The trick is as
follows:

- Create the polynomial

\[ f(x) = (x - g) (x - g^2) \cdots (x - g^{2^{\lceil \frac{l}{2}
\rceil}}) \mod{n} \]

- For \(0 \leq j < 2^{\lceil \frac{l}{2} \rceil}\) compute
the following \(gcd\) until a factor of \(n\) is found (as
before)

\[ gcd(n, f(g^{-j \cdot 2^{\lceil \frac{l}{2} \rceil}})) \]

It's pretty easy to see that the \(gcd\) will still yield a factor, as
before. Except that this time we only need to call it at most
\(2^{\lceil \frac{l}{2} \rceil}\) times, which is \(\approx
\sqrt{p_1}\) times by definition.
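The trick can be sketched in Python as follows, reusing toy safe primes so that \(g = 4\) has order 5003 modulo the hidden factor. A real implementation would build and evaluate the polynomial with fast product- and remainder-tree arithmetic; the naive loops below are only for illustration:

```python
from math import gcd

# Toy sketch of the polynomial trick. n = p*q hides p = 10007, and
# g = 4 has order 5003 modulo p. Parameters are hypothetical and tiny;
# fast polynomial arithmetic is replaced by naive O(m^2) loops.

def poly_mul(a, b, n):
    """Multiply two polynomials (coefficient lists, low degree first) mod n."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % n
    return res

def coron_attack(n, g, l):
    m = 2 ** ((l + 1) // 2)                       # 2^ceil(l/2)
    f = [1]                                       # f(x) = (x - g)(x - g^2)...(x - g^m)
    for i in range(1, m + 1):
        f = poly_mul(f, [(-pow(g, i, n)) % n, 1], n)
    g_inv_m = pow(g, -m, n)                       # g^(-m) mod n (Python 3.8+)
    point = 1                                     # holds g^(-j*m) mod n
    for j in range(m):                            # only ~sqrt(p1) gcd calls now
        acc = 0
        for c in reversed(f):                     # evaluate f(point) via Horner
            acc = (acc * point + c) % n
        d = gcd(n, acc)
        if 1 < d < n:
            return d                              # a nontrivial factor of n
        point = (point * g_inv_m) % n
    return None

p, q = 10007, 10079                               # toy safe primes: 2*5003+1, 2*5039+1
print(coron_attack(p * q, 4, 13))                 # → 10007, the hidden p
```

Note that whenever \(f(\text{point})\) vanishes modulo both primes at once, the \(gcd\) returns \(n\) and the loop simply keeps going until a collision happens modulo only one of them.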

## Improving the second NOBUS construction

The second NOBUS backdoor construction received a different treatment.
If you do not know how this backdoor works I urge you to first watch my talk on the subject.

Let's ask ourselves the question: what happens if the client and the
server do not negotiate an ephemeral Diffie-Hellman key exchange, and
instead use RSA or Elliptic Curve Diffie-Hellman to perform the key
exchange?

This could be because the client did not list a `DHE` (ephemeral
Diffie-Hellman) cipher suite in priority, or because the server decided
to pick a different kind of key agreement algorithm.

If this is the case, we would observe an exchange that we could not spy
on or tamper with via our DHE backdoor.

Dorey et al. discovered that an **active** man-in-the-middle could
change that by tampering with the original client's `ClientHello`
message to single out a `DHE` cipher suite (removing the rest of the
non-`DHE` cipher suites), **forcing the key exchange to happen by way
of the Diffie-Hellman algorithm**.

This works because there are no countermeasures in TLS 1.2 (or prior) to
prevent this from happening.

## Final notes

My original white paper has been updated to reflect Dorey et al.'s
developments while minimally changing its structure (to retain
chronology of the discoveries). You can obtain it here.

Furthermore, let me mention that the new version of TLS —**TLS 1.3**—
will fix all of these issues in two ways:

- A server now signs the entire observed transcript at some point
during the handshake. This successfully prevents any tampering with
the `ClientHello` message, as the client can verify the signature and
make sure that no active man-in-the-middle has tampered with the
handshake.
- Diffie-Hellman groups are now specified, exactly like how curves
have always been specified for the Elliptic Curve variant of
Diffie-Hellman. This means that unless you are in control of both
the client's and the server's implementations, you cannot force one or
the other to use a backdoored group (unless you can backdoor one of
the specified groups, which is what happened with RFC
5114).

I've talked about the SHA-3 standard FIPS 202 quite a lot, but haven't talked too much about the second function the standard introduces: **SHAKE**.

SHAKE is not a hash function, but an **Extendable Output Function** (or XOF). It behaves like a normal hash function except for the fact that it produces an “infinite” output: you could decide to generate an output of one million bytes or an output of one byte. (Obviously, don't do the one-byte output thing, because it's not secure.) The other particularity of SHAKE is that it uses saner parameters, which allow it to achieve its security targets of 128 bits (for **SHAKE128**) or 256 bits (for **SHAKE256**).
This makes it a faster alternative to SHA-3, as well as a more flexible and versatile function.
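Python's standard library, for instance, exposes both variants; you choose the number of output bytes when asking for the digest:

```python
import hashlib

# SHAKE from Python's standard library: the caller picks the number of
# output bytes at digest time.
xof = hashlib.shake_256(b"The quick brown fox")

out16 = xof.hexdigest(16)       # 16 bytes of output
out32 = xof.hexdigest(32)       # 32 bytes: the first 16 are identical
print(out32.startswith(out16))  # → True
```

A longer output is simply a longer read of the same output stream, which is why the shorter digest is a prefix of the longer one.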

## SP 800-185

SHAKE is intriguing enough that just a year after the standardization of SHA-3 (2016), another standard came out of NIST's factory: Special Publication 800-185. In it, a customizable version of SHAKE (named cSHAKE) is defined. The novelty: it takes an additional "customization string" as argument. This string can be anything from an empty string to the name of your protocol, but the slightest change produces entirely different outputs for the same inputs. This
customization string is mostly used for domain separation between the other functions defined in the new document: **KMAC**, **TupleHash** and **ParallelHash**. The rest of this blogpost explains what these new functions are for.

## KMAC

Imagine that you want to send a message to your good friend Bob. You do not care about encrypting your message, but to make sure that nobody modifies the message in transit, you hash it with SHA-256 (the variant of SHA-2 with an output length of 256 bits) and append the hash to the message you're sending.

`message || SHA-256(message)`

On the other side, Bob detaches the last 256 bits of the message (the hash) and computes SHA-256 himself on the message. If the result he obtains differs from the received hash, Bob will know that someone has modified the message.

**Does this work? Is this secure?**

Of course not, and I hope you knew that. A hash function is public; there are no secrets involved, so someone who can modify the message can also recompute the hash and replace the original one with the new one.

Alright, so you might think that doing the following might work then:

`message || SHA-256(key || message)`

Both you and Bob now share a symmetric `key`, which should prevent any man-in-the-middle attacker from recomputing the hash.

**Do you really think this is working?**

Nope, it doesn't. The reason, which is not as well known as it should be, is that SHA-256 (and most variants of SHA-2) is vulnerable to what is called a **length-extension attack**.

You see, unlike the sponge construction, which releases only a part of its state as the final output, SHA-256 is based on the Merkle–Damgård construction, which outputs the entirety of its state. If an attacker observes that hash and pretends that the absorption of the input hasn't finished, he can continue hashing and obtain the hash of `message || more` (pretty much; I'm omitting some details like padding). This would allow the attacker to append more stuff to the original message without being detected by Bob:

`message || more || SHA-256(key || message || more)`

Fortunately, all SHA-3 candidates (including the winner) were required to be resistant to this kind of attack. Thus, **KMAC** is a **Message Authentication Code** leveraging the resistance of SHA-3 to length-extension attacks. The construction `HASH(key || message)` is now possible, and the simplified idea of KMAC is to perform the following computation:

`cSHAKE(custom_string=“KMAC”, input=“key || message”)`

KMAC also uses a trick to allow pre-computation of the keyed state: it pads the key up to the block size of cSHAKE. For that reason, I would recommend not coming up with your own SHAKE-based MAC construction; just use KMAC if you need such a function.
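To make the simplified idea concrete, here is a toy Python sketch. This is *not* real KMAC (the standard library has no cSHAKE, and the NIST encodings are omitted); it only mimics the key-padding idea on top of plain SHAKE256:

```python
import hashlib
import hmac

RATE = 136  # block (rate) size of SHAKE256, in bytes

def toy_kmac(key: bytes, message: bytes, out_len: int = 32) -> bytes:
    # NOT real KMAC: no cSHAKE, no NIST encodings. Padding the key to a
    # full block means the keyed sponge state could be precomputed once
    # and reused across messages.
    padded_key = key.ljust(RATE, b"\x00")
    return hashlib.shake_256(padded_key + message).digest(out_len)

tag = toy_kmac(b"shared secret", b"hello Bob")
# Verify in constant time, as with any MAC
print(hmac.compare_digest(tag, toy_kmac(b"shared secret", b"hello Bob")))  # → True
```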

## TupleHash

**TupleHash** is a construction allowing you to hash a structure in an unambiguous way. Imagine, for example, deriving a fingerprint for an RSA public key by hashing the concatenation of its parts.

A malicious attacker could compute a second public key, reusing the bits
of the first one, that would produce the same fingerprint.

Ways to fix this issue are to include the type and length of each
element, or just the length, which is what TupleHash does. Simplified,
the idea is to compute:

```
cSHAKE(custom_string=“TupleHash”,
       input=“len_1 || data_1 || len_2 || data_2 || len_3 || data_3 || ...”
)
```

Where `len_i` is the length of `data_i`.
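A toy Python sketch shows both the ambiguity of plain concatenation and how length-prefixing (the simplified TupleHash idea, without the real NIST encodings) removes it:

```python
import hashlib

# Toy illustration, NOT the real TupleHash encoding: prefixing every
# element with its length removes the ambiguity of plain concatenation.

def naive_hash(parts):
    return hashlib.shake_256(b"".join(parts)).hexdigest(32)

def toy_tuple_hash(parts):
    encoded = b"".join(len(p).to_bytes(8, "big") + p for p in parts)
    return hashlib.shake_256(encoded).hexdigest(32)

# ("AB", "C") and ("A", "BC") concatenate to the same bytes...
print(naive_hash([b"AB", b"C"]) == naive_hash([b"A", b"BC"]))          # → True
# ...but hash differently once the lengths are included
print(toy_tuple_hash([b"AB", b"C"]) == toy_tuple_hash([b"A", b"BC"]))  # → False
```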

## ParallelHash

**ParallelHash** makes use of a tree hashing construction to allow
faster processing of big inputs and large files. The input is first
divided into several chunks of `B` bytes (where `B` is an argument of
your choice); each chunk is then separately hashed with
`cSHAKE(custom_string=“”, . )`, producing as many 256-bit outputs as
there are chunks. This step can be parallelized with SIMD instructions
or other techniques available on your architecture. Finally, the outputs
are concatenated and hashed one final time with
`cSHAKE(custom_string=“ParallelHash”, . )`. Again, details have
been omitted.
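The idea can be sketched in Python (again omitting the real NIST encodings; the chunk size and output length below are arbitrary choices):

```python
import hashlib

# Toy sketch of the ParallelHash idea, NOT the NIST construction: hash
# each B-byte chunk on its own, then hash the concatenation of the
# chunk digests. The per-chunk step is what can be farmed out to
# several cores or SIMD lanes.

def toy_parallel_hash(data: bytes, block_size: int = 8192, out_len: int = 32) -> bytes:
    chunks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    leaves = [hashlib.shake_256(c).digest(32) for c in chunks]  # parallelizable
    return hashlib.shake_256(b"".join(leaves)).digest(out_len)

digest = toy_parallel_hash(b"x" * 100_000)
```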