# Articles

• ### Arguments Against GCHQ's "Ghost"

January 26, 2019

Recently there’s been a bit of hubbub about “Ghost,” a proposal by Ian Levy and Crispin Robinson of GCHQ to solve the end-to-end encrypted messaging “Going Dark” problem.

Since it’s buried in a bunch of text in the original article, I’ll sum up the proposal itself: Levy and Robinson suggest mandating that service providers (e.g., Apple) silently add GCHQ’s key to any and all lawfully requested conversations (e.g., some suspect’s iMessage conversations) and suppress any notification that the modification has been made.
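To make the mechanics concrete, here’s a minimal sketch of what “silently adding a key” means in a fan-out style encrypted group chat. The names (`Member`, `encrypt_for`, `GHOST`) and the structure are mine for illustration only; they don’t reflect iMessage’s actual protocol or any real provider API.

```python
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    public_key: str  # placeholder; a real client would hold an actual key object

def encrypt_for(public_key: str, plaintext: str) -> str:
    """Stand-in for per-recipient encryption (e.g., to a device key)."""
    return f"enc[{public_key}]({plaintext})"

def send_group_message(members: list, plaintext: str) -> dict:
    # Normal operation: one ciphertext per legitimate member of the chat.
    return {m.name: encrypt_for(m.public_key, plaintext) for m in members}

# Under Ghost, the provider's server hands the client a member list that
# silently includes an extra, government-controlled key, and the client
# suppresses the "someone was added to this chat" notification.
GHOST = Member("ghost", "gchq-held-public-key")

def send_group_message_with_ghost(members: list, plaintext: str) -> dict:
    return send_group_message(members + [GHOST], plaintext)  # no UI notification
```

The only difference between the two functions is the extra recipient; everything the user sees is unchanged, which is exactly what the transparency argument below is meant to catch.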

I (shockingly) don’t like the scheme, and I’m not alone. A number of people have argued against it, and I encourage you to read all of their posts: Susan Landau, Josh Benaloh, and Bruce Schneier on Lawfare, and Matt Green on his own blog, to name a few.

However, the reason I’m writing this post is to play devil’s advocate against one of the anti-Ghost arguments that I found incredibly unconvincing. Bad arguments against a position often lead a reader to believe that the entire position is wrong. In this case the original position is correct even if this particular argument is bad, and I think it’s better to hear this from a supporter rather than a detractor.

## Good arguments against Ghost

Before I delve into the one I don’t like, I want to repeat: I don’t like Ghost. I won’t belabor the point, because I think you should read the (already great) posts I’ve shared above, but I think I can sum up the arguments against as:

1. The fact that Ghost works in the current infrastructure is a flaw that is being fixed by the rapid inclusion of transparency mechanisms, efforts Ghost would inherently destroy if the scheme were mandated. These transparency mechanisms already exist, are well studied, and are practical. Hell, one is being used to secure the method you’re using to view this web page. The research on how to add this functionality to messaging is already there; we’re just waiting on adoption. See Certificate Transparency for something that is widely deployed for HTTPS, and CONIKS, which (hopefully) will be deployed some day. (A toy illustration of the idea follows this list.)
2. The added complexity will result in bugs, sadness, and poorly maintained code, and will massively increase the prevalence of vulnerabilities.
3. Ghost harms the trust relationships users have with their service providers and software vendors, and further puts us into 1984-panopticon-wasteland territory where every technology we have is potentially spying on us in new and creative ways.
4. Dear god, GCHQ shouldn’t have this power anyway. The potential for abuse is huge, and even if GCHQ or some other friendly government by some miracle doesn’t abuse it, the probability that some horrible unfriendly oligarchy will is near 100%. (E.g., would you really trust the Trump administration to have access to this kind of tool right now?)
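To see why transparency cuts against Ghost, here is a toy sketch (my own, and far simpler than Certificate Transparency or CONIKS) of the basic check a client can run: remember the keys you’ve seen for a conversation, and treat any key that appears without a corresponding, visible membership change as an alarm.

```python
def audit_conversation_keys(pinned_keys: set,
                            server_advertised_keys: set,
                            announced_member_additions: set) -> None:
    # Keys the server now advertises that we have never seen, and that were not
    # explained by a visible "member joined" event, are treated as tampering.
    unexplained = (server_advertised_keys - pinned_keys) - announced_member_additions
    if unexplained:
        raise RuntimeError(f"unannounced keys added to conversation: {unexplained}")

# A ghost key injected without a visible membership event trips the check:
audit_conversation_keys(
    pinned_keys={"alice-key", "bob-key"},
    server_advertised_keys={"alice-key", "bob-key", "ghost-key"},
    announced_member_additions=set(),
)  # raises RuntimeError
```

Certificate Transparency and CONIKS generalize this idea by publishing key material in append-only, publicly auditable logs, so a provider can’t show different key sets to different people without getting caught.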

Of course these summaries are pithy, but I think you get the point. They’re good arguments, which can and should be discussed on merit.

## A bad argument against Ghost

In a post on Lawfare, the EFF made one of the worst arguments I’ve seen thus far. It’s bad enough that I thought it was worth discussing as a point of intellectual honesty, even if doing so cuts against my stance on this particular issue. To be clear, I love the EFF. I donate to them and have been a member for years, and so should you.

Their argument is that the suspect would be able to detect when Ghost was being used, that the protocol would be subverted, and therefore the entire idea is undesirable for law enforcement. They further posit that cryptographic side channels, reverse engineering, network traffic, and crash logs will all “give up the Ghost”.

The problem is that none of the above is correct.

To begin, the premise is wrong: discovering that these functions are in use isn’t inherently bad from law enforcement’s point of view. The problem law enforcement has is that unfettered encryption is really easy to use right now; if using it becomes hard again, that’s a win as far as they’re concerned.

Jailbreaking an iPhone is relatively hard, reverse engineering is hard, and keeping a stable jailbreak is near impossible. Even in cases where jailbreaking isn’t necessary, like on Android and easier-to-hack-on services, the default application and settings still matter.

Maybe a researcher takes the time to remove Ghost and releases an easy-to-use tool for the same service with that functionality stripped out. Even then, users would have to install that third-party app. The average person (and, luckily for our LE friends, most criminals) most likely won’t bother.

Second, all of the proposed detection solutions (reverse engineering, network analysis, and crash logs) confuse discovery of the protocol’s existence with discovery of being snooped on. I do not believe that discovery of the protocol matters, and I think everyone (including law enforcement) should agree that the details of the protocol should be public anyway. Levy and Robinson’s original article actually says as much in its section on transparency:

“…the details of any exceptional access solution may well become public and subject to expert scrutiny, which it should not fail. Given the unique and ubiquitous nature of these services and devices, we would not expect criminals to simply move if it becomes known that an exceptional access solution exists.”

Conversely, any attempt to discover whether one is being snooped upon can be made useless by making the “snooping” state indistinguishable from normal operation. In the case of Ghost, one can make adding a malicious key a part of normal operation in the following way:

1. Every time a group chat is initiated with $n$ users, a random $(n+1)$-th key is added. The server holds the corresponding private key, and the users have no idea whether that key is known to law enforcement or not.
2. When law enforcement wants access to a conversation, the service provider simply hands over the already-present $(n+1)$-th private key.

Encryption, decryption, crash logs, and the application’s binary will all look exactly the same, but the user has no idea whether they are being eavesdropped upon. (We could make it a bit more secure by having the service provider use a nonsense random value for the extra key, updated frequently at random intervals; that way, until law enforcement actually requests access, there is no real private key to hand over, and the server just rotates a real one in at the next scheduled key update.)
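Here is a minimal sketch of that variant, to make the indistinguishability point concrete. The names (`create_group`, `fulfill_warrant`) and the use of random tokens in place of real keypairs are mine; this is an illustration of the argument, not a design anyone should ship.

```python
import secrets

SERVER_HELD_PRIVATE_KEYS = {}  # chat_id -> the extra (n+1)-th private key

def create_group(chat_id: str, member_public_keys: list) -> list:
    # Every chat gets one extra key at creation time, whether or not it is ever
    # requested by law enforcement, so clients cannot distinguish surveilled
    # chats from ordinary ones.
    extra_private = secrets.token_hex(32)
    extra_public = "pub:" + extra_private  # placeholder for real key derivation
    SERVER_HELD_PRIVATE_KEYS[chat_id] = extra_private
    return member_public_keys + [extra_public]

def fulfill_warrant(chat_id: str) -> str:
    # On a lawful request the provider simply hands over the already-present
    # private key; nothing changes from the clients' point of view.
    return SERVER_HELD_PRIVATE_KEYS[chat_id]
```

The rotation tweak mentioned above would just mean storing a meaningless random value here until a request arrives, then swapping a real key in at the next scheduled update.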

Again, Ghost, even with my little added scheme, is still bad for security and shouldn’t be implemented. All evidence makes it doubtful that any scheme that resolves the problems posed in the previous section exists.

## Conclusions

Many arguments I’ve seen from both sides hinge upon making the perfect the enemy of the good. Many proponents of exceptional access seem to believe that the “going bright” argument (that is, that the availability of metadata and other unencrypted communications can make up for the parts they are unable to decrypt; a line of thought I consider worthwhile) falls flat because it doesn’t provide 100% of access 100% of the time.

Other non-Ghost-related parts of Levy and Robinson’s article are laudable in that they actually make strides to avoid fatalistic reasoning. In particular, they make the concession that this sort of perfect exceptional access regime is impossible, and that law enforcement should just accept it.

If we are to find worthwhile solutions (including lawful hacking, going bright, and forcing Law Enforcement to measurably prove that “going dark” is a problem in the first place), we need to embrace and understand these trade-offs.

Mike Specter, PhD candidate in computer science at MIT, member of the Internet Policy Research Initiative, and currently a student research fellow at Google

• ### On Deniability and Duress

January 24, 2017

Imagine you’re at a border crossing, and the guard asks you to hand over all of your electronics for screening. The guard then asks that you unlock your device and provide passwords and decryption keys. Right now, he’s asking nicely, but he happens to be carrying an unpleasant-looking rubber hose (yes, cryptographers actually do call this “rubber hose cryptanalysis”), and he appears to be willing to use it. Now imagine you’re a journalist covering war crimes in the country you’re trying to leave. So, what can you do?

This isn’t a hypothetical situation. The Freedom of the Press Foundation published an open letter to camera manufacturers requesting that they provide “encryption” by default. The thing is, what they want isn’t just encryption, it’s deniability, which is a subtly different thing.

Deniable schemes (I consider deniability in the tradition of Canetti et al.; it’s important to note that deniability refers to the ability to deny some plaintext, not the ability to deny that you’re using a deniable algorithm) let you lie about whether you’ve provided full access to some or all of the encrypted text. This is important because, currently, you can’t give the guard in the above example a fake password. He’ll try it, get locked out, and then proceed with the flogging.
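As a toy illustration of the difference, here is a container that holds both a decoy payload and a real one, so that handing over the decoy password produces a plausible decryption instead of a lockout. This sketch is my own and is not a real deniable encryption design; actual schemes (hidden volumes, Canetti-style deniable encryption) are considerably more subtle, and the two-slot layout here is itself a tell that you’re using such a container. It requires the third-party `cryptography` package.

```python
import base64, os, secrets
from cryptography.fernet import Fernet, InvalidToken
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _key(password: bytes, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(password))

def seal(real_pw: bytes, real_data: bytes, decoy_pw: bytes, decoy_data: bytes):
    salt = os.urandom(16)
    width = max(len(real_data), len(decoy_data))  # pad so both slots are the same size
    slots = [Fernet(_key(real_pw, salt)).encrypt(real_data.ljust(width)),
             Fernet(_key(decoy_pw, salt)).encrypt(decoy_data.ljust(width))]
    secrets.SystemRandom().shuffle(slots)
    return salt, slots

def open_container(password: bytes, salt: bytes, slots: list) -> bytes:
    f = Fernet(_key(password, salt))
    for slot in slots:
        try:
            return f.decrypt(slot)  # return whichever payload this password opens
        except InvalidToken:
            continue
    raise ValueError("wrong password")

salt, slots = seal(b"real-pw", b"war crimes notes", b"decoy-pw", b"vacation photos")
print(open_container(b"decoy-pw", salt, slots))  # the decoy payload, no lockout
```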

I’m convinced that there’s a sociotechnical blind spot in how current technology handles access to personal devices. We, in the infosec community, need to start focusing more on allowing users the flexibility to handle situations of duress rather than just access control. Deniability and duress codes can go a long way in helping us get there.

Recent events in law have highlighted the need for deniability and duress codes in particular.

A recent precedent-setting court case in Minnesota (full court opinion: Minnesota v. Diamond) decided that fingerprints used for access control can be taken from a suspect without violating his Fifth Amendment rights. The logic of the decision, which I’m actually inclined to agree with, is that fingerprints are tantamount to similar evidence taken from suspects in the course of an investigation, such as blood samples, handwriting samples, and voice recordings, all of which have been deemed by the Supreme Court not to be protected under the Fifth Amendment.

Orin Kerr has a great in-depth analysis of this decision here, but the gist is that the courts have decided that fingerprints don’t count as “testimonial,” and therefore aren’t protected under the Fifth Amendment.

There’s an interesting wrinkle to the case in that the defendant willingly told the police which finger would unlock the phone. Admittedly, the court could just demand that he provide all of his fingerprints and try each of them in turn. Taken to an extreme, this is not too different from arguing that the police have a right to try to crack a password for a device they’ve obtained legally; it just happens that the characters of the password are physical objects (well, in this case, the defendant’s fingers).

The good news is that other decisions have found that passwords are constitutionally protected. In the esoterically named “In re Grand Jury Subpoena Duces Tecum,” it was decided that traditional passwords constitute incriminating testimony, and therefore that defendants can plead the Fifth when asked for them.

However, the bad news is that hand-typed passwords are increasingly seen as a thing of the past; hardware tokens and biometric sensing are considered far more usable, and will likely be employed more and more in the future. Google, for instance, appears to be moving to hardware tokens and biometrics, which are much more usable instruments.

## What We Can do Quickly: Add Duress Codes

As mentioned earlier, a key observation from these court cases is that the police can compel you to hand over a fingerprint, but cannot order you to tell the police which finger is used to unlock the device. This would be tantamount to ordering you to provide a passcode.

In the short term, Apple and Google can take steps to alleviate this threat by adding duress codes to their access control mechanisms. For instance, scanning anything but your right index finger might force a password-only lock. Scanning a pinky (or some other fingerprint or combination of fingerprints) might cause the phone to factory reset, or unlock while triggering deletion of a specified portion of user data. Adding this functionality might take a few weeks of coding and months of UX research, but it could easily help render the current constitutional crisis moot.
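As an entirely hypothetical sketch of what such a policy could look like in the unlock path (the finger assignments and actions below are my own illustration, not anything Apple or Google has proposed or shipped):

```python
from enum import Enum, auto

class Action(Enum):
    UNLOCK = auto()
    REQUIRE_PASSWORD = auto()   # fall back to a passcode-only lock
    WIPE_SENSITIVE = auto()     # delete a user-designated subset of data

# Hypothetical per-user policy: which enrolled finger maps to which behavior.
DURESS_POLICY = {
    "right_index": Action.UNLOCK,
    "left_pinky": Action.WIPE_SENSITIVE,
}

def handle_fingerprint(matched_finger: str) -> Action:
    # Any enrolled finger other than the designated one degrades gracefully
    # instead of unlocking, so a coerced user never has to reveal which
    # finger (if any) actually opens the device.
    return DURESS_POLICY.get(matched_finger, Action.REQUIRE_PASSWORD)
```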

In the long term, we need to rethink deniability as a set of strategies for helping users evade coercion in general. Just as important, all devices must have some sort of deniability baked in, full stop. Adding deniable systems to devices only when a person is targeted provides little protection to at-risk populations like journalists. If it isn’t baked into the operating system, the fact that the journalist was using some out-of-the-ordinary software, which may or may not have undeniable tells of its own, would likely be a red flag and invite liberal use of the rubber hose.

Mike Specter, PhD candidate in computer science at MIT, with thanks to Danny Weitzner (principal research scientist), Jonathan Frankle (also a PhD candidate at MIT), and the rest of the Internet Policy Research Initiative
