Arguments Against GCHQ's "Ghost"

January 26, 2019

Recently there’s been a bit of hubbub about “Ghost,” a proposal by Ian Levy and Crispin Robinson of GCHQ to solve the end-to-end encrypted messaging “Going Dark” problem.

Since it’s buried in a bunch of text in the original article, I can sum up the proposal itself: Levy and Robinson suggest mandating that service providers (e.g., Apple) silently add GCHQ’s key to any conversation lawfully requested (e.g., a suspect’s iMessage conversations) and suppress any notification that the modification has been made.
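To make that concrete, here is a minimal sketch of what silently adding a key to a fan-out encrypted conversation might look like. Everything here (the names, the fake encryption, the device-key list) is my own illustration, not how iMessage or any real messenger actually works:

```python
# Hypothetical fan-out encryption with a silently injected key.
DEVICE_KEYS = ["alice_phone_pub", "alice_laptop_pub", "bob_phone_pub"]
GHOST_KEY = "gchq_pub"  # appended by the service provider

def encrypt_to(public_key, message):
    # Stand-in for real public-key encryption.
    return f"enc[{public_key}]({message})".encode()

def fan_out(message, recipients):
    # Normally the client encrypts the message once per device key the
    # server lists for the conversation. Under Ghost, the server appends
    # one extra key and suppresses the "new device added" notification,
    # so the sender never sees the difference.
    return [encrypt_to(k, message) for k in recipients + [GHOST_KEY]]

ciphertexts = fan_out("meet at noon", DEVICE_KEYS)
```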

I (shockingly) don’t like the scheme, and I’m not alone. A number of people have argued against it (Susan Landau, Josh Benaloh, and Bruce Schneier on Lawfare, and Matt Green on his own blog, to name a few), and I encourage you to read all of their posts.

However, the reason that I’m writing this post is to play devil’s advocate against one of the anti-ghost arguments that I found incredibly unconvincing. Bad arguments against a position often lead a reader to believe that the entirety of the position is wrong. In this case the original position is correct even if this particular argument is bad, and I think it’s better to hear this from a supporter rather than a detractor.

Good arguments against Ghost

Before I delve into the one I don’t like, I want to repeat: I don’t like Ghost. I won’t belabor the point, because I think you should read the (already great) posts I’ve shared above, but I think I can sum up the arguments against as:

  1. The fact that Ghost works in the current infrastructure is a flaw, one that is actively being fixed by the rapid adoption of transparency mechanisms, efforts Ghost would inherently destroy if the scheme were mandated. These transparency mechanisms already exist, are well studied, and are practical. Hell, one is being used to secure the method you’re using to view this web page. The research on how to add this functionality to messaging is already there; we’re just waiting on adoption. See Certificate Transparency for something that is widely deployed for HTTPS, and CONIKS, which (hopefully) will happen some day. (A toy version of the underlying mechanism appears just after this list.)
  2. The added complexity will result in bugs, sadness, poorly maintained code, and increase the prevalence of vulnerabilities massively.
  3. Ghost harms the trust relationships users have with their service providers and software vendors, and further puts us into 1984-panopticon-wasteland territory where every technology we have is potentially spying on us in new and creative ways.
  4. Dear god, GCHQ shouldn’t have this power anyway. The potential for abuse is huge, and even if GCHQ or some other friendly government by some miracle doesn’t abuse it, the probability that some horrible unfriendly oligarchy will is near 100%. (E.g., would you really trust the Trump administration to have access to this kind of tool right now?)
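To make the transparency point in item 1 concrete, here is a toy Merkle inclusion proof, the primitive underlying transparency logs such as Certificate Transparency and CONIKS. The function names are mine, not any real library’s, and production logs add domain separation, signed tree heads, and consistency proofs:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf, proof, root):
    # Walk from the leaf to the root using sibling hashes; each proof
    # step says whether the sibling sits to the left or the right.
    node = h(leaf)
    for side, sibling in proof:
        node = h(sibling + node) if side == "left" else h(node + sibling)
    return node == root

# Build a tiny four-leaf tree by hand, then check one leaf.
leaves = [h(x) for x in (b"key-a", b"key-b", b"key-c", b"key-d")]
n01, n23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(n01 + n23)
assert verify_inclusion(b"key-c", [("right", leaves[3]), ("left", n01)], root)
```

The upshot: a client can cheaply verify that the key it was handed is the same key everyone else sees in the log, which is exactly the check a silently injected ghost key would fail.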

Of course these summaries are pithy, but I think you get the point. They’re good arguments, which can and should be discussed on merit.

A bad argument against Ghost

In a post on Lawfare, the EFF made one of the worst arguments I’ve seen thus far. It’s bad enough that I thought it was worth discussing as a point of intellectual honesty, even if doing so cuts against my stance on this particular issue. To be clear, I love the EFF. I donate to them and have been a member for years, and so should you.

Their argument is that the suspect would be able to detect when Ghost was being used, that the protocol would thereby be subverted, and that the entire idea is therefore undesirable for law enforcement. They further posit that cryptographic side channels, reverse engineering, network traffic analysis, and crash logs will all “give up the Ghost”.

The problem is that none of the above is correct.

To begin, the premise is wrong: the discovery of the usage of these functions isn’t inherently bad from law enforcement’s point of view. Law enforcement’s problem is that unfettered encryption is currently really easy; if it becomes hard again, then as far as they’re concerned, that’s a win.

Jailbreaking an iPhone is relatively hard, reverse engineering is hard, and keeping a stable jailbreak is near impossible. Even in cases where jailbreaking isn’t necessary, like on Android and easier-to-hack-on services, the default application and settings still matter.

Maybe a researcher takes the time to remove Ghost and releases an easy-to-use tool for the same service with that functionality stripped out. Even then, users would have to install that third-party app. The average person (and, luckily for our LE friends, most criminals) most likely won’t bother.

Second, all of the proposed detection methods (reverse engineering, network analysis, and crash logs) confuse discovery of the protocol’s existence with discovery of being snooped on. I do not believe that discovery of the protocol matters, and everyone (including law enforcement) should agree that the details of the protocol ought to be public anyway. Levy and Robinson’s original article actually says as much in its section on transparency:

“…the details of any exceptional access solution may well become public and subject to expert scrutiny, which it should not fail. Given the unique and ubiquitous nature of these services and devices, we would not expect criminals to simply move if it becomes known that an exceptional access solution exists.”

Conversely, any attempt to discover whether one is being snooped upon can be made useless by making the “snooping” state indistinguishable from normal operation. In the case of Ghost, one can make adding a malicious key part of normal operation in the following way:

  1. Every time a group chat is initiated, a random extra key is added. The server holds the private key, and the users have no idea whether that key is known to law enforcement.
  2. When law enforcement wants access, the service provider simply hands over the already-added private key for the conversation.

Encryption, decryption, crash logs, and the application’s binary will all look exactly the same, but the user has no idea whether they are being eavesdropped upon. (We could make this more secure still by having the service provider issue a nonsense random value for the malicious key and update it frequently at random intervals; that way, until law enforcement asks for the key, the server just rotates a real one in at the next scheduled key update.)
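Here’s a minimal sketch of that scheme, including the hardening just described. The class, names, and key handling are purely illustrative stand-ins, not a real cryptosystem:

```python
import secrets

class Conversation:
    """Toy model of a chat where every conversation carries one
    extra recipient key slot, tapped or not."""

    def __init__(self, members):
        self.members = members
        self.pending_request = False
        self.extra_pub, self.extra_priv = self._nonsense_slot()

    @staticmethod
    def _nonsense_slot():
        # Random bytes shaped like a public key. Nobody holds the
        # private half, so ciphertext to it is undecryptable padding.
        return f"pub:{secrets.token_hex(16)}", None

    @staticmethod
    def _real_keypair():
        # Stand-in for real keypair generation.
        seed = secrets.token_hex(16)
        return f"pub:{seed}", f"priv:{seed}"

    def recipient_keys(self):
        # Clients always fan out to the members plus the extra key,
        # so its mere presence reveals nothing.
        return self.members + [self.extra_pub]

    def scheduled_rotation(self):
        # Rotation happens on the same schedule either way; only the
        # provenance of the new key changes.
        if self.pending_request:
            self.extra_pub, self.extra_priv = self._real_keypair()
        else:
            self.extra_pub, self.extra_priv = self._nonsense_slot()

    def lawful_access_request(self):
        # The provider flags the conversation; the decryptable key
        # arrives at the next ordinary rotation.
        self.pending_request = True
```

Because every conversation carries the extra slot from creation and rotations happen on the same schedule whether or not a request is pending, nothing observable (binaries, traffic, crash logs) distinguishes a tapped conversation from an untapped one.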

Again, Ghost, even with my little added scheme, is still bad for security and shouldn’t be implemented. All evidence suggests it is doubtful that any scheme exists that resolves the problems posed in the previous section.

Conclusions

Many arguments I’ve seen from both sides hinge upon making perfect the enemy of the good. Many proponents of exceptional access seem to believe that the “going bright” argument falls flat because it doesn’t provide 100% of access 100% of the time. (“Going bright” is the idea that the availability of metadata and other unencrypted communications can make up for the parts law enforcement is unable to decrypt; I think it’s a worthwhile thought.)

Other, non-Ghost-related parts of Levy and Robinson’s article are laudable in that they actually make strides to avoid fatalistic reasoning. In particular, they concede that this sort of perfect exceptional access regime is impossible, and that law enforcement should simply accept that.

If we are to find worthwhile solutions (including lawful hacking, going bright, and forcing law enforcement to measurably prove that “going dark” is a problem in the first place), we need to embrace and understand these trade-offs.

Mike Specter is a PhD candidate in computer science at MIT, a member of the Internet Policy Research Initiative, and currently a student research fellow at Google.
