On Deniability and Duress

Imagine you’re at a border crossing, and the guard asks you to hand over all of your electronics for screening. The guard then asks you to unlock your device and provide your passwords and decryption keys. Right now, he’s asking nicely, but he happens to be carrying an unpleasant-looking rubber hose (yes, cryptographers actually do call this “rubber-hose cryptanalysis”; see also the obligatory XKCD) and appears to be willing to use it. Now imagine you’re a journalist covering war crimes in the country you’re trying to leave. So, what can you do?

This isn’t a hypothetical situation. The Freedom of the Press Foundation published an open letter to camera manufacturers requesting that they provide “encryption” by default. The thing is, what they want isn’t just encryption; it’s deniability, which is a subtly different thing.

Deniable schemes let you lie about whether you’ve provided full access to some or all of the encrypted text. (I consider deniability in the tradition of Canetti et al.; it’s important to note that deniability refers to the ability to deny some plaintext, not the ability to deny that you’re using a deniable algorithm in the first place.) This is important because, currently, you can’t give the guard in the example above a fake password. He’ll try it, get locked out, and then proceed with the flogging.
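To make this concrete, here’s a minimal sketch of the hidden-volume flavor of deniability (in the spirit of TrueCrypt/VeraCrypt hidden volumes): a fixed-size container of random-looking bytes holds a decoy volume and a hidden one, and the decoy password reveals only the decoy. The XOR-keystream “cipher” and every name below are illustrative assumptions, not real cryptography or any shipping design.

```python
import hashlib
import os

CONTAINER_SIZE = 4096  # fixed container size, so the layout itself reveals nothing


def keystream(password: str, salt: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from a password (toy stand-in for a real cipher)."""
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])


def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))


def make_container(decoy: bytes, hidden: bytes, decoy_pw: str, hidden_pw: str) -> bytes:
    """Place the decoy volume at the front and the hidden volume at the back,
    with random filler everywhere else."""
    assert len(decoy) + len(hidden) <= CONTAINER_SIZE
    salt = os.urandom(16)
    body = bytearray(os.urandom(CONTAINER_SIZE))
    body[:len(decoy)] = xor(decoy, keystream(decoy_pw, salt, len(decoy)))
    body[-len(hidden):] = xor(hidden, keystream(hidden_pw, salt, len(hidden)))
    return salt + bytes(body)


def open_decoy(container: bytes, password: str, length: int) -> bytes:
    salt, body = container[:16], container[16:]
    return xor(body[:length], keystream(password, salt, length))


def open_hidden(container: bytes, password: str, length: int) -> bytes:
    salt, body = container[:16], container[16:]
    return xor(body[-length:], keystream(password, salt, length))
```

Handing over the decoy password reveals a plausible decoy volume; without the hidden password, the hidden region is indistinguishable from the random filler (assuming the keystream is pseudorandom), so its very existence can be denied. Real systems have to work much harder, avoiding metadata tells such as free-space accounting that betrays the hidden volume.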

I’m convinced that there’s a sociotechnical blind spot in how current technology handles access to personal devices. We in the infosec community need to start focusing more on giving users the flexibility to handle situations of duress, rather than on access control alone. Deniability and duress codes can go a long way toward getting us there.

Recent legal developments have highlighted the need for deniability, and for duress codes in particular.

A recent precedent-setting court case in Minnesota (full court opinion here: Minnesota v. Diamond) decided that fingerprints used for access control can be taken from a suspect without violating his Fifth Amendment rights. The logic of the decision, which I’m actually inclined to agree with, is that fingerprints are analogous to other evidence routinely taken from suspects in the course of an investigation, such as blood samples, handwriting samples, and voice recordings, all of which the Supreme Court has deemed unprotected under the Fifth Amendment.

Orin Kerr has a great in-depth analysis of this decision here, but the gist is that the courts have decided that fingerprints don’t count as “testimonial,” and therefore aren’t protected under the Fifth Amendment.

There’s an interesting wrinkle to the case: the defendant willingly told the police which finger would unlock the phone. Admittedly, the court could just demand that he provide all of his fingerprints and try each of them in turn. Taken to an extreme, this is not too different from arguing that the police have a right to try to crack the password of a device they’ve obtained legally; it just happens that the characters of the password are physical objects, in this case the defendant’s fingers.

The good news is that other decisions have held that passwords are constitutionally protected. In the esoterically named In re Grand Jury Subpoena Duces Tecum (670 F.3d 1335, 11th Cir. 2012), the court decided that providing a traditional password is incriminating testimony, and therefore that defendants can plead the Fifth when asked for one.

However, the bad news is that hand-typed passwords are increasingly seen as a thing of the past; hardware tokens and biometric sensing are considered far more usable, and will likely be employed more and more in the future. Google, for instance, appears to be moving to hardware tokens and biometrics.

What We Can Do Quickly: Add Duress Codes

As mentioned earlier, a key observation from these court cases is that the police can compel you to hand over a fingerprint, but cannot order you to tell them which finger unlocks the device; that would be tantamount to ordering you to provide a passcode.

In the short term, Apple and Google can take steps to alleviate this threat by adding duress codes to their access control mechanisms. For instance, scanning anything but your right index finger might force a password-only lock. Scanning a pinky (or some other fingerprint or combination of fingerprints) might cause the phone to factory reset, or to unlock while deleting a specified portion of user data. A sketch of what such a policy might look like follows below. Adding this functionality might take a few weeks of coding and months of UX research, but it could easily render the issue moot.
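Here is one way such a duress policy might be wired up; the finger identifiers, action names, and the idea of a user-enrolled policy table are all hypothetical, and the real logic would live in the OS keyguard and biometric stack:

```python
from enum import Enum, auto


class Action(Enum):
    UNLOCK = auto()
    REQUIRE_PASSWORD = auto()           # force a password-only lock
    FACTORY_RESET = auto()
    UNLOCK_AND_WIPE_SENSITIVE = auto()  # unlock, but silently delete marked data


# Enrolled by the user at setup time; an adversary can't tell which
# finger maps to which action just by looking at the device.
DURESS_POLICY = {
    "right_index": Action.UNLOCK,
    "left_index": Action.REQUIRE_PASSWORD,
    "right_pinky": Action.FACTORY_RESET,
    "left_pinky": Action.UNLOCK_AND_WIPE_SENSITIVE,
}


def on_fingerprint_match(finger_id: str) -> Action:
    """Map a matched finger to its enrolled action; unknown fingers fail closed."""
    return DURESS_POLICY.get(finger_id, Action.REQUIRE_PASSWORD)
```

Failing closed into REQUIRE_PASSWORD is the important design choice here: an unenrolled finger never unlocks, and falling back to a password conveniently moves the legal question back onto testimonial ground.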

In the long term, we need to rethink deniability as a set of strategies for helping users evade coercion in general. Just as important, all devices must have some sort of deniability baked in, full stop. Adding deniable systems to a device only once its owner is targeted provides little protection to at-risk populations like journalists. If deniability isn’t baked into the operating system, the fact that the journalist was using some out-of-the-ordinary software, which may or may not have undeniable tells, would itself likely be a red flag and induce liberal use of the rubber hose.

Mike Specter is a PhD candidate in computer science at MIT. With thanks to Danny Weitzner (principal research scientist), Jonathan Frankle (also a PhD candidate at MIT), and the rest of the Internet Policy Research Initiative.