On Cryptographic Backdoors

In 1883, in the French Journal of Military Science, the Dutch cryptographer Auguste Kerckhoffs outlined his requirements for a cryptographic algorithm (a cipher) to be considered secure. Perhaps the most famous of these requirements is this: “It [the cipher] must not be required to be secret, and must be able to fall into the hands of the enemy without inconvenience.”

Today, this axiom is widely regarded by the world’s cryptographers as a basic requirement for security: Whatever happens, the security of a cryptographic algorithm must rely on the secrecy of the key, not on the design of the algorithm itself remaining secret. Even if an adversary discovers all there is to know about the algorithm, it must not be feasible to decrypt encrypted data (the ciphertext) without obtaining the encryption key.
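
To make the principle concrete, here is a minimal sketch in Python using the third-party cryptography package (an assumption: it is installed, e.g. via pip install cryptography). The algorithm, AES-256-GCM, is completely public; the key is the only secret:

    # Kerckhoffs' principle in practice: the algorithm is public, the key is not.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # the ONLY secret
    nonce = os.urandom(12)                     # public, but must never repeat per key

    ciphertext = AESGCM(key).encrypt(nonce, b"meet at dawn", None)

    # An adversary may learn the algorithm, the nonce, and the ciphertext;
    # without the key, recovering the plaintext is computationally infeasible.
    assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"meet at dawn"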

(This doesn’t mean that encrypting something sensitive with a secure cipher is always safe—the strongest cipher in the world won’t protect you if a piece of malware on your machine scoops up your encryption key while you’re encrypting your sensitive information—but it does mean that it will be computationally infeasible for somebody who later obtains your ciphertext to retrieve your original data unless they have the encryption key. In the world of cryptography, “computationally infeasible” is much more serious than it sounds: Given any number of computers as we understand them today, an adversary must not be able to reverse the ciphertext without the key—not just for the near future, but until long after the Sun has swallowed our planet, and humanity, hopefully, has journeyed to the stars.)
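
To put rough numbers on “computationally infeasible,” here is a quick back-of-the-envelope calculation; the guess rate is a deliberately generous assumption:

    # Even at an absurdly generous 10**18 key guesses per second,
    # exhausting a 256-bit keyspace takes far longer than the Sun will live.
    keyspace = 2 ** 256
    guesses_per_second = 10 ** 18
    seconds_per_year = 60 * 60 * 24 * 365

    years = keyspace / (guesses_per_second * seconds_per_year)
    print(f"{years:.2e} years")  # ~3.7e+51 years; the Sun has roughly 5e9 left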

This undesirable act of keeping a design detail secret and hoping no bad guys will figure it out is better known as “security through obscurity,” and though this phrase is often misused in criticisms of non-cryptosystems (in which secrecy can be beneficial to security), it is as important and pertinent for crypto now as it was in the nineteenth century.

A Dutchman Rolling Over In His Grave

These days, criminals are increasingly using cryptography to hide their tracks and get away with heinous crimes (think child exploitation and human trafficking, not crimes that perhaps shouldn’t be crimes—a complex discussion that is beyond the scope of this article). Most people, including me, agree that cryptography aiding these crimes is horrible. So how do we stop it?

A popular (and understandable) suggestion is to mandate that ciphers follow a new rule: “The ciphertext must be decryptable with the key, but also with another, ‘special’ key that is known only to law enforcement.” This “special key” has also been referred to as “a secure golden key.”

It rolls off the tongue nicely, right? It’s secure. It’s golden. What are we waiting for? Let’s do it.

Here’s the thing: A secure golden key is neither secure nor golden. It is a backdoor. At best, it is a severe security vulnerability—and it affects everyone, good and bad. To understand why, let’s look at two hypothetical examples:

Example A: A team of cryptographers design a cipher “Foo” that appears secure and withstands intense scrutiny over a long period of time. When there is a consensus that there are no problems with this cipher, it is put into use in browsers, in banking apps, and so on.

(This was the course of events for virtually all ciphers that you are using every day!)

Ten years later, it is discovered that the cipher is actually vulnerable. There is a “shortcut” which allows an attacker to reverse a ciphertext even if they don’t have the encryption key, and long before humans are visiting family in other solar systems. (Such a shortcut is usually theoretical, or understood not to weaken the cipher enough to pose an immediate threat, but the reaction is the same.) Confidence in the cipher is lost, and the risk of somebody discovering how to exploit the vulnerability is too great. The cipher is deemed insecure, and the process starts over…

Example B: After the failure of “Foo,” the team of cryptographers get together again to design a new cipher, “Bar,” which employs much more advanced techniques and appears secure even given our improved understanding of cryptography. A few years prior, however, a law had been passed mandating that the cryptographers add a way for law enforcement to decrypt the ciphertext long before everyone has a personal space pod that can travel near light speed. A “shortcut,” if you will, that allows egregious crimes to be solved in a few days or weeks instead of billions of years.

The cryptographers design Bar in such a way that, as far as anyone outside the team can tell, only the encryption key can decrypt the ciphertext—but they also design a special and secret program, GoldBar, which allows law enforcement to decrypt any Bar ciphertext, no matter what key was used, in just a few days and using modest computing resources.
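
Bar and GoldBar are fictional, but here is one way such a backdoor could plausibly be wired in—a sketch only, with AES-GCM standing in for Bar’s public design and a hidden key-escrow field playing the role of the intentional vulnerability:

    # A sketch of a "GoldBar"-style backdoor (fictional scheme, real AES-GCM).
    # Every ciphertext secretly carries the user's key, wrapped under a fixed
    # master key—so whoever holds MASTER_KEY can decrypt everything, forever.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    MASTER_KEY = AESGCM.generate_key(bit_length=256)  # the "secure golden key"

    def bar_encrypt(user_key: bytes, plaintext: bytes) -> bytes:
        msg_nonce, escrow_nonce = os.urandom(12), os.urandom(12)
        body = AESGCM(user_key).encrypt(msg_nonce, plaintext, None)
        # The hidden escrow field: the user's 32-byte key under MASTER_KEY.
        escrow = AESGCM(MASTER_KEY).encrypt(escrow_nonce, user_key, None)
        return msg_nonce + escrow_nonce + escrow + body

    def goldbar_decrypt(blob: bytes) -> bytes:
        msg_nonce, escrow_nonce = blob[:12], blob[12:24]
        escrow, body = blob[24:72], blob[72:]  # 32-byte key + 16-byte GCM tag
        user_key = AESGCM(MASTER_KEY).decrypt(escrow_nonce, escrow, None)
        return AESGCM(user_key).decrypt(msg_nonce, body, None)

Note that goldbar_decrypt never needs the user’s key up front: MASTER_KEY alone unlocks every message ever encrypted, which is exactly what makes it so valuable to steal.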

The cipher is put into use, in browsers and banking apps, and…

See the problem?

That’s right. Bar violates Kerckhoffs’ principle. It has a clever, intentional, and secret vulnerability that is exploited by the GoldBar program, the “secure golden key.” GoldBar must be kept secret, and so must the knowledge of the vulnerability in Bar.

Bar, like the first cipher Foo, can be reversed without the key—reversal is computationally feasible—and is therefore not secure. Only this time, this is known to the designers before the cipher is even used! And not only that: Bar’s vulnerability isn’t theoretical, but extremely practical. That’s the whole point!

Here’s the problem with “practical:” For GoldBar to be useful, it must be accessible to law enforcement. Even if it is never abused in any way by any law enforcement officer, “accessible to law enforcement” really means “accessible to anyone who has access to any machine or device GoldBar is stored or runs on.”

No individual, company, or government agency knows how to securely manage programs like GoldBar and their accompanying documentation. It is not a question of “if,” but “when” a system storing it (or a human using that system) is compromised. Like many other cyberattacks, the compromise may go unnoticed for years. Unknown and untrusted actors—perhaps even everyone, if Bar’s secrets are leaked publicly—will have full access to all communication “secured” by Bar. That’s your communication, and my communication. It’s all the secure files, chats, online banking sessions, medical records, and virtually all other sensitive information imaginable, belonging to every person who thought they were doing something securely—including people who have never committed any crime.

There is no fix—once the secret is out, everything is compromised. There is no way to “patch” the vulnerability quickly and avoid disaster: Anyone who has any ciphertext encrypted using Bar will be able to decrypt it using GoldBar, forever. We can only design a new cipher and use that to protect our future information.

Enforcing the Law Without Compromising Everyone

Here’s the good news: We don’t need crypto backdoors/“secure golden keys.” There are many ways to get around strong cryptography that don’t compromise the security of everyone—for example:

  • Strong cryptography does not prevent a judge from issuing a subpoena forcing a suspect to hand over their encryption key.

  • Strong cryptography does not prevent a person from being charged with contempt of court for failing to comply with a subpoena.

  • Strong cryptography does not prevent a government from passing laws that increase the punishment for failure to comply with a subpoena to produce an encryption key.

  • Strong cryptography does not prevent law enforcement from carrying out a court order to install software onto a suspect’s computer that will intercept the encryption key without the cooperation of the suspect.

  • Strong cryptography does absolutely nothing to prevent the exploitation of the thousands of different vulnerabilities and avenues of attack that break “secure” systems, including ones using “Military-grade, 256-bit AES encryption,” every single day.

How to go about doing any of these things, or whether to do them at all, is the subject of much discussion, but it’s also beside my point, which is this: We don’t need to break the core of everything we all use every day to combat crime. Most people, including criminals, don’t know how to use cryptography in a way that resists the active efforts of a law enforcement agency, and this whole discussion doesn’t apply to the ones who do, because they know about the one-time pad.
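
For reference, the one-time pad fits in a few lines of Python. It is the only cipher with a proof of perfect secrecy, and the catch—the reason almost nobody uses it correctly—is that the pad must be truly random, as long as the message, never reused, and shared secretly in advance:

    # One-time pad: XOR the message with a random pad of equal length.
    import os

    def otp(data: bytes, pad: bytes) -> bytes:
        assert len(pad) == len(data), "pad must be exactly as long as the message"
        return bytes(d ^ p for d, p in zip(data, pad))

    message = b"attack at dawn"
    pad = os.urandom(len(message))  # must be shared secretly ahead of time
    ciphertext = otp(message, pad)
    assert otp(ciphertext, pad) == message  # XOR is its own inverse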

We’re still getting better at making secure operating systems and software, but we can never reach our goal if the core of all our software is rotten. Yes, strong cryptography is a tough nut to crack—but it has to be, otherwise our information isn’t protected.

Problems with Cyber-Attack Attribution

Things were easy back when dusting for prints and reviewing security camera footage were enough to find out who stole your stuff. The world of cyber isn’t so simple, for a few reasons:

Everything is accessible to a billion blurred-out faces

The Internet puts the knowledge of the world within reach of a large portion of its inhabitants. It also puts critical infrastructure and corporate networks within the reach of attackers from all over the world. No plane ticket or physical altercation is necessary to rob and sabotage even high-profile entities.

Keeping your face out of sight of the security cameras that are now commonplace in most cities around the world whilst managing to avoid arousing suspicion requires significant finesse. It’s likely the most difficult aspect of any physical crime in a public space. But if you look at the virtual equivalents of cameras and cautious bystanders, intrusion detection/SIEM systems and operations staff, you quickly realize that they monitor information about devices, not people. A system log which shows that a user “John” logged on to the corporate VPN at 2:42 AM on a Saturday may appear at first glance to indicate that John logged on to the corporate VPN at 2:42 AM on a Saturday, but what it actually shows is that one of John’s devices, or another one entirely, did so. John may be fast asleep. We (hopefully) have very little information about that.

When digital forensics teams are sifting through the debris after a cyberattack, this is what they find (if they find anything). They don’t have the luxury of pulling out a grainy picture of a face that can be authenticated by examining official records or verifying with someone who knows the suspect.

The Internet stinks

Imagine if you could take your pick from any random passerby in the street, assume control of their body, and use it to carry out your crime from the safety and comfort of your living room. If the poor sap gets caught, they might exclaim that they have no idea what happened, and that they weren’t conscious of what they were doing, but to authorities the case is an open-and-shut one: It’s all right there on the camera footage, clear as day. And even if they were willing to believe this lunatic’s story, they know that random crimes (where the perpetrator has no connection to the victim) are nearly impossible to solve, and since they’d be embarking down a rabbit hole by entertaining more complex possibilities, it’s easier to just keep it simple.

In cyber, this strange hypothetical isn’t strange at all. It’s the norm. Very few attackers use their own devices to carry out crimes directly. They use other people’s compromised machines (e.g. botnet zombies), anonymization networks, and more. It’s virtually impossible to prove beyond a reasonable doubt that a person whose device or IP address has been connected to a crime was therefore complicit in it. All it takes is one link in a cleverly-crafted phishing email, or a Word attachment that triggers remote code execution, and John’s device now belongs to somebody whose politics differ greatly from his.

A forensic expert may find that John’s device was remote-controlled from another device located in Germany. Rudimentary analysis would lead to the conclusion that the real perpetrator is German. But what has really happened is that a layer has been peeled off an onion that may have hundreds of layers. Who’s to say our German friend Emma, the owner of the other device, is any more conscious of what it’s been doing than John was of his? It’s very difficult to know just how stinky this onion is based on a purely technical analysis.

It’s not just like in the movies; it’s worse.

Planting evidence is child’s play

It’s very difficult for me to appear as someone else on security footage, but it’s trivial to write a piece of malware that appears to have been designed by anyone, anywhere in the world. Digital false flag operations have virtually no barriers to entry.

Malicious code containing an English sentence with a structure that’s common for Chinese speakers may indicate that the author of the code is Chinese, or it may mean nothing more than someone wants you to think the author is Chinese. Malicious code that contains traces of American English, German, Spanish, Chinese, Korean and Japanese, but not Italian, is interesting, but ultimately gives the same false certainty.

But let’s say you know exactly who wrote the code. How do you know it’s not just being used by somebody else who may be wholly unaffiliated with the author?

Any technical person can be a criminal mastermind online

I worry about the future because any cyberattack of medium-or-higher sophistication will be near-impossible to trace, and we seem reluctant to even look beneath the surface (where things appear clear-cut) today, preferring instead to keep things simple. That an IP address isn’t easily linkable to an individual may be straightforward to technical readers, but it is less so to lawmakers and prosecutors. People are being convicted, on a regular basis, of crimes that are proven using the first one or two layers of the onion (“IP address X was used in this attack, and we know you’ve also used this IP address,”) and we seem to be satisfied with this.

Go up to the most competent hacker you know, and ask them how they’d go about figuring out who’s behind an IP address, or how you can distinguish between actions performed by a user and ones performed by malicious code on the user’s device, and they are likely to shrug their shoulders and say, “That’s pretty tricky,” or launch into an improv seminar on onion routing, mix networks and chipTAN. Yet we are willing to accept as facts the findings of individuals in the justice system who in many cases have performed only a simple analysis of the proverbial onion.

(Don’t get me wrong: Digital forensics professionals often do a fine job, but I’m willing to bet they are a lot less certain in the conclusions derived from their findings than the prosecutors presenting them and the presiding judges are.)

We have to be more careful in our approach to digital forensics if we want to avoid causing incidents more destructive than the ones we’re investigating, and if we want to ensure we’re putting the right people behind bars. If we can figure out who was behind a sophisticated attack in only a few days, there is a very real possibility we are being misled.

Technical details are important, but it’s only when we can couple them with flesh-and-blood witnesses, physical events, and a clear motive that we can reach anything resembling certainty when it comes to attribution in cyberspace.

Beware of "Read-Only Bank Access"

Since moving to the United States, I have come across this reassuring statement fairly often:

<Product name> only has read access to your accounts. Nobody can authorize any transactions on your behalf, not even <product name>.

This is a particularly popular thing for services like Mint and Credit Karma to say in an effort to get you to give up the holy of holies: the login credentials to your online banking accounts. This “guarantee” is also completely false, or at the very least incredibly deceptive.

There’s no such thing as “read-only access” to your Chase banking or American Express card accounts. Services like Mint and Credit Karma store your real usernames and passwords on their servers, not some kind of read-only token. If their servers get compromised, your linked bank accounts may very well be fully compromised as well.

Here’s the kicker: These services know this full well. When you sign up for any of them, you agree that they bear no responsibility in the case of a compromise. (Read the fine print.) Your money is now gone, and they won’t be there to help you. The FDIC won’t help you either—it only protects you if your financial institution becomes insolvent, not if your accounts are compromised.

What these companies actually seem to mean when they say that their access is “read-only” is that there is no functionality within their interfaces which allows people to authorize transactions and perform other changes, not that the credentials they’re storing can’t be used to do absolutely anything on your accounts. (The former borders on the irrelevant, of course, and the latter is what most people actually care about.)
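
The distinction is easy to see in code. The sketch below is entirely hypothetical—no real bank or aggregator API is modeled—but it shows the difference between a restriction enforced in the aggregator’s interface and one enforced by the credentials themselves:

    # Hypothetical classes for illustration; no real bank API is modeled.
    class Bank:
        """What the aggregator's stored credentials can actually reach."""
        def __init__(self, username: str, password: str):
            self._session = self._log_in(username, password)  # full-power session
        def get_balance(self): ...
        def wire_money(self, to: str, amount: int): ...  # nothing prevents this
        def _log_in(self, username: str, password: str): ...

    class Aggregator:
        """Here, 'read-only' is a promise about the interface, not the credentials."""
        def __init__(self, username: str, password: str):
            self._bank = Bank(username, password)  # stores and uses REAL credentials
        def show_balance(self):
            return self._bank.get_balance()
        # No wire_money() wrapper is exposed—but anyone who steals the stored
        # credentials can log in to the bank directly and call anything at all.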

Carefully evaluate whether you want to trust these companies based on these “protections” and the way they present them, and remember they won’t be there for you if things go wrong.

(To give a little perspective: Intuit, the company that develops Mint, Quicken, and QuickBooks, lets you encrypt your Quicken data file using a password, but only allows that password to be 15 characters or fewer. This is their supposedly “military-grade security system”; 15 characters isn’t even enough to reach 128-bit security, the lowest acceptable level for strong security.)
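
A quick sanity check of that claim, assuming the roughly 95 printable ASCII characters as the password alphabet:

    # A 15-character password over ~95 printable ASCII characters carries
    # at most 15 * log2(95) bits of entropy—well short of 128.
    import math

    print(f"{15 * math.log2(95):.1f} bits")         # ~98.6 bits
    print(math.ceil(128 / math.log2(95)), "chars")  # 20 characters needed for 128 bits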

If you’re currently using any of these services and want to reduce your risk, deleting your linked accounts within the service (and/or the service account itself), and then changing the password for each of your linked accounts, should do the trick.

Gambling with Secrets: an Introduction to Cryptography

Art of the Problem is a team of people making web video series about great problems. Their first series is an introduction to cryptography and cryptanalysis, and it’s one of the most approachable I’ve seen.

If you’ve ever asked yourself questions like:

  • How can two people communicate securely even if somebody is listening in on the conversation?
  • How can two people have an encrypted conversation without meeting?
  • What does randomness mean? Are things that “look random” secure?
  • What is the most secure cryptographic cipher?
  • How did the Allied Forces break Nazi Germany’s “super-secure” Enigma machine during World War II?

…you’ll enjoy this miniseries.

No special background in mathematics is required, and it touches on many subtle mistakes that huge companies are still making today. It consists of 8 parts/chapters, each lasting about 5-10 minutes.

Part 1: Introduction to Cryptography

Part 2: Prime Factorization

Part 3: Probability Theory & Randomness

Part 4: Private Key Cryptography

Part 5: Encryption Machines

Part 6: Perfect Secrecy & Pseudorandomness

Part 7: Diffie-Hellman Key Exchange

Part 8: RSA Encryption

For more, visit their website.

The Secure Remote Password Protocol Isn't Bad

Blizzard Entertainment has been receiving a lot of flak recently for using the Secure Remote Password protocol for password authentication in their Battle.net service, because SRP doesn’t provide the same level of protection against offline attacks that one-way key derivation and password hashing functions like PBKDF2, bcrypt, and scrypt do.

I applaud them. Well done, Blizzard. You’ve done more to protect your users than most other companies that handle user passwords. It is great to see a company employ real safeguards like SRP and two-factor authentication (which Blizzard introduced long before it was cool).

All of the recent criticism of Blizzard’s design decisions kind of misses the point. SRP was designed to prevent eavesdropping attacks (by never transmitting the password over the wire), not dictionary attacks against the password verifiers (the kind of digests that are stored on the server side). Blaming SRP for the latter is akin to blaming Diffie-Hellman key exchange for the fact that DES is easy to break: the SRP authors never claimed that the verifiers were resistant to dictionary attacks.
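
To see why a stolen verifier is open to dictionary attacks, note that it is a deterministic function of the password and a public salt, so an attacker can simply grind through candidates. A toy sketch—the tiny group and the simplified x = H(salt || password) are illustration-only assumptions; real SRP-6a uses a large safe prime and a slightly different hash construction:

    # Dictionary attack against a leaked SRP-style verifier (toy parameters!).
    import hashlib

    N, g = 2267, 2  # absurdly small demo group; real SRP uses a huge safe prime

    def verifier(salt: bytes, password: str) -> int:
        x = int.from_bytes(hashlib.sha256(salt + password.encode()).digest(), "big")
        return pow(g, x, N)

    salt = b"public-salt"
    stolen_v = verifier(salt, "hunter2")  # what a breached server would leak

    for guess in ["letmein", "password", "hunter2"]:  # the "dictionary"
        if verifier(salt, guess) == stolen_v:
            print("password recovered:", guess)
            break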

Blizzard absolutely made the right choice by choosing not to transmit passwords over the wire. The people who are suggesting that they throw out SRP on the client side for a KDF on the server side seem to completely miss that this would only swap one security vulnerability for another. A better solution would be to employ a one-way key derivation function on the client side, store the salt on the server side (so any client can produce the same digest for the same account, even if it’s on another machine), and then transmit the verification “digest” (or a proof of it) in a way that reveals nothing and can’t be reused if the traffic is snooped or the verifiers are compromised—the latter being precisely what SRP does.
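
Here is a sketch of that hardening under the same toy assumptions as above (a real deployment would use SRP’s actual group parameters and tune the scrypt cost factors): the password is stretched with a slow, memory-hard KDF on the client before it enters the SRP math, so each guess against a stolen verifier costs a full scrypt computation instead of one cheap hash:

    # Client-side KDF in front of an SRP-style verifier (toy parameters!).
    import hashlib

    N, g = 2267, 2  # demo group only

    def hardened_verifier(salt: bytes, password: str) -> int:
        # Slow, memory-hard stretching happens on the CLIENT.
        stretched = hashlib.scrypt(password.encode(), salt=salt,
                                   n=2**14, r=8, p=1)  # ~16 MiB of memory
        x = int.from_bytes(stretched, "big")
        return pow(g, x, N)

    # The server stores only (salt, verifier); the shared salt lets any client
    # machine rederive the same x, and SRP then proves knowledge of x without
    # ever sending the password (or x) over the wire.
    salt = b"per-user-salt"
    v = hardened_verifier(salt, "correct horse battery staple")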

The above would provide more protection against password compromise than the password authentication used by virtually all web applications and almost all desktop clients. It seems strange to me to criticize Blizzard so aggressively for not doing both when nobody else does.