On Cryptographic Backdoors
In 1883, writing in the French Journal of Military Science, the Dutch cryptographer Auguste Kerckhoffs outlined his requirements for a cryptographic algorithm (a cipher) to be considered secure. Perhaps the most famous of these requirements is this: “It [the cipher] must not be required to be secret, and must be able to fall into the hands of the enemy without inconvenience.”
Today, this axiom is regarded by most of the world’s cryptographers as a basic requirement for security: Whatever happens, the security of a cryptographic algorithm must rely on the secrecy of the key, not on the secrecy of the algorithm’s design. Even if an adversary learns everything there is to know about the algorithm, it must not be feasible to decrypt encrypted data (the ciphertext) without the encryption key.
(This doesn’t mean that encrypting something sensitive with a secure cipher is always safe—the strongest cipher in the world won’t protect you if a piece of malware on your machine scoops up your encryption key while you’re encrypting your sensitive information—but it does mean that it will be computationally infeasible for somebody who later obtains your ciphertext to retrieve your original data unless they have the key. In the world of cryptography, “computationally infeasible” is much more serious than it sounds: Given any number of computers as we understand them today, an adversary must not be able to reverse the ciphertext without the key, not just for the near future, but until long after the Sun has swallowed our planet and humanity, hopefully, has journeyed to the stars.)
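To put a rough number on “computationally infeasible,” here is a back-of-the-envelope calculation for exhaustively searching a 128-bit keyspace. The guessing rate of 10^18 keys per second is an invented, generously optimistic assumption, well beyond any machine that exists today:

```python
# Back-of-the-envelope: exhaustively trying every 128-bit key.
total_keys = 2 ** 128                  # size of a 128-bit keyspace
guesses_per_second = 10 ** 18          # hypothetical, very generous rate
seconds_per_year = 60 * 60 * 24 * 365

years = total_keys // guesses_per_second // seconds_per_year
print(f"{years:.2e} years")            # roughly 10**13 years
```

For comparison, the Sun is expected to reach its red-giant phase in a few billion years, and modern ciphers commonly use 256-bit keys, which square the size of this search space.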
This undesirable practice of keeping a design detail secret and hoping no bad guys will figure it out is better known as “security through obscurity,” and though the phrase is often misused in criticisms of non-cryptosystems (in which secrecy can be beneficial to security), it is as important and pertinent for crypto now as it was in the nineteenth century.
A Dutchman Rolling Over In His Grave
These days, criminals are increasingly using cryptography to hide their tracks and get away with heinous crimes (think child exploitation and human trafficking, not crimes that perhaps shouldn’t be crimes, a complex discussion that is beyond the scope of this article). Most people, including me, agree that cryptography aiding these crimes is horrible. So how do we stop it?
A popular (and understandable) suggestion is to mandate that ciphers follow a new rule: “The ciphertext must be decryptable with the key, but also with another, ‘special’ key that is known only to law enforcement.” This “special key” has also been referred to as “a secure golden key.”
It rolls off the tongue nicely, right? It’s secure. It’s golden. What are we waiting for? Let’s do it.
Here’s the thing: A secure golden key is neither secure nor golden. It is a backdoor. At best, it is a severe security vulnerability—and it affects everyone, good and bad. To understand why, let’s look at two hypothetical examples:
Example A: A team of cryptographers designs a cipher, “Foo,” that appears secure and withstands intense scrutiny over a long period. When there is a consensus that the cipher has no problems, it is put into use, in browsers, in banking apps, and so on.
(This was the course of events for virtually all ciphers that you are using every day!)
Ten years later, the cipher is discovered to be vulnerable after all. There is a “shortcut” that allows an attacker to reverse a ciphertext without the encryption key, and long before humans are visiting family in other solar systems. (Such a shortcut is usually theoretical, or understood not to weaken the cipher enough to pose an immediate threat, but the reaction is the same.) Confidence in the cipher is lost, and the risk of somebody discovering how to exploit the vulnerability is too great. The cipher is deemed insecure, and the process starts over…
Example B: After the failure of “Foo,” the team of cryptographers gets together again to design a new cipher, “Bar,” which employs much more advanced techniques and appears secure even given our improved understanding of cryptography. A few years prior, however, a law was passed mandating that the cryptographers add a way for law enforcement to decrypt the ciphertext long before everyone has a personal space pod that can travel near light speed: a “shortcut,” if you will, that allows egregious crimes to be solved in a few days or weeks instead of billions of years.
The cryptographers design Bar in such a way that only the encryption key can decrypt the ciphertext, but they also design a special and secret program, GoldBar, which allows law enforcement to decrypt any Bar ciphertext, no matter what key was used, in just a few days, and using modest computing resources.
The cipher is put into use, in browsers and banking apps, and…
See the problem?
That’s right. Bar violates Kerckhoffs’ principle. It has a clever, intentional, and secret vulnerability that is exploited by the GoldBar program, the “secure golden key.” GoldBar must be kept secret, and so must the knowledge of the vulnerability in Bar.
Bar, like Foo before it, is feasible to reverse without the key, and is therefore not secure—only this time, this is known to the designers before the cipher is ever used! And not only that: Bar’s vulnerability isn’t theoretical, but extremely practical. That’s the whole point!
Here’s the problem with “practical”: For GoldBar to be useful, it must be accessible to law enforcement. Even if it is never abused in any way by any law enforcement officer, “accessible to law enforcement” really means “accessible to anyone who has access to any machine or device GoldBar is stored or runs on.”
No individual, company, or government agency knows how to securely manage programs like GoldBar and their accompanying documentation. It is not a question of “if,” but “when” a system storing it (or a human using that system) is compromised. Like many other cyberattacks, the compromise may go unnoticed for years. Unknown and untrusted actors—perhaps even everyone, if Bar’s secrets are leaked publicly—will have full access to all communication “secured” by Bar. That’s your communication, and my communication. It’s all secure files, chats, online banking sessions, medical records, and virtually every other piece of sensitive information imaginable, belonging to every person who thought they were doing something securely, including people who have never committed any crime.

And there is no fix: once the secret is out, everything is compromised. There is no way to “patch” the vulnerability quickly and avoid disaster. Anyone who holds any ciphertext encrypted with Bar will be able to decrypt it using GoldBar, forever. We can only design a new cipher and use that to protect our future information.
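To make the idea concrete: one common way a backdoor like GoldBar is imagined is key escrow, where the user’s key is “wrapped” with a master key and shipped inside every ciphertext. Here is a deliberately toy sketch in Python. Bar and GoldBar are fictional, and the XOR “cipher,” the 32-byte keys, and every name below are invented for illustration; this is nowhere near a real cipher design:

```python
import hashlib
import secrets

# The "secure golden key": one secret that unlocks every ciphertext.
GOLDEN_KEY = secrets.token_bytes(32)

def _keystream(key: bytes, length: int) -> bytes:
    # Derive a keystream from the key (toy construction, NOT a real cipher).
    return hashlib.shake_256(key).digest(length)

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(message: bytes, user_key: bytes) -> bytes:
    # user_key must be 32 bytes. The escrow blob is the user's key
    # "wrapped" with the golden key, shipped inside every ciphertext.
    escrow_blob = _xor(user_key, GOLDEN_KEY)
    body = _xor(message, _keystream(user_key, len(message)))
    return escrow_blob + body

def decrypt(ciphertext: bytes, user_key: bytes) -> bytes:
    # Normal path: the legitimate owner decrypts with their own key.
    body = ciphertext[32:]
    return _xor(body, _keystream(user_key, len(body)))

def golden_decrypt(ciphertext: bytes) -> bytes:
    # Backdoor path: no user key needed, just unwrap it from the blob.
    user_key = _xor(ciphertext[:32], GOLDEN_KEY)
    return decrypt(ciphertext, user_key)
```

The point of the sketch is the last function: whoever holds GOLDEN_KEY, including an attacker who steals it, can decrypt every ciphertext ever produced, and there is no way to revoke that ability for data already encrypted.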
Enforcing the Law Without Compromising Everyone
Here’s the good news: We don’t need crypto backdoors/“secure golden keys.” There are many ways to get around strong cryptography that don’t compromise the security of everyone—for example:
Strong cryptography does not prevent a judge from issuing a subpoena forcing a suspect to hand over their encryption key.
Strong cryptography does not prevent a person from being charged with contempt of court for failing to comply with a subpoena.
Strong cryptography does not prevent a government from passing laws that increase the punishment for failure to comply with a subpoena to produce an encryption key.
Strong cryptography does not prevent law enforcement from carrying out a court order to install software onto a suspect’s computer that will intercept the encryption key without the cooperation of the suspect.
Strong cryptography does absolutely nothing to prevent the exploitation of the thousands of different vulnerabilities and avenues of attack that break “secure” systems, including ones using “Military-grade, 256-bit AES encryption,” every single day.
How to go about doing any of these things, or whether to do them at all, is the subject of much discussion, but it’s also beside my point, which is this: We don’t need to break the core of everything we all use every day to combat crime. Most people, including criminals, don’t know how to use cryptography in a way that resists the active efforts of a law enforcement agency, and this whole discussion doesn’t apply to the ones who do, because they know about the one-time pad.
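For completeness, the one-time pad mentioned above: XOR the message with a truly random key that is as long as the message and never reused, and the ciphertext is information-theoretically secure; no shortcut, golden or otherwise, can exist. A minimal sketch (the function names are my own):

```python
import secrets

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    # The key must be truly random, exactly as long as the message,
    # and used only once (hence "one-time" pad).
    key = secrets.token_bytes(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: applying the same key again recovers the message.
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

The catch, and the reason almost nobody uses it, is key distribution: the key material is as large as all the data you will ever send, and it must itself be delivered securely ahead of time.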
We’re still getting better at making secure operating systems and software, but we can never reach our goal if the core of all our software is rotten. Yes, strong cryptography is a tough nut to crack—but it has to be, otherwise our information isn’t protected.