Security Through Obscurity

People often draw the black-and-white distinction “that’s not security, that’s obscurity” in contexts where it is not appropriate.

Security is not a boolean concept. There is no such thing as absolute security; you are never either completely secure or completely insecure. You are always insecure to some degree, but you can complement your security by reducing the likelihood that you will be attacked in the first place.

I think that, while obscurity should not be mistaken for a substitute for security (as it is in the original sense of “security through obscurity” in cryptography), it doesn’t hurt in other contexts.

Let me give a few examples that I see come up quite often:

  • Refusing DNS zone transfers (AXFR)

    If you follow the logic that it doesn’t matter how obscure the information about your network is if the network is secure, you should allow anyone to read the DNS zone files for your domains. This is exceptionally reckless if you have any non-public yet publicly accessible hosts associated with your domain.

    An example: Your Apache server hosts phpmyadmin at admin.mysubhost.mydomain.com and only serves that website to clients whose requests include the header “Host: admin.mysubhost.mydomain.com”. When your zone file is readable, this hostname is readily available to anyone who wants to perform experiments with your network. When your zone file is not available, that hostname could essentially be as difficult to guess as a regular password.

    I think it is a mistake to assume that this would make you “immune” to anything, but I also think it is foolish to argue that by limiting your exposure you are not complementing your security in any way. In the above scenario, access to phpmyadmin should be restricted properly to improve the security, but the obscurity doesn’t hurt.
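
    To make the information-gathering step concrete, here is roughly what such an attempt looks like. This is a minimal sketch using the third-party dnspython library; the domain and name server are placeholders, and if the server refuses AXFR the transfer simply fails with an exception.

        import dns.query  # third-party: dnspython
        import dns.zone

        # Ask the authoritative name server for a full zone transfer (AXFR).
        # A server that refuses transfers makes this raise instead.
        zone = dns.zone.from_xfr(dns.query.xfr("ns1.mydomain.com", "mydomain.com"))

        # On success, every hostname in the zone is laid out for the asking,
        # including ones like admin.mysubhost.mydomain.com.
        for name in zone.nodes:
            print(name)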

  • Having critical services (e.g. SSH) on non-standard ports

    Many argue that “if your SSH is secure, which port it is on doesn’t matter” because the range of possibilities is too small to make a difference if someone is intent on getting in.

    This is the version of the argument I think makes the most sense, because the range of possibilities is so limited that anyone with any conviction would overcome that obstacle with ease. A less apparent flaw, though, is that it doesn’t have to be one person who is intent on getting into your machine. Thousands of machines are scanning not just popular targets but entire IP blocks for open common ports, in hopes of finding machines that are susceptible to existing attacks and, more importantly, of compiling lists of machines that will be susceptible to attacks in the future.

    An example: A person in China uses a machine to scan ranges of IP addresses, comes across yours, connects to port 22 TCP, and records the banner “SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu6”. In the not so distant future, a zero-day exploit for OpenSSH 5.3 appears, and this person gets hold of it. He spends 10 minutes writing a script that connects to every host on his list and installs a small backdoor, then runs it. Your machine has now been owned, and you probably don’t know it.

    (This is not just an example, by the way. It is extremely common for port 22 TCP to be swarmed by traffic from Eastern Europe and Asia.)

    Conversely, had your OpenSSH been configured to listen on, say, port 31382, your machine probably wouldn’t have made it onto our scanner’s list, simply because it is too expensive to scan all 65,535 ports on every machine in entire IP blocks (and, in some cases, firewalls block subsequent requests after detecting a port scan in progress).
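
    For illustration, the “reconnaissance” in the example above amounts to little more than the following sketch (the address is a placeholder from the documentation range):

        import socket

        # Connect to the SSH port and read the version banner, which the
        # server volunteers before any authentication takes place.
        with socket.create_connection(("203.0.113.10", 22), timeout=5) as conn:
            banner = conn.recv(256).decode("ascii", errors="replace").strip()

        print(banner)  # e.g. "SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu6"

    Multiply that by millions of addresses and you have exactly the kind of list our scanner is compiling.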

  • Hiding application names and version numbers in application banners

    Many applications—web servers, mail relays, VPN servers, content management solutions, et cetera—proudly broadcast their names and version numbers to any and all visitors. As in the previous example, this leaves the machine open to the same kind of “TODO-list-making” since it helps identify what software is being used, and whether it is out of date.
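
    Harvesting those banners is just as cheap. A minimal sketch (example.com is a placeholder):

        from urllib.request import urlopen

        # Many web servers announce their name and version in the Server
        # response header of every reply.
        with urlopen("http://example.com/") as response:
            print(response.headers.get("Server"))  # e.g. "Apache/2.2.14 (Ubuntu)"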

  • Keeping password hash digests secret

    Okay, I don’t see this one come up, understandably. But, wait. Why is that understandable for people who argue that “obscurity is not related to security”?

    There is no evidence that hash algorithms like bcrypt and SHA-512 (in e.g. PBKDF2) are breakable by any modern machinery, so why are we keeping password digests secret? (For the sake of argument, let’s say the passwords are relatively complex, and that each digest has its own salt so you can’t just compare them; a quick sketch of what I mean follows at the end of this example.) Or keeping our private key files for our online banking secret? Surely, hiding them is unnecessary? If our crypto is strong enough, we don’t need to worry about it, right?

    This is a scenario where the answer seems more obvious, and you are probably thinking, “Herp derp, Patrick. We don’t need to expose stuff like that for no reason.”

    Tell me: What is the difference between this and the previous examples, exactly?

    (It is, in fact, far more likely that a zero-day vulnerability for an otherwise secure application emerges than it is that one of the industry-standard crypto schemes is broken—so I guess we should actually be hiding everything about our applications and not care at all about our online banking keys.)

    A real world analogy: It’s unlikely that somebody could succeed in stealing your identity even if your social security number were freely available on your website. Nevertheless, it is generally considered a sensitive detail, so you probably wouldn’t put it there. I think the same consideration should go for application names and version numbers, and, of course, password digests.
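
    Here is the promised sketch of a salted digest, using Python’s standard hashlib; the iteration count is an arbitrary example, not a recommendation.

        import hashlib
        import os

        password = b"correct horse battery staple"

        # A fresh random salt per password means identical passwords do not
        # produce identical digests, so digests cannot simply be compared.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha512", password, salt, 100000)

        print(salt.hex(), digest.hex())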

(A few other, pretty obvious real world examples of when obscurity adds to security are: Camouflage, decoys, and witness protection.)

There are also times when obscurity can be harmful—where the pejorative does apply—namely:

  • Closed vs. open source software

    If we can’t see the source of the applications we are using, e.g. OpenSSH, how can we be sure they’re secure enough?

  • Cryptographic algorithms

    Similarly: If we don’t know the math behind the hash computation and file encryption we are using, how can we be sure they are secure enough?

In the case of open cryptographic algorithms and open source software we are relying on communities to continuously and thoroughly evaluate the security aspects of our applications, and, when the “good” community is larger than the “bad” community, it is often a very successful—superior, even—approach.

These are two different debates, though, and one should be careful to properly understand the distinction. Just because some respected security figure said something about obscurity in cryptographic algorithms (where it is not good) doesn’t mean you should tell the world everything about your network setup when doing so simply is not beneficial. (Note: Please let me reiterate that I am not saying you should rely on obscurity. You should not take security any less seriously just because you might have reduced your window of exposure.)

When it’s just you and your machine(s), exposing information that simply doesn’t need to be exposed, and then counting on everything being “secure enough”, doesn’t help your security.

There is a reason black box penetration testing is harder than white box penetration testing.

Update: Just to be clear, I am absolutely not suggesting you share your cryptographic hash digests with anyone. It was a tongue-in-cheek example to demonstrate the fallacy of the “exposing it doesn’t matter if it’s ‘secure’” attitude.

LastPass Disclosure Shows Why We Can't Have Nice Things

A few days ago, LastPass announced they would be forcing their users to change their master passwords in response to what was essentially “something weird”:

    We take a close look at our logs and try to explain every anomaly we see. Tuesday morning we saw a network traffic anomaly for a few minutes from one of our non-critical machines. These happen occasionally, and we typically identify them as an employee or an automated script.

    In this case, we couldn’t find that root cause. After delving into the anomaly we found a similar but smaller matching traffic anomaly from one of our databases in the opposite direction (more traffic was sent from the database compared to what was received on the server). Because we can’t account for this anomaly either, we’re going to be paranoid and assume the worst: that the data we stored in the database was somehow accessed.

LastPass acted exactly like we wish most companies would act: responsibly. And the media’s response? Declaring LastPass “hacked” and “vulnerable”, and placing them in the same category as Sony—who definitely were hacked—with sensationalist headlines like:

  • WARNING: Your Web Browser’s Master Password May Have Been Stolen – Change It Now
  • LastPass Has Been Hacked And Asking Everyone To Change Their Master Passwords
  • LastPass Hacked, Change of Master Password Urgent
  • LastPass Is Hacked – Change Your Master Password, But Don’t Panic
  • Should the LastPass, Sony hacks make you fear storing data in the cloud?

LastPass announced nothing more than that their recent statistics looked strange, and that because of this they wanted to stay on the safe side just in case there was a breach—although that was unlikely—and the press responded exactly as it would have if LastPass had been caught trying to cover up a confirmed breach.

(In the worst case scenario, a breach of LastPass’ data would reveal nothing more than master password hashes that are virtually uncrackable if the original password has just minimal complexity. Everything else, including information about individual websites and passwords, would be nothing more than an encrypted blob, the contents of which are inaccessible without the original password.)
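
To make that concrete, the general pattern looks roughly like the following sketch. This is a generic illustration of client-side key derivation, not LastPass’s actual implementation; the variable names and iteration counts are arbitrary.

    import hashlib
    import os

    master_password = b"hunter2"   # only ever exists on the client
    salt = os.urandom(16)          # stored alongside the encrypted vault

    # Key used to encrypt the vault blob locally; never sent to the server.
    vault_key = hashlib.pbkdf2_hmac("sha256", master_password, salt, 100000)

    # What the server stores for authentication: a value derived from the
    # key, which is useless for decrypting the vault blob itself.
    login_hash = hashlib.pbkdf2_hmac("sha256", vault_key, master_password, 1)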

You can argue about whether it’s wise to store your passwords online, but at least treat the few companies who act right, right.

By acting the way they were supposed to, LastPass only hurt themselves — and that’s why we can’t have nice things.