Why there's no such thing as a zero-day vulnerability

The term "zero day" (or "0-day" or "0day") is getting a lot of use these days. Much of the time, it's being used incorrectly.

Even in venues generally frequented by people knowledgeable about IT security, such as the bugtraq mailing list, I see people using the term "zero-day vulnerability." Unfortunately, there's no such thing in any meaningful sense of the term "zero day."

Bugs in software that expose a security vulnerability are discovered in one of two ways:

  1. The "good guys" find it: In this case, someone discovers a vulnerability and makes it known to people who can fix it and/or people affected by it. They may also produce a patch or workaround and pass it up the line to the software maintainer. They also might release a patch or workaround to the general public, especially if a vendor is being particularly slow to incorporate a patch into the software's bug-fix system. Unfortunately, with vendors increasingly notorious for "sitting on" vulnerabilities and trying to discredit anyone who attempts to get something done about a vulnerability, it's increasingly common for someone who discovers a vulnerability to feel it necessary to develop a proof-of-concept exploit as a defensive mechanism, as evidence that he or she is not just a manifestation of Chicken Little.
  2. The "bad guys" find it: When a vulnerability is discovered by a malicious security cracker, you may simply assume that an exploit is the whole point of finding the vulnerability in the first place. Generally, vendors and software maintainers are not told about the vulnerability, and instead the exploit implementation is either immediately employed or sold to the highest bidder — who then, in turn, generally employs it immediately. This leads to attacks occurring "in the wild" — outside of a laboratory environment, where they affect production systems — before the software maintainers are even aware there's a vulnerability. Under such circumstances, it's normally the case that the vulnerability is discovered by the "good guys" only because they find evidence that it's being actively exploited.

In the first case, what we have is a vulnerability, eventually (we hope) patched, possibly with a workaround in the meantime, and possibly with a proof-of-concept exploit that's specifically engineered to demonstrate the danger of the vulnerability without being dangerous to production systems itself.

In the second case, what we have is a vulnerability for which there is an active exploit, used to attack and compromise threatened systems before there's a patch available and most likely before software maintainers are even aware there's a vulnerability.

A vulnerability is just a vulnerability. There are three categories one might apply to these things:

  1. Patched
  2. Unpatched
  3. Undiscovered

Other than that, there isn't much to say. The concept of an "undiscovered" vulnerability is academic and largely uninteresting in and of itself — what nobody knows exists is of little import. Patched vulnerabilities are of historical interest only, except to those who haven't applied the patch. That means that, for the most part, the interesting category of vulnerability is "unpatched." When someone says there's a "zero-day" vulnerability, what he or she is usually saying is "I know about an unpatched vulnerability, and I'm misusing a buzzword I don't really understand to try to make it sound more exciting."
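
If it helps to see that taxonomy stated precisely, here's a minimal sketch in Python (the type and its names are mine, chosen purely for illustration — this reflects no standard security library):

    from enum import Enum, auto

    class VulnStatus(Enum):
        """The only three states a vulnerability can occupy."""
        UNDISCOVERED = auto()  # academic: what nobody knows exists is of little import
        UNPATCHED = auto()     # the interesting case: known, but no fix released yet
        PATCHED = auto()       # historical interest, except to those who haven't applied the fix

Nothing in that enumeration needs, or could accommodate, a "zero day" state.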

Exploits, however, can usefully be categorized as follows:

  1. Proof-of-concept exploit: This is an exploit developed just to prove that the vulnerability exists and that the software really is open to attack in some manner. It's developed by someone who wishes to make it abundantly clear that nobody's crying wolf, that there's a real concern for production systems. It's not a threat to users of the software, however, unless it's used as the basis for an active exploit by a malicious security cracker.
  2. Active exploit: An exploit is considered "active" when it's being used — actively, by malicious security crackers — to compromise production systems. This is also known as an exploit "in the wild," but the term "active exploit" is more specific in its reference to the immediate threat it poses. An exploit may remain active after a patch is released, in cases where large numbers of systems remain unpatched despite the availability of a fix for the vulnerability that makes that exploit possible.
  3. Dead exploit: An exploit may be considered "dead" or "inactive" once the vulnerability that makes it possible has been effectively patched. It's debatable whether many exploits are ever truly "dead" within the extended lifespan of the affected software, of course, since some production systems may never have the patch against the exploited vulnerability applied.
  4. Zero-day exploit: Finally, we get to the heart of the matter: the zero-day exploit. This is an exploit that is active, "in the wild," actually affecting production systems. To qualify as a zero-day exploit, however, it must also be unpatched. In the world of security vulnerabilities, "day one" occurs when a patch is released to fix a vulnerability. Anything before that, because we don't really know how long it will be until a patch is released, is "day zero," and thus any active exploits during that time are "zero-day exploits" (the sketch following this list makes the distinction concrete).
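
For those who think better in code, here's a minimal sketch of that classification, again in Python with names I've invented for illustration (this is not any real library's API). It derives an exploit's status from the only two facts that matter: whether the exploit is being used in the wild, and whether the underlying vulnerability has been patched:

    from enum import Enum, auto

    class ExploitStatus(Enum):
        PROOF_OF_CONCEPT = auto()  # demonstrates the flaw; not itself a threat
        ACTIVE = auto()            # used against production systems after a patch exists
        DEAD = auto()              # patched and no longer in active use
        ZERO_DAY = auto()          # the special case of "active": no patch exists yet

    def classify(actively_exploited: bool, patch_released: bool) -> ExploitStatus:
        """Derive an exploit's status from the two facts that actually matter."""
        if actively_exploited:
            # "Day one" begins when the patch ships; before that, an active
            # exploit is by definition a zero-day exploit.
            return ExploitStatus.ACTIVE if patch_released else ExploitStatus.ZERO_DAY
        return ExploitStatus.DEAD if patch_released else ExploitStatus.PROOF_OF_CONCEPT

Notice that no combination of inputs produces a "zero-day vulnerability": "zero day" names a status that only an exploit can have.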

A virus often falls into an interesting category of exploit status, thanks to the way the security software industry handles viruses: a virus can simultaneously be considered both a dead exploit and a zero-day exploit. This is because all viruses are exploits of certain types of vulnerabilities that lend themselves to exploitation by malicious mobile code. While most viruses are addressed by the virus definitions used in signature-based virus scanners very quickly after they're detected in the wild, they're often ignored by the vendors of the software the viruses actually target. As such, the "industry standard" workaround for virus vulnerabilities effectively patches against a given exploit (and perhaps very close relatives of it), but the underlying vulnerability itself is left unpatched.

With dismaying frequency, I observe the term "zero day" being used to refer to vulnerabilities of all stripes, apparently because it sounds exciting. Alarming terms like "zero day" rile up readers, grabbing their collective attention and making them want to read more. Usually, the people misusing the term this way don't really think about what it means when they attach it to the word "vulnerability," and don't consider the fact that there's already a term for a vulnerability that hasn't been addressed by the vendor: "unpatched."

So that security professionals can communicate effectively with one another, with clients, and with software maintainers, we need terms that refer to specific cases of vulnerability and exploit status. One of the terms we use to specify a distinct case of exploit status is "zero day," meaning an active exploit for an unpatched vulnerability (generally, an exploit whose active use is the very reason we're aware of the vulnerability at all). When people misuse a term, they contribute to the dilution of its meaning, and in a field like IT security, where the precise meaning of technical terms is critical to effective practice, that dilution can severely hamper our ability to do our jobs.

It's not only true that there's technically no such thing as a "zero-day vulnerability" but also that using such a non-term can detract from the ability to meaningfully respond to zero-day exploits.

About Chad Perrin

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.
