DNS: Patched but not totally fixed

DNS isn't out of the woods yet; there's a proof-of-concept exploit that successfully poisons the cache of a fully patched and up-to-date DNS server. To keep everyone up to speed, I'd like to describe the attack vector and some of the possible fixes we'll be hearing about.

In an earlier article, "DNS: The Internet Dodged a Bullet, Thankfully," I described what many are calling a very serious design flaw in the DNS protocol. Quite simply, the bug allows attackers to poison a name server's DNS cache in a few minutes. After that, attackers can redirect unsuspecting Internet users to malicious Web sites set up to harvest personal information or deliver malware.

Fortunately, DNS developers rapidly came up with a fix, and all major DNS implementations, including those from Cisco, ISC (BIND), and Microsoft, were patched. The patch increased the complexity of spoofing a DNS query response by randomizing the source port as well as the Transaction ID, making it as much as 65,535 times more difficult to poison the DNS cache.
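
To see why, here's a quick back-of-the-envelope sketch in Python. The field sizes come from the protocol; everything else is arithmetic:

    TXID_BITS = 16   # Transaction ID: 65,536 possible values
    PORT_BITS = 16   # source port: up to 65,536 possible values

    # Before the patch, only the Transaction ID was unpredictable.
    before = 2 ** TXID_BITS
    # After the patch, a forged reply must match Transaction ID and port.
    after = 2 ** (TXID_BITS + PORT_BITS)

    print(f"Guesses to cover the ID alone:  {before:,}")
    print(f"Guesses to cover ID plus port:  {after:,}")
    print(f"Added difficulty factor:        {after // before:,}x")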

New attack vector

Well, it appears that's not enough. A Russian physicist, Dr. Evgeniy Polyakov, managed to trick a fully patched BIND name server into accepting an incorrect IP address in a query response. Dr. Polyakov published the details in a blog post titled "Successfully Poisoned the Latest BIND with Fully Randomized Ports!" Here's how he describes the attack:

"BIND used fully randomized source port range, i.e. around 64000 ports. Two attacking servers, connected to the attacked one via GigE link, were used, each one attacked 1-2 ports with full ID range. Usually attacking server is able to send about 40-50 thousands fake replies before remote server returns the correct one, so if port was matched probability of the successful poisoning is more than 60%.

Attack took about half of the day, i.e. a bit less than 10 hours.

So, if you have a GigE LAN, any trojaned machine can poison your DNS during one night... "

Ten hours is a huge improvement, considering that it takes less than a minute to poison an unpatched name server. Some security experts are of the mindset that the patch still introduces enough entropy. Others say that Dr. Polyakov's test was unrealistic: two servers hammering a recursive name server over a GigE link isn't a typical Internet connection, and that kind of bandwidth certainly isn't readily available to most attackers.
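
For what it's worth, the numbers in Dr. Polyakov's write-up hang together. A rough sanity check, where the one-second window length is my own assumption, not his:

    # Back-of-the-envelope check of the "a bit less than 10 hours" claim.
    ports = 64000          # size of the randomized source-port range
    ports_per_window = 1   # ports covered with the full ID range per attempt
    window_seconds = 1.0   # assumed time to fire ~45,000 fake replies (illustrative)

    # On average, the right port turns up after trying about half the range.
    attempts = (ports / ports_per_window) / 2
    hours = attempts * window_seconds / 3600
    print(f"~{attempts:,.0f} attempts, roughly {hours:.1f} hours")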

Dr. Polyakov has a convincing reply. To get the required bandwidth, computers on the same network as the DNS servers could be compromised with trojans. Those machines would then transmit the 40,000 to 50,000 fake query responses needed for each attempt, doing so during an evening or weekend to help evade detection. Another possibility is a botnet, whose consolidated bandwidth would be sufficient to carry out the attack.

I submit that even though Dr. Polyakov's proof-of-concept test may not be realistic, the very fact that it's possible to subvert a recursive name server at all is disconcerting. As I've mentioned more than a few times, if there's even the slightest chance that DNS query responses are corrupt, it introduces all sorts of doubt about whether a displayed Web page is the real one. I sense that most DNS experts feel the same way, because they're working on new fixes that should strengthen DNS to the point where cache poisoning is virtually impossible.

Possible fixes that will not work

If you read the comments on Dr. Polyakov's blog, you'll see an active dialogue promoting many possible solutions. Several are interesting because they attest to the overall condition of DNS, especially the repercussions of changing the protocol. I'd like to start with the fixes that will not work, as they illustrate just how fragile DNS is:

Debouncing is a process in which the querying name server sends the authoritative name server the exact same query twice. It's a strange term, but a sound approach, because it lets the querying name server check the validity of the query responses. If both responses match, it's a good bet they're valid. If they don't match, a cache-poisoning attack is probably under way, and other, more involved processes could be used to determine which response to accept.
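
As a rough sketch of the idea, here's what debouncing might look like on the querying side. The send_query() helper is hypothetical, stubbed out with a canned answer so the example runs:

    import random

    # Hypothetical stand-in for the resolver's real UDP lookup path; it
    # simply returns a canned answer so the sketch runs end to end.
    def send_query(name, txid, src_port):
        return {"www.example.com": "192.0.2.10"}.get(name)

    def debounced_lookup(name):
        # Issue the identical query twice, each with fresh randomness.
        first = send_query(name, random.getrandbits(16), random.getrandbits(16))
        second = send_query(name, random.getrandbits(16), random.getrandbits(16))
        if first == second:
            return first  # matching answers are very likely legitimate
        # Mismatched answers suggest someone is racing us with forged replies.
        raise RuntimeError(f"possible cache-poisoning attempt for {name}")

    print(debounced_lookup("www.example.com"))  # 192.0.2.10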

Using TCP/IP and its three-way handshake would easily eliminate cache poisoning. The handshake verifies both endpoints of the connection, unlike UDP (used by DNS), which accepts any query response as long as the returning packet carries the correct IP address and port number.
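
DNS over TCP already exists in the protocol (it's used today for oversized responses and zone transfers). Here's a minimal sketch of an A-record lookup over TCP; the resolver address, 192.0.2.53, is a placeholder you'd swap for a real one:

    import socket
    import struct

    def build_query(name, txid=0x1234):
        # Header: ID, flags (standard query, recursion desired), one question.
        header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
        question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
        return header + question

    msg = build_query("www.example.com")
    # The three-way handshake happens inside create_connection().
    with socket.create_connection(("192.0.2.53", 53), timeout=5) as s:
        s.sendall(struct.pack(">H", len(msg)) + msg)  # TCP DNS adds a 2-byte length
        (length,) = struct.unpack(">H", s.recv(2))
        reply = s.recv(length)
        print(f"got {len(reply)} bytes; txid echoed: {reply[:2] == msg[:2]}")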

Great, either approach sounds like a good solution, and neither should be that hard to implement. Except there's a catch. Dan Kaminsky and other DNS researchers point out that:

"Debouncing is similar to the 'run all DNS traffic over TCP' approach — seems good, up until the moment you realize you'd kill DNS dead. There's a certain amount of spare capacity in the DNS system — but it is finite, and it is stressed. Absolutely there's not enough to handle a 100% increase in traffic over the course of a month."

To explain: the existing worldwide DNS infrastructure (using UDP and requiring only two packets per query/response exchange) is already consuming more than 50 percent of the available capacity. The debounce approach, by doubling the number of packets, would immediately saturate the network, bringing the Internet to a grinding halt. And if the debounce approach has that effect, there's no way TCP/IP would work; it requires up to seven packets per query/response exchange. Besides not working, these solutions highlight how the existing DNS system may face other problems if the number of users continues to grow at the present rate.
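
Putting those packet counts into numbers makes the capacity problem obvious:

    # Rough capacity comparison, using the packet counts cited above.
    UDP_PACKETS = 2        # one query, one response
    DEBOUNCE_PACKETS = 4   # everything sent twice
    TCP_PACKETS = 7        # handshake + query + response + teardown (worst case)

    utilization = 0.50     # share of DNS capacity already in use today
    for label, pkts in [("UDP", UDP_PACKETS), ("debounce", DEBOUNCE_PACKETS),
                        ("TCP", TCP_PACKETS)]:
        load = utilization * pkts / UDP_PACKETS
        print(f"{label:9s} {pkts} packets/lookup -> ~{load:.0%} of capacity")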

Possible fixes that should work

DNS Security Extensions (DNSSEC) is a series of IETF specifications added to the original DNS protocol specifically to increase security. DNSSEC uses public-key infrastructure, allowing authoritative name servers to digitally sign the query responses they return. The querying name server can then check the digital signature against key information it already holds and determine whether the response is authentic.
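
To make the principle concrete, here's a much-simplified sign-and-verify sketch using the third-party Python cryptography package. Real DNSSEC involves DNSKEY and RRSIG records and a chain of trust anchored at the root, so treat this purely as an illustration of the public-key idea:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The zone operator holds the private key; resolvers learn the public key.
    zone_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    record = b"www.example.com. 3600 IN A 192.0.2.10"
    signature = zone_key.sign(record, padding.PKCS1v15(), hashes.SHA256())

    # A resolver verifies the answer against the zone's public key. Any
    # forged or altered record makes verify() raise InvalidSignature.
    zone_key.public_key().verify(signature, record, padding.PKCS1v15(), hashes.SHA256())
    print("signature checks out; a forged reply could not produce it")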

DNSSEC includes several other security enhancements that protect DNS from other threats as well. You may be surprised to learn that DNSSEC has been around for almost 10 years, which raises the obvious question: why hasn't it been implemented? Well, that's where it gets a bit sticky. DNSSEC is very complex and has implementation issues, and deploying it worldwide will require significant work. Most DNS experts feel that DNSSEC, or a revised version of it, will eventually be used; it just can't be deployed and tested soon enough to lessen the very real, immediate threat of cache poisoning.

0x20 proposal

I learned of the 0x20 proposal (a very intriguing idea) from Dan Kaminsky and Steve Gibson. They both mentioned how the 0x20 proposal would add significant entropy to the DNS query and response. If you remember, the whole point of the recent changes to DNS was to increase the randomness in a DNS query and response. The more randomness, the harder it is for an attacker to slip a matching fake response past the querying name server before the authoritative name server's real response arrives.

The 0x20 proposal makes use of the Internet draft "Use of Bit 0x20 in DNS Labels to Improve Transaction Identity." Dan Kaminsky explains how this is possible:

"Basically, this idea from (I think) David Dagon and Paul Vixie notices that DNS is case insensitive (lower case is equal to upper case) but case preserving (a response contains the same caps as a request). So, we can get extra bits of entropy based on asking for wWw.DOXpaRA.cOM, and ignoring replies containing, WWW.doxpara.COM, etc."

This means there would be three components of a DNS query that introduce randomness: the Transaction ID (16 bits), the port number (16 bits), and now the queried Fully Qualified Domain Name (FQDN). Using the FQDN for added entropy is unique in that the total amount of randomness depends entirely on the length of the name: each letter contributes one bit, so longer domain names have more entropy. For example, an FQDN beginning with www and ending in .com adds at least 6 bits of entropy, three from www and three from .com.
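
A minimal sketch of the mechanics, with function names of my own invention: randomize the case of every letter before sending, then insist the reply echoes the casing back exactly:

    import random

    def encode_0x20(name):
        """Randomly flip the case of each letter; digits and dots pass through."""
        return "".join(c.upper() if c.isalpha() and random.getrandbits(1) else c.lower()
                       for c in name)

    def entropy_bits(name):
        """Each letter in the name contributes one extra bit of entropy."""
        return sum(1 for c in name if c.isalpha())

    def reply_matches(sent, received):
        """Accept the reply only if it echoes the exact casing we sent."""
        return sent == received

    query = encode_0x20("www.doxpara.com")
    print(query)                                    # e.g. wWw.DOXpaRA.cOM
    print(entropy_bits(query))                      # 13 extra bits for this name
    print(reply_matches(query, query))              # True: faithful echo accepted
    print(reply_matches(query, "WWW.doxpara.COM"))  # almost surely False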

What makes the 0x20 proposal especially appealing is that it doesn't require any changes to the authoritative name servers, which already preserve case by returning the FQDN to the querying server with each letter exactly as it was received. Only the DNS software on recursive name servers (already patched once) would have to be updated: all that's required is to randomize the case of outgoing queries and to check that the letters in the returned FQDN match the casing that was sent. Besides being a simple fix, the 0x20 proposal doesn't consume any additional bandwidth, and that's a significant plus.

Final thoughts

As I continue researching DNS and its underlying processes, the fragility of the DNS protocol becomes ever more apparent, which is alarming when you consider how utterly reliant the Internet is on DNS working properly. Ultimately, DNS experts seem resolved to have an improved variation of DNSSEC securing the protocol. I suspect the 0x20 proposal will find its way into DNS long before that, especially with the publicity surrounding Dr. Polyakov's successful cache poisoning of fully patched BIND name servers.


Michael Kassner has been involved with wireless communications for 40 plus years, starting with amateur radio (K0PBX) and now as a network field engineer for Orange Business Services and an independent wireless consultant with MKassner Net. Current certifications include Cisco ESTQ Field Engineer, CWNA, and CWSP.


