The Domain Name System (DNS) protocol isn’t strictly essential in networking, but it sure makes everyone’s life a great deal easier. Without it, we’d have to use cumbersome numeric IP addresses (XX.XX.XX.XX) instead of Fully Qualified Domain Names (FQDNs) like www.example.com. I don’t know about you, but that would be frustrating for me, as I tend to transpose numbers for some unknown reason. The relationship between an IP address and an FQDN is also important to network engineers, who use it to check whether DNS is working properly. If an FQDN doesn’t resolve but its associated IP address still works, it’s a pretty safe bet there’s a DNS-related problem. Sounds simple, but the relationship is easily forgotten when an entire facility is down and the CEO is upset that e-mail isn’t working.
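That name-versus-address check is easy to script. Here’s a minimal sketch using only Python’s standard library (the hostname in the example is a placeholder; pair the check with a ping or TCP connection test against the host’s known IP address to complete the comparison):

```python
import socket

def resolve(fqdn):
    """Return the IPv4 address for an FQDN, or None if DNS resolution fails."""
    try:
        return socket.gethostbyname(fqdn)
    except socket.gaierror:  # raised when the name cannot be resolved
        return None

# If resolve() returns None but the host's known IP address still answers
# (e.g., to ping or a TCP connect), suspect DNS rather than the host itself.
print(resolve("www.example.com"))  # an address string, or None if DNS is broken
```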
I know I said that DNS isn’t essential in modern networks, but that’s only true at a very fundamental level. Practically everything is set up to use DNS, so any disruption to it immediately creates problems and gets the attention of network engineers fast: when DNS is down, it seems as if nothing works. Or so I thought.
DNS problems can be sneaky
A client of mine called, and I could tell right away it wasn’t a social call. In a panic, he explained that no one was receiving any external e-mail (POP3 or IMAP). Internal e-mail worked, and so did outgoing e-mail (SMTP). Web browsing worked, and so did VPN-accessed applications. I started to get a sinking feeling about the Exchange server, so I immediately set up a VPN tunnel into the facility. After running a few tests and checking the error logs, I was a bit perplexed as to why internal and outgoing e-mail would work but incoming e-mail wouldn’t. Spam filters? Hmm. I disabled the spam filters, and nothing changed.
After running several more tests, I started to understand what was happening. The internal DNS servers were working properly, allowing outgoing e-mail and Web browsing to resolve FQDNs. What about external incoming…? Before I could finish that sentence, I had an ah-ha! moment. I went to my usual Internet utilities Web site and quickly requested the WHOIS and DNS records for the client’s domain. There it was: the canonical name wasn’t resolving to an IP address, and the name server and MX records were missing as well. Okay, why was that? Thankfully, another ah-ha! moment followed. The client had recently instructed the ISP to discontinue hosting an unused “.net” domain name. A quick call to the ISP confirmed my suspicions: the ISP thought my client wanted all hosting to stop, which is why the records for the “.com” domain were missing as well. After significant begging, the ISP kindly rushed a work order through, and within the hour external e-mail started arriving again.
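From an outside vantage point, missing public records like these are easy to confirm with a query tool such as dig (for instance, `dig +short MX yourdomain.com` from a host that doesn’t use the internal DNS servers). The split symptoms also make sense once you model the two views of the zone. Here’s a toy sketch of the incident; all names and addresses below are placeholders, not the client’s real data:

```python
# Toy model of the outage: the internal DNS servers still held the zone,
# but the ISP had deleted the records from the public zone.
internal_zone = {
    ("example.com", "MX"): "mail.example.com",
    ("www.example.com", "A"): "203.0.113.10",  # TEST-NET-3 documentation address
}
public_zone = {}  # ISP removed the NS, MX, and A records

def lookup(zone, name, rtype):
    """Return the record value, or None if the zone has no such record."""
    return zone.get((name, rtype))

# Inside hosts query the internal servers, so outbound mail and browsing work:
assert lookup(internal_zone, "example.com", "MX") == "mail.example.com"
# Outside mail servers see only the public zone, so inbound e-mail fails:
assert lookup(public_zone, "example.com", "MX") is None
```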
DNS is important
I hope my example points out how utterly reliant we are on DNS working properly. Most network engineers I know constantly fret about the health (up-to-date software) and accuracy (correct DNS records) of their DNS servers. Sure, DNS server health makes sense, but why is accuracy even a consideration? Well, funny you should ask.
There’s a well-known flaw in the DNS protocol that allows attackers to replace valid DNS content with entries of the attacker’s own choosing by using a “cache poisoning attack.” The poisoned DNS server will then offer incorrect content to DNS queries. This technique, for example, could redirect a Web browser to a malicious Web site that mimics the official Web site. After the redirection, any number of attacks could be made on the unsuspecting device that made the original query.
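The weakness being exploited is that, classically, little more than a 16-bit transaction ID ties a DNS reply to the query that triggered it: an attacker who floods forged replies with guessed IDs, racing the legitimate answer, occasionally wins. Here’s a simplified simulation of that race (a sketch of the concept, not a working exploit; real attacks also involve matching ports and timing):

```python
import random

TXID_SPACE = 2**16  # classic DNS: a 16-bit transaction ID protects each query

def poison_attempt(rng, forged_replies):
    """Simulate one race: the resolver sends a query with a random transaction
    ID, and the attacker floods forged replies with guessed IDs. The first
    reply with a matching ID is accepted, so the attempt succeeds if any
    guess matches before the real answer arrives."""
    real_txid = rng.randrange(TXID_SPACE)
    guesses = {rng.randrange(TXID_SPACE) for _ in range(forged_replies)}
    return real_txid in guesses

rng = random.Random(42)  # fixed seed so the experiment is repeatable
trials = 2000
wins = sum(poison_attempt(rng, forged_replies=100) for _ in range(trials))
print(f"{wins} of {trials} races poisoned the cache")
```

With 100 forged replies per query, each race succeeds with probability of roughly 100/65536, so a patient attacker who can trigger many queries eventually lands a poisoned entry.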
DNS cache poisoning just got easier
Most network professionals weren’t that concerned about cache poisoning because it’s non-trivial to implement. Last week, at least as far as the public knew (I’ll explain later), that all changed. New research has led to exploits that leverage the same old vulnerability in a unique way, improving the effectiveness of cache-poisoning attacks and simplifying the attack methodology.
Dan Kaminsky (director of penetration testing at IOActive) reported that he accidentally uncovered a method that would allow an attacker to easily disrupt the Internet by attacking DNS servers. For a concise explanation of the vulnerability, listen to the NetworkWorld podcast “DNS Flaw-Fix Hype Addressed,” in which senior editor Denise Dube interviews Kaminsky about his findings. Dube and Kaminsky also talk about the events that took place after he found the flaw.
DNS fix elicits controversy
The solution developed to rectify the vulnerability was rather unprecedented and, to say the least, interesting. The major parties concerned with DNS (16 entities in total, including ISC, CERT, Cisco, Sun, Microsoft, and major ISPs) acted in unison and under a cloak of secrecy to create the fix over the past several months; hence my earlier comment about the vulnerability only being made public last week. For more about the controversy, check out the TechWorld article “Hackers Gang Up on Kaminsky over DNS Flaw.” David Dagon, a DNS researcher from Georgia Tech, adds validity with the following comment in that article:
“The issue is urgent and should be patched immediately. With sparse details, a few have questioned whether Dan Kaminsky had repackaged older work in DNS attacks. It is not feasible to think that the world’s DNS vendors would have patched and announced in unison for no reason.”
The controversy deepens because Kaminsky will not publish any details of the DNS flaw until his Black Hat presentation next month. Some say waiting until Black Hat is grandstanding, but Kaminsky makes a valid argument: waiting until then gives engineers almost a month to get vulnerable DNS servers patched. In the same vein, security researchers are concerned that Kaminsky didn’t ask for any peer review before announcing the vulnerability. In his defense, Kaminsky has briefed a few well-known security researchers (Thomas Ptacek of Matasano Research and Paul Vixie of the Internet Systems Consortium), and they readily admit that his findings are correct.
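It’s worth a word on why patching, rather than a protocol overhaul, buys time. The coordinated fix reportedly centers on randomizing the UDP source port of outgoing queries in addition to the 16-bit transaction ID, forcing a blind attacker to guess both values at once. A rough back-of-the-envelope calculation (the usable port count below is an assumption; exact ranges vary by implementation):

```python
txids = 2**16          # possible DNS transaction IDs
ports = 2**16 - 1024   # rough count of usable ephemeral source ports (assumption)

before = txids         # pre-patch: a blind spoofer guesses the transaction ID only
after = txids * ports  # post-patch: must guess transaction ID AND source port

print(f"blind-spoofing search space grows ~{after // before:,}x")  # → ~64,512x
```

That doesn’t eliminate the underlying flaw, which is why patching quickly still matters; it just makes the guessing game tens of thousands of times harder.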
I personally admire Kaminsky. He could very well have taken advantage of his finding and sold the information to the highest bidder, as noted in the following quote from the NetworkWorld article “Major DNS Flaw Could Disrupt the Internet”:
“Jeff Moss, founder of the Black Hat conference, applauded Kaminsky for treating the DNS discovery he made with a sense of responsible disclosure, rather than selling the information to the highest bidder, a practice growing increasingly common. If he had decided to sell it, he would have made hundreds of thousands of dollars.”
My ultimate goal with this article is to raise awareness of the critical nature of DNS. I sincerely hope that everything possible is being done to harden it. Without DNS, our Internet life as we know it would be in dire straits.
Michael Kassner has been involved with wireless communications for 40-plus years, starting with amateur radio (K0PBX) and now as a network field engineer and independent wireless consultant. Current certifications include Cisco ESTQ Field Engineer, CWNA, and CWSP.