Pundits may argue whether the release of the Nimda virus was a terrorist plot to cripple Microsoft-centric U.S. computer networks and world capitalism. But for most network administrators, there is no doubt that Nimda was a massive attack on their networks.
TechRepublic’s IT Director Troy Atwood and his staff were completely focused on the battle several hours after the release of the virus around 9:00 A.M., Sept. 18, 2001, a week after the New York City and Washington, D.C. terrorist attacks.
At the time of this writing, several days after the Nimda attack, Atwood still believes his network is vulnerable to new strains of the virus until Microsoft releases a strong patch that deals with it specifically. He is hoping that the new patch will be more effective than the latest IIS, Code Red, and IE patches, which have proved unsuccessful in keeping Nimda at bay.
In this week’s From the Trenches column, we’ll follow Atwood’s trail as he battles the Nimda worm. As you’ll see, even the best-laid plans and responses to a threat can sometimes go astray. As he chronicles each turn of events, Atwood adds some commentary and summarizes the lessons learned from this incident.
Battle stations, everyone
Atwood’s diary begins in the late morning of Tuesday, Sept. 18:
11:30 A.M. EDT: We begin receiving reports of a new worm. These reports come via a CERT advisory and user group activity. We check the Web logs on servers with port 80 exposed to the Internet and find attempts from external addresses to infect the servers. In the logs, the attempts appear as HTTP GET requests trying to execute cmd.exe. The servers at this time do not exhibit any known symptoms.
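Probes of this kind leave a distinctive trail in IIS logs. As a rough illustration (the log layout, field positions, and sample entries below are assumptions, not TechRepublic’s actual data; IIS typically logs in W3C Extended format), a short script can tally which external addresses are attempting the cmd.exe/root.exe exploit:

```python
# Hypothetical sketch: count Nimda-style probe attempts per source IP in
# an IIS log. Assumes W3C Extended format with the client IP as the third
# whitespace-separated field; adjust ip_field for your log layout.
import re
from collections import Counter

# Signatures Nimda-era probes left in Web logs: directory-traversal
# requests trying to reach cmd.exe, or the root.exe backdoor.
PROBE = re.compile(r"(cmd\.exe|root\.exe)", re.IGNORECASE)

def count_probes(log_lines, ip_field=2):
    """Return a Counter mapping client IP -> number of probe attempts."""
    hits = Counter()
    for line in log_lines:
        if line.startswith("#"):        # skip W3C header lines
            continue
        if PROBE.search(line):
            fields = line.split()
            if len(fields) > ip_field:
                hits[fields[ip_field]] += 1
    return hits

# Illustrative log entries (fabricated for the example).
sample = [
    "#Fields: date time c-ip cs-method cs-uri-stem sc-status",
    "2001-09-18 11:31:02 10.0.0.5 GET /scripts/..%255c../winnt/system32/cmd.exe 404",
    "2001-09-18 11:31:03 10.0.0.5 GET /scripts/root.exe 404",
    "2001-09-18 11:32:10 10.0.0.9 GET /index.html 200",
]
for ip, n in count_probes(sample).most_common():
    print(ip, n)
```

A tally like this makes it easy to separate routine traffic from attack traffic and to see which external hosts are hammering the servers hardest.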
1:00 P.M. EDT: There is increased network and router traffic suggesting that we might have a problem. We begin to research the problem and shut down nonessential IIS services. Every TechRepublic Windows 2000 Server has IIS installed by default to function as a client for Terminal Services. Also, TechRepublic’s developers use IIS on their Windows 2000 desktops as part of their job functions.
4:00 P.M. EDT: Trend Micro releases a new pattern file (No. 941). The update to our central Trend server fails due to the high Internet bandwidth usage on the Trend site. We fix the update problem by deleting incomplete files that were fooling the program into thinking we already had the updates, and then we begin the automated notification of all the client machines. We start installing Trend Server Protect (another Trend Micro product, along with Viruswall and ScanMail) on all of the TechRepublic servers and updating the existing pattern files to the latest release. Previously, Server Protect was installed only on selected file servers because of processor overhead concerns on other servers. We decided the overhead was negligible compared to the cost of cleaning the virus off. We update the pattern files on the Trend Viruswalls and the ScanMail services on the Exchange servers. We set the virus scanners to delete a file that cannot be cleaned, instead of renaming it with a .vir extension.
4:33 P.M. EDT: We send out the first e-mail warning users of the problem and giving them instructions to upgrade to Windows 2000 Service Pack 2 and the latest Code Red patch.
6:30 P.M. to 12:30 A.M. EDT: We update the remaining Win2K servers to SP 2 and apply the latest patches. Based on the virus detect data reported by our central Trend server, we begin to systematically hunt down infected workstations. We upgrade all those workstations to Win2K SP 2 with the latest patches, we upgrade Internet Explorer to version 5.5 SP 2 with all of its security patches, and we verify that the virus pattern files are updated.
This, in combination with several reboots and forced virus scans, quells most of the activity. We also report to the CNET group the machines outside our domain that are trying to reinfect ours. We are seeing several infected e-mails on the Exchange Internet Mail Service (IMS). We suspect that internal machines are sending SMTP mail directly to the IMS connector, bypassing the Trend Viruswalls. We adjust the scanner for only the Exchange servers with an IMS to see if we can figure out which machines are sending directly to the Exchange servers. (I’m thinking of changing the SMTP port for the IMS to a nonstandard port so it will only accept mail from Viruswall.)
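Spotting the machines that bypass the Viruswall comes down to watching who opens port 25 connections straight to the IMS. As a sketch (the host addresses, log format, and sample entries below are invented for illustration; a real version would parse your firewall or router connection logs), the check looks like this:

```python
# Hypothetical sketch: flag internal hosts that open SMTP (port 25)
# connections directly to an Exchange IMS, bypassing the Viruswall.
# All addresses and the "src dst port" log format are assumptions.

IMS_HOSTS = {"10.0.1.20", "10.0.1.21"}    # Exchange servers running an IMS
VIRUSWALLS = {"10.0.2.10", "10.0.2.11"}   # only these should speak SMTP to the IMS

def bypassing_hosts(conn_log):
    """Return source IPs that reached an IMS on port 25 without a Viruswall in between."""
    offenders = set()
    for line in conn_log:
        src, dst, port = line.split()
        if port == "25" and dst in IMS_HOSTS and src not in VIRUSWALLS:
            offenders.add(src)
    return offenders

# Illustrative connection-log entries (fabricated for the example).
sample = [
    "10.0.2.10 10.0.1.20 25",   # Viruswall -> IMS: expected path
    "10.0.5.77 10.0.1.20 25",   # workstation talking straight to the IMS
    "10.0.5.77 10.0.1.20 80",   # non-SMTP traffic, ignored
]
print(bypassing_hosts(sample))
```

The same logic also explains Atwood’s fallback idea: move the IMS to a nonstandard SMTP port, and any host not reconfigured to use it (i.e., anything other than the Viruswalls) simply can’t deliver mail directly.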
It should have been fixed, but …
With all the server and client machines updated with the most recent patches and service packs, the network should be locked down and secure. But doing your best isn’t always going to solve the problem.
Wednesday, Sept. 19
5:00 A.M. EDT: Trend Micro releases a new pattern file (No. 942), and all workstations running and connected to the network receive it automatically.
7:00 A.M. EDT: We update all the servers to the new pattern file and set the automatic deploy on the Trend server to check Trend Micro’s site hourly instead of daily. We force all connected client machines to scan for viruses.
7:50 A.M. EDT: We send out another reminder e-mail to users to make sure they update their workstations and tell them to go to Windowsupdate.microsoft.com and update their IE browsers to at least version 5.5 SP 2 and all of the IE security patches.
8:00 A.M. EDT: We start tracking down and upgrading workstations and laptops that were off or not connected to the network.
8:10 A.M. EDT: The primary SMTP server is having a Blue Screen of Death (BSOD) and rebooting on a cyclical basis. We change the SMTP records to point to the secondary SMTP server, and it starts the same process of BSOD and reboot. We queue all inbound and outbound SMTP traffic while we troubleshoot.
10:00 A.M. EDT: We discover that pattern file No. 942 is not cleaning .htm, .html, and .jhtml files and is instead deleting them. We set the workstation scanners to rename instead. (This is bad for our developers, so we allow them to inspect and clean the .jhtml files instead of renaming them. The problem with just renaming files is that the infected files are filling hard drives.)
10:30 A.M. EDT: We determine that the Server Protect Real-Time scanner and the ScanMail virus scanner are conflicting on both Exchange servers, causing them to BSOD and reboot. We adjust the Server Protect scanner to exclude the Exchange directories on the two Exchange servers and resume SMTP mail transfers. (We had doubled up these two programs in an effort to provide extra protection, but that was the wrong thing to do. Oops!)
11:00 A.M. EDT: We discover a routing problem with SMTP mail. Due to the heavy infection rates, we decide to exclude .html and .jhtml files from the scanner and set the scanner to delete files instead of renaming them on the desktop computers.
11:20 A.M. EDT: We notify users that there is a mail-routing issue.
12:00 P.M. EDT: We discover that one of the Viruswalls is not configured to accept SMTP mail from one of the Exchange IMS servers, causing a relaying error. This oops was related to the previous oops at 10:30 A.M. and can happen when you mess with your settings.
2:34 P.M. EDT: We send out an update e-mail telling users that the e-mail problem is resolved.
3:00 P.M. to 5:00 P.M. EDT: Trend Micro releases a cleaner for the virus. We begin testing it on a couple of the most heavily infected machines. Since by this time we’ve already manually cleaned all the machines, it doesn’t find any sign of infection.
A breather, at last
Finally, after a day and a half of intense combat against the Nimda virus, the situation seems under control.
Thursday, Sept. 20
We are mopping up the remaining desktops and laptops of people who have been traveling, on vacation, or out sick.
Lessons learned from Nimda
This virus was particularly nasty in that it attacked multiple security holes in different Microsoft products. Since we block all executable attachments in e-mail, that aspect of the attack appears to have had little or no effect on our systems.
However, we did have two gaping exposures that were exploited. First, not all of our machines were updated to SP 2 and the latest patch. On Aug. 16, 2001, Microsoft released Security Bulletin MS01-044 with the Cumulative IIS patch. The bulletin did not specify a service pack level for Win2K servers, and we applied it to all of our servers running IIS. It appears that the servers with SP 2 fared better than those with SP 1. The majority of our desktops and laptops were also on SP 1 but had the previous Code Red patch. This combination did not slow this worm down at all.
Second, the Internet Explorer security holes posed even larger vulnerabilities. Several of our users do not use IE and prefer Netscape, so they have old versions of IE on their machines and have never been prompted for critical updates from the Windows Update Web site. This allowed the Outlook Express exploit to be used very effectively. Several of our users retain older versions of IE for Web testing, and that contributed to the infection as well.
A number of our users had not used the Windows Update feature (and we had never instructed them to do so, either). Currently, we do not have a mechanism in place to automatically force users to upgrade their OS and applications to the latest revisions and patch levels.
We were able to get Nimda under control only after Trend Micro released pattern files that stopped the spread of the virus and we applied service packs and patched all systems to the latest levels. Nevertheless, we are certain that only the antivirus software is keeping Nimda from infecting and spreading to our machines again, since we saw machines with all the latest service packs and patches succumb to this worm.
Until Microsoft releases a new patch or service pack, all systems are still vulnerable. We need to remind users to use the Windows Update feature on a regular basis in order to help minimize our exposure.
Does this sound familiar?
Did you find the Microsoft service packs and patches lacking during the Nimda virus attack? Did you find a way to protect your network anyway? Was your virus-prevention vendor able to protect you from Nimda? Send us a note or post a comment in the discussion below.