Linux is often lauded as an economical solution. In fact, we recently heard from one administrator who saved his company $35,000 by using a Linux solution. However, his Linux strategy did more than just save money on the front end—it allowed him to deliver a better enterprise solution that increased his ability to monitor and manage network uptime.

In this installment of From the Trenches, we’re going to follow Ed, who was a manager overseeing network administration at a communications company when a major network reconfiguration was dumped in his lap. The project required him to divide a LAN into five smaller virtual LANs with switches that, at the time he needed them, would not pass DHCP or DNS traffic.

We will see:

  • Why Ed wanted to subdivide his LAN into five VLANs.
  • Why the switches wouldn’t pass DHCP/DNS traffic on his LAN and how he got around it by turning old workstations into Linux DNS/DHCP servers.
  • How Ed’s workaround saved his company at least $35,000.
  • How solving the problem of monitoring the old hardware created the additional benefit of enhancing network uptime.

You can learn quite a bit by reading about the methods other administrators and engineers use to resolve challenging technology issues. Our hope is that this column will provide you with unique solutions and valuable techniques that can help you become a better IT professional. If you have an experience that would be a good candidate for a future From the Trenches column, please e-mail us. All administrators and their companies remain anonymous in this column so that no sensitive company or network information is revealed.

Addressing the problem
When Ed was placed in charge of the call center network at a communications company in Bellevue, WA, the company was in the process of subdividing its 700- to 800-node network into five VLANs.

“The reason we did the VLANs is that we were running off of hubs, and we were running out of ports, and everyone wanted 100 megabits to their desktops,” Ed said. “We also wanted more intelligent switching and to segregate the traffic a little bit.”

Under the original network topology, when all the PC workstations in the company fired up between 8:00 and 8:30 every morning, they used DHCP to request an IP address from a Windows NT server, which was also handling DNS duties. Multiply those DHCP and DNS requests by more than 700 machines, and a great deal of traffic was being broadcast across the company’s network. Traffic to a dozen servers compounded the situation.

When the VLANs were set up, the marketing department would have its own LAN, the MIS department would have its own LAN, the call center would have its own LAN, and so on.

Ed picked a Saturday to make the change, and other than some poor planning that led to a few wiring problems, everything was going smoothly. Then, another system administrator came to Ed with some bad news. The new Bay Networks switches they were installing didn’t propagate the DHCP and DNS information to the new VLANs. (Bay Networks is now owned by Nortel Networks.)

The network administrator who came with the bad news recommended buying five new fully loaded Windows NT Servers to perform the DHCP/DNS server duties. “I knew my boss wouldn’t want to spend $35,000 to make this work. To make the situation worse, it was a temporary fix because Bay Networks was going to release a firmware fix within 90 days,” Ed said.

Finding a Linux solution
Ed knew the company had a bunch of older computers in a storage closet in the company parking garage, so he sent one of his staff members to the closet with instructions to pull out five or six machines with the fastest processors and the most memory. He said most of the machines in the closet had processor speeds between 266 MHz and 400 MHz, with between 8 MB and 16 MB of RAM.

Ed installed Red Hat Linux on five of the machines and then ran a little script he wrote on each machine to configure them with the appropriate IP address ranges and other network settings for each VLAN.
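Ed’s actual script isn’t reproduced here, but on Red Hat of that era the per-VLAN settings would have fed ISC dhcpd. A minimal sketch, assuming a hypothetical 192.168.N.0/24 numbering scheme (the real address ranges aren’t given), might generate each VLAN’s subnet stanza like this:

```shell
#!/bin/sh
# Hypothetical reconstruction -- Ed's script isn't shown in the article.
# Given a VLAN number, emit an ISC dhcpd subnet stanza for that VLAN.
# The 192.168.N.0/24 numbering and pool boundaries are assumptions.
vlan_dhcp_stanza() {
    vlan=$1
    cat <<EOF
subnet 192.168.${vlan}.0 netmask 255.255.255.0 {
    range 192.168.${vlan}.100 192.168.${vlan}.200;
    option routers 192.168.${vlan}.1;
    option domain-name-servers 192.168.${vlan}.2;
}
EOF
}

# Generate one stanza for each of the five VLANs:
for v in 1 2 3 4 5; do
    vlan_dhcp_stanza "$v"
done
```

Appending the five stanzas to dhcpd’s configuration file would cover the DHCP side; a BIND zone on the same box would handle DNS for that VLAN.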

“I guess I came in at 10:00 A.M. on a Saturday, and by 1:00 P.M., we had the [Linux] servers up and running,” Ed said. “It was very straightforward, and it surprised my network administrator, who was a Windows NT guy.”

Ed said that being located so close to Seattle, and Microsoft’s headquarters in Redmond, there was a natural tendency to use everything Microsoft at the company. Thus, when he deployed the Linux servers, there were many expressions of gloom and doom from his coworkers in the IT department.

“At the time, our DNS server running on NT would go down once or twice a week,” Ed said. “When I left the company, these Linux servers had been running over three months straight. I put them in a corner, turned them on, and forgot about them.”

No one really should have been surprised, he said. As a communications company doing telephone and cable telecommunications, the company was using a Linux box to gather call-data records off of phone switches, preprocess that data, and then feed that data to an AS/400 where it went into the billing system. According to Ed, the company had been using that machine with success for about a year before this project came up.

“Linux and the variants like BSD lend themselves very well to doing things like that—DNS services, DHCP services, print servers, file servers, that sort of thing. They do it very economically,” he said.

When the updates to the switch firmware came, the Linux servers were functioning so well that Ed’s company didn’t bother with the patch.

A good solution gets better
One concern with resurrecting the old computers for duty as DHCP/DNS servers was that the aging hardware was more likely to fail.

To solve that problem, Ed wrote a script that would ping the old machines every two minutes. If a machine’s operating state changed, the script would send a notice to his cell phone.
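The article doesn’t show Ed’s monitor, but the idea is simple enough to sketch in shell. Everything specific here is an assumption: the host names, the state directory, and the notification address (sending a cell-phone page via a carrier’s e-mail-to-SMS gateway was a common trick at the time):

```shell
#!/bin/sh
# Hypothetical sketch of Ed's monitor -- his actual script isn't shown.
# Hosts, paths, and the notify address below are all made up.

NOTIFY="pager@example.com"            # assumed e-mail-to-SMS gateway
STATE_DIR="${TMPDIR:-/tmp}/pingmon"   # remembers each host's last state
mkdir -p "$STATE_DIR"

host_state() {
    # One short ping decides whether a host is "up" or "down".
    if ping -c 1 -W 2 "$1" >/dev/null 2>&1; then echo up; else echo down; fi
}

check_host() {
    # Compare a host's current state with the saved one; on any change,
    # mail a one-line notice and record the new state.
    new=$(host_state "$1")
    old=$(cat "$STATE_DIR/$1" 2>/dev/null || echo unknown)
    if [ "$new" != "$old" ]; then
        echo "$1 changed state: $old -> $new" | mail -s "pingmon: $1 $new" "$NOTIFY"
    fi
    echo "$new" > "$STATE_DIR/$1"
}

# Run forever, sweeping the host list every two minutes, e.g.:
#   while true; do
#       for h in dns-vlan1 dns-vlan2 router-bellevue router-seattle; do
#           check_host "$h"
#       done
#       sleep 120
#   done
```

A cron entry running the sweep every two minutes would work just as well as the loop, and adding a host to the watch list is a one-word change.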

Ed then turned one of the old 486 machines in the storage closet into a Linux ping server. He figured that if his new ping server was going to be checking his Linux servers, he might as well add all the routers and other servers to the ping list. This turned out to be fortuitous for Ed. He believes it may have saved his job a couple of months later.

The company’s AS/400 was located in Seattle and connected to the Bellevue operations via a T1 connection, with several backup T1 connections. Ed’s ping server was constantly testing the router at his end of the T1 in Bellevue and the router at the other end of the T1 in Seattle. If either of these routers refused to respond to a ping, it would mean the T1 connection, and all of its alternate connections, were down.

One morning at 7:30 A.M., Ed’s ping server sent an e-mail to his cell phone reporting that the T1 was down. He called someone in Seattle, and they were working to fix the problem by 8 A.M. By 9 A.M., the T1 connection was restored. Without the warning system, Ed would not have discovered the problem until he got into the office. At that point, the T1 would probably have been down all morning while the fix was in progress.

“I just think it’s important to note that not only is there a place in corporate America for open source systems but that a company can save a significant amount of money and get a robust solution out of it,” Ed said. “You don’t have to spend hundreds of thousands of dollars to get a solution that works.”
Are you using old computers to distribute loads away from your main servers? Have you found a solution like Ed’s that has worked for you? Share your thoughts on the matter in a discussion below or send us a note.