Get IT Done: Learn about network implementations from this example

Follow these steps to properly implement a network

One sure way to deploy an inefficient network is to fail to plan for its implementation and maintenance. Fortunately, you can learn from others’ mistakes and avoid making the same errors.

In a 15-month-long project at Hamilton College, a small liberal arts college in upstate New York, I was in charge of procuring, designing, configuring, and implementing a major upgrade to the campus network infrastructure.

In my first installment, "Network planning: Pay now or pay later," I discussed the initial planning phase of the project, during which we selected a vendor, and explained the rationale behind the network upgrade.

The project’s second phase covered implementation planning, along with the implementation itself. For Hamilton, that process was the most interesting and troublesome part. In this article, I'll tell you about the initial failed implementation, as well as the successful final implementation of the network upgrade.

The protocols
Before I get into the implementation planning phase, let me discuss the protocol configuration we were dealing with. Over the network infrastructure, we were running TCP/IP, IPX/SPX, and AppleTalk. Being equally Macintosh- and PC-oriented and running NetWare 4.11 as our primary file and print service, we had to run both AppleTalk and IPX/SPX over the wire in addition to TCP/IP. With the original 1995 implementation of the campus network, it was decided that the TCP/IP configuration would be based on user type, the AppleTalk configuration would be based on building, and the IPX/SPX configuration would be completely bridged, with no routing involved.

Users of any type could be present in almost any building. However, for areas such as residence halls, we did not allow faculty or administrative IP addresses to be assigned to machines.

Unfortunately, with the growth of the campus network, this design proved to be fairly inefficient since we were, in essence, multinetting the network—that is, sending information from multiple networks down the same wire, broadcasts and all.

[Figure: Hamilton College’s TCP/IP network configuration]

The IPX/SPX network configuration presented the biggest problem for the operation of the old campus network. Under the old configuration, all 3,000 jacks on the campus network belonged to the same IPX/SPX network, and we had two IPX/SPX frame types present. In essence, every IPX/SPX broadcast reached ALL network jacks on campus. It was quite inefficient.

Although it wasn’t perfect, AppleTalk was the most logically set up and efficient protocol on the campus network. In the original network design, AppleTalk zones were set up with multiple buildings included in a single zone. For example, two residence halls named Res Hall A and Res Hall B, sitting next door to each other, were placed into a single zone called “ResHallA_ResHallB,” with only their AppleTalk ranges differing. The major problem with this method was that users, at times, had difficulty locating resources, such as printers, in their buildings.

[Figure: Hamilton College’s AppleTalk network configuration]

As you may be able to tell, this protocol configuration generated a lot of excess broadcast traffic and was inefficient as the network grew in size.

It was at this point in the implementation planning that we made a mistake. Under the first implementation of the network upgrade, we decided not to make any changes to the actual protocol configuration. Basically, we assumed that the network implementation would proceed much more smoothly without the added task of modifying the network addressing. Over the summer of 1999, while students were away and a majority of the faculty was gone, we decided to continue down “broadcast road” and keep the same network configuration that we had on the old network.

Easier said than done
With the Powerhub, our protocol configuration worked fine, except when under a heavy load. Once we implemented the 3Com Corebuilder 9000, we found out the hard way that “easier” is not something that should be said until detailed testing is done. The Corebuilder 9000 was perfectly capable of handling our configuration, but actually implementing this configuration would have been extraordinarily difficult.

In addition, because of the complexity of the initial configuration, on-campus network security would have been compromised and actually maintaining and updating this configuration would have been something right out of Nightmare on Elm Street. Needless to say, we cancelled the initial implementation, which was in the best interests of the users but a difficult thing to do!

At this point, it became obvious that we needed to consider a much more aggressive implementation approach and that we would have to rethink our protocol configuration. Upon taking a hard look at the configuration, we decided that, along with a brand-new network, we required a brand-new protocol configuration. Actually changing the entire protocol configuration was going to be a great deal of work, but we were up to the task, and being the networking folks that we are, we found it an interesting undertaking.

The redesign
The protocol redesign was handled by the network services team. (At the beginning of the process, that was just me, but it soon came to include the network/systems administrator we had just brought on board.) We created a conceptual final configuration and presented the pros and cons to the rest of the IT team on campus. Such communication both made them aware of what to expect as a result of this change and allowed them to give us feedback on our plan.

Our strategy was fairly simple. Rather than use a very small subset of our class B IP address space, as we had been doing for five years, we would use a large portion of the range to make network traffic on campus much more efficient. Rather than allow an IP-based network broadcast to hit 30 buildings on campus, we would subnet the campus so that each building became its own broadcast domain. We would no longer use user-based or bridged VLANs for any network traffic.
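To make the idea concrete, here is a minimal Python sketch of carving a single address block into one subnet, and therefore one broadcast domain, per building. The 10.0.0.0/16 block, the /24 prefix length, and the building names are illustrative assumptions, not Hamilton's actual addressing.

    # A sketch of carving a campus block into per-building /24 subnets.
    # The 10.0.0.0/16 block and the building names are assumptions for
    # illustration only.
    import ipaddress

    campus = ipaddress.ip_network("10.0.0.0/16")   # stand-in for a class B block
    buildings = ["Hall A", "Hall B", "Admin D"]    # hypothetical building list

    # One /24 per building; each /24 is its own broadcast domain.
    subnets = campus.subnets(new_prefix=24)
    plan = {name: next(subnets) for name in buildings}

    for name, net in plan.items():
        print(f"{name:10s} {net}  broadcast {net.broadcast_address}")

Because broadcasts stop at each subnet's router interface, a chatty host in one building no longer touches any other building's wire.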

This new plan also allowed us to slowly transition the campus to the new infrastructure rather than having to go with an “out with the old, in with the new” approach over a weekend. It allowed us to more easily test the new configuration and devices and to proceed in a much more controlled manner.

In order to make future management of this configuration easier, we also numbered each protocol consistently. For example, in buildings named “Hall A” and “Admin D,” the following might be the protocol configuration (example uses private addressing):


                       Hall A        Admin D
    TCP/IP network
    IPX/SPX 802.2      5a            24a
    IPX/SPX 802.3      5b            24b
    AppleTalk range    5             24
    AppleTalk zone     Res_hall_a    Admin_d
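The consistency rule behind the table lends itself to automation. Below is a small, hypothetical Python sketch of the idea: a single building number drives the IPX network numbers, the AppleTalk range, and, by assumption, the third octet of the building's IP subnet. The rules are inferred from the example above, so treat the specifics as illustrative rather than as our actual scheme.

    # A sketch of the "number everything consistently" rule from the table.
    # Building number N drives the IPX network numbers (Na/Nb), the AppleTalk
    # range (N), and, by assumption, the third octet of the IP subnet.
    def building_plan(number: int, zone_name: str) -> dict:
        return {
            "tcp_ip_network":  f"10.0.{number}.0/24",   # assumed form
            "ipx_8022":        f"{number}a",
            "ipx_8023":        f"{number}b",
            "appletalk_range": number,
            "appletalk_zone":  zone_name,
        }

    print(building_plan(5, "Res_hall_a"))
    print(building_plan(24, "Admin_d"))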

However, we ran into one snag during this process. Since we hosted our own DNS servers, and because we were using “static DHCP” (an IP address based on the Ethernet hardware address; basically a reserved lease for every system on campus), DNS was going to be a major difficulty during this transition. In order to alleviate this problem, and to correct a number of growing concerns we were having with DNS, we purchased and installed Cisco’s Network Registrar DNS/DHCP product.

As buildings were moved to the new network, they were also transitioned to the new DNS/DHCP servers. In addition, rather than basing IP addressing on the hardware address, we moved to a truly dynamic addressing environment. After the upgrade was complete, we registered our new DNS servers with InterNIC.
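For readers unfamiliar with the distinction, here is a rough Python sketch contrasting the two addressing models: the old "static DHCP" approach, with a reserved lease keyed on each machine's Ethernet (MAC) address, versus a truly dynamic pool. The MAC addresses, pool range, and data structures are invented for illustration and do not reflect the actual Network Registrar configuration.

    # A rough sketch of the two addressing models. Under "static DHCP," each
    # host had a reserved lease keyed on its MAC address; under the new model,
    # addresses come from a dynamic pool. All values here are invented.
    import ipaddress
    from itertools import islice

    # Old model: a reservation for every known MAC address.
    reservations = {
        "00:a0:24:aa:bb:01": "10.0.5.10",
        "00:a0:24:aa:bb:02": "10.0.5.11",
    }

    # New model: hand out the next free address from the building's pool
    # (here, 10.0.5.21 and up, leaving the low addresses for fixed gear).
    pool = (str(ip) for ip in islice(ipaddress.ip_network("10.0.5.0/24").hosts(), 20, None))
    leases = {}

    def assign(mac: str) -> str:
        # Honor a reservation if one exists; otherwise lease dynamically.
        if mac in reservations:
            return reservations[mac]
        if mac not in leases:
            leases[mac] = next(pool)
        return leases[mac]

    print(assign("00:a0:24:aa:bb:01"))   # reserved lease: 10.0.5.10
    print(assign("00:a0:24:cc:dd:03"))   # dynamic lease: 10.0.5.21

The practical payoff of the dynamic model is that moving a machine to another building no longer requires touching a per-host reservation or its DNS record by hand.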

OSPF to the rescue
Another problem we ran into was communication between the old and new networks, which was critical for the success of the implementation. In order to make this work, we enabled OSPF (Open Shortest Path First) routing on all interfaces on both networks.

At this point, we thoroughly tested the new network by connecting systems and switches to different ports on the network backbone. Of course, we had to tweak configuration parameters, but after proper testing, we were confident that this network would work just fine.

This planning process took considerable time and effort. With the planning complete, we had one task left: implementation.

We began implementation in the middle of the semester. We wanted to avoid network disruptions during finals, but we wanted users to be on campus so that problems could be taken care of as they arose. If we had done the installation during a break, we would have been dealing with ALL problems in one day when the students returned.

The first part of the implementation consisted of communication. We sent a message to the entire campus outlining what was going to occur. We then chose a small residence hall close to our offices on campus and met with the residents. We asked them if they would be willing to be our “guinea pigs.” Basically, although we had done our testing, we wanted to test a real building with real people. They were more than happy to oblige.

Let the migration begin
We moved their building to the new network with no trouble whatsoever. So the next day, we moved on to the next building, which was much larger. Again, we experienced no problems. As a result of the success of these two buildings, and because we had received zero trouble calls, we decided to accelerate the implementation timeline. Within a matter of about a week and a half, we had moved every residence hall on campus to the new network.

The downtime for each residence hall was, at most, about an hour, and was much less in most cases. My team and I began moving buildings each day at 6 A.M. since most people are still asleep then. When people got up to check their e-mail, they would simply acquire an IP address from the new DHCP servers and would have no difficulties.

After the residence halls were migrated, we moved the academic and administrative buildings on campus to the new network. These buildings were slightly more difficult, since we had Macintosh servers in some areas and firewall entries to deal with. We proceeded with these buildings as we did with the residence halls and ran into very few problems. The Macintosh servers gave us the most difficulty, since we had to recreate all desktop printers and desktop document shortcuts on client systems in buildings that had Macintosh servers. But overall, this process went very smoothly.

On December 29, we moved the final building to the new network and completed moving our servers and our Internet connection to the new infrastructure. We then turned off the old Powerhubs and watched the Corebuilder 9000 handle the new load with ease!

I definitely have to thank the folks at the Rochester, NY, 3Com office for sticking with us, even through the initial canceled implementation. Without their constant help, we would not have been able to complete this implementation.

Ah, success
There were two primary reasons that the final network implementation was as successful as it was. First, we planned for just about every contingency we could think of. Without this planning, we would never have been as successful as we were. I’ll talk about my indicators of success in a moment.

The other reason we were successful was that we communicated everything we could to our users so that they knew exactly what to expect. We set up a Web site for them to receive all kinds of information about the upgrade, including what their new network parameters were. When we upgraded a building, we provided diagrams of the new network configuration so that people could fix minor problems (e.g., Macintosh desktop printers) without having to call the ITS Help Desk.

I feel that this upgrade was a success based on a number of indicators. First, the number of calls to the ITS Help Desk did not increase by very much at all. The few calls that did come to the Help Desk were legitimate problems that we would have had to address anyway. Second, many people did not notice that there was downtime or that things had changed. In essence, it was a completely transparent upgrade for a majority of the users on campus. Third, the rest of ITS was fairly unaffected by the changes as well, and most of them were very happy with the final results of the network upgrade.

I definitely learned the importance of proper planning during this project. In addition, while I have always found it important to communicate environment changes to users, we increased these notifications significantly during the upgrade process. While some users did not appreciate the additional e-mail, I received many compliments from users who were happy to be kept aware of exactly what was happening to a resource most of them relied on heavily.

Scott Lowe is an associate director and network services team leader for New York's Hamilton College. He's earned MCP+I and MCSE certifications from Microsoft.

