Over the next couple of months, my staff and I will be making some relatively significant changes to the computing environment at Westminster College. I thought I'd use this post to describe what we're doing and why, and perhaps give you some ideas for your own projects.

Server virtualization. Yes, virtualization is all the rage these days. From direct cost savings on hardware to "green IT" initiatives, server virtualization projects are underway everywhere. In our case, the goals are to reduce the amount of hardware in our data center and to provide availability options that would previously have required more substantial implementations. We have already deployed VMware Virtual Infrastructure on two hosts in our environment, but they are running the version 3.0 Starter edition of the product, which lacks availability features such as VMotion. As we move ahead with virtualizing more critical services, VMotion becomes a key part of the scenario. In the next few weeks, we'll be upgrading to VMware Virtual Infrastructure 3.5 Enterprise, which has everything we need to move into the next phase of our virtualization efforts.

SAN. To benefit from the availability features in VMware Virtual Infrastructure and to add availability capability to other services we provide, shared storage is a must. Today, we took delivery of two iSCSI EMC AX4 arrays: one with SAS drives for higher-performance applications and one with SATA disks for other uses. Once the units are in place, we'll begin migrating services to them. These units also provide full snapshot capability, which we can use for further service protection.

Blade servers. Within the next few days, we'll be taking delivery of five new Dell M600 blade servers and an M1000e chassis. These blades are extremely well configured and will be suitable for everything from virtualization to running a terminal services environment to general use. Why blades?
First of all, we secured absolutely incredible pricing from Dell, including the fully outfitted chassis. Even with 1U or 2U servers, we couldn't come close to beating the pricing we got on the full blade solution. Local storage on the blades is somewhat minimal—each is configured with a pair of 73GB, 15K RPM SAS drives with RAID 1. However, the total raw capacity of the new storage arrays is 13TB, so storage won't be an issue. Each blade server has four onboard 1 Gbps Ethernet adapters, two of which will be dedicated to storage.
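As a rough sanity check on those storage numbers, remember that RAID 1 mirrors drives in pairs, so usable capacity is half the raw total: each blade's pair of 73GB drives nets about 73GB of local storage. A back-of-the-envelope sketch (the helper function and the five-blade total are illustrative, not part of any vendor tool):

```python
# Back-of-the-envelope storage math for the new blades.
# RAID 1 mirrors identical drives in pairs, so usable capacity
# is half of the raw total.

def raid1_usable_gb(drive_gb: float, drives: int = 2) -> float:
    """Usable capacity of a RAID 1 set built from pairs of identical drives."""
    if drives % 2 != 0:
        raise ValueError("RAID 1 needs an even number of drives")
    return drive_gb * drives / 2

# Each blade: two 73GB 15K RPM SAS drives in RAID 1.
per_blade = raid1_usable_gb(73)   # 73.0 GB usable per blade
five_blades = 5 * per_blade       # total local storage across five blades

print(f"Usable per blade: {per_blade:.0f} GB")
print(f"Across five blades: {five_blades:.0f} GB, vs. 13 TB raw on the arrays")
```

The point of the arithmetic is simply that local blade storage is an OS-and-boot footprint, not a data store; anything of size lands on the arrays.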
Servers will connect to the SAN in a fully meshed topology for the highest level of availability. On the other side of the equation, blades let us use less space in our very, very small data center. Further, adding new blades will be a breeze, since everything else is already in the chassis. I've mentioned before that we're a small shop. Eventually, I think we'll reduce the number of physical servers we support to fewer than 16, which is the maximum capacity of an M1000e chassis.

Terminal computing. Achieving a high level of service availability is one of the key components of this project. Part of the reason behind this goal lies in our intent to deploy a significant (to us) terminal server infrastructure using Windows Server 2008 Terminal Services. There are numerous reasons for this direction. First, we want to extend the "lab experience" for our students to their dorm rooms and beyond. That is, we'd like them to be able to use the same software in their rooms that they use in the labs. However, we don't have enough licenses, nor do we have permission, to simply hand a license key and installation CD to every student. Therefore, we'll begin making our licensed applications available through Terminal Services.
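To make "fully meshed" concrete: with the two storage-dedicated NICs per blade mentioned above and, as an assumption here, two storage processors per array (typical of dual-controller units like the AX4), every host reaches every controller over every NIC. A single NIC or controller failure removes some paths but never all of them. A small sketch, with hypothetical NIC and controller labels:

```python
from itertools import product

# Hypothetical labels: two storage NICs per blade come from the post;
# two storage processors (SP-A, SP-B) per array is an assumption
# based on typical dual-controller array designs.
nics = ["vmnic2", "vmnic3"]
storage_processors = ["SP-A", "SP-B"]

# Full mesh: every NIC has a path to every storage processor.
paths = list(product(nics, storage_processors))
print(f"{len(paths)} independent paths per host: {paths}")

# Any single NIC failure still leaves paths to both controllers.
surviving = [p for p in paths if p[0] != "vmnic2"]  # suppose vmnic2 dies
print(f"Paths surviving a NIC failure: {surviving}")
```

With four paths per host, no single cable, NIC, or controller failure can cut a blade off from its storage, which is the whole argument for meshing the connections.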
Second, we're looking for better ways to manage desktop computers on campus. For a number of administrative office users, a full desktop is simply not necessary. We will be able to provide better application support under a centralized model and, as is the case with students, location will be taken out of the equation: people will be able to work from wherever they need to.
Third, I'd love to extend the life of desktop computers on campus, or simply buy fewer units in favor of thin clients. Fourth, we had a professor come to us with a request for a new learning space. She wanted the ability to (1) quickly deploy a full computer lab at will; (2) deploy a lecture-style classroom; and (3) deploy a classroom configured in a collaborative, team-oriented style — all in the same classroom space. We opted to provide her with Safebook laptop form-factor thin clients and tables with wheels on one end. The Safebooks were chosen because they're basically useless if stolen and were relatively inexpensive. Now, we just need the Terminal Services computing environment to support all of this!
Is this going to be a lot of work? You bet. But, the end result, if everything goes as we expect, will be absolutely fantastic! We're basically rethinking our entire computing environment and trying to deploy an environment that is more flexible, more available, easier to manage, and less costly.
I'd love to hear about some of your similar initiatives as I'm never averse to stealing good ideas! Use the comment space to share.
Since 1994, Scott Lowe has been providing technology solutions to a variety of organizations. After spending 10 years in multiple CIO roles, Scott is now an independent consultant, blogger, author, owner of The 1610 Group, and a Senior IT Executive with CampusWorks, Inc. Scott is available for consulting, writing, and speaking engagements and can be reached at firstname.lastname@example.org.