By Bill Valle
On Aug. 11, Microsoft Corp. announced the release of Microsoft Windows 2000 Datacenter Server. A launch date of Sept. 26 is planned for the product, along with several other enterprise servers, including SQL Server 2000, Exchange 2000 Server, BizTalk Server 2000, Commerce Server 2000, Application Center 2000, Internet Security and Acceleration Server 2000, and Host Integration Server 2000.
While no one can speak from experience about the effectiveness of these products yet, it isn’t too soon to look at what the Datacenter Server could offer your company. It has some impressive specifications, and its capabilities may allow it to take on the mainframes without breaking the bank. Here’s an overview of how this new server could benefit your IT shop.
Where Datacenter fits in
There are four variations of Windows 2000:
- Windows 2000 Professional, which is your desktop and laptop machine.
- Windows 2000 Server, which adds some server and management capabilities that Professional doesn’t have, as well as allowing more connectivity.
- Windows 2000 Advanced Server, which basically just has more scalability and some additional features, such as the COM+ set of component services.
- Windows 2000 Datacenter Server, which takes a quantum leap past NT 4.0 in potential scalability and in its role as a large, back-end machine, whether as an application server or a data center.
Microsoft will place tight restrictions on which hardware manufacturers are certified to host the Windows 2000 Datacenter Server. There may only be a dozen manufacturers that qualify for this, and most of those will be made up of the major players, such as Hitachi, Unisys, NCR, Dell, and Compaq.
Datacenter Server’s reason for being
Although Windows NT 4.0 can now handle up to eight processors, per-CPU performance gains tend to fall off once you add the sixth CPU. Whether the bottleneck is inter-process communication or scheduling, adding another processor doesn't necessarily buy you a one-to-one gain in performance.
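The falloff described above is a classic Amdahl's-law effect. Here is a quick illustration with a hypothetical workload; the 90 percent parallel fraction is an assumption for the sketch, not a measured figure:

```python
# Amdahl's law: the serial fraction of a workload caps the speedup you can
# get from adding CPUs, which is why per-processor gains fall off.
def amdahl_speedup(parallel_fraction: float, cpus: int) -> float:
    """Ideal speedup when parallel_fraction of the work runs concurrently."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cpus)

# Assume 90% of the work parallelizes cleanly; the serial 10% dominates
# long before you reach 32 processors.
for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:2d} CPUs -> {amdahl_speedup(0.90, n):.2f}x speedup")
```

Even in this idealized model, 32 processors deliver less than an eight-fold speedup; real scheduling and inter-process communication overhead only make the picture worse.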
Traditionally, the limitations have been twofold: a hardware limit on what Intel could do with multiple chips, and a software limit on how well Microsoft could arbitrate processing across more than four processors.
The first limitation has disappeared with performance improvements in Intel chips, such as 64 GB memory addressing. Windows 2000 Datacenter Server is designed to take advantage of this while also allowing access to up to 32 processors.
With this upcoming release, Microsoft’s progression of server offerings can now be broken down accordingly:
- Windows 2000 Server has a limit of four processors.
- Windows 2000 Advanced Server brings that limit up to eight processors.
- Windows 2000 Datacenter Server brings it up to 32 processors.
Many companies still run their back-end systems on high-end UNIX machines or even mainframes, but that is going to change.
Between faster processors and greater scalability, Windows 2000 Datacenter Server provides capabilities that allow a machine, whether eight-, 16-, or 32-way, to compete against very high-end servers such as the Sun Microsystems Enterprise 10000, at a significantly lower cost.
Just like NT, Windows 2000 has pretty much entrenched itself on lower- to middle-range server platforms. Now Microsoft can make a case for larger back-end systems as well.
Thanks for the memory
Memory support is another big change with Datacenter Server. In the past, memory support was limited by what the Intel chips could address. The coming 64-bit chips will be able to address far more, but even with a 32-bit chip:
- Windows 2000 Server addresses four gigabytes.
- Windows 2000 Advanced Server addresses eight gigabytes.
- Windows 2000 Datacenter Server addresses 64 gigabytes.
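The arithmetic behind those ceilings is straightforward: a flat 32-bit address space tops out at 4 GB, while Intel's 36-bit physical addressing extends the reach to 64 GB.

```python
GB = 2 ** 30

plain_32bit_limit = 2 ** 32 // GB     # 4 GB: all a flat 32-bit address reaches
extended_36bit_limit = 2 ** 36 // GB  # 64 GB: 36-bit physical addressing

print(plain_32bit_limit, extended_36bit_limit)
```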
If you have to deal with a huge database and have to pull a lot of tables into memory, you now have a lot of memory potential. Up until the Datacenter Server’s arrival, you did not have a competitive product to go against the Sun E10000, which could easily support this much memory or more.
Balancing the load
Another improvement is network load balancing, which distributes incoming network traffic across multiple servers. There is no network load balancing on Windows 2000 Server, but Windows 2000 Advanced Server and Datacenter Server both support it with a 32-node maximum.
You can distribute load across multiple servers seamlessly in software, with no specialized hardware. The servers appear as a single virtual IP address; as requests come in, the least-busy server is determined and the requests are routed there.
This feature also allows sessions to be sticky: once a user hits one specific server, subsequent requests keep going to that server, so the user doesn't lose the session by jumping to a different machine.
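The routing behavior just described can be sketched in a few lines. This is a toy model, not Microsoft's Network Load Balancing implementation; the class and method names are invented for illustration:

```python
# Toy load balancer: route new clients to the least-busy server, but keep
# returning clients "sticky" to the server that holds their session.
class Balancer:
    def __init__(self, servers):
        self.load = {s: 0 for s in servers}   # active requests per server
        self.affinity = {}                    # client IP -> assigned server

    def route(self, client_ip: str) -> str:
        if client_ip in self.affinity:        # sticky session: reuse assignment
            server = self.affinity[client_ip]
        else:                                 # new client: pick least-busy server
            server = min(self.load, key=self.load.get)
            self.affinity[client_ip] = server
        self.load[server] += 1
        return server

lb = Balancer(["web1", "web2", "web3"])
first = lb.route("10.0.0.5")
assert lb.route("10.0.0.5") == first  # same client keeps hitting same server
```

Client-address affinity is the simplest way to get stickiness; it trades perfectly even load for session continuity.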
Server clustering also plays an important role with the new servers. Microsoft Cluster Server is not available for Windows 2000 Server, but it is available for Windows 2000 Advanced Server with a maximum of two nodes. That provides failover capability and high availability between two separate nodes with Advanced Server.
With Datacenter Server, the maximum gets bumped to four nodes, and Microsoft is looking at supporting even more nodes in future releases.
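At its core, two-node failover rests on a heartbeat: the standby node takes over when the active node stops checking in. A minimal sketch of that idea follows; the names and timeout are invented for illustration, and this is not the Cluster Server API:

```python
import time

class Node:
    """A cluster node that periodically reports it is alive."""
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def beat(self):
        self.last_heartbeat = time.monotonic()

def active_node(primary: Node, standby: Node, timeout: float = 5.0) -> Node:
    """Fail over to the standby if the primary's heartbeat is older than timeout."""
    if time.monotonic() - primary.last_heartbeat > timeout:
        return standby
    return primary

a, b = Node("node-a"), Node("node-b")
a.beat()
assert active_node(a, b).name == "node-a"
a.last_heartbeat -= 10          # simulate missed heartbeats
assert active_node(a, b).name == "node-b"
```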
SAN, which SAN?
Another interesting facet of Datacenter Server is Winsock Direct: high-speed inter-process communications for what Microsoft calls a system area network. Unfortunately, Microsoft is using an acronym for this communications process that's already taken by storage area networks: SAN.
Winsock Direct is an extension of the Winsock Version 2 abstraction layer. It should substantially reduce latency and CPU overhead and increase communication bandwidth between servers.
This tool segments the user traffic from the inter-server traffic, just like you would do with a storage network to keep the storage traffic away from the user traffic. You would put in high-speed communications between servers and then big, bulk chunks of data could move back and forth with minimal system overhead, enabling a group of servers to work almost like parallel processors.
True parallel processing requires very intelligent software to parse functions between processors and then reassemble the functions. Combining clusters of servers with multiple processors tackling multiple and individual tasks—and making it look like this is all happening on one big machine—may be as close to parallel processing as we can get with commercial software for some time.
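The scatter/gather pattern the paragraph describes, splitting a job apart, farming the pieces out, and reassembling the results, can be sketched with worker threads standing in for cluster nodes. This is a simplification; a real cluster would move the chunks between machines over the network:

```python
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    # Each "node" handles its own piece of the job independently.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, nodes=4):
    # Scatter: split the data into roughly one chunk per node.
    size = max(1, len(data) // nodes)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Gather: reassemble the partial results into one answer, so the
    # caller sees a single machine doing the work.
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        return sum(pool.map(work, chunks))

print(parallel_sum_of_squares(list(range(1000))))
```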
The final word
Should your company consider investing in Microsoft Windows 2000 Datacenter Server? Even though some application software will have to be rewritten to take advantage of the system software advancements made by the Microsoft Windows 2000 Datacenter Server, the program might be right for you if:
- You must deal with huge databases or very large applications.
- You must handle a large number of online processes.
- You need to cache large amounts of information in RAM.
- You have large load-balancing and clustering requirements for failover.
- You have high storage needs and a lot of inter-server traffic.
Much of this capability can only be handled today by large mainframes, but now the Windows product line could play a substantial role in that market.
Are you thinking about purchasing Microsoft Windows 2000 Datacenter Server for your company? Will it function as a data center with, or without, a Web connection?