10-Gigabit Ethernet: one million IOPS with iSCSI

Intel is pushing its 10-Gigabit Ethernet interfaces very hard. IT pro Rick Vanover discusses a recent report of very high storage performance over the high-speed networking interface.

For many IT infrastructures, the cost of Fibre Channel storage on the adapter and switch side is becoming less attractive. The ability to use existing Ethernet infrastructure for a storage area network (SAN) is becoming a viable option for many environments. The Ethernet-based storage protocols iSCSI and NFS have been very popular in small and medium-sized SANs at 1-Gigabit Ethernet speeds. A new Intel report has Microsoft Windows Server 2008 R2 servers hitting over 1 million I/O operations per second (IOPS) with a single 10-Gigabit Ethernet adapter running the native iSCSI storage protocol.

Like everyone else, I’m always looking for ways to do what I do better. With regard to storage networking, I have tremendous interest in an Ethernet-based storage protocol, both to save costs and to achieve a better-performing solution for the requirements in place. The Intel report uses a single Intel 82599 10-Gigabit Ethernet server adapter on a Windows Server 2008 R2 system to generate 1,030,000 IOPS. This throughput is tremendous and could run nearly any workload in organizations large and small. But the devil is in the details. Figure A below shows the configuration for this iSCSI test.

Figure A

Image reproduced from Intel Webcast.

While these numbers are impressive, a few points need clarification. The image states that 10 logical unit numbers (LUNs) are zoned to the single 10-Gigabit Ethernet adapter and Windows Server system. What the image does not state is how many hard drives or what storage product is in use, including whether or not solid state drives were involved.
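Another detail the headline leaves out is the I/O size. A quick back-of-the-envelope check shows why it matters; the per-I/O transfer size below is my assumption, since the Webcast does not state it, and IOPS benchmarks of this kind typically use very small transfers:

```python
# Sanity check: can 1,030,000 IOPS fit on a single 10-Gigabit Ethernet link?
# The transfer size per I/O is an assumption -- the Webcast does not state it.
IOPS = 1_030_000
LINK_GBPS = 10.0

for block_bytes in (512, 4096):
    gbps = IOPS * block_bytes * 8 / 1e9  # payload bits per second
    fits = "fits" if gbps <= LINK_GBPS else "exceeds the link"
    print(f"{block_bytes}-byte blocks: {gbps:.2f} Gb/s ({fits})")
```

At 512-byte blocks the claimed rate consumes less than half the link, but at 4 KiB the same IOPS would overrun a single 10-Gigabit port, so the headline number almost certainly reflects very small I/Os.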

The Webcast mentioned that a white paper with the details of the configuration would be made available, yet I have not been able to locate the material. It is very possible that 100 or more actual hard drives of the highest performance tier were used in this test. Chances are, not everyone can drop in a 10-Gigabit Ethernet adapter and hit these numbers. What these numbers really say is that the current Nehalem processors, the newest Intel network interface, and the software iSCSI initiator are capable of delivering this performance. Besides, does anyone have an application with this type of I/O requirement? I didn’t think so.
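To put the drive-count question in perspective, here is some rough spindle arithmetic. The per-device random-I/O figures are ballpark assumptions of mine, not numbers from the Intel test:

```python
# Rough spindle arithmetic for a 1,030,000 IOPS target.
# Per-device random-I/O figures below are ballpark assumptions,
# not measurements from the Intel test.
TARGET_IOPS = 1_030_000
per_device_iops = {
    "15K RPM spindle (assumed ~180 IOPS)": 180,
    "enterprise SSD (assumed ~30,000 IOPS)": 30_000,
}

for device, iops in per_device_iops.items():
    needed = -(-TARGET_IOPS // iops)  # ceiling division
    print(f"{device}: roughly {needed} devices to sustain the target")
```

Even the fastest spindles would be needed by the thousands to sustain that rate from disk alone, which is exactly why the question of solid state drives or array cache in the test configuration matters so much.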

Looking down the road, Fibre Channel as we know it is really dead. Go ahead and start writing the obituary. I believe that iSCSI, Fibre Channel over Ethernet (FCoE), and to an extent NFS will become the mainstream storage protocols. Organizations that have a SAN infrastructure in place -- depending on how the environment is scaled -- may stay on Fibre Channel for a while. Yet new installations clearly need to consider an Ethernet-based storage protocol to deliver a right-sized and right-priced solution.

What is your take on iSCSI and Ethernet-based storage protocols? Share your comments below.