The Case for Using 10 Gigabit Ethernet for Low Latency Network Applications
For most traditional enterprise client/server applications, the primary performance metric has been user response time, with something on the order of 100 milliseconds (ms) generally considered acceptable. Over the last few years a new generation of server-to-server distributed applications has emerged whose performance is largely determined by the end-to-end latency (a.k.a. the application latency) between servers. These applications include the migration of virtual machines between physical servers; High Performance Computing (HPC) on clusters of compute nodes; high-frequency, automated securities trading; clustered databases; storage networking; and Hadoop/MapReduce clusters for performing analytics on unstructured big data. Applications such as these require far more bandwidth than client/server applications and perform best with end-to-end latencies as low as a few microseconds.