Ethernet line rates are projected to reach 100 Gbits/s as soon as 2010. While in principle suitable for high-performance clustered and parallel applications, Ethernet requires matching improvements in the system software stack. In this paper, the authors address several sources of CPU and memory-system overhead in the I/O path at line rates reaching 80 Gbits/s (bidirectional), using multiple 10 Gbit/s links per system node. Key contributions of their work include the design of a parallel protocol stack that uses context-independent page remapping to reduce packet-processing overhead, avoids thread-management and synchronization costs in the common case, and mitigates memory-affinity issues on NUMA multicore CPUs.
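The core idea behind page remapping, delivering received data to an application by remapping the pages that already hold it rather than copying the payload, can be loosely illustrated in Python with `mmap` and `memoryview`. This is a generic sketch of the zero-copy principle, not the authors' implementation; the buffer names are hypothetical:

```python
import mmap

# Anonymous page-backed mapping standing in for a NIC receive buffer.
PAGE = mmap.PAGESIZE
rx_buffer = mmap.mmap(-1, PAGE)      # kernel-style, page-aligned buffer
rx_buffer[:12] = b"packet-data!"     # pretend the NIC DMA'd a packet here

# Instead of copying the payload into a separate application buffer
# (e.g. bytes(rx_buffer[:12])), hand the application a zero-copy view
# of the same underlying pages.
app_view = memoryview(rx_buffer)[:12]

print(app_view.tobytes().decode())   # -> packet-data!
```

The saving is per byte: a copy costs CPU cycles and memory bandwidth proportional to payload size, while remapping (or, here, taking a view) costs a small constant amount of bookkeeping, which is why the technique matters most at multi-10 Gbit/s line rates.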
- Format: PDF
- Size: 243.53 KB