MVAPICH-Aptus: Scalable High-Performance Multi-Transport MPI Over InfiniBand
Source: Ohio State University
The need for computational cycles continues to exceed availability, driving commodity clusters to ever-increasing scales. With upcoming clusters containing tens of thousands of cores, InfiniBand has become a popular interconnect for these systems due to its low latency (1.5 µsec) and high bandwidth (1.5 GB/sec). Since most scientific applications running on these clusters are written using the Message Passing Interface (MPI) as the parallel programming model, the MPI library plays a key role in the performance and scalability of the system.