Clusters of several thousand nodes interconnected with InfiniBand, an emerging high-performance interconnect, have already appeared in the Top 500 list. Next-generation InfiniBand clusters are expected to be even larger, with tens of thousands of nodes. A scalable, high-performance MPI design is crucial for MPI applications to exploit the massive potential for parallelism in these very large clusters. MVAPICH is a popular implementation of MPI over InfiniBand based on InfiniBand's reliable connection-oriented transport. Because this model requires dedicated communication buffers for each connection, per-node memory consumption grows with the number of connections, which poses a memory scalability problem.
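The scalability problem can be made concrete with a back-of-the-envelope sketch. The figures below (buffers per connection, buffer size) are illustrative assumptions, not MVAPICH defaults: under a fully connected reliable-connection model, each node holds a buffer pool for every peer, so per-node buffer memory grows linearly with cluster size.

```python
def rc_buffer_memory_mib(nodes, bufs_per_conn=32, buf_size_kib=8):
    """Estimate per-node buffer memory (MiB) under a reliable-connection model.

    Assumes one connection per peer and a fixed pool of pre-posted
    buffers per connection; all parameter values are hypothetical.
    """
    conns = nodes - 1  # one RC connection to every other node
    bytes_per_node = conns * bufs_per_conn * buf_size_kib * 1024
    return bytes_per_node / (1024 * 1024)

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} nodes -> {rc_buffer_memory_mib(n):,.1f} MiB per node")
```

With these assumed values, buffer memory is modest at a thousand nodes but grows by an order of magnitude at each step, illustrating why a connection-per-peer design strains memory at tens of thousands of nodes.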