Clusters of several thousand nodes interconnected with InfiniBand, an emerging high-performance interconnect, have already appeared in the Top 500 list. The next generation of InfiniBand clusters is expected to be even larger, with tens of thousands of nodes. A scalable, high-performance MPI design is crucial for MPI applications to exploit the massive parallelism available in these very large clusters. MVAPICH is a popular implementation of MPI over InfiniBand based on InfiniBand's reliable connection-oriented transport model. Because this model requires dedicated communication buffers for each connection, it imposes a memory scalability problem. To mitigate this issue, the latest InfiniBand standard includes a new feature called Shared Receive Queue (SRQ), which allows communication buffers to be shared across multiple connections.
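The memory-scalability argument can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the buffer counts and sizes are assumed values, not figures from the paper. It contrasts the per-connection receive-buffer cost of the reliable connection model, which grows linearly with cluster size, against a fixed shared pool in the SRQ model.

```python
def per_connection_memory(nodes, bufs_per_conn=64, buf_size=8192):
    # Reliable Connection model: each process keeps a dedicated
    # pool of pre-posted receive buffers for every one of its
    # (nodes - 1) peer connections. Buffer count and size here
    # are assumed, illustrative values.
    return (nodes - 1) * bufs_per_conn * buf_size

def srq_memory(pool_bufs=4096, buf_size=8192):
    # Shared Receive Queue model: a single buffer pool serves
    # all connections, so the footprint is independent of
    # cluster size (pool size again assumed for illustration).
    return pool_bufs * buf_size

for n in (1024, 16384):
    print(f"{n:>6} nodes: RC ≈ {per_connection_memory(n) / 2**20:8.1f} MiB, "
          f"SRQ ≈ {srq_memory() / 2**20:.1f} MiB")
```

Under these assumed parameters, the per-connection model consumes roughly half a gigabyte per process at 1,024 nodes and about 8 GiB at 16,384 nodes, while the shared pool stays at 32 MiB regardless of scale.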