Petascale machines with close to a million processors will soon be available. Although MPI is the dominant parallel programming model today, some researchers and users wonder, and perhaps even doubt, whether MPI will scale to such large processor counts. In this paper, the authors examine the question of how scalable MPI is. They first examine the MPI specification itself, discussing areas with scalability concerns and how those concerns can be overcome. They then investigate issues that an MPI implementation must address in order to be scalable. They ran experiments to measure MPI memory consumption at scale on up to 131,072 processes, or 80% of the IBM Blue Gene/P system at Argonne National Laboratory. Based on the results, they tuned the MPI implementation to reduce its memory footprint.
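The memory-footprint concern can be illustrated with a back-of-envelope sketch (not taken from the paper): if an implementation keeps a fixed amount of per-peer connection state on every process, per-process memory grows linearly with the process count P, and aggregate memory across the machine grows quadratically. The per-peer byte cost below is a hypothetical figure chosen purely for illustration.

```python
def per_process_state_bytes(num_procs: int, bytes_per_peer: int = 1024) -> int:
    """Per-process memory if each of the other P-1 peers costs a fixed amount.

    bytes_per_peer is an illustrative assumption, not a measured value.
    """
    return (num_procs - 1) * bytes_per_peer

# Scale of the Blue Gene/P experiments mentioned above.
P = 131072
per_proc = per_process_state_bytes(P)
print(f"per-process state: {per_proc / 2**20:.1f} MiB")  # roughly 128 MiB per process
print(f"aggregate: {P * per_proc / 2**40:.1f} TiB")      # roughly 16 TiB machine-wide
```

This kind of O(P) per-process growth is exactly what motivates measuring memory consumption at scale: a cost that is negligible at a few thousand processes can dominate memory at a hundred thousand.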