Exploring the Multiple-GPU Design Space

Executive Summary

Graphics Processing Units (GPUs) have been growing in popularity because of their impressive processing capabilities, and with general-purpose programming interfaces such as NVIDIA's CUDA they are becoming the platform of choice in the scientific computing community. Previous GPU studies focused on obtaining significant performance gains from execution on a single GPU, employing low-level, architecture-specific tuning to achieve sizeable benefits over multicore CPU execution. In this paper, the authors consider the benefits of running on multiple (parallel) GPUs to obtain further performance speedups. Their methodology allows developers to accurately predict execution time for GPU applications while varying the number and configuration of the GPUs and the size of the input data set.

  • Format: PDF
  • Size: 576.6 KB
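
The summary does not spell out the paper's prediction model, so the sketch below is only an illustration of the general idea: estimating execution time as a function of the number of GPUs and the input size from a few per-device cost terms. The class `GpuConfig`, the function `predict_time`, and the linear transfer/compute cost terms are all assumptions for this example, not the authors' methodology.

```python
# Minimal sketch of an analytical multi-GPU execution-time model.
# All parameter names and the linear cost terms are illustrative
# assumptions, not the model described in the paper.

from dataclasses import dataclass


@dataclass
class GpuConfig:
    """Hypothetical per-GPU characteristics used by this sketch."""
    compute_rate: float      # elements processed per second per GPU
    pcie_bandwidth: float    # bytes per second, host <-> device
    kernel_overhead: float   # fixed launch/synchronization cost in seconds


def predict_time(n_elements: int, bytes_per_element: int,
                 num_gpus: int, cfg: GpuConfig) -> float:
    """Predict wall-clock time for a data-parallel workload split
    evenly across num_gpus devices (illustrative model only)."""
    per_gpu_elements = n_elements / num_gpus
    # Host-to-device transfer of each GPU's partition, assumed to
    # proceed concurrently over independent PCIe links.
    transfer = per_gpu_elements * bytes_per_element / cfg.pcie_bandwidth
    # Kernel execution on each GPU's share of the data.
    compute = per_gpu_elements / cfg.compute_rate
    return cfg.kernel_overhead + transfer + compute


if __name__ == "__main__":
    cfg = GpuConfig(compute_rate=2e9, pcie_bandwidth=6e9, kernel_overhead=1e-4)
    for gpus in (1, 2, 4):
        t = predict_time(n_elements=100_000_000, bytes_per_element=4,
                         num_gpus=gpus, cfg=cfg)
        print(f"{gpus} GPU(s): predicted {t * 1000:.1f} ms")
```

A model of this shape makes the trade-off visible: compute and transfer terms shrink as GPUs are added, while fixed per-device overheads do not, which is why predicted speedup depends on both the GPU configuration and the input data size.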