
On the Limits of GPU Acceleration


Executive Summary

This paper throws a small wet blanket on the hot topic of GPGPU acceleration, based on the authors' experience analyzing and tuning both multithreaded CPU and GPU implementations of three computations in scientific computing. These computations - iterative sparse linear solvers, sparse Cholesky factorization, and the fast multipole method - exhibit complex behavior and vary in computational intensity and memory-reference irregularity. In each case, algorithmic analysis and prior work might lead one to conclude that an idealized GPU should deliver better performance. The authors find, however, that given at least equal-effort CPU tuning and consideration of realistic workloads and calling contexts, two modern quad-core CPU sockets can roughly match one or two GPUs in performance.
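For context, the dominant kernel in an iterative sparse solver is a sparse matrix-vector multiply. The sketch below (a plain C implementation over the compressed sparse row format, not code from the paper) illustrates the properties the summary refers to: low computational intensity (roughly two flops per nonzero against several memory reads) and irregular, data-dependent memory accesses through the column-index array.

```c
#include <stddef.h>

/* Sparse matrix-vector multiply y = A*x with A stored in compressed
 * sparse row (CSR) form. The indirect access x[col_idx[j]] is the
 * irregular, data-dependent memory reference pattern that limits
 * performance of this kernel on both CPUs and GPUs.
 * (Illustrative sketch only; names and layout are assumptions.) */
void spmv_csr(size_t n_rows,
              const size_t *row_ptr,   /* n_rows + 1 entries */
              const size_t *col_idx,   /* one entry per nonzero */
              const double *vals,      /* one entry per nonzero */
              const double *x,         /* dense input vector */
              double *y)               /* dense output vector */
{
    for (size_t i = 0; i < n_rows; ++i) {
        double sum = 0.0;
        for (size_t j = row_ptr[i]; j < row_ptr[i + 1]; ++j) {
            /* Gather through col_idx: addresses depend on the sparsity
             * pattern, so accesses to x are effectively random. */
            sum += vals[j] * x[col_idx[j]];
        }
        y[i] = sum;
    }
}
```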

  • Format: PDF
  • Size: 328.6 KB