Characterizing and Improving the Use of Demand-Fetched Caches in GPUs


Executive Summary

Initially introduced as special-purpose accelerators for games and graphics code, Graphics Processing Units (GPUs) have emerged as widely-used high-performance parallel computing platforms. GPUs traditionally provided only software-managed local memories (or scratchpads) instead of demand-fetched caches. Increasingly, however, GPUs are being used in broader application domains where memory access patterns are both harder to analyze and harder to manage in software-controlled caches. In response, GPU vendors have included sizable demand-fetched caches in recent chip designs. Nonetheless, several problems remain. Since these hardware caches are quite new and highly configurable, it can be difficult to know when and how to use them; they sometimes degrade performance instead of improving it.
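To make the distinction concrete, the sketch below contrasts a software-managed scratchpad (CUDA shared memory, explicitly filled by the programmer) with an equivalent access pattern that simply relies on the hardware's demand-fetched L1 cache. The kernel names and the copy workload are illustrative assumptions, not code from the paper; the cache-configuration call (`cudaFuncSetCacheConfig`) is the standard CUDA API for choosing how the on-chip SRAM is split between shared memory and L1 on architectures where that split is configurable.

```cuda
#include <cuda_runtime.h>

// Scratchpad version: the programmer explicitly stages data into
// on-chip shared memory (a software-managed local memory).
__global__ void copy_scratchpad(const float *in, float *out, int n) {
    __shared__ float tile[256];                   // explicit on-chip buffer
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) tile[threadIdx.x] = in[i];         // programmer-controlled fetch
    __syncthreads();
    if (i < n) out[i] = tile[threadIdx.x];
}

// Demand-fetched version: accesses go straight to global memory,
// and the hardware L1/L2 caches decide what stays on chip.
__global__ void copy_cached(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

void configure(void) {
    // On GPUs with a configurable SRAM split, bias the partition
    // toward whichever structure each kernel actually uses.
    cudaFuncSetCacheConfig(copy_cached,     cudaFuncCachePreferL1);
    cudaFuncSetCacheConfig(copy_scratchpad, cudaFuncCachePreferShared);
}
```

Whether the cached version wins depends on the access pattern: regular, reuse-heavy patterns favor the scratchpad, while irregular or hard-to-analyze patterns are exactly the cases where the demand-fetched cache earns its keep.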

  • Format: PDF
  • Size: 506.57 KB