Teaching Old Caches New Tricks: RegionTracker and Predictor Virtualization

Executive Summary

On-chip last-level caches are growing to tens of megabytes to accommodate applications with large memory footprints and to compensate for high memory latencies and limited off-chip bandwidth. This paper reviews two ongoing research efforts that exploit such large caches: coarse-grain cache management and predictor virtualization. Coarse-grain cache management collects and stores cache information at a large memory-region granularity (e.g., 1 KB to 8 KB). This coarse view of memory access behaviour enables optimizations that were not previously possible with conventional, block-grain caches. Predictor virtualization is motivated by the observation that on-chip storage has become large enough to allocate, on demand, a small fraction of its capacity for purposes other than storing program data and instructions, such as predictor metadata.
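To make the coarse-grain idea concrete, the C++ sketch below keeps one entry per memory region and records which blocks of that region have been touched, giving the kind of coarse view of access behaviour the summary describes. The class and field names, the 4 KB region size, and the 64-byte block size are illustrative assumptions, not the paper's actual RegionTracker organization.

    // Illustrative region-granularity tracking table (a sketch; names and
    // sizes are assumptions, not the paper's actual RegionTracker design).
    #include <bitset>
    #include <cstdint>
    #include <cstdio>
    #include <unordered_map>

    constexpr uint64_t kBlockSize  = 64;     // assumed cache-block size in bytes
    constexpr uint64_t kRegionSize = 4096;   // assumed region size (paper cites 1 KB to 8 KB)
    constexpr size_t   kBlocksPerRegion = kRegionSize / kBlockSize;

    struct RegionEntry {
        std::bitset<kBlocksPerRegion> present;  // which blocks of the region are cached
        uint32_t accesses = 0;                  // coarse-grain access count for the region
    };

    class RegionTable {
    public:
        // Record that the cache block containing `addr` was accessed.
        void touch(uint64_t addr) {
            RegionEntry &e = table_[addr / kRegionSize];
            e.present.set((addr % kRegionSize) / kBlockSize);
            ++e.accesses;
        }

        // Coarse view: fraction of the region's blocks currently tracked as cached.
        double density(uint64_t addr) const {
            auto it = table_.find(addr / kRegionSize);
            return it == table_.end()
                       ? 0.0
                       : static_cast<double>(it->second.present.count()) / kBlocksPerRegion;
        }

    private:
        std::unordered_map<uint64_t, RegionEntry> table_;  // keyed by region number
    };

    int main() {
        RegionTable rt;
        // Touch eight consecutive blocks inside one 4 KB region.
        for (uint64_t a = 0x10000; a < 0x10000 + 8 * kBlockSize; a += kBlockSize)
            rt.touch(a);
        std::printf("region density: %.2f\n", rt.density(0x10000));  // prints 0.12 (8/64)
    }

A hardware design would cap the table's size rather than use an unbounded map; under predictor virtualization, metadata of this kind could instead be spilled into the last-level cache on demand rather than held in dedicated storage.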
