MRPB: Memory Request Prioritization for Massively Parallel Processors

Provided by: University of Rhode Island, Kingston
Topic: Storage
Format: PDF
Massively parallel, throughput-oriented systems such as Graphics Processing Units (GPUs) offer high performance for a broad range of programs. They are, however, complex to program, especially because of their intricate memory hierarchies with multiple address spaces. In response, modern GPUs have widely adopted caches in the hope of smoothing out memory access traffic and reducing latency. Unfortunately, GPU caches often have a mixed or unpredictable performance impact due to cache contention that results from the high thread counts in GPUs.
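
To make the contention problem concrete, the following is a minimal CUDA sketch, not taken from the paper; the kernel name, array sizes, and stride are illustrative assumptions. With roughly a million threads in flight, each touching its own cache line, the combined working set far exceeds a typical per-SM L1 cache (on the order of tens of kilobytes), so lines are evicted before they can be reused and the cache delivers little benefit.

```cuda
// Hypothetical illustration of GPU cache contention (not from the paper).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void strided_read(const float* in, float* out, int stride, int n) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    // Each thread reads a distinct, widely spaced element. With high thread
    // counts the per-SM L1 cannot hold all of these lines at once, so any
    // potential reuse turns into misses (cache thrashing).
    int idx = (tid * stride) % n;
    out[tid] = in[idx];
}

int main() {
    const int n = 1 << 24;                       // 16M floats (64 MB), far larger than L1
    const int threads = 256, blocks = 1 << 12;   // ~1M concurrent threads
    float *in, *out;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, threads * blocks * sizeof(float));
    strided_read<<<blocks, threads>>>(in, out, /*stride=*/32, n);
    cudaDeviceSynchronize();
    printf("done\n");
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```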
