University of Wisconsin-La Crosse
Efficient memory sharing between CPU and GPU threads can greatly expand the effective set of GPGPU workloads. For increased programmability, this memory should be uniformly virtualized, necessitating compatible address-translation support for GPU memory references. However, even a modest GPU might need hundreds of translations per cycle (e.g., 6 CUs x 64 lanes/CU = 384 lanes), with memory access patterns designed for throughput more than for locality. To drive GPU MMU design, the authors examine GPU memory reference behavior with the Rodinia benchmarks and a database sort, finding that: (1) the coalescer and scratchpad memory are effective TLB bandwidth filters, reducing the translation rate by 6.8x on average; (2) TLB misses occur in bursts, with 60 outstanding concurrently on average; and (3) post-coalescer TLBs have high miss rates, 29% on average.
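The TLB-filtering effect of coalescing can be illustrated with a minimal sketch (not the authors' methodology or simulator): per-lane addresses from one wavefront are merged into unique page translations, so a unit-stride access by 64 lanes generates far fewer TLB requests than 64. The 4 KiB page size and the example addresses are assumptions for illustration.

```python
# Hedged sketch: how a coalescer filters address-translation bandwidth.
# Assumes x86-64-style 4 KiB pages; addresses are hypothetical.

PAGE_SIZE = 4096

def coalesced_translations(lane_addresses):
    """Number of unique page translations needed after coalescing
    one wavefront's per-lane memory accesses."""
    return len({addr // PAGE_SIZE for addr in lane_addresses})

# 64 lanes streaming consecutive 4-byte elements fall in one page,
# so the post-coalescer TLB sees 1 request instead of 64.
base = 0x10000                              # page-aligned base address
lanes = [base + 4 * i for i in range(64)]   # unit-stride access pattern
print(64 / coalesced_translations(lanes))   # filter ratio: 64.0
```

Throughput-oriented but scattered access patterns (e.g., a large stride) defeat this filtering, which is consistent with the high post-coalescer miss rates the abstract reports.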