Improving GPU Performance Via Large Warps and Two-Level Warp Scheduling

Due to their massive computational power, Graphics Processing Units (GPUs) have become a popular platform for executing general-purpose parallel applications. GPU programming models allow the programmer to create thousands of threads, each executing the same computing kernel. GPUs exploit this parallelism in two ways: first, threads are grouped into fixed-size SIMD batches known as warps; second, many such warps are concurrently executed on a single GPU core. Despite these techniques, the computational resources on GPU cores are still underutilized, resulting in performance far short of what the hardware could deliver. Two reasons for this are conditional branch instructions and stalls due to long-latency operations.
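The underutilization caused by conditional branches can be illustrated with a small model. The sketch below (not taken from the paper; warp width and the one-instruction-per-path simplification are assumptions) simulates what happens when threads within one SIMD warp diverge at a branch: the two paths execute serially, each with the other path's lanes masked off, so average lane utilization drops.

```python
# Illustrative model (an assumption for this sketch, not the paper's
# mechanism): estimate SIMD lane utilization when a warp diverges.
# On GPUs, both sides of a divergent branch are issued serially,
# with lanes on the inactive path masked off.

WARP_SIZE = 32  # typical warp width; assumed for this example


def divergent_utilization(taken_mask):
    """Fraction of SIMD lanes doing useful work while executing a
    divergent branch, modeling each path as one issue slot."""
    taken = sum(taken_mask)
    not_taken = len(taken_mask) - taken
    if taken == 0 or not_taken == 0:
        return 1.0  # no divergence: every lane follows the same path
    active_lane_slots = taken + not_taken   # useful work performed
    issued_lane_slots = 2 * len(taken_mask)  # two serialized issues
    return active_lane_slots / issued_lane_slots


# Example: half the threads in a 32-wide warp take the branch,
# so only half the lanes are busy in each of the two issue slots.
half_diverged = [i % 2 == 0 for i in range(WARP_SIZE)]
print(divergent_utilization(half_diverged))  # 0.5
```

Even this simple model shows why divergence is costly: with the warp split evenly, half of the core's SIMD resources are idle for the duration of the branch, which is one of the inefficiencies the paper's large-warp mechanism targets.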

Provided by: Association for Computing Machinery | Date Added: Dec 2011 | Format: PDF
