CUDA-NP: Realizing Nested Thread-Level Parallelism in GPGPU Applications

Provided by: Association for Computing Machinery
Parallel programs consist of a series of code sections with different degrees of Thread-Level Parallelism (TLP). As a result, it is common for a thread in a parallel program, such as a thread in a CUDA GPU kernel, to contain both sequential code and parallel loops. To exploit such parallel loops, the latest Nvidia Kepler architecture introduces dynamic parallelism, which allows a GPU thread to launch another GPU kernel, thereby reducing the overhead of launching kernels from a CPU. However, with dynamic parallelism, a parent thread can communicate with its child threads only through global memory, and the overhead of launching GPU kernels is non-trivial even within GPUs.
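For concreteness, the sketch below illustrates the dynamic-parallelism pattern the abstract refers to: a parent kernel thread runs sequential code and then launches a child kernel for its parallel loop. The kernel names, data layout, and launch configuration are illustrative assumptions, not code from the paper; device-side launches require compute capability 3.5+ and compilation with relocatable device code (-rdc=true).

```cuda
// Child kernel: executes the parallel loop on one thread's chunk of data.
__global__ void childKernel(float *chunk, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        chunk[i] *= 2.0f;  // placeholder parallel loop body
    }
}

// Parent kernel: each thread does its sequential work, then launches a
// child grid from the GPU instead of returning to the CPU.
__global__ void parentKernel(float *data, int nPerThread) {
    // ... sequential per-thread work would go here ...
    float *myChunk = data + (blockIdx.x * blockDim.x + threadIdx.x) * nPerThread;

    // Device-side kernel launch (dynamic parallelism). The parent and
    // child can exchange data only through global memory; the parent
    // grid does not complete until all of its child grids have finished.
    childKernel<<<(nPerThread + 255) / 256, 256>>>(myChunk, nPerThread);
}
```

Even in this form, each device-side launch still pays a non-trivial kernel-launch cost and a round trip through global memory, which is the overhead the paper's CUDA-NP approach targets.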
