Exploiting Inter-Warp Heterogeneity to Improve GPGPU Performance

Tuesday Sept. 29, 2015
Location: CIC Panther Hollow Room
Time: 4:30PM


Rachata Ausavarungnirun
CMU

Abstract

In a GPU, all threads within a warp execute the same instruction in lockstep. For a memory instruction, this can lead to memory divergence: the memory requests for some threads are serviced early, while the remaining requests incur long latencies. This divergence stalls the warp, as it cannot execute the next instruction until all requests from the current instruction complete.

In this work, we make three new observations. First, GPGPU warps exhibit heterogeneous memory divergence behavior at the shared cache: some warps have most of their requests hit in the cache (high cache utility), while other warps see most of their requests miss (low cache utility). Second, a warp retains the same divergence behavior for long periods of execution. Third, due to high memory-level parallelism, requests going to the shared cache can incur queuing delays as large as hundreds of cycles, exacerbating the effects of memory divergence.
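The characterization above can be pictured with a small sketch, not taken from the paper itself: per-warp hit/miss counters at the shared cache, examined over an execution window, classify a warp as high or low cache utility. The 50% threshold and the windowed reset are illustrative assumptions.

    // Minimal sketch (assumed, not the paper's implementation): per-warp
    // shared-cache hit tracking used to characterize cache utility.
    #include <cstdint>
    #include <cstdio>

    struct WarpCacheStats {
        uint32_t hits = 0;
        uint32_t misses = 0;

        void record(bool hit) { hit ? ++hits : ++misses; }

        // A warp whose requests mostly hit has high cache utility;
        // one whose requests mostly miss has low cache utility.
        bool highCacheUtility() const {
            uint32_t total = hits + misses;
            return total > 0 && hits * 2 >= total;   // hit ratio >= 50% (assumed threshold)
        }

        void resetWindow() { hits = misses = 0; }    // re-characterize periodically
    };

    int main() {
        WarpCacheStats warp;
        // Example window: 6 hits and 2 misses observed for this warp.
        for (int i = 0; i < 6; ++i) warp.record(true);
        for (int i = 0; i < 2; ++i) warp.record(false);
        std::printf("high cache utility: %s\n", warp.highCacheUtility() ? "yes" : "no");
    }

Because the second observation says a warp keeps the same divergence behavior for long stretches of execution, a classification like this stays valid long enough to guide policy decisions.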

We propose a set of techniques, collectively called Memory Divergence Correction (MeDiC), that reduce the negative performance impact of memory divergence and cache queuing. MeDiC uses warp divergence characterization to guide three components: (1) a cache bypassing mechanism that exploits the latency tolerance of low cache utility warps to both alleviate queuing delay and increase the hit rate for high cache utility warps, (2) a cache insertion policy that prevents data from high cache utility warps from being prematurely evicted, and (3) a memory controller that prioritizes the few requests received from high cache utility warps to minimize stall time. We compare MeDiC to four cache management techniques and find that, across 15 different GPGPU applications, it delivers an average speedup of 21.8% and 20.1% higher energy efficiency over a state-of-the-art GPU cache management mechanism.
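The sketch below illustrates, under assumed names and policies rather than MeDiC's actual design, how a per-warp cache-utility classification could drive the three components described above: bypassing for low-utility warps, protected insertion for high-utility warps, and memory-controller prioritization of high-utility requests.

    // Minimal sketch (assumptions, not MeDiC itself): policy decisions driven
    // by a per-warp cache-utility classification.
    #include <cstdio>

    enum class CacheUtility { High, Low };

    // (1) Cache bypassing: low-utility warps are latency tolerant, so their
    //     requests can skip the shared cache, reducing queuing delay and
    //     leaving more capacity for high-utility warps.
    bool shouldBypassSharedCache(CacheUtility u) {
        return u == CacheUtility::Low;
    }

    // (2) Insertion policy: protect high-utility warps' data from premature
    //     eviction (e.g., MRU insertion, an assumed choice); other data, if
    //     cached at all, is inserted where it ages out quickly.
    enum class InsertPos { MRU, LRU };
    InsertPos insertionPosition(CacheUtility u) {
        return u == CacheUtility::High ? InsertPos::MRU : InsertPos::LRU;
    }

    // (3) Memory scheduling: the few requests from high-utility warps are the
    //     ones stalling those warps, so serve them first at the controller.
    int memorySchedulerPriority(CacheUtility u) {
        return u == CacheUtility::High ? 1 : 0;   // higher value = served first
    }

    int main() {
        const CacheUtility warps[] = { CacheUtility::High, CacheUtility::Low };
        for (CacheUtility u : warps) {
            std::printf("%s-utility warp: bypass=%d insert=%s priority=%d\n",
                        u == CacheUtility::High ? "high" : "low",
                        shouldBypassSharedCache(u),
                        insertionPosition(u) == InsertPos::MRU ? "MRU" : "LRU",
                        memorySchedulerPriority(u));
        }
    }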

Preprint

Bio

I am a graduate student at Carnegie Mellon, where I also did my undergraduate degree. I am from Bangkok, Thailand. My research interests are in heterogeneous architectures, the memory subsystem, scheduling, storage systems, and networks-on-chip.

I work with my advisor Prof. Onur Mutlu in the SAFARI research group, part of the Computer Architecture Lab at Carnegie Mellon (CALCM). I am interested in designing scalable, high-performance heterogeneous architectures through new memory controller designs and scalable networks-on-chip. I am also interested in providing service guarantees for different types of applications in a heterogeneous system, including SoCs. I am supported by the Royal Thai Scholarship for both my undergraduate and graduate degrees.
