High-bandwidth, Energy-efficient DRAM Architectures for GPU Systems

Tuesday Nov. 3rd 2015
Location: CIC Panther Hollow Room
Time: 4:30PM

Mike O’Connor
Senior Manager of External Memory Systems Research, NVIDIA

Abstract

High-bandwidth, energy-efficient DRAM architectures are required to support the computation demands of GPUs (and other throughput architectures). This talk describes several aspects of GPU memory systems, with a focus on the requirements placed on the DRAM. New stacked, in-package High-Bandwidth Memory (HBM) addresses many of these challenges, and the second-generation HBM2 standard improves the situation further. As GPUs scale to bandwidths beyond 1 TB/sec, however, new innovations are required. This talk provides a high-level overview of these emerging DRAM architecture challenges.

Bio

Mike O'Connor manages research efforts at NVIDIA focused on external memory systems, such as DRAM. Mike has been involved in research on many areas of GPU and memory systems architecture at NVIDIA and, previously, AMD. At AMD, Mike was deeply involved in the development of the HBM standard. Prior to AMD, Mike was in the product architecture group at NVIDIA, where he was the lead memory system architect for several generations of NVIDIA GPUs, including the first NVIDIA GPUs with GDDR5 support. Mike has also architected network processors at start-up Silicon Access Networks, an ARM processor core at Texas Instruments, and the picoJava cores at Sun. He has 50 granted patents. He holds a BSEE from Rice University and an MSEE from the University of Texas at Austin, and is currently working toward finishing his long-delayed PhD at UT-Austin. Mike is a Senior Member of the IEEE and a member of the ACM.
