The Fourth Workshop on the Intersections of Computer Architecture and Reconfigurable Logic (CARL 2015)

Portland, Oregon - Sunday, June 14, 2015

Oregon Convention Center, Room D-132

Co-located with ISCA 2015

http://www.ece.cmu.edu/calcm/carl

====Accelerating Deep Convolutional Neural Networks Using Specialized Hardware in the Datacenter====
({{carl15_chung.pdf |slides}})
Recent breakthroughs in the development of multi-layer convolutional neural networks have led to state-of-the-art improvements in the accuracy of non-trivial recognition tasks such as large-category image classification and automatic speech recognition.  These many-layered neural networks are large, complex, and require substantial computing resources to train and evaluate.  Unfortunately, these demands come at an inopportune moment due to the recent slowing of gains in commodity processor performance.