# Program Generation for Transforms: Tackling the Memory Hierarchy

## Abstract

State-of-the-art code generators, including ATLAS, Flame, FFTW, and SPIRAL, rely on search to fit algorithms to hardware. The reasons for this are partially historical and partially practical. Before vector units and multi-cores arrived in general-purpose computing, algorithm optimization occurred primarily in the domain of the memory hierarchy. Optimizing for the memory hierarchy was (and is) difficult, and the literature is filled with techniques to improve locality and performance. Search provides a means to implicitly apply sets of these optimization methods automatically and produces code that performs comparably to hand-tuned code, a major win for the code generators. Search (using an execute-and-time feedback loop), however, has several unfortunate consequences: it is time consuming (the search space is large); it is subject to measurement noise; it yields the best algorithm among those tried, rather than a provably optimal one; it may not even find the best algorithm; and finally, it is a black box (we learn which algorithm is best, but not why it is best). The goal of this work is to build an optimization framework for linear transforms that relies on the parameters and structure of the memory hierarchy rather than search. The approach involves the definition of a memory model on which we define a cost function. We then identify which structures in the transform algorithm minimize the cost function and construct DFT algorithms that meet these criteria. This talk describes ongoing research, and many results are preliminary.
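To make the model-driven approach concrete, the following toy sketch (an illustrative assumption, not the talk's actual model or parameters) applies a cost function to the Cooley-Tukey recursion: an N-point DFT splits into N1 sub-DFTs of size N2 plus N2 sub-DFTs of size N1, and the model charges one full pass over the data per out-of-cache recursion level. The cache sizes and the `misses`/`split_cost` functions are hypothetical names introduced here for illustration.

```python
from functools import lru_cache

# Hypothetical cache parameters, in complex words (illustrative assumptions).
CACHE_WORDS = 4096
LINE_WORDS = 8

@lru_cache(maxsize=None)
def misses(n):
    """Toy cache-miss estimate for an n-point DFT.

    If the whole problem fits in cache, count only compulsory misses
    (one per cache line).  Otherwise, take the best Cooley-Tukey split
    n = n1 * n2, where each recursion level re-streams the data and
    adds n misses.
    """
    if n <= CACHE_WORDS:
        return -(-n // LINE_WORDS)  # ceil(n / LINE_WORDS)
    return min(split_cost(n, n1) for n1 in range(2, n) if n % n1 == 0)

def split_cost(n, n1):
    """Modeled misses when the first recursion step uses radix n1."""
    n2 = n // n1
    return n1 * misses(n2) + n2 * misses(n1) + n

# Under this model, a balanced split (n1 near sqrt(n)) beats radix-2:
# the kind of structural conclusion a cost function can expose without
# executing and timing a single candidate implementation.
print(split_cost(2**16, 256), split_cost(2**16, 2))
```

The point of the sketch is the reasoning style, not the numbers: instead of searching over generated implementations, the model identifies which recursion structure minimizes the cost function analytically.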

## Bio

Marek Telgarsky is a PhD student in the Electrical and Computer Engineering department. He completed his Bachelor's Degree in Electrical Engineering at the University of Wisconsin-Madison. His advisors are Jose Moura and James Hoe. His research interests include signal processing, software optimization, and computer architecture.