Out-of-the-Box Thinking in Computer Architecture: Lattice Gas Automata
Tuesday, December 3, 2002
Hamerschlag Hall D-210
Andreas Nowatzyk
Associate Professor of ECE and RI, Carnegie Mellon University

Computer architecture is a maturing discipline with established criteria
to evaluate new ideas. Developing a competitive microprocessor keeps hundreds
of engineers busy for years. The development costs for computing platforms
that include I/O, memory and storage structures are even higher. Hence
it is no surprise that this discipline is dominated by small, evolutionary
refinements. Gone are the days of bold, radically different, innovative
ideas. Instead, mainstream computer architecture is concerned with improving
branch predictors, finding some more instruction-level parallelism, adding
a bit more speculation, tweaking the cache hierarchy a little more, or
perhaps addressing some reliability problems. Research and innovation along
these lines require a very costly infrastructure and familiarity with a
myriad of technology constraints and design trade-offs. Thus nearly all of this
work is happening behind the closed doors of Intel, IBM, AMD, Sony...

However, computer architecture research in academia is far from dead. Our
strength is not having to worry about a billion lines of Windows code.
We can think about how to solve important, REAL problems that do not fit
the current microprocessor paradigm. There are solutions to such problems
that are completely alien to a mainstream computer architect.

Lattice Gas Automata (LGA) are one such example: without any floating
point arithmetic, LGA can solve fluid-dynamic problems more efficiently
than a traditional supercomputer. LGA are a special class of cellular
automata (CA). More commonly known CA include John Conway's Game of Life,
John von Neumann's self-replicating computer, and Rule 110 of A New Kind
of Science by that impossible-to-remember fellow. In 1976 Hardy, de Pazzis
and Pomeau proposed a CA in which each bit of the state of a cell only
affects the next state of a distinct neighbor. Such CA can model discretized
versions of a billiard ball table where collisions occur only at the cells
of a certain, regular lattice. Collisions may conserve certain quantities,
such as the number of balls and the total momentum (the sketch after this
abstract illustrates one such rule). Since then, LGA have been refined to
model a wide range of physical problems.

In this talk, I will give a brief overview of LGA and other cellular
automata machines from the perspective of a traditional computer architect.
LGA research is a highly developed, quite subtle field that cannot be
covered in an hour. But it should be fun to take a look at it to provoke
some out-of-the-box thinking.
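
To make the HPP rule above concrete, here is a minimal sketch in Python
with NumPy. It is an editor's illustration of the textbook HPP formulation,
not code from the talk: each cell stores four bits, one per lattice
direction, and a time step is a bitwise collision rule followed by shifting
each bit to a neighbor, so no floating-point arithmetic is involved. The
grid size, random seed, and the helper name hpp_step are arbitrary choices.

    import numpy as np

    # One bit per velocity direction in each cell of the square lattice.
    N, E, S, W = 1, 2, 4, 8

    def hpp_step(grid):
        # Decode the four direction bits into boolean occupancy arrays.
        n = (grid & N) != 0
        e = (grid & E) != 0
        s = (grid & S) != 0
        w = (grid & W) != 0

        # Collision: a head-on pair whose perpendicular pair is empty
        # rotates 90 degrees; this conserves the particle count and the
        # (zero) net momentum of the colliding pair.
        ns = n & s & ~e & ~w
        ew = e & w & ~n & ~s
        n, s = (n & ~ns) | ew, (s & ~ns) | ew
        e, w = (e & ~ew) | ns, (w & ~ew) | ns

        # Streaming: every bit moves one cell in its own direction
        # (periodic boundaries for simplicity).
        out  = np.roll(n, -1, axis=0).astype(grid.dtype) * N
        out |= np.roll(s,  1, axis=0).astype(grid.dtype) * S
        out |= np.roll(e,  1, axis=1).astype(grid.dtype) * E
        out |= np.roll(w, -1, axis=1).astype(grid.dtype) * W
        return out

    rng = np.random.default_rng(0)
    grid = rng.integers(0, 16, size=(64, 64), dtype=np.uint8)
    count = lambda g: sum(bin(int(v)).count("1") for v in g.ravel())
    before = count(grid)
    for _ in range(100):
        grid = hpp_step(grid)
    assert count(grid) == before  # the number of "balls" never changes

The appeal to a computer architect is visible even in this toy version:
the entire update is local and bit-parallel, which is what lets dedicated
cellular automata machines run such models so efficiently.
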
After receiving his PhD in Computer Science from CMU, Andreas Nowatzyk
developed distributed shared memory multiprocessors at Sun Microsystems.
S3.mp allowed all workstations within a building to be interconnected to
form one distributed multiprocessor that efficiently shared all computing
resources. He and his team at Sun developed the first single-chip router
with multiple, integrated serial interfaces operating at >1 Gb/sec. His
work on processor/memory integration at Sun predates similar work at Berkeley
and resulted in several basic patents. At Digital/Compaq's Western Research
Laboratory, he worked on the Piranha chip multiprocessor, which refined
several of the innovations from S3.mp. Even though the nearly completed
Piranha CMP was canceled along with the Alpha microprocessor, it influenced
projects under development at Intel and Sun. After working for 10 years
in industry on scalable MP systems for commercial applications, he is now
an associate professor in CMU's ECE department and the Robotics Institute.
Besides computer architecture, his research interests include optics,
high resolution imaging, signal processing, and computational biology.