Automata Processor: It’s Not Just About Speed and Power Consumption

By Paul Dlugosch - 2013-12-19

At Supercomputing 13 (SC13), I participated in a discussion panel on the topic of reconfigurable computing. One recurring theme expressed by the audience was disappointment that the industry is not moving more quickly toward exascale-class performance despite the fact that we have some very powerful CPUs, GPUs, and reconfigurable logic engaged in this effort. Several panel members pointed to insufficient programmer productivity as the root of this problem. In other words, we have very powerful systems that we cannot fully exploit because it’s too difficult to do so.

We know that scientists and engineers are demanding high levels of parallelism, which is viewed as a key capability in the march toward exascale-class computing. Unfortunately, achieving sufficient levels of parallelism can be very hard. Simply identifying the potential parallelism in complex algorithms is often one of the most difficult challenges, and even once that potential is identified, implementing it on current parallel computer architectures becomes the next major hurdle.

The Automata Processor can address these challenges in parallel programming in a unique way. While parallel automata-based computing may be unfamiliar to many, the actual concept is quite simple. Some large problems may require tens, hundreds, or even thousands of automata to operate in parallel. In conventional parallel architectures, these individual automata can be equated to threads — and synchronizing these threads, managing their access to system resources, and coordinating results become very difficult and time-consuming tasks. With the Automata Processor, each automaton is loaded into the processing fabric and essentially becomes an independently operating thread execution engine. Need another thread? Simply compile your automaton and load it into an empty area of the fabric. All of the automata will operate in parallel against the input data stream. It’s clean, simple, and easy — and is sure to make the lives of parallel programmers a lot easier.
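To make the idea concrete, here is a minimal software sketch (this is not the Automata Processor API; the function and automata names are hypothetical): several simple pattern-matching automata each scan the same input stream independently, one symbol at a time, just as each automaton loaded into the fabric acts as its own matching engine with no inter-thread coordination.

```python
def run_automaton(pattern, stream):
    """Simulate an NFA that detects `pattern` anywhere in `stream`.

    Processes one input symbol per step and returns the end index of
    each match -- the same symbol-at-a-time model the AP fabric uses.
    """
    active = set()   # positions within `pattern` currently matched
    matches = []
    for i, sym in enumerate(stream):
        # advance every in-progress match attempt by one symbol
        nxt = {p + 1 for p in active if pattern[p] == sym}
        # a fresh match attempt can begin at every input symbol
        if pattern[0] == sym:
            nxt.add(1)
        if len(pattern) in nxt:   # reached the accepting state
            matches.append(i)
            nxt.discard(len(pattern))
        active = nxt
    return matches

# "Load" several automata; each runs against the same stream with no
# synchronization or shared state between them.
automata = {"a1": "abc", "a2": "bca", "a3": "cab"}
stream = "abcabca"
results = {name: run_automaton(pat, stream)
           for name, pat in automata.items()}
print(results)  # each automaton reports its own match positions
```

On hardware, of course, the automata run simultaneously in the fabric rather than one after another as in this loop, but the programming model is the same: define each machine independently and let the stream drive all of them.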

This programming simplicity has not been lost on some analysts who have reviewed the Automata Processor architecture. In his SC13 blog series for Moor Insights & Strategy, Paul Teich zeroed in on this compelling capability of the Automata Processor: the fact that the user can define hundreds or even thousands of unique machines and never have to think about how to make them work together in parallel. I was very happy to see someone “on the outside” make this important observation. Technologies with simple, easy-to-use interfaces and programming capabilities — like the Automata Processor — will play a key role in improving programmer productivity and helping the industry to scale computing performance to extraordinary levels.

Stay tuned for more discussions and analysis of our Automata Processor launch…

Paul Dlugosch