Micron Blog

Searching for a New Applications-Driven HPC Revolution

At the end of this year’s International Supercomputing Conference (ISC), I sat enjoying a drink with a group of high-performance computing (HPC) veterans outside the church in Leipzig where the fall of communism began, and we inevitably asked the question, “What happened this year?”  The consensus of the group was simple: nothing.

That’s not to say that there weren’t deeply interesting ISC sessions and technical papers, or that China’s Tianhe-2 supercomputer moving to the top of the Top500 list earlier than expected wasn’t important.  Rather, it seems that the community feels an urgent need for a revolutionary or disruptive announcement, which simply didn’t come.  Perhaps we’ve grown too comfortable with today’s roadmap to exascale computing.  We have a lot of engineering and technical challenges to overcome if we’re going to get from petascale to exascale—especially if we’re going to do so meaningfully, where “scale” is defined as something more than “LINPACK.”  I think we all believe the exa-LINPACK will arrive well before 2020.

It’s possible that the seeds of the revolution have already been planted—it may just be a very quiet revolution.  Tianhe-2’s ascendancy to the top of the Top500 didn’t correspond with first place on the Graph500.  Coming in sixth is still an impressive national accomplishment for China, but IBM’s BlueGene/Q and the K computer still dominate the top five slots.  With all the talk about big data, perhaps this is an example of how the problems of traditional HPC and large-scale analytics are simply different, and we’ll have to look for a revolutionary approach to find a unified architecture that addresses both classes of applications.

At ISC, Dave Dunning from Intel and Muhammad Soofi from Aramco gave interestingly complementary talks in radically different sessions.  Dunning made the point that compute technology could get us to exascale, but that we need to worry about the memory.  Soofi argued that many of his applications are memory bandwidth-bound and that, even in a cost-constrained environment, they’d shut down cores to accommodate memory system requirements.  To Dunning, a computer architect, this sounded very much like the “memory wall” argument of the late 1990s.  But Soofi, speaking as a user, identified it more accurately: “It’s a processor wall.”
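
To make that bandwidth-bound argument concrete, here is a minimal roofline-style sketch in Python.  It is purely illustrative: the per-core peak, core count, and bandwidth figures are made up for the example, not taken from either talk.  It shows how a kernel with low arithmetic intensity saturates the memory system with only a couple of cores active, so the remaining cores contribute nothing: the “processor wall” in miniature.

# Illustrative roofline-style estimate (all figures hypothetical).
# Attainable performance = min(peak compute, memory bandwidth * arithmetic intensity).

PEAK_GFLOPS_PER_CORE = 20.0   # assumed per-core peak, GFLOP/s
CORES = 16                    # assumed core count
MEM_BANDWIDTH_GBS = 80.0      # assumed sustained memory bandwidth, GB/s

def attainable_gflops(arithmetic_intensity, active_cores):
    """Roofline bound: the kernel runs at whichever ceiling it hits first."""
    compute_ceiling = PEAK_GFLOPS_PER_CORE * active_cores
    bandwidth_ceiling = MEM_BANDWIDTH_GBS * arithmetic_intensity  # (flops/byte) * (GB/s)
    return min(compute_ceiling, bandwidth_ceiling)

# A stencil-like kernel doing ~0.5 flops per byte moved hits the 40 GFLOP/s
# memory ceiling with just 2 of the 16 cores active; the rest add nothing.
for cores in (2, 4, CORES):
    print(cores, "cores ->", attainable_gflops(0.5, cores), "GFLOP/s")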

Ultimately, I think focusing on the components rather than the system is probably the wrong way to look at it.  We have a systems problem, and the energy trade-offs are systems-level problems.  Joules spent on data movement cannot be spent again on the compute.  From a components perspective, Micron has Hybrid Memory Cube (HMC) products on its roadmap that meet the requirements of exascale computing.  When we looked at the standard commodity roadmap and the nearly impossible power goals in 2008, we never dreamed that any commercial memory company would be able to bring an exascale-capable part to market.  We thought the technology would be the revolution.
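
To put rough numbers on that trade-off, here is a back-of-the-envelope sketch; the picojoule figures and the 20 MW budget are assumptions chosen for illustration, not Micron roadmap data.  Under a fixed power budget, the number of bytes a machine must move per flop quickly determines whether an exascale target is reachable at all.

# Back-of-the-envelope energy split (all figures hypothetical, for illustration).
# Under a fixed power budget, joules spent moving data are unavailable for compute.

TARGET_FLOPS = 1.0e18      # sustained flop/s for an exascale machine
PJ_PER_FLOP = 10.0         # assumed energy per floating-point operation, pJ
PJ_PER_BYTE_MOVED = 60.0   # assumed energy to move one byte off-chip, pJ
POWER_BUDGET_MW = 20.0     # assumed whole-system power budget, MW

def system_megawatts(bytes_moved_per_flop):
    """Power (MW) to sustain TARGET_FLOPS plus the data movement it requires."""
    compute_w = TARGET_FLOPS * PJ_PER_FLOP * 1e-12
    movement_w = TARGET_FLOPS * bytes_moved_per_flop * PJ_PER_BYTE_MOVED * 1e-12
    return (compute_w + movement_w) / 1e6

# With these assumptions, even 0.2 bytes moved per flop blows the 20 MW budget:
# data movement, not arithmetic, sets the limit.
for b in (0.0, 0.1, 0.2, 0.5):
    print(b, "bytes/flop ->", round(system_megawatts(b), 1), "MW of", POWER_BUDGET_MW)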

Instead, I think the revolution will be a change in the HPC application base and the reluctant acceptance that supercomputers will move beyond the MPI-Beowulf model of the last century into something driven by a revolution in architecture.  That revolution is not about building a faster floating-point unit, the pursuit that has dominated the field since Seymour Cray defined how new machines were created.  The revolution is about building systems that are more capable of coping with data sets that are large, complex, and irregular.  It’ll be driven by applications instead of technology, and I’m guessing that’s the reason for the palpable lack of enthusiasm at the end of the conference.  The applications are still in the early phases of being defined, and we’ll need time for their technology requirements to emerge before the revolution becomes truly visible.

About Our Blogger

Richard Murphy

Dr. Richard Murphy is a Senior Advanced Memory Systems Architect for Micron’s DRAM Solutions Group and is focused on future memory platforms, including processing-in-memory.

Prior to joining Micron in 2012, Dr. Murphy was a Principal Member of the Technical Staff at Sandia National Laboratories. He also worked as a technical staff member at Sun Microsystems and served as the Principal Investigator of several advanced computing R&D efforts, including projects for the Defense Advanced Research Projects Agency (DARPA) and the Department of Energy (DOE).

Dr. Murphy’s specialties include research and development of computer architecture, advanced memory systems, and supercomputing systems for physics and data-intensive problems. He has led several large multidisciplinary teams in the successful creation of new technologies.  He also cofounded the Graph500 benchmark and currently chairs its executive committee.

Dr. Murphy is Adjunct Faculty in the Electrical and Computer Engineering Departments at the Georgia Institute of Technology and New Mexico State University. He is the author of over two dozen papers and two patents. He holds a PhD in computer science and engineering, as well as an MS, BS, and BA from the University of Notre Dame. Dr. Murphy is a Senior Member of the IEEE.
