Micron Blog

Contemplating the Future of Computing

I spoke Friday at the IEEE Workshop on Microelectronics and Electron Devices (WMED) in Boise about a memory-centric vision for the future of computing. This was my first time at this workshop, and I was very impressed by two things: the high caliber of the talks and the fact that the workshop had strong representation from students at all levels. Coming from the high-performance computing community—which tends to be a somewhat more distinguished, somewhat greyer crowd—seeing a strong student presence was refreshing!

John Knickerbocker from IBM gave a particularly thought-provoking talk on the future of 2D, 2.5D, and 3D integration as ways of combining heterogeneously fabricated elements: logic, memory, MEMS and other sensors, and potentially silicon photonics. The close proximity these methods enable is critical to the future of computing. Proximity is one of the few ways to significantly decrease the energy required for communication between the modules of a computer, and that communication energy is the key factor in power consumption at every scale, from mobile phone to supercomputer. With data centers consuming up to 1.5% of the world's electricity (see Growth in Data Center Electricity Use 2005 to 2010 [updated 2011]), this is a big deal.

Work in advanced technologies like processing-in-memory (PIM) is about performing computation using less energy than it takes to drag the data between a standard memory module and a processor module.
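To make that trade-off concrete, here is a back-of-envelope sketch in Python. The per-operation energy figures are illustrative assumptions only (rough orders of magnitude often quoted for technology of this era, not Micron data), but they show why moving data off chip can dominate the cost of operating on it.

# Back-of-envelope comparison of compute energy vs. data-movement energy.
# The per-operation figures are assumed, illustrative orders of magnitude.
PJ = 1e-12  # joules per picojoule

energy_per_op_pj = 20           # assumed: one 64-bit arithmetic operation
energy_per_dram_word_pj = 2000  # assumed: moving one 64-bit word to/from off-chip DRAM

n = 1_000_000_000  # a streaming kernel touching one billion words, one op per word
compute_energy = n * energy_per_op_pj * PJ
movement_energy = n * energy_per_dram_word_pj * PJ

print(f"compute energy:       {compute_energy:.2f} J")   # ~0.02 J
print(f"data-movement energy: {movement_energy:.2f} J")  # ~2 J
print(f"movement / compute:   {movement_energy / compute_energy:.0f}x")

With assumptions like these, a kernel that performs only one operation per word fetched spends roughly a hundred times more energy on data movement than on the arithmetic itself; that is the gap PIM aims to close.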

Another dominant and interrelated theme was the change in Moore's Law that has occurred since roughly 2003: performance no longer comes "for free" simply by waiting for the next process generation, so we are looking to new architectures for overall improvements in end-user application performance. There was a lot of talk about potential new memory devices, but more than that, there was an air of opportunity around creating new architectures capable of addressing problems not well solved by today's computers.

One repeated theme was computers inspired by the human cortex, which, despite being slow, shows tremendous 3D structure and interconnectedness. To paraphrase one speaker, such machines may not diagonalize a matrix better than a von Neumann computer, but they have tremendous potential for pattern recognition and other extremely important forms of large-scale data analytics.

Much of the potential for computing in the post-Moore's-Law era lies in solving the heterogeneous integration problem, which, in turn, lets us explore far less expensively the kinds of architectures capable of addressing workloads that have become more about exploring connections and patterns than about traditional scientific calculations.
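As a toy illustration of such a workload, here is a minimal breadth-first search, the kind of connection-exploring kernel at the heart of the Graph 500 benchmark mentioned in the bio below; its running time and energy are dominated by irregular memory accesses rather than arithmetic.

# Minimal breadth-first search over an adjacency list. Each neighbor lookup is
# an essentially random memory access, so the kernel stresses the memory
# system far more than the arithmetic units.
from collections import deque

def bfs(adjacency, source):
    """Return the BFS distance from source to every reachable vertex."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for neighbor in adjacency[v]:
            if neighbor not in dist:
                dist[neighbor] = dist[v] + 1
                queue.append(neighbor)
    return dist

# Tiny example graph; production analytics graphs have billions of edges.
graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(bfs(graph, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}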

The human cerebral cortex is the ultimate example of this: tens of billions of neurons, each with on the order of 50 degrees of freedom, each degree of freedom describable in roughly 6 bits. In terms of raw storage, that is less than a petabyte of information in total, which is achievable for silicon systems today using commodity NAND flash. The complexity and power arise not from the raw storage of the information, but from how that information is applied.
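Taking those figures at face value (they are rough, order-of-magnitude assumptions), the arithmetic works out as follows:

# Rough storage estimate using the figures quoted above (all assumptions).
neurons = 50e9        # "tens of billions" of neurons (assumed midpoint)
dof_per_neuron = 50   # degrees of freedom per neuron
bits_per_dof = 6      # bits to describe each degree of freedom

total_bits = neurons * dof_per_neuron * bits_per_dof
total_bytes = total_bits / 8

print(f"total ~ {total_bytes / 1e12:.1f} TB")                # a few terabytes
print(f"fraction of a petabyte: {total_bytes / 1e15:.3f}")   # well under 1

Even at the high end of these assumptions, the total stays comfortably under a petabyte.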

All in all—pretty heady stuff! Look for more highlights from future events…

About Our Blogger

Richard Murphy

Dr. Richard Murphy is a Senior Advanced Memory Systems Architect for Micron’s DRAM Solutions Group and is focused on future memory platforms, including processing-in-memory.

Prior to joining Micron in 2012, Dr. Murphy was a Principal Member of the Technical Staff at Sandia National Laboratories. He also worked as a technical staff member at Sun Microsystems and served as the Principal Investigator of several advanced computing R&D efforts, including projects for the Defense Advanced Research Projects Agency (DARPA) and the Department of Energy (DOE).

Dr. Murphy’s specialties include research and development of computer architecture, advanced memory systems, and supercomputing systems for physics and data-intensive problems. He has led several large multidisciplinary teams in the successful creation of new technologies.  He also cofounded the Graph 500 benchmark and currently chairs its executive committee. 

Dr. Murphy is Adjunct Faculty in the Electrical and Computer Engineering Departments at the Georgia Institute of Technology and New Mexico State University. He is the author of over two dozen papers and two patents. He holds a PhD in computer science and engineering, as well as an MS, BS, and BA from the University of Notre Dame. Dr. Murphy is a Senior Member of the IEEE.
