I spoke Friday at the IEEE Workshop on Microelectronics and Electron Devices (WMED) in Boise about a memory-centric vision for the future of computing. This was my first time at this workshop, and I was very impressed by two things: the high caliber of the talks and the fact that the workshop had strong representation from students at all levels. Coming from the high-performance computing community—which tends to be a somewhat more distinguished, somewhat greyer crowd—seeing a strong student presence was refreshing!
John Knickerbocker from IBM gave a particularly thought-provoking talk on the future of 2D, 2.5D, and 3D integration as ways of combining heterogeneously fabricated elements (logic, memory, MEMS and other sensors, and potentially silicon photonics). The close proximity these methods enable is critical to the future of computing: proximity is one of the few ways to significantly decrease the energy required for communication between modules in a computer, which is the key factor in power consumption at every scale, from mobile phone to supercomputer. With servers consuming up to 1.5% of the world’s power, this is a big deal.
Work in advanced technologies like processing-in-memory (PIM) is about performing computation using less energy than it takes to drag the data between a standard memory module and a processor module.
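To make the data-movement argument concrete, here is a toy back-of-the-envelope calculation. The per-operation energy figures are assumptions for illustration, roughly in line with commonly cited estimates (an on-chip double-precision add around a picojoule, an off-chip DRAM fetch on the order of a nanojoule or more); they are not numbers from the talk:

```python
# Toy estimate of why moving data costs more than computing on it.
# Both energy figures below are illustrative assumptions, not measurements.

ENERGY_FP64_ADD_PJ = 1.0      # assumed: one 64-bit floating-point add, in pJ
ENERGY_DRAM_WORD_PJ = 2000.0  # assumed: one 64-bit word fetched from DRAM, in pJ

def movement_to_compute_ratio(dram_pj: float = ENERGY_DRAM_WORD_PJ,
                              add_pj: float = ENERGY_FP64_ADD_PJ) -> float:
    """How many adds could be done for the energy of one DRAM fetch."""
    return dram_pj / add_pj

if __name__ == "__main__":
    ratio = movement_to_compute_ratio()
    # Under these assumptions, one off-chip fetch costs as much as ~2000 adds,
    # which is the gap PIM tries to close by computing where the data lives.
    print(f"1 DRAM word fetch = {ratio:.0f} FP adds")
```

The absolute numbers matter less than the ratio: as long as an off-chip fetch costs orders of magnitude more than an arithmetic operation, moving computation to the data wins.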
Another dominant, interrelated theme was the change in Moore’s Law that has occurred since 2003. Performance no longer comes “for free” simply by waiting, so we are looking to new architectures for an overall improvement in end-user application performance. There was a lot of talk about potential new memory devices, but more than that, there was an air of opportunity about the creation of new architectures capable of addressing problems not well solved by today’s computers.
One repeated theme was computers inspired by the human cortex, which, despite being slow, shows tremendous 3D structure and interconnectedness. To paraphrase one speaker: such machines may not diagonalize a matrix better than a von Neumann computer, but they have tremendous capabilities in pattern recognition and other extremely important large-scale data analytics.
The key to enabling computing in the post-Moore’s-Law era is solving the heterogeneous integration problem, which in turn lets us explore, at lower cost, the kinds of architectures capable of addressing workloads that have become more about exploring connections and patterns than about traditional scientific calculation.
The human cerebral cortex is the ultimate example of this: tens of billions of neurons, each with on the order of 50 degrees of freedom, each of which yields a whopping 6 bits of information. In terms of raw storage, that is less than a petabyte in total, which is an achievable goal for silicon systems today using commodity NAND Flash storage. The complexity and power arise not from the raw storage of information, but from how that information is applied.
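The arithmetic behind that estimate is easy to check. The post does not give an exact neuron count, so the sketch below takes “tens of billions” to mean 2×10¹⁰, purely as an illustrative assumption:

```python
# Back-of-the-envelope check of the cortex storage estimate.
# Assumed: 2e10 neurons ("tens of billions"), ~50 degrees of freedom
# per neuron, ~6 bits of information per degree of freedom.

NEURONS = 2e10
DOF_PER_NEURON = 50
BITS_PER_DOF = 6

total_bits = NEURONS * DOF_PER_NEURON * BITS_PER_DOF  # 6e12 bits
total_bytes = total_bits / 8                          # 0.75 terabytes

PETABYTE = 1e15  # bytes, decimal units

print(f"{total_bytes / 1e12:.2f} TB")  # well under a petabyte
assert total_bytes < PETABYTE
```

Even with generous rounding of any of the three assumed factors, the total stays orders of magnitude below what commodity Flash can hold, which is the point: raw capacity is not the hard part.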
All in all—pretty heady stuff! Look for more highlights from future events…