This week I am attending International Supercomputing (ISC) 2016 in Frankfurt, Germany. ISC is considered "the event" for High Performance Computing (HPC), Networking, and Storage. Today I will share some findings and trends from the show floor.
There is a lot of talk here about the convergence of Big Data analytics and High Performance Computing (HPC), and for good reason: the system requirements overlap substantially. High-performance processors, large memory footprints and high-performance interconnects are three of the biggest similarities. But I think there is more to it than that. HPC has always been an interesting niche, but a niche nonetheless. As HPC systems have grown, the market has certainly become more interesting; alignment with a major application like Big Data analytics, however, is a major plus.
High Performance Computing applications have been pretty well understood. The challenge has been to scale up the performance and size of applications such as combustion analysis, weather forecasting and molecular modeling so that more accurate analyses can be produced in less time. Big Data analytics is a different, new, memory-intensive beast. While the challenge for HPC applications has been porting well-known algorithms to new, highly parallel architectures, the challenge for Big Data analytics is creating entirely new algorithms. These new algorithms are finding some success in the form of cognitive computing systems, which today use machine learning to improve tasks like natural language processing and image recognition. Several speakers pointed out that this is only the beginning. One application cited often was the autonomous vehicle. These systems will combine local supercomputing-class processing, large memory subsystems and cloud computing. The amount of data involved is staggering, with an estimated 5×10^18 computational operations in a single hour of driving.
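To put that figure in perspective, a quick back-of-the-envelope conversion (my own arithmetic, not from any conference talk) shows the sustained throughput it implies:

```python
# Sustained throughput implied by ~5 x 10^18 operations
# over one hour of driving.
ops_per_hour = 5e18
seconds_per_hour = 3600

ops_per_second = ops_per_hour / seconds_per_hour
print(f"{ops_per_second:.2e} ops/s")              # ~1.39e+15
print(f"{ops_per_second / 1e15:.2f} peta-ops/s")  # ~1.39
```

In other words, roughly 1.4 peta-operations per second sustained, which is squarely in supercomputer territory.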
Now, you might be tempted to say it ain't so, that a mobile application could never demand supercomputer-class processing, but it has happened before. After all, a smartphone today contains more memory, storage and processing power than a second-generation Cray supercomputer.
Rajeeb Hazra, GM of Intel's Enterprise and Government Group, made a significant announcement today: "Knights Landing," the next generation of Intel's Xeon Phi processor, debuted at the conference.
Among the benefits touted by Intel:
- A 5X performance increase on highly parallel life sciences workloads such as LAMMPS.
- Integrated on-package memory delivering up to a 2.7X performance boost on financial applications such as Monte Carlo DP.
- An integrated fabric (Omni-Path) delivering over a 5X performance improvement on the visualization application Embree.
Taken together, the benefits of the architecture shine, as evidenced by a new single-socket record on the SPECfp_rate2006 benchmark. This promises great things from future systems built on this solution.
About Our Blogger
Dean Klein is Vice President of Memory System Development at Micron Technology. Mr. Klein joined Micron in January 1999, after having held several leadership positions at Micron Electronics, Inc., including Executive Vice President of Product Development and Chief Technical Officer. He also co-founded and served as President of PC Tech, Inc., previously a wholly-owned subsidiary of Micron Electronics, Inc., from its inception in 1984. Mr. Klein’s current responsibilities as Vice President of Memory System Development focus on developing memory technologies and capabilities.
Mr. Klein earned a Bachelor of Science degree in electrical engineering and a Master of Electrical Engineering from the University of Minnesota, and he holds over 220 patents in the areas of computer architecture and electrical engineering. He has a passion for math and science education and is a mentor to the FIRST Robotics team (www.USFIRST.org) in the Meridian, Idaho school district.