The future of supercomputing can be summed up in three letters: GPU. The graphics processing unit is the talk of the town in high-performance computing.
The GPU is a specialized circuit designed to accelerate the creation of images in a frame buffer intended for output to a display.
GPUs are very efficient at manipulating computer graphics, and for algorithms that process large blocks of data in parallel they are generally more effective than general-purpose CPUs. They have been moving from video games into high-performance computing in a big way since companies like Nvidia and AMD began focusing on software and revising their hardware designs to make them easier to program. Basically, a GPU has a large number of cores, each capable of executing an operation of its own.
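That many-cores-each-doing-one-operation model can be sketched on an ordinary CPU with a pool of workers. This is only an illustration of the idea, not actual GPU code (real GPU programs would use a framework such as CUDA); the data, the `scale` function, and the worker count are all made up for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative data: sixteen elements, one per notional "core".
data = list(range(16))

def scale(x):
    # Each worker applies the same simple operation to its own element,
    # mimicking how GPU cores execute one operation each in parallel.
    return x * 2

# A thread pool stands in for the GPU's many cores.
with ThreadPoolExecutor(max_workers=4) as pool:
    result = list(pool.map(scale, data))

print(result)  # every element was processed independently
```

The point of the sketch is that no worker depends on any other: the same operation runs over every element, which is exactly the kind of workload a GPU accelerates.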
GPU-based high-performance computers are starting to play a significant role in large-scale modelling. Three of the five most powerful supercomputers in the world take advantage of GPU acceleration. Not coincidentally, this is exactly what China has done to achieve the world's fastest speeds with its "Tianhe-1A" supercomputer, which combines about 7,000 Nvidia GPUs with 14,000 Intel CPUs: the only hybrid CPU-GPU system of that scale in the world.
Nvidia's Russell offered an example to illustrate the difference between a traditional CPU and a GPU: if you were looking for a word in a book and handed the task to a CPU, it would start at page 1 and read all the way to the end, because it is a "serial" processor. It would be fast, but it would take time because it has to go in order. A GPU, which is a "parallel" processor, "would tear the book into a thousand pieces" and read them all at the same time. Even if each individual word is read more slowly, the book may be finished sooner, because words are read simultaneously.
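Russell's book analogy can be sketched in a few lines: tear the text into pieces and have every worker search its own piece at the same time. The text, the search word, and the number of pieces here are all invented for the illustration; a thread pool merely stands in for the GPU's parallel cores.

```python
from concurrent.futures import ThreadPoolExecutor

# A made-up "book" as a list of words; "gpu" appears throughout it.
text = ("lorem ipsum gpu dolor sit amet " * 200).split()

def contains_word(piece, word="gpu"):
    # Each worker scans only its own fragment of the book.
    return word in piece

# Tear the book into 8 pieces of equal size.
n = 8
size = len(text) // n
pieces = [text[i * size:(i + 1) * size] for i in range(n)]

# All pieces are searched simultaneously; the serial approach would
# instead walk the full list front to back in a single loop.
with ThreadPoolExecutor(max_workers=n) as pool:
    found = any(pool.map(contains_word, pieces))

print(found)  # True: the word turns up in at least one piece
```

Each piece is searched independently, so the total time is bounded by the slowest piece rather than by the whole book, which is the crux of the analogy.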