General Purpose Graphics Processing Unit (GPGPU)

In the previous post I described the GPU, which was originally intended for graphics acceleration. But how do we, the common people, use the parallel computing capability of a GPU? For that, let me first talk about the General Purpose Graphics Processing Unit.

General-purpose computing on graphics processing units (GPGPU, GPGP or, less often, GP²U) is the use of a graphics processing unit (GPU), which typically handles computations only for computer graphics, to perform computations in applications traditionally handled by the central processing unit (CPU). Any GPU providing a functionally complete set of operations on arbitrary bits can compute any computable value. Additionally, the use of multiple graphics cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing. The general-purpose CPU is still better at some things, though: general programming, accessing memory randomly, executing steps in order, everyday stuff. It's true, though, that CPUs are sprouting cores and looking more and more like GPUs in some respects.

In brief, the catch with parallel processing (using tons of cores to break a problem up and crunch it all at once) is that applications have to be programmed to take advantage of it. It's not easy, which is why Intel at this point hires more software engineers than hardware engineers. So even if the hardware is there, you still need the software to get there, and it's a whole different kind of programming. Which brings us to OpenCL (Open Computing Language) and CUDA. They are frameworks that make it much easier to use graphics cards for general-purpose computing. OpenCL is the "open standard for parallel programming of heterogeneous systems" standardized by the Khronos Group; AMD, Apple, IBM, Intel, Nvidia, Samsung and a bunch of others are involved. In semi-English, it's a cross-platform standard for parallel programming across different kinds of hardware, using both CPU and GPU, that anyone can use for free. CUDA is Nvidia's own architecture for parallel programming on its graphics cards. A small sketch of what this looks like in practice follows below.
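To give a feel for this "different kind of programming", here is a minimal CUDA sketch (my own illustrative example, not from any particular application; the kernel name vectorAdd, the array sizes and the block size are just assumptions for illustration). Instead of one CPU core looping over a million elements, the work is split so that each GPU thread handles a single element. It assumes an Nvidia card and the CUDA toolkit, and would be compiled with nvcc.

```
// vector_add.cu: a minimal sketch of GPGPU programming with CUDA.
// Each GPU thread adds one pair of elements; a CPU would do this in a serial loop.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: runs on the GPU, one thread per array element.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                 // one million elements (illustrative size)
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) arrays.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device (GPU) arrays and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements at once.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

    // Copy the result back to the host and spot-check it.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

OpenCL expresses the same idea with a C-like kernel plus a somewhat more verbose host-side setup, which is the price of being vendor-neutral across CPUs and GPUs; the overall pattern (copy data over, launch thousands of lightweight threads, copy results back) is the same.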

Ok, parallel computing is pretty great for scientists and organizations who deal with computationally intensive simulations that go on for days. But does it make our stuff go faster? The answer is, of course, yes. Anyone with a good graphics card has a potential supercomputer at their side. Converting, decoding, creating and streaming videos, stuff you're probably using now more than you did a couple of years ago, have improved dramatically. Say bye-bye to 20-minute renders. Ditto for image editing; there'll be less waiting for effects to propagate across giant images (Photoshop CS5, Illustrator, Fireworks etc. already use GPU acceleration). Many other applications, like Mathematica, MATLAB and other modelling packages, are also making use of GPUs for acceleration.
