Qt is a cutie!

Qt is a cross-platform application framework used for developing stunning GUI (Graphical User Interface) applications. Most notably, it is used in Autodesk Maya, Adobe Photoshop Elements, Skype, VLC media player, and Mathematica. Giants like DreamWorks, Google, HP, Lucasfilm, Walt Disney Animation Studios, and Research In Motion make use of it.

Qt uses standard C++, and it can also be used with several other programming languages via language bindings. It runs on almost all desktop platforms and a few mobile platforms. Non-GUI features include SQL database access, XML parsing, thread management, network support, and a unified cross-platform application programming interface (API) for file handling. With Qt, you can reuse code efficiently to target multiple platforms with one code base. The offering has three main parts:

  • Qt framework – intuitive APIs for C++ and CSS/JavaScript-like programming with Qt Quick for rapid UI creation
  • Qt Creator IDE – powerful cross-platform integrated development environment, including UI designer tools and on-device debugging
  • Tools and toolchains – all you need: simulator, local and remote compilers, internationalization support, device toolchains, and more

As a small taste of those “intuitive APIs for C++”, here is a minimal sketch of a Qt application. It assumes Qt 4.x (under Qt 5 the widget classes move to the separate QtWidgets module), and the greeting text is just an illustration:
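
    #include <QApplication>
    #include <QLabel>

    // The smallest useful Qt program: one application object, one widget.
    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);    // manages the GUI event loop
        QLabel label("Hello from Qt!");  // a label is the simplest widget
        label.show();                    // widgets start out hidden
        return app.exec();               // hand control to the event loop
    }

Running qmake -project && qmake && make in the source directory builds it, and a small window appears – any Qt widget can be a top-level window on its own.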

More information about Qt, along with documentation and installation files, can be found at qt.nokia.com (the first reference below).

The Qt 5 Beta is now available as the first major release under the new Qt Project umbrella. It brings major architectural changes. The one that interests me most: Qt 5 allows smooth, accelerated graphics performance even with limited resources. Qt 5 makes better use of the GPU to deliver better performance on inexpensive hardware; for example, with Qt 5 you can achieve 60 fps on a $35 single-board computer like the Raspberry Pi. See the Qt on Pi project for details. There is also a Qt on iPhone project, whose purpose is to get the Qt framework running on the iPhone. Qt already runs on Android.

References:

  1. http://qt.nokia.com/
  2. http://qt-apps.org/
  3. http://labs.qt.nokia.com/


CUDA vs OpenCL

CUDA and OpenCL are the two major programming frameworks for GPU computing. I talked briefly about them in one of the previous posts. Now, if you want to learn GPU computing, which one should you choose – CUDA or OpenCL?

Until recently, CUDA attracted most of the attention from developers, especially in the high-performance computing realm, because of the good support from NVIDIA itself, particularly on its forums. But OpenCL is gaining ground rapidly, and OpenCL software has now reached the point where GPU programmers are taking a second look.

CUDA and OpenCL do mostly the same thing – it’s like Italians and the French fighting over who has the most beautiful language, while both speak Romance languages.

NVIDIA’s CUDA is vendor-specific. It has better tools, better performance, and there is a lot of sample code, documentation, and utilities available. If you have an actual GPU project to work on in the short term, and you can be certain that you only need to support high-end NVIDIA hardware, then CUDA is the way to go. OpenCL, by contrast, provides an open, industry-standard framework. As such, it has garnered support from nearly all processor manufacturers, including AMD, Intel, and NVIDIA, as well as others that serve the mobile and embedded computing markets. As a result, applications developed in OpenCL are portable across a variety of GPUs and CPUs. Being an open standard, OpenCL allows any vendor to implement support on its products; Intel has announced that it will support OpenCL on future CPU products.

OK, so now you have two frameworks – which one should you choose? It depends on a lot of factors. If you are planning to implement a GPU project solely on NVIDIA cards, then CUDA is the better option. But if your application is to be deployed across a range of architectures, then you need to work with OpenCL.

But to start off, I personally prefer CUDA, because of the detailed documentation NVIDIA provides and the vast community support. You can post a question in the NVIDIA forums (currently offline due to security issues) and get clarifications from experts, and there is also Stack Overflow. The basic ideas behind CUDA and OpenCL are the same, so the skills and knowledge you develop while working with CUDA will mostly be transferable to OpenCL later if needed. There are also tools, like Swan, that convert CUDA code into OpenCL code. So basically, if you learn one, you can very easily work with the other – the sketch below shows just how close the two kernel dialects are. A good comparison of CUDA and OpenCL is shown here and here, and you can also look in the references for more information.
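
To make the similarity concrete, here is the same vector-addition kernel written in both dialects. This is only a syntax comparison, not a runnable GPU program: the host-side setup is omitted, and both kernels are held in C++ strings purely so they can sit side by side (a real CUDA kernel lives in a .cu file compiled by nvcc, while OpenCL genuinely does ship kernels as strings compiled at run time):

    // The same element-wise operation, c[i] = a[i] + b[i], in each dialect.
    // Only the qualifiers and the way a thread finds its index differ.

    // CUDA dialect:
    const char *cuda_vec_add =
        "__global__ void vecAdd(const float *a, const float *b,\n"
        "                       float *c, int n)\n"
        "{\n"
        "    int i = blockIdx.x * blockDim.x + threadIdx.x;\n"
        "    if (i < n) c[i] = a[i] + b[i];\n"
        "}\n";

    // OpenCL dialect:
    const char *opencl_vec_add =
        "__kernel void vecAdd(__global const float *a, __global const float *b,\n"
        "                     __global float *c, int n)\n"
        "{\n"
        "    int i = get_global_id(0);\n"
        "    if (i < n) c[i] = a[i] + b[i];\n"
        "}\n";

The host-side APIs differ more – CUDA’s runtime API is terser, while OpenCL requires explicit platform, context, and program setup – but kernel-writing skills carry over almost unchanged.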

Concluding,

CUDA

  • Better marketing
  • Good support and documentation
  • Many features and toolsets
  • Works only on NVIDIA cards

OpenCL

  • Supports many architectures
  • It’s an open standard – which we always want
  • No proper documentation
  • Provided by different vendors in various packages – no universal package

Recently, OpenCL has been gaining ground on CUDA – which might be why NVIDIA released the CUDA compiler source code to developers and stopped providing OpenCL support in newer releases of the CUDA toolkit. That indicates there is stiff competition going on, and I personally feel it is only a matter of time before OpenCL reaches the level of CUDA.

References:

  1. http://www.streamcomputing.eu/blog/2011-06-22/opencl-vs-cuda-misconceptions/
  2. http://www.hpcwire.com/hpcwire/2012-02-28/opencl_gains_ground_on_cuda.html
  3. http://wiki.tiker.net/CudaVsOpenCL
  4. http://blog.accelereyes.com/blog/2012/02/17/opencl_vs_cuda_webinar_recap/

General Purpose Graphics Processing Unit (GPGPU)

In the previous post I described the GPU, which was originally intended for graphics acceleration. But how do we, the common people, use the parallel computing capability of a GPU? For that, let me first talk about the general-purpose graphics processing unit.

General-purpose computing on graphics processing units (GPGPU, GPGP, or less often GP²U) means using a graphics processing unit (GPU), which typically handles computations only for computer graphics, to perform computations in applications traditionally handled by the central processing unit (CPU). Any GPU providing a functionally complete set of operations on arbitrary bits can compute any computable value. Additionally, the use of multiple graphics cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing. The general-purpose CPU is still better at some stuff, though: general programming, accessing memory randomly, executing steps in order – everyday stuff. It is true, though, that CPUs are sprouting cores and looking more and more like GPUs in some respects.

In brief, the catch with parallel processing – using tons of cores to break stuff up and crunch it all at once – is that applications have to be programmed to take advantage of it. It’s not easy, which is why Intel at this point hires more software engineers than hardware ones. So even if the hardware is there, you still need the software to get there, and it is a whole different kind of programming. Which brings us to OpenCL (Open Computing Language) and CUDA. They are frameworks that make it much easier to use graphics cards for general-purpose computing. OpenCL is the “open standard for parallel programming of heterogeneous systems” standardized by the Khronos Group – AMD, Apple, IBM, Intel, NVIDIA, Samsung, and a bunch of others are involved. In semi-English, it is a cross-platform standard for parallel programming across different kinds of hardware – using both CPU and GPU – that anyone can use for free. CUDA is NVIDIA’s own architecture for parallel programming on its graphics cards.
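
To make that less abstract, here is a hedged sketch of a complete OpenCL program in C++ that adds two million-element vectors on whatever device the driver offers. It assumes an OpenCL SDK is installed (the CL/cl.h header plus an OpenCL library to link against); all the names are mine, and error checking and cleanup are stripped out to keep the shape visible:

    #include <CL/cl.h>  // Apple ships this header as <OpenCL/opencl.h>
    #include <cstdio>
    #include <vector>

    // The kernel: each work-item adds one pair of elements.
    static const char *kernelSrc =
        "__kernel void vecAdd(__global const float *a,\n"
        "                     __global const float *b,\n"
        "                     __global float *c, int n)\n"
        "{\n"
        "    int i = get_global_id(0);\n"
        "    if (i < n) c[i] = a[i] + b[i];\n"
        "}\n";

    int main()
    {
        const int n = 1 << 20;  // one million elements
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

        // Pick the first platform and its default device (a GPU, if present).
        cl_platform_id platform;
        clGetPlatformIDs(1, &platform, NULL);
        cl_device_id device;
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

        // OpenCL compiles the kernel from source at run time: that is both
        // the price and the mechanism of its portability.
        cl_program prog = clCreateProgramWithSource(ctx, 1, &kernelSrc, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel vecAdd = clCreateKernel(prog, "vecAdd", NULL);

        // Copy the inputs into device buffers; make one buffer for the output.
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   n * sizeof(float), a.data(), NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   n * sizeof(float), b.data(), NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                   n * sizeof(float), NULL, NULL);

        clSetKernelArg(vecAdd, 0, sizeof(cl_mem), &da);
        clSetKernelArg(vecAdd, 1, sizeof(cl_mem), &db);
        clSetKernelArg(vecAdd, 2, sizeof(cl_mem), &dc);
        clSetKernelArg(vecAdd, 3, sizeof(int), &n);

        // Launch one work-item per element: a million additions in flight.
        size_t global = n;
        clEnqueueNDRangeKernel(q, vecAdd, 1, NULL, &global, NULL, 0, NULL, NULL);

        // Blocking read: waits for the kernel, then copies the result back.
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, n * sizeof(float), c.data(),
                            0, NULL, NULL);

        std::printf("c[0] = %.1f (expected 3.0)\n", c[0]);
        return 0;  // resource cleanup omitted in this sketch
    }

The CUDA runtime version of the same program is noticeably shorter – no platform, context, or run-time compilation steps – which is a fair summary of the trade-off between the two frameworks.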

OK, parallel computing is pretty great for scientists and organizations who deal with computationally intensive simulations that run for days. But does it make our stuff go faster? The answer is: of course. Anyone with a good graphics card has a potential supercomputer at their side. Converting, decoding, creating, and streaming videos – stuff you are probably doing more now than you did a couple of years ago – have improved dramatically. Say bye-bye to 20-minute renders. Ditto for image editing; there will be less waiting for effects to propagate on giant images (Photoshop CS5, Illustrator, Fireworks, etc. already use GPU acceleration). Applications like Mathematica, MATLAB, and other modelling software are also making use of GPUs for acceleration.

Hello world!


The future of supercomputing can be summed up in three letters: GPU. In the world of high-performance computing, graphics processing units are the talk of the town.

The GPU is a specialized circuit designed to accelerate the creation of images in a frame buffer intended for output to a display. GPUs are very efficient at manipulating computer graphics, and they are generally more effective than general-purpose CPUs for algorithms where large blocks of data are processed in parallel. GPUs are moving from video games into high-performance computing in a big way, ever since companies like NVIDIA and AMD began focusing on software and revised their hardware designs to make them easier to use. Basically, a GPU has a large number of cores, each capable of executing an operation of its own.

GPU-based high-performance computers are starting to play a significant role in large-scale modelling. Three of the five most powerful supercomputers in the world take advantage of GPU acceleration. Not coincidentally, this is exactly what China has done to achieve the world’s fastest speeds with its “Tianhe-1A” supercomputer, which combines about 7,000 NVIDIA GPUs with 14,000 Intel CPUs – the only hybrid CPU-GPU system in the world at that scale.

An example NVIDIA’s Russell gave to illustrate the difference between a traditional CPU and a GPU is this: if you were looking for a word in a book and handed the task to a CPU, it would start at page 1 and read all the way to the end, because it is a “serial” processor. It would be fast, but it would take time because it has to go in order. A GPU, which is a “parallel” processor, “would tear the book into a thousand pieces” and read them all at the same time. Even if each individual word is read more slowly, the book may be finished sooner, because words are read simultaneously.
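
For what it is worth, the analogy is easy to act out on an ordinary multi-core CPU with standard C++ threads. The toy sketch below (the “book” and all names are invented for illustration) counts a word first with one serial reader, then by tearing the text into four chunks read in parallel – a GPU would simply use thousands of chunks instead of four:

    #include <cstdio>
    #include <string>
    #include <thread>
    #include <vector>

    // Count occurrences of `word` in one chunk of text.
    static int countIn(const std::string &text, const std::string &word)
    {
        int count = 0;
        for (size_t pos = text.find(word); pos != std::string::npos;
             pos = text.find(word, pos + 1))
            ++count;
        return count;
    }

    int main()
    {
        // A toy "book": one hundred thousand copies of the same sentence.
        std::string book;
        for (int i = 0; i < 100000; ++i)
            book += "lorem ipsum gpu dolor ";

        // Serial reader: one pass over the whole book, front to back.
        std::printf("serial:   %d\n", countIn(book, "gpu"));

        // Parallel readers: tear the book into pieces, scan them all at once.
        const int readers = 4;
        std::vector<int> counts(readers, 0);
        std::vector<std::thread> pool;
        const size_t chunk = book.size() / readers;
        for (int r = 0; r < readers; ++r) {
            size_t begin = r * chunk;
            size_t end = (r == readers - 1) ? book.size() : begin + chunk;
            pool.emplace_back([&book, &counts, r, begin, end] {
                counts[r] = countIn(book.substr(begin, end - begin), "gpu");
            });
        }
        for (std::thread &t : pool) t.join();

        int total = 0;
        for (int r = 0; r < readers; ++r) total += counts[r];
        // May differ slightly from the serial count: a word torn in half at a
        // chunk boundary is missed, just like a word torn in half in the book.
        std::printf("parallel: %d\n", total);
        return 0;
    }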