Over the past years, as the problems we tackle have become more and more complex, the demand for computational power has increased significantly. However, CPU speeds have failed to keep pace with this demand due to physical limitations on building faster CPUs. Nowadays, CPUs come with multiple cores that introduce a modest level of parallelization in personal computers, and when even that is not enough, scientists use clusters for parallel computing. However, clusters are expensive, hard to maintain, and running your code on them usually requires waiting in user queues for your turn.
Nowadays, graphics cards, or more precisely Graphics Processing Units (GPUs), come with hundreds to thousands of computational cores. Each core is usually slower than a typical CPU core, but the sheer number of cores makes the computational power of GPUs tens to hundreds of times larger than that of traditional CPUs, at a fraction of the cost of supercomputers. In this hands-on short course, we will discuss the steps required to develop interactive GPU applications that run in any modern web browser without the need to compile or even install any additional program or plugin. Our language of choice will be WebGL 2.0, and we will use our in-house library, Abubu.js, which simplifies numerical computing with WebGL 2.0 and is freely available for use. We will show how to write programs that implement interactive applications running in real time on the GPU, applications that would otherwise require supercomputers or would need several hours or even days to run on CPUs.
By the end of this short course, the attendees should be able to:
The tentative schedule for the course is as follows. I will make minor adjustments to the topics covered each day based on the interests of the participants.
Advances in computational models and methods over the last decades have encouraged and enabled us to tackle more and more complex problems. However, this has increased the computational cost of the problems and questions that we address. Concurrently, CPU clock speeds have reached their thermodynamic limits. To keep up with users' computational demands, CPUs nowadays come with multiple cores that introduce a modest level of parallelization in personal computers. When even multi-core CPUs are not enough, CPU clusters are employed to meet the computational demand. Nevertheless, computer clusters are expensive to own, hard to maintain, and running parallel programs on them usually requires waiting in user queues. After a program has executed on the cluster, users typically have to transfer the data to local machines for post-processing or safekeeping, which can be a hassle in itself.
Graphics Processing Units (GPUs), generally known as graphics cards, pack hundreds to thousands of computational cores in order to quickly compute (or render) the thousands of pixels of the scenes that need to be displayed on screen. Even though each core is less powerful than a single core of a typical CPU, the sheer number of computational cores makes GPUs hundreds of times more powerful than CPUs. As a result, GPUs provide a tremendous level of parallelization at a fraction of the cost of a CPU cluster. Nowadays, virtually every personal computer and smartphone comes with a graphics card pre-installed in order to display images and videos. This means that if we harness the power of GPUs in a general way, we can transform each of these devices into a supercomputer.
The major challenge in programming GPUs is that they require their own programming languages. Several languages have been developed for writing GPU-enabled programs, most notably NVIDIA CUDA, OpenGL, OpenCL, and Vulkan. The main drawback of these languages is that the developer needs to precompile the program for the various target graphics cards, which can pose certain limitations; for example, NVIDIA CUDA programs run only on NVIDIA hardware. This can put a large burden on developers as well as users. In this short course, we will review programming in a language called WebGL, which allows GPU applications to run in all modern web browsers without the need to install any software or plugin. The developed applications are compiled automatically by the web browser at runtime; hence, they are cross-platform and independent of the operating system and hardware. Since graphics cards and WebGL are intended for rendering 2D and 3D graphics, as an added benefit, post-processing of the computational results can happen simultaneously with the computation.
WebGL was developed for rendering graphics rather than for general computing. Consequently, programming general computational applications may seem daunting to individuals who are not familiar with the graphics pipeline. To overcome this challenge, we have developed a computational library at Georgia Tech called Abubu.js, which removes most of the complexities of the graphics pipeline so that users can concentrate on coding numerical schemes rather than designing the pipeline itself. In this short course, we explain the philosophy and the steps for creating interactive numerical applications in WebGL using the Abubu.js library. The attendees will take a hands-on approach, starting to program their own GPU applications from the first session.
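To give a flavor of the kind of numerical kernel written in such applications, the WebGL 2.0 fragment shader below sketches one explicit-Euler time step of the 2D heat equation, u_t = D (u_xx + u_yy), stored in a floating-point texture. This is an illustrative sketch, not code from the course materials: all names (inTexture, diffCoef, dt, dx, pixPos) are assumptions, and the handling of boundaries and texture setup is omitted.

```glsl
#version 300 es
// Illustrative WebGL 2.0 fragment shader: one forward-Euler step of the
// 2D heat equation on a floating-point texture. Each pixel is one grid node.
precision highp float;

uniform sampler2D inTexture; // solution at the current time step (assumed name)
uniform float diffCoef;      // diffusion coefficient D
uniform float dt;            // time-step size
uniform float dx;            // grid spacing

in  vec2 pixPos;             // normalized pixel position from the vertex shader
out vec4 outColor;           // solution written to the output texture

void main() {
    vec2 size = vec2(textureSize(inTexture, 0));
    vec2 ii = vec2(1.0, 0.0) / size; // offset to the neighboring column
    vec2 jj = vec2(0.0, 1.0) / size; // offset to the neighboring row

    float u  = texture(inTexture, pixPos).r;       // center node
    float uE = texture(inTexture, pixPos + ii).r;  // east neighbor
    float uW = texture(inTexture, pixPos - ii).r;  // west neighbor
    float uN = texture(inTexture, pixPos + jj).r;  // north neighbor
    float uS = texture(inTexture, pixPos - jj).r;  // south neighbor

    // Five-point Laplacian and forward-Euler update
    float laplacian = (uE + uW + uN + uS - 4.0 * u) / (dx * dx);
    outColor = vec4(u + dt * diffCoef * laplacian, 0.0, 0.0, 1.0);
}
```

In this style of GPU computing, the solver launches one such shader invocation per pixel and ping-pongs between two textures for the current and next time steps; a library such as Abubu.js handles that texture and pipeline bookkeeping so the user writes only the kernel.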