Massively parallel GPU data processing, using either CUDA or OpenCL, is changing the shape of data science. It even has its own snappy acronym - GPGPU - general-purpose computing on graphics processing units.
It’s no surprise, then, that flexible, scalable access to these GPU resources is becoming a key requirement in many cloud deployments (see the Canonical Distribution of Kubernetes for a good example). But getting started isn’t always easy, or cheap. Unless you use LXD.
LXD’s unrivalled density in real-world cloud deployments, and its ability to run locally, make it a game-changing tool for experimenting with cloud-like GPU data processing. It lets you create scalable local deployments using nothing more than a PC with a GPU or two, as we’ll now demonstrate.
What you’ll learn
- How to replace the default NVIDIA drivers with the latest ones
- How to install the CUDA toolkit
- How to configure LXD to use NVIDIA GPUs and CUDA
What you’ll need
We’ll be using NVIDIA hardware alongside NVIDIA’s proprietary CUDA, as these currently constitute the most widely used GPGPU platform.
However, LXD’s hardware passthrough enables any GPU to appear natively to any deployment, which means that using different GPUs or drivers with OpenCL should be possible.
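In practice, that passthrough is handled by LXD's `gpu` device type, and for NVIDIA hardware LXD can additionally map the host's driver and libraries into the container. As a minimal sketch (the container name `cuda-test` and the image version are illustrative):

```shell
# Launch a container (name and image are illustrative).
lxc launch ubuntu:18.04 cuda-test

# Pass the host's GPUs through to the container using LXD's "gpu" device type.
lxc config device add cuda-test mygpu gpu

# For NVIDIA hardware, expose the host driver and userspace libraries
# inside the container, so matching drivers need not be installed in the guest.
lxc config set cuda-test nvidia.runtime true
```

The `nvidia.runtime` option depends on NVIDIA's container tooling being present on the host; for OpenCL or non-NVIDIA GPUs, the `gpu` device on its own is the relevant part.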
As both NVIDIA’s drivers and CUDA are under rapid development, we’re going to install and use the latest versions we can get hold of. This means using packages separate from those supplied by the distribution, which we’ll cover in the next step.
Originally authored by Graham Morrison