Deep Learning Workstation Setup with Ubuntu 22.04

by Sabbir Ahmed




Mandelbrot Simulation Using GPU

Setting up a deep learning workstation is essential, and often a challenge, for machine learning enthusiasts, learners, and engineers. In this article, I will explain how I set up my deep learning workstation for daily use with Ubuntu 22.04. I will show how I separate framework dependencies, which makes it easy and fast for anyone to debug or test their applications.

Getting the System Ready

We will start by upgrading the system,

$ sudo apt update && sudo apt upgrade -y

$ sudo apt install git curl vim build-essential gcc-9 g++-9 python-is-python3 python3-virtualenv

We don't strictly need to restart Ubuntu after an upgrade, but it is recommended. Ubuntu 22.04 comes with out-of-the-box driver support for NVIDIA graphics cards, so we can just use the metapackages. Go to "Software & Updates", then select the "Additional Drivers" tab. It should look like the screenshot below,

Ubuntu 22.04 driver for NVIDIA GPU
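If you prefer the terminal, the recommended driver metapackage can also be installed with the ubuntu-drivers tool, and nvidia-smi should then report the driver and the GPU,

$ ubuntu-drivers devices
$ sudo ubuntu-drivers autoinstall
$ nvidia-smi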

Installing CUDA

From the documentation, we need to install CUDA 11.2,

$ wget https://developer.download.nvidia.com/compute/cuda/11.2.0/local_installers/cuda_11.2.0_460.27.04_linux.run
$ sudo sh cuda_11.2.0_460.27.04_linux.run

Accept the EULA and follow the next screenshot,

EULA for CUDA 11.2 for AI and ML

Please remember to uncheck the Driver option by pressing `space`, as we have already installed one.

Uncheck the Driver option
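Alternatively, the runfile can be run non-interactively and told to install only the toolkit, skipping the driver (these flags come from NVIDIA's runfile documentation),

$ sudo sh cuda_11.2.0_460.27.04_linux.run --silent --toolkit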

Now, update the environment variables by adding the following lines to ~/.bashrc or ~/.zshrc,

export PATH=/usr/local/cuda-11.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export CUDA_HOME=/usr/local/cuda

Activate the environment variables,

$ source ~/.bashrc  # or: source ~/.zshrc

Verify the CUDA installation by building and running the deviceQuery sample,

$ cd ~/NVIDIA_CUDA-11.2_Samples/1_Utilities/deviceQuery

$ make all

$ make run

As I have an RTX 2060, my output is as below,

cuda sample devicequery output

Installing cuDNN

For this, you have to download the archived file after logging in at https://developer.nvidia.com/cudnn

$ tar -zvxf cudnn-11.2-linux-x64-v8.1.0.77.tgz

$ sudo cp -P cuda/include/cudnn*.h /usr/local/cuda-11.2/include
$ sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda-11.2/lib64/
$ sudo chmod a+r /usr/local/cuda-11.2/lib64/libcudnn*
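We can check which cuDNN version the headers report; for cuDNN 8.x the version macros live in cudnn_version.h,

$ grep -A 2 '#define CUDNN_MAJOR' /usr/local/cuda-11.2/include/cudnn_version.h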

Verifying the CUDA and cuDNN installation
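To confirm the toolkit is on the PATH, check the compiler version; this is what the screenshot below shows,

$ nvcc --version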

nvcc version info

Let's now install TensorFlow and PyTorch with GPU support. Personally, I don't use conda or miniconda; I like the control of raw virtualenv, which gives me a predictable, deployment-ready system. Let's get into it.

Getting Jupyter Kernels For TF and Torch

Also, I prefer JupyterLab over Jupyter Notebook; installing the notebook is similar.

$ pip install jupyterlab --user

Now we can launch a new lab by typing

$ jupyter lab

in the terminal; it will open a new tab in your default browser.

TensorFlow installation with GPU support, in its own isolated environment,

$ virtualenv -p python tfenv

$ source tfenv/bin/activate

$ pip install tensorflow ipykernel

$ python -m ipykernel install --user --name TF2-GPU --display-name "Tensorflow GPU"

The first two commands create and activate a virtual environment, a sandbox for your TensorFlow work. The last two install TensorFlow itself and register the environment as a kernel within JupyterLab.

I do this to keep environments separate, so that I can choose fully isolated test and training environments.
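Before creating the next environment, leave the current one,

$ deactivate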

Similarly, we will now install PyTorch with GPU support. The cu113 wheels bundle their own CUDA runtime, so they do not conflict with the system-wide CUDA 11.2.

$ virtualenv -p python torchenv

$ source torchenv/bin/activate

$ pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

$ pip install ipykernel

$ python -m ipykernel install --user --name TORCH-GPU --display-name "PyTorch GPU"
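We can also confirm from the terminal that both kernels are registered,

$ jupyter kernelspec list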

At this point, we will be able to see the kernels on JupyterLab's launcher page,

Jupyter Lab Homepage

We will now be able to choose any development environment of our need.

GPU Support Check for TensorFlow, PyTorch, and CUDA

TensorFlow Check
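A minimal check, run in a notebook on the "Tensorflow GPU" kernel, looks like the following; the screenshot below shows the output on my machine,

import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means no GPU support
print(tf.config.list_physical_devices('GPU'))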

GPU support check for TensorFlow

PyTorch Check
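Again, in a notebook on the "PyTorch GPU" kernel,

import torch

# True if this PyTorch build has CUDA support and a usable GPU is present
print(torch.cuda.is_available())
# Name of the first visible GPU
print(torch.cuda.get_device_name(0))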

GPU support check for PyTorch

CUDA Program GPU Check

The samples ship with the toolkit and can be found inside the home folder; I chose the smoke particle simulation.

$ cd $HOME/NVIDIA_CUDA-11.2_Samples/5_Simulations/smokeParticles

Now, we can build and run the application,

$ make all

$ make run

The output should look like the right window of the screenshot.

CUDA Sample GPU Programming simulations

Note: the top left shows the program's CLI output, the bottom left shows the GPU usage from the nvidia-smi tool, and the right window plays the simulation. We can pan and zoom using specific keystrokes.
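To reproduce that bottom-left pane yourself, nvidia-smi can be refreshed once per second,

$ watch -n 1 nvidia-smi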

That's it for today! Feel free to drop any questions or suggestions here.