Using GPUs on Guillimin

GPU Hardware on Guillimin

The following table summarizes the Phase 2 GPU nodes available on Guillimin:

| Node Type | Count | Processors | Total cores | Memory (GB/core) | Total memory (GB) | Cards | Total Peak SP FP (TFlops) | Total Peak DP FP (TFlops) | Network |
|---|---|---|---|---|---|---|---|---|---|
| AW nVidia | 8 | Dual Intel Ivy Bridge EP E5-2650v2 (8-core, 2.6 GHz, 20 MB cache, 115 W) | 128 | 4 | 512 | Dual nVidia K20 (Peak SP FP: 3.52 TFlops, Peak DP FP: 1.17 TFlops per card) | 56.3 | 18.7 | InfiniBand QDR 2:1 blocking |
| AW nVidia | 25 | Dual Intel Sandy Bridge EP E5-2670 (8-core, 2.6 GHz, 20 MB cache, 115 W) | 400 | 8 | 3,200 | Dual nVidia K20 | 176 | 58.5 | InfiniBand QDR 2:1 blocking |
| AW nVidia | 25 | Dual Intel Sandy Bridge EP E5-2670 (8-core, 2.6 GHz, 20 MB cache, 115 W) | 400 | 4 | 1,600 | Dual nVidia K20 | 176 | 58.5 | InfiniBand QDR 2:1 blocking |

Full details of the hardware specifications for the K20 GPUs can be found in Nvidia's documentation.

Submitting GPGPU jobs

Submitting a GPU job is similar to submitting a regular job. The main difference is that the submission script and/or the qsub command should specify the "gpus" resource in the resource list, and the job is then automatically routed to the k20 queue.

Example nVidia K20 GPU job:

$ qsub -l nodes=1:ppn=16:gpus=2 ./

This uses one whole node, since each accelerator node has two GPU devices and 16 cores. It is also possible to reserve half a node as follows:

$ qsub -l nodes=1:ppn=8:gpus=1 ./

You can also use multiple nodes, for MPI-GPU hybrid jobs:

$ qsub -l nodes=2:ppn=16:gpus=2:sandybridge ./
$ qsub -l nodes=2:ppn=16:gpus=2:ivybridge   ./

Or, if you can tolerate other (non-GPU) jobs running on the same node, you can request fewer cores (a lower ppn count) for cheaper accounting, for example:

$ qsub -l nodes=1:ppn=1:gpus=1 ./
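The same resource requests can instead be embedded in the submission script itself as #PBS directives. A minimal sketch follows; the job name, walltime, and program name are placeholders, not site requirements:

```shell
#!/bin/bash
#PBS -N gpu-example              # job name (arbitrary placeholder)
#PBS -l nodes=1:ppn=16:gpus=2    # one whole accelerator node, as above
#PBS -l walltime=01:00:00        # assumed walltime; set what your job needs

# PBS starts jobs in $HOME; move to the directory the job was submitted from
cd "${PBS_O_WORKDIR:-.}"

echo "Job running on $(hostname)"
# ./my_gpu_program               # hypothetical GPU executable
```

With the directives in the script, the job can be submitted with a bare "qsub scriptname" and is routed to the k20 queue as before.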

Note that, unlike for regular jobs, the pmem parameter specifies memory per node, not per core, for GPU jobs. For a job that needs one of the 128GB nodes you could use this submission (where 123200m = 16 * 7700m):

$ qsub -l nodes=1:ppn=16:gpus=2 -l pmem=123200m ./
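The per-node figure can be sanity-checked in the shell; this just multiplies the per-core amount from the example above by the 16 cores of an accelerator node:

```shell
# pmem for a whole 128 GB node: 16 cores x 7700 MB per core
cores=16
percore_mb=7700
echo "pmem=$((cores * percore_mb))m"   # prints pmem=123200m
```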

Compute Modes

Nvidia GPUs can be set to specific compute modes that control how many separate processes can use the GPU at a time. You may specify the required compute mode in your job script or submission command.

$ qsub -l nodes=1:ppn=16:gpus=2:exclusive_process ./
$ qsub -l nodes=1:ppn=16:gpus=2:reseterr:exclusive_process ./

EXCLUSIVE_THREAD mode is no longer supported on Guillimin. EXCLUSIVE_PROCESS mode allows all of the threads of a single process to use the GPU at once. The reseterr property additionally resets the GPU's ECC error counters when the job starts. The compute modes are discussed in detail in the CUDA programming guide.
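On an allocated GPU node you can check which compute mode is actually in effect with nvidia-smi. This must run inside a job, since login nodes have no GPUs; the query fields shown are standard nvidia-smi options:

```shell
# List each visible GPU with its current compute mode
nvidia-smi --query-gpu=index,name,compute_mode --format=csv

# Equivalent detailed view
nvidia-smi -q -d COMPUTE
```

On Guillimin's K20 nodes this should report Exclusive_Process, matching the cluster-wide setting described below.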

GPU software

When using the K20 nodes, please note the availability of GPU software modules, such as the CUDA toolkit; you can list the currently installed modules with "module avail".

Our GPUs are set to the "Exclusive Process" compute mode. This means that a GPU card can only be used by a single process at a time. Other processes attempting to access the GPU will fail with an error message.

GPU Training/Education

Please see our recent GPU/CUDA training event materials for more information about how to use GPUs effectively for your research.