Compiling GPU Programs¶
The Gautschi cluster nodes contain 6 GPUs that support CUDA and OpenCL. This section focuses on using CUDA.
A simple CUDA program has a basic workflow:
- Initialize an array on the host (CPU).
- Copy array from host memory to GPU memory.
- Apply an operation to array on GPU.
- Copy array from GPU memory to host memory.
Here is a sample CUDA program:
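The original listing is not reproduced here; the following is a minimal sketch of such a program, following the four workflow steps above (the file name, array size, and the "add one" operation are illustrative assumptions):

```cuda
// gpu_hello.cu -- illustrative sketch; names and sizes are assumptions.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each thread adds 1.0 to one element of the array.
__global__ void addOne(float *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // 1. Initialize an array on the host (CPU).
    float *h_a = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) h_a[i] = (float)i;

    // 2. Copy the array from host memory to GPU memory.
    float *d_a;
    cudaMalloc(&d_a, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);

    // 3. Apply an operation to the array on the GPU.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    addOne<<<blocks, threads>>>(d_a, n);
    cudaDeviceSynchronize();

    // 4. Copy the array from GPU memory back to host memory.
    cudaMemcpy(h_a, d_a, bytes, cudaMemcpyDeviceToHost);

    printf("h_a[0] = %f, h_a[%d] = %f\n", h_a[0], n - 1, h_a[n - 1]);

    cudaFree(d_a);
    free(h_a);
    return 0;
}
```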
Both the front-ends and the GPU-enabled compute nodes provide the CUDA tools and libraries needed to build CUDA programs. To compile one, load the CUDA module and invoke nvcc:
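A sketch of the compile step (the exact module name and the source/output file names are assumptions; check `module avail` on the cluster for the installed CUDA versions):

```shell
module load cuda               # load the CUDA toolkit (module name may differ)
nvcc gpu_hello.cu -o gpu_hello # compile with the NVIDIA CUDA compiler
./gpu_hello                    # run on a GPU-enabled node
```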
The example above only illustrates how to copy an array between the CPU and the GPU; it does not perform any serious computation.
The following program times three square matrix multiplications on a CPU and on the global and shared memory of a GPU:
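The original listing is not reproduced here; the following is a sketch of such a timing comparison, assuming square matrices of dimension `N` and a tile width of `TILE` (both values, the file name, and the fill values are illustrative):

```cuda
// matmul_timing.cu -- illustrative sketch; N, TILE, and names are assumptions.
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <cuda_runtime.h>

#define N 1024   // matrix dimension (assumed); must be a multiple of TILE
#define TILE 16  // tile width for the shared-memory kernel

// Naive kernel: every operand is read directly from global memory.
__global__ void matmulGlobal(const float *A, const float *B, float *C) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    float sum = 0.0f;
    for (int k = 0; k < N; k++) sum += A[row * N + k] * B[k * N + col];
    C[row * N + col] = sum;
}

// Tiled kernel: tiles of A and B are staged in fast on-chip shared memory.
__global__ void matmulShared(const float *A, const float *B, float *C) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];
    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float sum = 0.0f;
    for (int t = 0; t < N / TILE; t++) {
        As[threadIdx.y][threadIdx.x] = A[row * N + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * N + col];
        __syncthreads();
        for (int k = 0; k < TILE; k++)
            sum += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * N + col] = sum;
}

int main() {
    size_t bytes = (size_t)N * N * sizeof(float);
    float *h_A = (float *)malloc(bytes);
    float *h_B = (float *)malloc(bytes);
    float *h_C = (float *)malloc(bytes);
    for (int i = 0; i < N * N; i++) { h_A[i] = 1.0f; h_B[i] = 2.0f; }

    // CPU multiplication, timed with clock().
    clock_t t0 = clock();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            float sum = 0.0f;
            for (int k = 0; k < N; k++) sum += h_A[i * N + k] * h_B[k * N + j];
            h_C[i * N + j] = sum;
        }
    printf("CPU:          %.1f ms\n", 1000.0 * (clock() - t0) / CLOCKS_PER_SEC);

    float *d_A, *d_B, *d_C;
    cudaMalloc(&d_A, bytes); cudaMalloc(&d_B, bytes); cudaMalloc(&d_C, bytes);
    cudaMemcpy(d_A, h_A, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, bytes, cudaMemcpyHostToDevice);

    dim3 threads(TILE, TILE), blocks(N / TILE, N / TILE);
    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);
    float ms;

    // GPU multiplication from global memory, timed with CUDA events.
    cudaEventRecord(start);
    matmulGlobal<<<blocks, threads>>>(d_A, d_B, d_C);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("GPU (global): %.1f ms\n", ms);

    // GPU multiplication using shared memory, timed the same way.
    cudaEventRecord(start);
    matmulShared<<<blocks, threads>>>(d_A, d_B, d_C);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("GPU (shared): %.1f ms\n", ms);

    cudaEventDestroy(start); cudaEventDestroy(stop);
    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
    free(h_A); free(h_B); free(h_C);
    return 0;
}
```

The shared-memory kernel typically runs faster than the global-memory kernel because each tile of the input matrices is loaded from global memory once and then reused `TILE` times from on-chip storage.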
For best performance, the input array or matrix must be large enough to amortize the overhead of copying the input and output data to and from the GPU.
For more information about NVIDIA, CUDA, and GPUs: