[cuda] How to verify CuDNN installation?

I have searched many places but ALL I get is HOW to install it, not how to verify that it is installed. I can verify my NVIDIA driver is installed, and that CUDA is installed, but I don't know how to verify CuDNN is installed. Help will be much appreciated, thanks!

PS.
This is for a caffe implementation. Currently everything is working without CuDNN enabled.

Tags: cuda, computer-vision, caffe, conv-neural-network, cudnn

When installing on Ubuntu via the .deb packages, you can use sudo apt search cudnn | grep installed
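
On Debian/Ubuntu systems another option should be to ask dpkg directly, which also shows the exact package versions:

dpkg -l | grep -i cudnn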


Getting cuDNN Version [Linux]

Use the following to locate cudnn.h and print the version defines:

cat $(whereis cudnn.h) | grep CUDNN_MAJOR -A 2

If the above doesn't work, try this:

cat $(whereis cuda)/include/cudnn.h | grep CUDNN_MAJOR -A 2
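
If neither header turns up, the installed library filename itself usually encodes the full version; assuming a /usr/local/cuda layout, something like this shows it:

ls -l /usr/local/cuda/lib64/libcudnn.so*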

Getting cuDNN Version [Windows]

Use the following to find the path of the cuDNN DLL:

C:\>where cudnn*
C:\Program Files\cuDNN6\cuda\bin\cudnn64_6.dll

Then use this to dump the version from the header file:

type "%PROGRAMFILES%\cuDNN6\cuda\include\cudnn.h" | findstr "CUDNN_MAJOR CUDNN_MINOR CUDNN_PATCHLEVEL"

Getting CUDA Version

This works on Linux as well as Windows:

nvcc --version
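
nvidia-smi also works on both platforms; just note that the "CUDA Version" it reports is the highest version the installed driver supports, which is not necessarily the toolkit version that nvcc reports:

nvidia-smi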

Run ./mnistCUDNN in /usr/src/cudnn_samples_v7/mnistCUDNN
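
If the sample isn't built yet, NVIDIA's install guide has you copy the samples somewhere writable and build them first; a sketch, assuming the cuDNN samples package (e.g. libcudnn7-doc) is installed:

$ cp -r /usr/src/cudnn_samples_v7/ $HOME
$ cd $HOME/cudnn_samples_v7/mnistCUDNN
$ make clean && make
$ ./mnistCUDNN

A working installation ends the run with "Test passed!".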

Here is an example:

cudnnGetVersion() : 7005 , CUDNN_VERSION from cudnn.h : 7005 (7.0.5)
Host compiler version : GCC 5.4.0
There are 1 CUDA capable devices on your machine :
device 0 : sms 30  Capabilities 6.1, SmClock 1645.0 Mhz, MemSize (Mb) 24446, MemClock 4513.0 Mhz, Ecc=0,    boardGroupID=0
Using device 0

I have cuDNN 8.0 and none of the suggestions above worked for me. The desired information was in /usr/include/cudnn_version.h, so

cat /usr/include/cudnn_version.h | grep CUDNN_MAJOR -A 2

did the trick.


How about checking with Python code:

from tensorflow.python.platform import build_info as tf_build_info

print(tf_build_info.cudnn_version_number)
# 7 in v1.10.0
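
In TensorFlow 2.x that attribute is gone and the build details are exposed as a dictionary instead, so something along these lines should work (the exact key names may vary between releases):

python3 -c "from tensorflow.python.platform import build_info; print(build_info.build_info['cudnn_version'])"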

The installation of CuDNN is just copying some files. Hence to check if CuDNN is installed (and which version you have), you only need to check those files.

Install CuDNN

Step 1: Register an NVIDIA developer account and download cuDNN from https://developer.nvidia.com/cudnn (about 80 MB). You might need nvcc --version to get your CUDA version.

Step 2: Check where your cuda installation is. For most people, it will be /usr/local/cuda/. You can check it with which nvcc.

Step 3: Copy the files:

$ cd folder/extracted/contents
$ sudo cp include/cudnn.h /usr/local/cuda/include
$ sudo cp lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/lib64/libcudnn*
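
Depending on your setup, you may also need to refresh the dynamic linker cache afterwards so the freshly copied libraries are found:

$ sudo ldconfig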

Check version

You might have to adjust the path. See step 2 of the installation.

$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2

Notes

When you get an error like

F tensorflow/stream_executor/cuda/cuda_dnn.cc:427] could not set cudnn filter descriptor: CUDNN_STATUS_BAD_PARAM

with TensorFlow, you might consider using CuDNN v4 instead of v5.

Ubuntu users who installed it via apt: https://askubuntu.com/a/767270/10425


To check the CUDA installation, run the commands below; if CUDA is installed properly they will not throw any error and will print the matching library entries (including their version).

function lib_installed() { /sbin/ldconfig -N -v $(sed 's/:/ /' <<< $LD_LIBRARY_PATH) 2>/dev/null | grep $1; }
function check() { lib_installed $1 && echo "$1 is installed" || echo "ERROR: $1 is NOT installed"; }
check libcuda
check libcudart

To check the CuDNN installation, run the command below; if CuDNN is installed properly you will not get any error.

function lib_installed() { /sbin/ldconfig -N -v $(sed 's/:/ /' <<< $LD_LIBRARY_PATH) 2>/dev/null | grep $1; }
function check() { lib_installed $1 && echo "$1 is installed" || echo "ERROR: $1 is NOT installed"; }
check libcudnn 
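
A shorter check that doesn't depend on LD_LIBRARY_PATH is to query the linker cache directly; if the libraries are registered, their paths are printed:

/sbin/ldconfig -p | grep -E 'libcudart|libcudnn'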

Alternatively, for CUDA, you can run the command below from any directory:

nvcc -V

It should give output something like this:

 nvcc: NVIDIA (R) Cuda compiler driver
 Copyright (c) 2005-2016 NVIDIA Corporation
 Built on Tue_Jan_10_13:22:03_CST_2017
 Cuda compilation tools, release 8.0, V8.0.61

On Ubuntu 20.04 LTS:

cat /usr/local/cuda/include/cudnn_version.h | grep CUDNN_MAJOR

returned the expected results


This answer shows how to check which version of CuDNN is installed, which is usually something you also want to verify. You first need to find the installed cudnn.h header and then parse it. To find the file, you can use:

whereis cudnn.h
CUDNN_H_PATH=$(whereis cudnn.h)

If that doesn't work, see "Redhat distributions" below.

Once you find this location you can then do the following (replacing ${CUDNN_H_PATH} with the path):

cat ${CUDNN_H_PATH} | grep CUDNN_MAJOR -A 2

The result should look something like this:

#define CUDNN_MAJOR 7
#define CUDNN_MINOR 5
#define CUDNN_PATCHLEVEL 0
--
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)

Which means the version is 7.5.0.
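
If you want that dotted version string in one go, a small awk one-liner over the same header should do it (assuming the #define layout shown above):

awk '$2=="CUDNN_MAJOR"||$2=="CUDNN_MINOR"||$2=="CUDNN_PATCHLEVEL"{print $3}' ${CUDNN_H_PATH} | paste -sd. -

For the example above this prints 7.5.0.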

Ubuntu 18.04 (via sudo apt install nvidia-cuda-toolkit)

This method of installation puts the CUDA headers in /usr/include and the libraries in /usr/lib/cuda/lib64, hence the file you need to look at is /usr/include/cudnn.h.

CUDNN_H_PATH=/usr/include/cudnn.h
cat ${CUDNN_H_PATH} | grep CUDNN_MAJOR -A 2

Debian and Ubuntu

From CuDNN v5 onwards (at least when you install via sudo dpkg -i <library_name>.deb packages), it looks like you might need to use the following:

cat /usr/include/x86_64-linux-gnu/cudnn_v*.h | grep CUDNN_MAJOR -A 2

For example:

$ cat /usr/include/x86_64-linux-gnu/cudnn_v*.h | grep CUDNN_MAJOR -A 2                                                         
#define CUDNN_MAJOR      6
#define CUDNN_MINOR      0
#define CUDNN_PATCHLEVEL 21
--
#define CUDNN_VERSION    (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)

#include "driver_types.h"
                      

indicates that CuDNN version 6.0.21 is installed.

Redhat distributions

On CentOS, I found the location of CUDA with:

$ whereis cuda
cuda: /usr/local/cuda

I then used the procedure above on the cudnn.h file found at this location:

$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
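
If whereis finds nothing on your distribution, a brute-force search for the header is slow but distro-agnostic:

sudo find / -name 'cudnn*.h' 2>/dev/null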
