Step 9: Install cuDNN 7.3.1:
NVIDIA cuDNN is a GPU-accelerated library of primitives for deep neural networks.
Go to https://developer.nvidia.com/cudnn (an NVIDIA Developer login and acceptance of the license agreement are required). After logging in and accepting the agreement, download the following:
cuDNN v7.3.1 Library for Linux [CUDA 10.0]
Go to the download folder and run the following in a terminal:
tar -xf cudnn-10.0-linux-x64-v7.3.1.20.tgz
sudo cp -R cuda/include/* /usr/local/cuda-10.0/include
sudo cp -R cuda/lib64/* /usr/local/cuda-10.0/lib64
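To confirm the copied headers really are cuDNN 7.3.1, you can read the version macros straight out of cudnn.h. A small helper (the default path assumes the copy commands above; pass a different path if your layout differs):

```shell
# Print the cuDNN version recorded in a cudnn.h header.
# Default path assumes the copy commands above were used.
cudnn_version() {
  hdr="${1:-/usr/local/cuda-10.0/include/cudnn.h}"
  awk '/#define CUDNN_(MAJOR|MINOR|PATCHLEVEL) / {v[n++] = $3}
       END {printf "cuDNN %s.%s.%s\n", v[0], v[1], v[2]}' "$hdr"
}
```

Running `cudnn_version` after the copy above should print `cuDNN 7.3.1`.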
Step 10: Install NCCL 2.3.5:
NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs
Go to https://developer.nvidia.com/nccl/nccl-download and complete the short survey to access the NCCL downloads. After completing the survey, download the following:
NCCL v2.3.5, for CUDA 10.0 -> NCCL 2.3.5 O/S agnostic and CUDA 10.0
Go to the download folder and run the following in a terminal:
tar -xf nccl_2.3.5-2+cuda10.0_x86_64.txz
sudo cp -R * /usr/local/cuda-10.0/targets/x86_64-linux/
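The NCCL header records its version the same way cuDNN does, so you can sanity-check the copy with a similar helper (the default path assumes the copy command above put nccl.h under the toolkit's targets directory; adjust it if your layout differs):

```shell
# Print the NCCL version recorded in an nccl.h header.
# Default path assumes the copy command above was used.
nccl_version() {
  hdr="${1:-/usr/local/cuda-10.0/targets/x86_64-linux/include/nccl.h}"
  awk '/#define NCCL_(MAJOR|MINOR|PATCH) / {v[n++] = $3}
       END {printf "NCCL %s.%s.%s\n", v[0], v[1], v[2]}' "$hdr"
}
```

Running `nccl_version` after the copy above should print `NCCL 2.3.5`.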
Step 11: Install Dependencies
Use the following if you are not in an active virtual environment.
pip install -U --user pip six numpy wheel mock
pip3 install -U --user pip six numpy wheel mock
pip install -U --user keras_applications==1.0.5 --no-deps
pip3 install -U --user keras_applications==1.0.5 --no-deps
pip install -U --user keras_preprocessing==1.0.3 --no-deps
pip3 install -U --user keras_preprocessing==1.0.3 --no-deps
Use the following if you are in an active virtual environment.
pip install -U pip six numpy wheel mock
pip install -U keras_applications==1.0.5 --no-deps
pip install -U keras_preprocessing==1.0.3 --no-deps
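Because this guide mixes pip and pip3, it is worth making sure each pip command is bound to the Python you intend to build against. One unambiguous approach is to invoke pip through the interpreter itself (python3 -m pip ...); the tiny helper below just prints the interpreter a given Python binary resolves to, so you can compare it with the path you will give ./configure later:

```shell
# Show which interpreter a Python binary resolves to. Invoking pip as
# "<python> -m pip install ..." guarantees packages land in that interpreter.
pip_interpreter() {
  "$1" -c 'import sys; print(sys.executable)'
}

# e.g. install the deps into python3 unambiguously:
# python3 -m pip install -U six numpy wheel mock
```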
Step 12: Configure Tensorflow from source:
First install Bazel 0.17.2 from its installer script (the --user flag installs it into $HOME/bin, which is why PATH is extended below):
chmod +x bazel-0.17.2-installer-linux-x86_64.sh
./bazel-0.17.2-installer-linux-x86_64.sh --user
echo 'export PATH="$PATH:$HOME/bin"' >> ~/.bashrc
Reload environment variables:
source ~/.bashrc
sudo ldconfig
Start the process of building TensorFlow by fetching the TensorFlow 1.12 sources and checking out the release branch:
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout r1.12
Now run ./configure and answer the prompts. Give the Python 3 path when asked:
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3
Press Enter twice to accept the defaults for the next prompts.
Do you wish to build TensorFlow with Apache Ignite support? [Y/n]: Y
Do you wish to build TensorFlow with XLA JIT support? [Y/n]: Y
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: N
Do you wish to build TensorFlow with ROCm support? [y/N]: N
Do you wish to build TensorFlow with CUDA support? [y/N]: Y
Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: 10.0
Please specify the location where CUDA 10.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-10.0
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]: 7.3.1
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-10.0]: /usr/local/cuda-10.0
Do you wish to build TensorFlow with TensorRT support? [y/N]: N
Please specify the NCCL version you want to use. If NCCL 2.2 is not installed, then you can use version 1.3 that can be fetched automatically but it may have worse performance with multiple GPUs. [Default is 2.2]: 2.3.5
Now enter the GPU compute capability noted in Step 1 (e.g. 5.0):
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 5.0] 5.0
Do you want to use clang as CUDA compiler? [y/N]: N
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: /usr/bin/gcc
Do you wish to build TensorFlow with MPI support? [y/N]: N
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: -march=native
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
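If you rebuild often, the same answers can be supplied non-interactively through environment variables that TensorFlow's configure script reads. The variable names below are the ones used by TF 1.12's configure.py; treat this as a sketch and double-check against the configure.py in your checkout:

```shell
# Pre-seed the ./configure answers from Step 12 (TF 1.12 configure.py).
export PYTHON_BIN_PATH=/usr/bin/python3
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10.0
export CUDA_TOOLKIT_PATH=/usr/local/cuda-10.0
export TF_CUDNN_VERSION=7.3.1
export CUDNN_INSTALL_PATH=/usr/local/cuda-10.0
export TF_NCCL_VERSION=2.3.5
export TF_CUDA_COMPUTE_CAPABILITIES=5.0
export TF_CUDA_CLANG=0
export GCC_HOST_COMPILER_PATH=/usr/bin/gcc
```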
Step 13: Build Tensorflow using bazel
The next step in installing the TensorFlow GPU version is to build TensorFlow using bazel; this takes a fairly long time. To build a pip package for TensorFlow, invoke the following command:
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
Add "--config=mkl" if you want Intel MKL support for faster training on newer Intel CPUs.
Add "--config=monolithic" if you want a static monolithic build (try this if the build fails).
Add "--local_resources 2048,.5,1.0" if your machine has little RAM and you get a Segmentation Fault or related errors.
This process takes a long time, typically 3-4 hours or more. If you get an error like a Segmentation Fault, run the same command again; the build usually resumes and completes.
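Since bazel caches completed actions, rerunning the same build command after a transient failure resumes roughly where it stopped. The "just try again" advice above can be automated with a small retry helper (a sketch; the attempt cap is an arbitrary choice):

```shell
# Run a command, retrying up to 3 times; fail only if every attempt fails.
retry3() {
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge 3 ] && return 1
    echo "attempt $n failed, retrying..." >&2
  done
}

# Usage (the build command from above):
# retry3 bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
```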
The bazel build command builds a script named build_pip_package. Running this script as follows builds a .whl file inside the tensorflow_pkg directory:
./bazel-bin/tensorflow/tools/pip_package/build_pip_package tensorflow_pkg
To install TensorFlow with pip:
For an existing virtual environment:
pip install tensorflow*.whl
With a new virtual environment using virtualenv:
sudo apt-get install virtualenv
virtualenv tf_1.12_cuda10.0 -p /usr/bin/python3
source tf_1.12_cuda10.0/bin/activate
pip install tensorflow*.whl
For Python 2 (use sudo if required):
pip2 install tensorflow*.whl
For Python 3 (use sudo if required):
pip3 install tensorflow*.whl
Note: if you get an error like "unsupported platform", make sure you are running the pip command associated with the Python you used when configuring the TensorFlow build. You can check the pip version and its associated Python with the following command:
pip -V
Step 14: Verify Tensorflow installation
Run the following in a terminal:
python
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
If the system outputs Hello, TensorFlow!, then you are ready to begin writing TensorFlow programs.
Success! You have now successfully installed TensorFlow 1.12 on your machine. If you are on Windows, you might want to check out our other post, How to install Tensorflow 1.7.0 GPU with CUDA 9.1 and cuDNN 7.1.2 for Python 3 on Windows OS. Cheers!!
For prebuilt wheels, go to this link.