Original article: blogof33.com/post/2/
TensorFlow can be installed with pip, Docker, Virtualenv, Anaconda, or by building from source; this article uses the pip install.
There are not many Chinese tutorials that cover the pip route, and the official Chinese documentation has some steps out of order and some ambiguous wording, so it is easy to go wrong if you follow it verbatim. Hence this write-up.
Note: Ubuntu 16.04 is recommended.
Setup:
Linux distribution: Ubuntu 16.04, 64-bit
CPU: Intel Core i5-6300HQ
GPU: GTX 960M
Python: 2.7
I dual-boot with UEFI and had not disabled Secure Boot, which caused a few problems for me.
First, open a terminal and run:
sudo apt-get update
Then switch the graphics driver to NVIDIA's proprietary driver (applying the change takes a little while).
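If you prefer to do this from the command line instead of the "Additional Drivers" GUI, a minimal sketch for Ubuntu 16.04 looks like the following; the specific package name nvidia-375 is only an example of a driver from this era, so check what ubuntu-drivers actually recommends on your machine:
# List detected hardware and the recommended driver (from the ubuntu-drivers-common package).
ubuntu-drivers devices
# Install the recommended proprietary driver automatically...
sudo ubuntu-drivers autoinstall
# ...or pick an explicit version, then reboot so the new kernel module is loaded.
sudo apt-get install nvidia-375
sudo reboot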
Once the CUDA 8.0 download finishes, follow the commands shown in the figure and the installation completes. By default CUDA is installed under /usr/local/cuda-8.0/. Then open your profile:
vim ~/.profile
Note: many other distributions use .bash_profile; Ubuntu does not have one and uses .profile instead.
Set the environment variables (append them at the end of the file; they take effect automatically on every login):
export PATH="$PATH:/usr/local/cuda-8.0/bin"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64"
After saving, go back to the terminal and run:
source ~/.profile   # make the updated environment variables take effect immediately
nvidia-smi          # check that the driver and GPU are visible
If nvidia-smi prints a table describing your GPU and driver, the configuration succeeded.
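Optionally, you can also confirm that the CUDA toolkit itself is reachable through the PATH entry added above; this is just a quick sanity check:
# Should print the CUDA compiler banner ending in "release 8.0".
nvcc --version
# Shows where the toolkit binaries are picked up from (expect /usr/local/cuda-8.0/bin/nvcc).
which nvcc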
Ubuntu 16.04 ships with GCC 5.4.0, but CUDA 8.0 does not support GCC versions newer than 5.3, so the compiler has to be downgraded to 4.9:
sudo apt-get install g++-4.9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 20
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 10
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.9 20
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 10
sudo update-alternatives --install /usr/bin/cc cc /usr/bin/gcc 30
sudo update-alternatives --set cc /usr/bin/gcc
sudo update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++ 30
sudo update-alternatives --set c++ /usr/bin/g++
gcc --version
If gcc --version now reports 4.9, the downgrade succeeded.
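The update-alternatives entries registered above also make it easy to switch back later, for example to return to GCC 5 once you no longer need to build anything against CUDA. A small sketch:
# Interactively choose which version the gcc and g++ commands resolve to...
sudo update-alternatives --config gcc
sudo update-alternatives --config g++
# ...or select a version non-interactively.
sudo update-alternatives --set gcc /usr/bin/gcc-5
sudo update-alternatives --set g++ /usr/bin/g++-5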
Make sure you download cuDNN 5.1! At least for now, version 6.0 causes problems further along: I originally downloaded 6.0 and importing TensorFlow kept failing, and I found people on GitHub reporting that 6.0 "does not work" while 5.1 does. After falling back to 5.1 everything worked.
Before downloading you have to register an NVIDIA Developer account and answer three short survey questions. As shown in the figure, choose "cuDNN v5.1 Library for Linux":
After the download finishes, extract the archive and copy the cuDNN files into the CUDA Toolkit 8.0 installation path. Assuming CUDA Toolkit 8.0 is installed under /usr/local/cuda-8.0 (the default), run the following commands (if /usr/local/cuda-8.0/include does not exist, create it first):
tar xvzf cudnn-8.0-linux-x64-v5.1.tgz
sudo cp cuda/include/cudnn.h /usr/local/cuda-8.0/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda-8.0/lib64
sudo chmod a+r /usr/local/cuda-8.0/include/cudnn.h /usr/local/cuda-8.0/lib64/libcudnn*
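To double-check which cuDNN version actually landed in the CUDA directory, you can read the version macros straight out of the header you just copied (CUDNN_MAJOR and CUDNN_MINOR are defined in cudnn.h itself):
# Should report major version 5 and minor version 1 for cuDNN 5.1.
grep -A 2 "#define CUDNN_MAJOR" /usr/local/cuda-8.0/include/cudnn.h
# And confirm the shared libraries are in place.
ls -l /usr/local/cuda-8.0/lib64/libcudnn*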
Next, install pip and the Python development headers, then upgrade pip itself:
sudo apt-get install python-pip python-dev
pip install --upgrade pip
Now install the GPU-enabled TensorFlow package:
pip install --upgrade tensorflow-gpu
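Before the interactive test below, it can be worth confirming which TensorFlow build pip actually installed, since the GPU wheels are tied to specific CUDA/cuDNN versions; this is purely an optional sanity check:
# Show the installed package version.
pip show tensorflow-gpu
# Import TensorFlow from the shell; this fails loudly if the CUDA/cuDNN libraries cannot be loaded.
python -c "import tensorflow as tf; print(tf.__version__)"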
Give it a quick test:
$python
Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2017-08-25 14:34:54.825013: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 14:34:54.825065: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 14:34:54.825081: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 14:34:54.825093: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 14:34:54.825105: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 14:34:55.071951: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-08-25 14:34:55.072542: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GTX 960M
major: 5 minor: 0 memoryClockRate (GHz) 1.176
pciBusID 0000:01:00.0
Total memory: 1.95GiB
Free memory: 1.31GiB
2017-08-25 14:34:55.072632: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2017-08-25 14:34:55.072695: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   Y
2017-08-25 14:34:55.072730: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 960M, pci bus id: 0000:01:00.0)
>>> print(sess.run(hello))
Hello, TensorFlow!
Now check how the GPU is being used. TensorFlow names devices as follows:
- "/cpu:0": The CPU of your machine.
- "/gpu:0": The GPU of your machine, if you have one.
- "/gpu:1": The second GPU of your machine, etc.
To see which device each operation is actually placed on, create the session with log_device_placement=True:
>>> import tensorflow as tf
>>> a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
>>> b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
>>> c = tf.matmul(a, b)
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
>>> print(sess.run(c))
MatMul: (MatMul): /job:localhost/replica:0/task:0/gpu:0
2017-08-09 09:47:39.461702: I tensorflow/core/common_runtime/simple_placer.cc:847] MatMul: (MatMul)/job:localhost/replica:0/task:0/gpu:0
b: (Const): /job:localhost/replica:0/task:0/gpu:0
2017-08-09 09:47:39.461942: I tensorflow/core/common_runtime/simple_placer.cc:847] b: (Const)/job:localhost/replica:0/task:0/gpu:0
a: (Const): /job:localhost/replica:0/task:0/gpu:0
2017-08-09 09:47:39.461976: I tensorflow/core/common_runtime/simple_placer.cc:847] a: (Const)/job:localhost/replica:0/task:0/gpu:0
[[ 22. 28.]
[ 49. 64.]]
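Beyond per-operation placement inside the graph, you can also control which GPUs TensorFlow is allowed to see at all from the shell, using the standard CUDA_VISIBLE_DEVICES environment variable; train.py below is just a placeholder for your own script:
# Expose only the first GPU to the process.
CUDA_VISIBLE_DEVICES=0 python train.py
# Hide all GPUs and force the same script to run on the CPU.
CUDA_VISIBLE_DEVICES="" python train.py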