To begin with, this is an installation log from a desktop machine: Windows 10 was installed first, then Ubuntu 16.04 as a dual-boot system. The graphics card is a GTX 1060.
The monitor is connected to the GTX 1060's HDMI port, and the latest GTX 1060 driver (375) was installed under Windows 10 beforehand.
Enough preamble; let's get started.
1. Install the NVIDIA graphics driver
I have a 1080p monitor, and before the graphics driver is installed Ubuntu runs at a very low resolution. You can manually edit the grub file to raise it. In a terminal, run:
sudo vim /etc/default/grub
Find the following lines:
# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command 'vbeinfo'
# GRUB_GFXMODE=640x480
Press a to enter insert mode and add the following line:
GRUB_GFXMODE=1920x1080
# set the resolution to match your own display
Press Esc to leave insert mode, then type :wq to save and exit.
In the terminal, run sudo update-grub
to update grub.
Reboot Ubuntu for the change to take effect.
Open Ubuntu's System Settings -> Software & Updates -> Additional Drivers.
Install the NVIDIA proprietary driver listed there, then reboot the system so the GTX 1060 driver takes effect.
In a terminal, run nvidia-smi
If the output looks like the figure below, the driver was installed successfully.
2. CUDA installation
Download cuda_8.0.61_375.26_linux.run and cudnn-8.0-linux-x64-v5.1.tgz.
(A Baidu Netdisk link was provided here.) I downloaded both files under Windows 10 and copied them to Ubuntu's Downloads directory with a USB drive.
Install CUDA 8.0
In a terminal: cd 下載/    (the Downloads directory)
sh cuda_8.0.61_375.26_linux.run --override
This starts the installer. Keep pressing Space (or press Q) to scroll to the end of the license, then type accept to accept the terms.
Type n to skip installing the NVIDIA graphics driver, since it was installed earlier.
Type y to install the CUDA 8.0 toolkit.
Press Enter to accept the default CUDA install path: /usr/local/cuda-8.0
Type y to run the installation with sudo privileges, and enter your password.
Type y or n to create (or not create) the symbolic link at /usr/local/cuda; y is the convenient choice, since later steps refer to the /usr/local/cuda path.
Type y to install the CUDA 8.0 Samples, for the tests later on.
Press Enter to accept the default CUDA 8.0 Samples path: /home/yt (yt is my username); this directory can be deleted after testing.
Install cuDNN v5.1
In a terminal:
cd 下載/
tar zxvf cudnn-8.0-linux-x64-v5.1.tgz
Extracting it produces a cuda directory under Downloads.
cd cuda/include/
sudo cp cudnn.h /usr/local/cuda/include/            # copy the header file
cd ../lib64                                         # go to the lib64 directory
sudo cp lib* /usr/local/cuda/lib64/                 # copy the library files
sudo chmod a+r /usr/local/cuda/include/cudnn.h
sudo chmod a+r /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/lib64/libcudnn*      # give all users read permission on these files
Create symbolic links
In a terminal:
cd /usr/local/cuda/lib64/
sudo rm -rf libcudnn.so libcudnn.so.5
sudo ln -s libcudnn.so.5.1.10 libcudnn.so.5    # adjust to your exact version
sudo ln -s libcudnn.so.5 libcudnn.so
Set the environment variables. In a terminal:
sudo gedit /etc/profile
Append at the end:
PATH=/usr/local/cuda/bin:$PATH
export PATH
After saving, create the linker configuration file:
sudo vim /etc/ld.so.conf.d/cuda.conf
Press a to enter insert mode and add the following line:
/usr/local/cuda/lib64
Press Esc to leave insert mode, then type :wq to save and exit.
Finally, run sudo ldconfig in the terminal
so the links take effect.
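As a quick sanity check at this point (my addition, not part of the original walkthrough; open a new terminal or run source /etc/profile first so the updated PATH is picked up):
nvcc --version                                                     # should report Cuda compilation tools, release 8.0
ls -l /usr/local/cuda/lib64/libcudnn*                              # the symlinks created above should be listed
grep -A 2 "define CUDNN_MAJOR" /usr/local/cuda/include/cudnn.h     # should show major version 5, minor 1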
CUDA Samples test
Go to the default CUDA 8.0 Samples install path. In a terminal: cd /home/yt/NVIDIA_CUDA-8.0_Samples
(yt is my username)
sudo make all -j4
(-j4 for a 4-core CPU)
If the error "unsupported GNU version! gcc versions later than 5.3 are not supported!"
appears, the GCC version is too new. In a terminal: cd /usr/local/cuda-8.0/include
sudo cp host_config.h host_config.h.bak
sudo gedit host_config.h
Press Ctrl+F and search for "5.3"; there is only one occurrence, as follows:
#if __GNUC__ > 5 || (__GNUC__ == 5 && __GNUC_MINOR__ > 3)
#error -- unsupported GNU version! gcc versions later than 5.3 are not supported!
Change both 5s to 6, i.e.: #if __GNUC__ > 6 || (__GNUC__ == 6 && __GNUC_MINOR__ > 3)
Save and exit, then continue in the terminal: cd /home/yt/NVIDIA_CUDA-8.0_Samples
(yt is my username)
sudo make all -j4
(-j4 for a 4-core CPU)
When it finishes, continue in the terminal: cd bin/x86_64/linux/release
./deviceQuery
When it completes with output like the figure below, CUDA was installed successfully.
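Besides deviceQuery, another sample in the same release directory that is commonly used as a sanity check (an optional extra, not in the original steps) is bandwidthTest:
./bandwidthTest    # should end with "Result = PASS"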
3. Install dependency packages
sudo apt-get install build-essential
# essential build-tool dependencies
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install --no-install-recommends libboost-all-dev
sudo apt-get install libatlas-base-dev
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
4. Install pip and easy_install for Python, to make installing packages easier
In a terminal: cd
wget --no-check-certificate https://bootstrap.pypa.io/ez_setup.py
sudo python ez_setup.py --insecure
wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
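A quick way to confirm both tools were installed (an extra check of mine):
pip --version            # prints the pip version and the Python it belongs to
easy_install --version   # prints the setuptools version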
5. Install some of the libraries needed for scientific computing and Python
In a terminal: sudo apt-get install libblas-dev liblapack-dev libatlas-base-dev gfortran python-numpy
6. Install git and clone the source code
In a terminal: sudo apt-get install git
git clone https://github.com/BVLC/caffe.git
7. Install the Python dependencies
In a terminal: sudo apt-get install python-pip
to install pip.
cd /home/yt/caffe/python
sudo su
for req in $(cat "requirements.txt"); do pip install -i https://pypi.tuna.tsinghua.edu.cn/simple $req; done
Press Ctrl+D to exit the sudo su shell.
8. Build Caffe (MATLAB is not covered for now)
In a terminal: cd /home/yt/caffe
cp Makefile.config.example Makefile.config
gedit Makefile.config
① Uncomment USE_CUDNN := 1.
② After INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
add a space and then append /usr/include/hdf5/serial (see the snippet after this list).
Without this, the build may fail with an error that hdf5.h cannot be found.
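For reference, after edits ① and ② the relevant lines of Makefile.config should look roughly like this (the rest of your file may differ):
USE_CUDNN := 1
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial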
In a terminal: make all -j4
During make, errors about not finding -lhdf5_hl and -lhdf5 may appear.
Solution:
Search the file system for libhdf5_serial.so.10.1.0;
once found, right-click it and choose Open Item Location.
Right-click an empty area of that directory and choose Open in Terminal, then in the new terminal run: sudo ln libhdf5_serial.so.10.1.0 libhdf5.so
sudo ln libhdf5_serial_hl.so.10.0.2 libhdf5_hl.so
Finally, run sudo ldconfig in the terminal
so the links take effect.
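As an alternative to these symlinks (a common workaround I am noting here, not part of the original post), the build can instead be pointed at the serial HDF5 libraries by extending LIBRARY_DIRS in Makefile.config:
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial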
Back in the original terminal, run make clean
to clear the results of the first build,
then run make all -j4 again
to rebuild.
In a terminal:
make test -j4
make runtest -j4
make pycaffe -j4
make distribute    # generate a distributable package
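Optionally (my addition, not in the original steps), add the pycaffe directory to PYTHONPATH so that import caffe works from any directory, not just from caffe/python:
echo 'export PYTHONPATH=/home/yt/caffe/python:$PYTHONPATH' >> ~/.bashrc
source ~/.bashrc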
Test the Python bindings. In a terminal: cd /home/yt/caffe/python
python
import caffe
If no error is reported, the build succeeded.
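If you also want to confirm that pycaffe can drive the GPU (an extra check of mine, using the standard pycaffe calls), continue in the same Python session:
caffe.set_mode_gpu()    # switch Caffe to GPU mode
caffe.set_device(0)     # use GPU 0, i.e. the GTX 1060
No output and no exception means GPU mode is available.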
9. MNIST test
Download the MNIST dataset. In a terminal: cd /home/yt/caffe/data/mnist/
./get_mnist.sh
to fetch the MNIST dataset.
Four new files appear under /home/yt/caffe/data/mnist/: the training images, training labels, test images, and test labels.
Convert the MNIST data format. In a terminal: cd /home/yt/caffe/
./examples/mnist/create_mnist.sh
The second command must be run after the first, i.e. create_mnist.sh must be run from the Caffe root directory.
This generates mnist_train_lmdb and mnist_test_lmdb, the LMDB-format training and test sets, under /caffe/examples/mnist/.
The LeNet-5 model definition is in /caffe/examples/mnist/lenet_train_test.prototxt.
The solver configuration is in /caffe/examples/mnist/lenet_solver.prototxt.
To train on MNIST, run the script /caffe/examples/mnist/train_lenet.sh.
In a terminal: cd /home/yt/caffe/
./examples/mnist/train_lenet.sh
The training and test output looks like the figure below.
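After training finishes, you can also evaluate the saved weights with the caffe binary (assuming the default solver settings, which snapshot to examples/mnist/lenet_iter_10000.caffemodel):
cd /home/yt/caffe/
./build/tools/caffe test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -gpu 0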
Next, install Theano.
1. Install it with a single command:
sudo pip install theano
2. Configure the parameter file .theanorc:
sudo gedit ~/.theanorc
[global]
floatX=float32
device=gpu
base_compiledir=~/external/.theano/
allow_gc=False
warn_float64=warn
mode=FAST_RUN

[nvcc]
fastmath=True

[cuda]
root=/usr/local/cuda
3. Run a test example:
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')
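Assuming the script above is saved as check_gpu.py (a file name chosen here for illustration), it can be run with the GPU forced via THEANO_FLAGS, as in the Theano documentation:
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python check_gpu.py
If the last line prints "Used the gpu", Theano is using the GTX 1060.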
echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
curl https://storage.googleapis.com/bazel-apt/doc/apt-key.pub.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install bazel
sudo apt-get upgrade bazel
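You can verify the Bazel installation with (a small check of mine):
bazel version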
git clone https://github.com/tensorflow/tensorflow
cd tensorflow
git checkout Branch    # where Branch is the desired branch, e.g.:
git checkout r1.0
sudo apt-get install python-numpy python-dev python-pip python-wheel
sudo apt-get install python3-numpy python3-dev python3-pip python3-wheel
sudo apt-get install libcupti-dev
./configure
An example session follows:
Please specify the location of python. [Default is /usr/bin/python]: y
Invalid python path. y cannot be found
Please specify the location of python. [Default is /usr/bin/python]:
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to use jemalloc as the malloc implementation? [Y/n] y
jemalloc enabled
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] n
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N] y
Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] n
No XLA JIT support will be enabled for TensorFlow
Found possible Python library paths:
  /usr/local/lib/python2.7/dist-packages
  /usr/lib/python2.7/dist-packages
Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages]
Using python library path: /usr/local/lib/python2.7/dist-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] n
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 8.0
Please specify the location where CUDA 8.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the Cudnn version you want to use. [Leave empty to use system default]: 5
Please specify the location where cuDNN 5 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: "3.5,5.2"]: 6.1
INFO: Starting clean (this may take a while). Consider using --expunge_async if the clean takes more than several minutes.
........
INFO: All external dependencies fetched successfully.
Configuration finished
Build:
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0"
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
Check the name of the .whl file generated under /tmp/tensorflow_pkg (e.g. with ls /tmp/tensorflow_pkg), then install it:
sudo pip install /tmp/tensorflow_pkg/tensorflow-1.0.1-cp27-cp27mu-linux_x86_64.whl
3. Test:
python
import tensorflow as tf
sess = tf.Session()
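To go one step further (a minimal extra check, not in the original log), evaluate a trivial constant in the same session; when the session was created, TensorFlow should also have logged that it found the GTX 1060:
hello = tf.constant('Hello, TensorFlow!')
print(sess.run(hello))    # should print: Hello, TensorFlow!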