Linux (Manjaro) + TensorFlow 2.1 + conda + CUDA 10: setting up a deep learning environment on a dual-GPU laptop

Next semester I'll be studying TensorFlow. Looking at my poor 1050 Ti I shed a tear of poverty, but the experiments still have to be done and the material still has to be learned, so here is a record of the installation process, for reference only.

About Manjaro

I wrote an article a while back on how to install Manjaro. Although Manjaro is not a mainstream distribution in China, after trying quite a few Linux distros I ended up staying with Manjaro.

Dual GPU drivers

My drivers (screenshot of the installed driver configuration omitted).

Anaconda

At first I tried installing tf, cuda, cudnn and so on directly with pacman, which is very simple:

# tf CPU
sudo pacman -S python-tensorflow-opt
# tf GPU
sudo pacman -S python-tensorflow-opt-cuda cuda cudnn

But after the GPU version is installed, running a test reports:
RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at …
Cause: the CUDA driver version does not satisfy the CUDA runtime version.
The mapping between GPU driver versions and CUDA versions is listed here:
https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html
My driver is the 440xx series, while the repository ships CUDA 11.
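If you are not sure which driver you are actually running, you can query it directly; a quick check, assuming nvidia-smi came along with the driver:

nvidia-smi --query-gpu=driver_version --format=csv,noheader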
I didn't want to change the driver, so instead I downgraded CUDA and tf.

Installing conda

sudo pacman -S anaconda

conda -h

If you get a "conda: command not found" error, you need to adjust the PATH environment variable:

export PATH=$PATH:/opt/anaconda/bin
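The export above only lasts for the current shell. To make it permanent you can append it to your shell startup file; a minimal sketch, assuming you use bash (adjust the file name for zsh etc.):

echo 'export PATH=$PATH:/opt/anaconda/bin' >> ~/.bashrc
source ~/.bashrc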

CUDA and cuDNN

conda install cudatoolkit=10.1 cudnn=7.6 -c https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/linux-64/
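To double-check that the downgraded versions were actually picked up, you can list the two packages afterwards; for example (the exact build strings will differ on your machine):

conda list | grep -E 'cudatoolkit|cudnn'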

tensorflow2.1

conda create -n tf2-gpu tensorflow-gpu==2.1 -c https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/linux-64/

Once installed, check the environments:

conda env list


# conda environments:
#
tf2-gpu                  $home/.conda/envs/tf2-gpu
base                  *  /opt/anaconda

Activate the environment and run a test

Unlike on Windows, entering a conda environment on Linux uses source activate, and leaving it uses conda deactivate.
To enter the tf2 environment we just built, simply run source activate tf2-gpu.
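As a side note, recent conda releases also support conda activate on Linux once the shell has been initialised; a hedged alternative, assuming a reasonably new conda and bash:

conda init bash    # one-time setup, then restart the shell
conda activate tf2-gpu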

source activate tf2-gpu

(tf2-gpu) git clone https://hub.fastgit.org/guangfuhao/Deeplearning

(tf2-gpu) cd Deeplearning

(tf2-gpu) cp mnist.npz <your test directory>

(tf2-gpu) pip install matplotlib numpy
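Before editing the full test script, a one-line sanity check makes sure TensorFlow actually sees the GPU inside this environment; an example, assuming the env above is active:

(tf2-gpu) python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

If this prints an empty list, the driver/CUDA combination still does not match.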

Edit the test program. It's short, so vim test.py will do; note that test.py must sit in the same directory as the mnist.npz downloaded above.

Test program:
# 1. Import the necessary libraries
import numpy as np
import tensorflow as tf
import matplotlib
from matplotlib import pyplot as plt

########################################################################

# 2.Set default parameters for plots
matplotlib.rcParams['font.size'] = 20
matplotlib.rcParams['figure.titlesize'] = 20
matplotlib.rcParams['figure.figsize'] = [9, 7]
matplotlib.rcParams['font.family'] = ['STKaiTi']
matplotlib.rcParams['axes.unicode_minus'] = False

########################################################################
# 3.Initialize Parameters

# Initialize learning rate
lr = 1e-3
# Initialize loss array
losses = []
# Initialize the weights layers and the bias layers
w1 = tf.Variable(tf.random.truncated_normal([784, 256], stddev=0.1))
b1 = tf.Variable(tf.zeros([256]))
w2 = tf.Variable(tf.random.truncated_normal([256, 128], stddev=0.1))
b2 = tf.Variable(tf.zeros([128]))
w3 = tf.Variable(tf.random.truncated_normal([128, 10], stddev=0.1))
b3 = tf.Variable(tf.zeros([10]))

########################################################################

# 4. Import the mnist dataset offline via numpy


def load_mnist():
    # define the directory where mnist.npz is (mind the backslash if you use a Windows path)
    path = r'./mnist.npz'
    f = np.load(path)
    x_train, y_train = f['x_train'], f['y_train']
    x_test, y_test = f['x_test'], f['y_test']
    f.close()
    return (x_train, y_train), (x_test, y_test)


(train_image, train_label), _ = load_mnist()
x = tf.convert_to_tensor(train_image, dtype=tf.float32) / 255.
y = tf.convert_to_tensor(train_label, dtype=tf.int32)
# Reshape x from [60k, 28, 28] to [60k, 28*28]
x = tf.reshape(x, [-1, 28*28])

########################################################################

# 5.Combine x and y as a tuple and batch them
train_db = tf.data.Dataset.from_tensor_slices((x, y)).batch(128)
'''
# Encapsulate train_db as an iterator object
train_iter = iter(train_db)
sample = next(train_iter)
'''

########################################################################

# 6.Iterate database for 20 times
for epoch in range(20):
    # For every batch:x:[128, 28*28],y: [128]
    for step, (x, y) in enumerate(train_db):
        with tf.GradientTape() as tape:  # tf.Variable
            # x: [b, 28*28]
            # h1 = x@w1 + b1
            # [b, 784]@[784, 256] + [256] => [b, 256] + [256] => [b, 256] + [b, 256]
            h1 = x@w1 + tf.broadcast_to(b1, [x.shape[0], 256])
            h1 = tf.nn.relu(h1)
            # [b, 256] => [b, 128]
            h2 = h1@w2 + b2
            h2 = tf.nn.relu(h2)
            # [b, 128] => [b, 10]
            out = h2@w3 + b3

            # y: [b] => [b, 10]
            y_onehot = tf.one_hot(y, depth=10)

            # compute loss
            # mse = mean(sum(y-out)^2)
            # [b, 10]
            loss = tf.square(y_onehot - out)
            # mean: scalar
            loss = tf.reduce_mean(loss)

        # compute gradients
        grads = tape.gradient(loss, [w1, b1, w2, b2, w3, b3])
        # Update the weights and the bias
        w1.assign_sub(lr * grads[0])
        b1.assign_sub(lr * grads[1])
        w2.assign_sub(lr * grads[2])
        b2.assign_sub(lr * grads[3])
        w3.assign_sub(lr * grads[4])
        b3.assign_sub(lr * grads[5])

        if step % 100 == 0:
            print(epoch, step, 'loss:', float(loss))

    losses.append(float(loss))

########################################################################

# 7.Show the change of losses via matplotlib
plt.figure()
plt.plot(losses, color='C0', marker='s', label='Training')
plt.xlabel('Epoch')
plt.legend()
plt.ylabel('MSE')
# Save figure as '.svg' file
# plt.savefig('forward.svg')
plt.show()

Run it with:

python3 test.py

If nothing goes wrong, you should see loss values printed every 100 steps, much like the print statement above (screenshot omitted).
At the end, a plot of the loss curve is drawn (figure omitted).


PS: how to monitor the GPU elegantly

watch -n 1 nvidia-smi
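Alternatively, as far as I know nvidia-smi has a built-in loop flag that achieves much the same thing:

nvidia-smi -l 1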

And that's it, the environment setup is complete. This process worked on my machine; if you have any questions, feel free to leave them in the comments.
