VisualPytorch is published at the following domain, backed by two servers:
http://nag.visualpytorch.top/static/ (maps to 114.115.148.27)
http://visualpytorch.top/static/ (maps to 39.97.209.22)
PyTorch is a deep learning framework rebuilt in Python on top of Torch; its growth rate is on par with TensorFlow's.
anaconda (add the USTC mirror)
pycharm (put jetbrains-agent.jar into <install dir>\bin, add the option -javaagent:<install dir>\jetbrains-agent.jar to pycharm64.exe.vmoptions, then restart and choose web activation)
pytorch (first install CUDA 10.0 and the matching cuDNN version, and verify with nvcc -V. Go to https://download.pytorch.org/whl/torch_stable.html and download cu100/torch-1.2.0-cp37-cp37m-win_amd64.whl and the matching torchvision whl file. Create a virtual environment, activate it, then install both wheels with pip.) Finally, verify that the GPU build of PyTorch runs with the following commands:
In [1]: import torch
In [2]: torch.cuda.current_device()
Out[2]: 0
In [3]: torch.cuda.device(0)
Out[3]: <torch.cuda.device at 0x7efce0b03be0>
In [4]: torch.cuda.device_count()
Out[4]: 1
In [5]: torch.cuda.get_device_name(0)
Out[5]: 'GeForce GTX 950M'
In [6]: torch.__version__
Out[6]: '1.2.0'
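If these checks pass, a common follow-up (a minimal sketch, not part of the original notes) is to select the device once and move tensors onto it:

import torch

# fall back to CPU when no GPU is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
x = torch.randn(3, 3).to(device)   # move the tensor onto the chosen device
print(x.device)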
Before 0.4.0, Variable was a data type in torch.autograd, mainly used to wrap a Tensor for automatic differentiation. It contains: data (the wrapped Tensor), grad (the gradient of data), grad_fn (the Function that created the Tensor, needed by autograd), requires_grad (whether a gradient is needed), and is_leaf (whether it is a leaf node).
From then on, Tensor absorbed all of Variable's attributes and added three more: dtype, shape, and device.
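As a quick check, these attributes can be inspected directly on any tensor (a minimal sketch, not from the original notes):

import torch

w = torch.tensor([1.], requires_grad=True)
y = w * 2
print(w.data, w.grad, w.grad_fn, w.requires_grad, w.is_leaf)  # Variable-era attributes (grad is None before backward)
print(y.dtype, y.shape, y.device)                             # the three attributes added to Tensor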
A. Direct creation
torch.tensor(data, dtype=None, device=None, requires_grad=False, pin_memory=False)   # pin_memory: whether to use page-locked memory
torch.from_numpy(...)   # shares memory with the ndarray; other parameters as above
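A small illustration of the shared-memory behaviour of torch.from_numpy (a sketch, not part of the original notes):

import numpy as np
import torch

arr = np.array([1., 2., 3.])
t = torch.from_numpy(arr)   # t and arr share the same underlying memory
arr[0] = -1.                # modifying the ndarray is visible through the tensor
print(t)                    # tensor([-1., 2., 3.], dtype=torch.float64)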
B. Creation from numeric values
torch.zeros(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)
torch.zeros_like(input, dtype=None, layout=None, device=None, requires_grad=False)
torch.ones(...)        # parameters same as zeros
torch.ones_like(...)   # parameters same as zeros_like

# the following functions share zeros' parameters from `out` onwards
torch.full(size, fill_value, ...)
torch.arange(start=0, end, step=1, ...)               # step is the stride
torch.linspace(start, end, steps=100, ...)            # steps is the length: (steps-1)*step = end-start
torch.logspace(start, end, steps=100, base=10.0, ...)
torch.eye(n, m=None, ...)                             # 2-D, n rows and m columns
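For example (a minimal sketch of the functions above, with assumed arguments):

import torch

print(torch.full((2, 2), 7.))          # 2x2 tensor filled with 7
print(torch.arange(0, 10, 2))          # tensor([0, 2, 4, 6, 8])
print(torch.linspace(0, 1, steps=5))   # 5 evenly spaced points from 0 to 1
print(torch.eye(2, 3))                 # 2 rows, 3 columns, ones on the diagonal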
C. Creation from probability distributions
torch.normal(mean, std, size, out=None)   # when mean and std are both scalars, size gives the 1-D output size; otherwise size is omitted and each output element is drawn from its own distribution
torch.randn(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)   # standard normal distribution
torch.rand(...)                   # uniform distribution; parameters same as randn
torch.randint(low=0, high, ...)   # uniform integer distribution; parameters from size onwards same as randn
torch.randperm(n, ...)            # random permutation of 0 to n-1; parameters from out onwards same as randn
torch.bernoulli(input, *, generator=None, out=None)   # Bernoulli distribution with input as the probability
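torch.normal behaves differently depending on whether mean and std are scalars or tensors; a short sketch (not from the original notes):

import torch

# scalar mean and std: size is required, all elements come from the same distribution
t1 = torch.normal(0., 1., size=(4,))

# tensor mean and std: element-wise, each output element is drawn from its own distribution
mean = torch.arange(1., 5.)
std = torch.arange(1., 5.)
t2 = torch.normal(mean, std)

print(t1)
print(t2)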
A. Concatenation and splitting
torch.cat(tensors, dim=0, out=None)
torch.stack(tensors, dim=0, out=None)   # concatenates along a new dimension
torch.chunk(input, chunks, dim=0)
torch.split(tensor, split_size_or_sections, dim=0)
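A quick sketch (assumed shapes) showing the difference between cat and stack, and between chunk and split:

import torch

t = torch.ones((2, 3))
print(torch.cat([t, t], dim=0).shape)     # torch.Size([4, 3]) - joins along an existing dimension
print(torch.stack([t, t], dim=0).shape)   # torch.Size([2, 2, 3]) - creates a new dimension

print([c.shape for c in torch.chunk(t, chunks=2, dim=1)])   # sizes 2 and 1: the last chunk may be smaller
print([s.shape for s in torch.split(t, [1, 2], dim=1)])     # explicit split sizes 1 and 2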
B. Tensor indexing
torch.index_select(input, dim, index, out=None)
torch.masked_select(input, mask, out=None)   # mask is a bool tensor with the same shape as input
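A short example of both indexing functions (a sketch with assumed values):

import torch

t = torch.randint(0, 9, size=(3, 3))
idx = torch.tensor([0, 2], dtype=torch.long)
print(torch.index_select(t, dim=0, index=idx))   # rows 0 and 2 of t

mask = t.ge(5)                        # mask with the same shape as t
print(torch.masked_select(t, mask))   # 1-D tensor containing the elements >= 5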
C. Tensor transformation
torch.reshape(input, shape)
torch.transpose(input, dim0, dim1)         # swaps the two given dimensions
torch.t(input)
torch.squeeze(input, dim=None, out=None)   # by default removes all dimensions of length 1; with dim, removes that dimension only if its length is 1
torch.unsqueeze(input, dim, out=None)
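A sketch showing the effect of each transform on the shape:

import torch

t = torch.rand((2, 1, 3))
print(torch.reshape(t, (3, 2)).shape)    # torch.Size([3, 2])
print(torch.transpose(t, 0, 2).shape)    # torch.Size([3, 1, 2])
print(torch.squeeze(t).shape)            # torch.Size([2, 3]) - all size-1 dims removed
print(torch.unsqueeze(t, dim=0).shape)   # torch.Size([1, 2, 1, 3])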
Result:
import torch
import matplotlib.pyplot as plt

torch.manual_seed(10)

lr = 0.05  # learning rate (modified 2019-10-15)

# create training data
x = torch.rand(20, 1) * 10              # x data (tensor), shape=(20, 1)
y = 2 * x + (5 + torch.randn(20, 1))    # y data (tensor), shape=(20, 1)

# build linear-regression parameters
w = torch.randn((1), requires_grad=True)
b = torch.zeros((1), requires_grad=True)

for iteration in range(1000):
    # forward pass
    wx = torch.mul(w, x)
    y_pred = torch.add(wx, b)

    # compute MSE loss
    loss = (0.5 * (y - y_pred) ** 2).mean()

    # backward pass
    loss.backward()

    # update parameters
    b.data.sub_(lr * b.grad)
    w.data.sub_(lr * w.grad)

    # zero the tensor gradients (added 2019-10-15)
    w.grad.zero_()
    b.grad.zero_()

    # plot
    if iteration % 20 == 0:
        plt.scatter(x.data.numpy(), y.data.numpy())
        plt.plot(x.data.numpy(), y_pred.data.numpy(), 'r-', lw=5)
        plt.text(2, 20, 'Loss=%.4f' % loss.data.numpy(), fontdict={'size': 20, 'color': 'red'})
        plt.xlim(1.5, 10)
        plt.ylim(8, 28)
        plt.title("Iteration: {}\nw: {} b: {}".format(iteration, w.data.numpy(), b.data.numpy()))
        plt.pause(0.5)

        if loss.data.numpy() < 1:
            break
A computational graph is a directed acyclic graph that describes computation. It has two main elements: nodes (Node: data such as vectors, matrices, and tensors) and edges (Edge: operations such as addition, subtraction, multiplication, division, convolution, and so on).
Expressed as a computational graph, y = (x + w) * (w + 1) decomposes as follows:
a = x + w
b = w + 1
y = a * b
We can build the corresponding tensors from the computational graph and compute the gradients:
a.retain_grad()   # keep the gradient of the non-leaf node a
For example, when x = 2 and w = 1, the computational graph gives:
\(\frac{\partial y}{\partial w} = \frac{\partial y}{\partial a} \frac{\partial a}{\partial w} + \frac{\partial y}{\partial b} \frac{\partial b}{\partial w} = b \cdot 1 + a \cdot 1 = (w+1) + (x+w) = 5\)
w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)

a = torch.add(w, x)   # retain_grad()
b = torch.add(w, 1)
y = torch.mul(a, b)

y.backward()          # Tensor method; internally calls torch.autograd.backward()
print(w.grad)
tensor([5.])
# inspect leaf nodes
print("is_leaf:\n", w.is_leaf, x.is_leaf, a.is_leaf, b.is_leaf, y.is_leaf)

# inspect gradients
print("gradient:\n", w.grad, x.grad, a.grad, b.grad, y.grad)

# inspect grad_fn
print("grad_fn:\n", w.grad_fn, x.grad_fn, a.grad_fn, b.grad_fn, y.grad_fn)
is_leaf:
True True False False False
gradient:
tensor([5.]) tensor([2.]) None None None
grad_fn:
None None <AddBackward0 object at 0x000001F6AABCB0F0> <AddBackward0 object at 0x000001F6C1A52048> <MulBackward0 object at 0x000001F6AACD7D68>
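The None values for a, b, and y appear because gradients of non-leaf nodes are freed during backward. Calling retain_grad() on a non-leaf tensor before backward (as the a.retain_grad() line above hints) keeps its gradient; a minimal sketch:

import torch

w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)
a = torch.add(w, x)
a.retain_grad()        # ask autograd to keep the gradient of this non-leaf node
b = torch.add(w, 1)
y = torch.mul(a, b)
y.backward()
print(a.grad)          # tensor([2.]) since dy/da = b = w + 1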
Dynamic graph vs. static graph: independent travel (flexible, easy to debug) vs. a package tour.
torch.autograd.backward(tensors, grad_tensors=None, retain_graph=None, create_graph=False)
torch.autograd.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False)   # computes the gradients of outputs with respect to the given inputs
Parameters:
retain_graph: keeps the computational graph after backward, so backward can be called multiple times.
w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)

a = torch.add(w, x)
b = torch.add(w, 1)
y = torch.mul(a, b)

y.backward(retain_graph=True)
# print(w.grad)
y.backward()
grad_tensors: weights for combining multiple gradients.
y0 = torch.mul(a, b)   # y0 = (x+w) * (w+1)    dy0/dw = 5
y1 = torch.add(a, b)   # y1 = (x+w) + (w+1)    dy1/dw = 2

loss = torch.cat([y0, y1], dim=0)   # [y0, y1]
loss.backward(gradient=torch.tensor([1., 2.]))

print(w.grad)   # 1*5 + 2*2 = 9
create_graph: builds the computational graph of the derivative itself, so that higher-order gradients can be computed. torch.autograd.grad returns a tuple of tensors, each representing a gradient:
x = torch.tensor([3.], requires_grad=True)
y = torch.pow(x, 2)                                     # y = x**2

grad_1 = torch.autograd.grad(y, x, create_graph=True)   # grad_1 = dy/dx = 2x = 2 * 3 = 6
print(grad_1)

grad_2 = torch.autograd.grad(grad_1[0], x)              # grad_2 = d(dy/dx)/dx = d(2x)/dx = 2
print(grad_2)
(tensor([6.], grad_fn=<MulBackward0>),)
(tensor([2.]),)
Note:
Gradients are not zeroed automatically.
w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)

for i in range(4):
    a = torch.add(w, x)
    b = torch.add(w, 1)
    y = torch.mul(a, b)

    y.backward()
    print(w.grad)
Without w.grad.zero_(), the gradient keeps accumulating across iterations.
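A sketch of the same loop with the gradient cleared at each iteration:

import torch

w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)

for i in range(4):
    a = torch.add(w, x)
    b = torch.add(w, 1)
    y = torch.mul(a, b)
    y.backward()
    print(w.grad)      # stays tensor([5.]) at every iteration
    w.grad.zero_()     # clear the accumulated gradient in place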
Nodes that depend on leaf nodes have requires_grad=True by default.
print(a.requires_grad, b.requires_grad, y.requires_grad)
True True True
Leaf nodes must not be modified by in-place operations.
a = torch.ones((1, ))
a = a + torch.ones((1, ))   # not in-place: a is rebound to a new storage location
a += torch.ones((1, ))      # in-place: the original storage is modified
w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)

a = torch.add(w, x)
b = torch.add(w, 1)
y = torch.mul(a, b)

w.add_(1)   # in-place modification of a leaf tensor that requires grad
y.backward()
RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.
Model training steps: prepare the data, choose the model, choose the loss function, choose the optimizer, then train iteratively (steps 1/5 through 5/5 in the code below).
Result:
Code:
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
import numpy as np

torch.manual_seed(10)

# ============================ step 1/5 generate data ============================
sample_nums = 100
mean_value = 1.7
bias = 1
n_data = torch.ones(sample_nums, 2)
x0 = torch.normal(mean_value * n_data, 1) + bias    # class 0 data, shape=(100, 2)
y0 = torch.zeros(sample_nums)                       # class 0 labels, shape=(100,)
x1 = torch.normal(-mean_value * n_data, 1) + bias   # class 1 data, shape=(100, 2)
y1 = torch.ones(sample_nums)                        # class 1 labels, shape=(100,)
train_x = torch.cat((x0, x1), 0)
train_y = torch.cat((y0, y1), 0)

# ============================ step 2/5 choose the model ============================
class LR(nn.Module):
    def __init__(self):
        super(LR, self).__init__()
        self.features = nn.Linear(2, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.features(x)
        x = self.sigmoid(x)
        return x

lr_net = LR()   # instantiate the logistic-regression model

# ============================ step 3/5 choose the loss function ============================
loss_fn = nn.BCELoss()

# ============================ step 4/5 choose the optimizer ============================
lr = 0.01  # learning rate
optimizer = torch.optim.SGD(lr_net.parameters(), lr=lr, momentum=0.9)

# ============================ step 5/5 train the model ============================
for iteration in range(1000):

    # forward pass
    y_pred = lr_net(train_x)

    # compute loss
    loss = loss_fn(y_pred.squeeze(), train_y)

    # backward pass
    loss.backward()

    # update parameters
    optimizer.step()

    # zero the gradients
    optimizer.zero_grad()

    # plot
    if iteration % 20 == 0:
        mask = y_pred.ge(0.5).float().squeeze()   # classify with threshold 0.5
        correct = (mask == train_y).sum()         # number of correctly predicted samples
        acc = correct.item() / train_y.size(0)    # classification accuracy

        plt.scatter(x0.data.numpy()[:, 0], x0.data.numpy()[:, 1], c='r', label='class 0')
        plt.scatter(x1.data.numpy()[:, 0], x1.data.numpy()[:, 1], c='b', label='class 1')

        w0, w1 = lr_net.features.weight[0]
        w0, w1 = float(w0.item()), float(w1.item())
        plot_b = float(lr_net.features.bias[0].item())
        plot_x = np.arange(-6, 6, 0.1)
        plot_y = (-w0 * plot_x - plot_b) / w1

        plt.xlim(-5, 7)
        plt.ylim(-7, 7)
        plt.plot(plot_x, plot_y)

        plt.text(-5, 5, 'Loss=%.4f' % loss.data.numpy(), fontdict={'size': 20, 'color': 'red'})
        plt.title("Iteration: {}\nw0:{:.2f} w1:{:.2f} b: {:.2f} accuracy:{:.2%}".format(iteration, w0, w1, plot_b, acc))
        plt.legend()

        plt.show()
        plt.pause(0.5)

        if acc > 0.99:
            break