PyTorch Tutorial 2

Here we build a simple binary classification model with PyTorch.

 

First, import all the libraries we need:

import torch
import matplotlib.pyplot as plt
import torch.nn.functional as F

 

Next, generate the fake data we need:

# set seed for reproducibility
torch.manual_seed(1)

# make fake data: two Gaussian clusters centered at (2, 2) and (-2, -2)
n_data = torch.ones(100, 2)
x0 = torch.normal(2 * n_data, 1)    # cluster 0 samples, shape (100, 2)
y0 = torch.zeros(100)               # cluster 0 labels
x1 = torch.normal(-2 * n_data, 1)   # cluster 1 samples, shape (100, 2)
y1 = torch.ones(100)                # cluster 1 labels

x = torch.cat((x0, x1), 0).type(torch.FloatTensor)  # inputs, shape (200, 2)
y = torch.cat((y0, y1), 0).type(torch.LongTensor)   # labels, shape (200,)
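
Before training, it can help to plot the raw data and confirm the two clusters sit where we expect. This sanity check is optional and not part of the original post:

# optional sanity check: color the 200 points by their true label
plt.scatter(x.data.numpy()[:, 0], x.data.numpy()[:, 1], c=y.data.numpy(), s=100, lw=0, cmap='RdYlGn')
plt.show()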

  

First, define the Net class we need:

class Net(torch.nn.Module):

    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)  # hidden layer
        self.out = torch.nn.Linear(n_hidden, n_output)      # output layer

    def forward(self, x):
        x = F.relu(self.hidden(x))  # ReLU activation on the hidden layer
        x = self.out(x)             # raw scores (logits), no softmax here
        return x
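
As an aside, the same one-hidden-layer architecture can be expressed more compactly with torch.nn.Sequential. This is an equivalent sketch (the name net2 is ours); the tutorial keeps using the Net class below:

# equivalent model built with torch.nn.Sequential (sketch only, not used below)
net2 = torch.nn.Sequential(
    torch.nn.Linear(2, 10),   # hidden layer: 2 features in, 10 units out
    torch.nn.ReLU(),          # activation
    torch.nn.Linear(10, 2),   # output layer: 10 units in, 2 class scores out
)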

 

Now let's set up the network. We build a network with a single hidden layer, optimize it with SGD, and use cross-entropy as the loss function.

 

net = Net(n_feature=2, n_hidden=10, n_output=2)  # 2 input features, 10 hidden units, 2 classes
print(net)

optimizer = torch.optim.SGD(net.parameters(), lr=0.0015)  # stochastic gradient descent
loss_func = torch.nn.CrossEntropyLoss()                   # cross-entropy for classification
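
Note that torch.nn.CrossEntropyLoss applies log-softmax internally, which is why the net returns raw logits and why y must be a LongTensor of class indices. If you want actual probabilities for inspection, a small sketch:

# sketch: turn the raw logits into per-class probabilities
logits = net(x)                    # shape (200, 2), unnormalized scores
probs = F.softmax(logits, dim=1)   # each row now sums to 1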

 

Next, we start training the network:

plt.ion()   # turn on interactive plotting

for t in range(200):
    out = net(x)               # forward pass: raw logits, shape (200, 2)
    loss = loss_func(out, y)   # cross-entropy between logits and labels

    optimizer.zero_grad()   # clear gradients for next train
    loss.backward()         # backpropagation, compute gradients
    optimizer.step()        # apply gradients

    if t % 2 == 0:
        # redraw the scatter plot with current predictions every 2 steps
        plt.cla()
        prediction = torch.max(out, 1)[1]   # index of the larger logit = predicted class
        pred_y = prediction.data.numpy()
        target_y = y.data.numpy()
        plt.scatter(x.data.numpy()[:, 0], x.data.numpy()[:, 1], c=pred_y, s=100, lw=0, cmap='RdYlGn')
        accuracy = float((pred_y == target_y).astype(int).sum()) / float(target_y.size)
        plt.text(1.5, -4, 'Accuracy=%.2f' % accuracy, fontdict={'size': 20, 'color': 'red'})
        plt.pause(0.1)


plt.ioff()   # turn interactive plotting off
plt.show()   # keep the final figure on screen


We can see that our fake data has been successfully separated into two classes.
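
As a final check, we can ask the trained net to classify a couple of fresh points near the two cluster centers. This is a small sketch with hand-picked, hypothetical test inputs:

# sketch: classify two new points (hypothetical test inputs)
with torch.no_grad():
    new_x = torch.tensor([[2.0, 2.0], [-2.0, -2.0]])   # near each cluster center
    pred = torch.max(net(new_x), 1)[1]                 # predicted class indices
    print(pred)   # expected: 0 for the first point, 1 for the second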
