In the previous note we covered the theory of the perceptron: where it came from, how it works, how it is trained, and why it converges. In this note we roll up our sleeves and write code, using the perceptron algorithm to solve some real problems.
Let's start with the simplest possible problem: using the perceptron to classify the OR logic function.
```python
import numpy as np
import matplotlib.pyplot as plt

x = [0, 0, 1, 1]
y = [0, 1, 0, 1]

plt.scatter(x[0], y[0], color="red", label="negative")
plt.scatter(x[1:], y[1:], color="green", label="positive")
plt.legend(loc="best")
plt.show()
```
Next we define a function that decides whether a sample point has been classified correctly. Since the sample points in this example are two-dimensional, the weight vector is two-dimensional as well; we can write it as \(w = (w_1, w_2)\) and represent it in Python as a list, e.g. `w = [0, 0]`. The signed score of a sample relative to the hyperplane is then `w[0] * x[0] + w[1] * x[1] + b`; the code below uses the equivalent threshold form `w[0] * x[0] + w[1] * x[1] - b`. The complete function follows.
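For reference, the update that `decide` performs can be written compactly. With the score \(f = w \cdot x - b\), whenever \(\operatorname{sign}(f)\,y \le 0\) the parameters are pushed toward the target; this is a delta-style variant of the classic perceptron rule, with learning rate 1:

\[
w \leftarrow w + (y - f)\,x, \qquad b \leftarrow b - (y - f).
\]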
```python
def decide(data, label, w, b):
    # Score of the sample relative to the current hyperplane (threshold form)
    result = w[0] * data[0] + w[1] * data[1] - b
    print("result = ", result)
    # Not confidently correct: push the score toward the target (delta-style rule)
    if np.sign(result) * label <= 0:
        w[0] += 1 * (label - result) * data[0]
        w[1] += 1 * (label - result) * data[1]
        b += 1 * (label - result) * (-1)
    return w, b
```
With the core function written, we still need a driver function that loops over every sample point.
```python
def run(data, label):
    w, b = [0, 0], 0
    for epoch in range(10):
        for dataset, labelset in zip(data, label):
            w, b = decide(dataset, labelset, w, b)
            print("dataset = ", dataset, ",", "w = ", w, ",", "b = ", b)
    print(w, b)
```
```python
data = [(0,0), (0,1), (1,0), (1,1)]
label = [0, 1, 1, 1]

run(data, label)
```
```
result =  0
dataset =  (0, 0) , w =  [0, 0] , b =  0
result =  0
dataset =  (0, 1) , w =  [0, 1] , b =  -1
result =  1
dataset =  (1, 0) , w =  [0, 1] , b =  -1
result =  2
dataset =  (1, 1) , w =  [0, 1] , b =  -1
result =  1
dataset =  (0, 0) , w =  [0, 1] , b =  0
result =  1
dataset =  (0, 1) , w =  [0, 1] , b =  0
result =  0
dataset =  (1, 0) , w =  [1, 1] , b =  -1
result =  3
dataset =  (1, 1) , w =  [1, 1] , b =  -1
result =  1
dataset =  (0, 0) , w =  [1, 1] , b =  0
result =  1
dataset =  (0, 1) , w =  [1, 1] , b =  0
result =  1
```

The later iterations are omitted here; the parameters settle down and the algorithm has converged.
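As a quick sanity check (a minimal sketch, assuming `run` is modified to end with `return w, b` rather than just printing), we can score all four OR inputs with the learned parameters:

```python
w, b = run(data, label)  # assumes run() has been changed to end with `return w, b`
for x, t in zip(data, label):
    score = w[0] * x[0] + w[1] * x[1] - b
    print(x, "->", int(score > 0), "(target:", t, ")")
```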
Next let's look at a dataset from UCI: the Pima Indians diabetes dataset. The example comes from Chapter 3 of *Machine Learning: An Algorithmic Perspective*.
```python
import os
import pylab as pl
import numpy as np
import pandas as pd
```
os.chdir(r"DataSets\pima-indians-diabetes-database")
pima = np.loadtxt("pima.txt", delimiter=",", skiprows=1)
pima.shape
```
(768, 9)
```
```python
indices0 = np.where(pima[:, 8] == 0)
indices1 = np.where(pima[:, 8] == 1)
```
```python
pl.ion()
pl.plot(pima[indices0, 0], pima[indices0, 1], "go")
pl.plot(pima[indices1, 0], pima[indices1, 1], "rx")
pl.show()
```
Data preprocessing
1. Discretize age
```python
pima[np.where(pima[:, 7] <= 30), 7] = 1
pima[np.where((pima[:, 7] > 30) & (pima[:, 7] <= 40)), 7] = 2
pima[np.where((pima[:, 7] > 40) & (pima[:, 7] <= 50)), 7] = 3
pima[np.where((pima[:, 7] > 50) & (pima[:, 7] <= 60)), 7] = 4
pima[np.where(pima[:, 7] > 60), 7] = 5
```
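As an aside, the five `np.where` lines can be collapsed into a single `np.digitize` call (an alternative sketch, not what the book uses): with `right=True`, values ≤ 30 fall in bin 0 and values > 60 in bin 4, so adding 1 reproduces the 1–5 coding above.

```python
# Equivalent binning in one line; run this *instead of* the np.where block above.
pima[:, 7] = np.digitize(pima[:, 7], bins=[30, 40, 50, 60], right=True) + 1
```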
2. Replace pregnancy counts greater than 8 with 8
```python
pima[np.where(pima[:, 0] > 8), 0] = 8
```
3. Standardize the data
```python
pima[:, :8] = pima[:, :8] - pima[:, :8].mean(axis=0)
# Note: this divides by the variance, as in the original; strict standardization
# would divide by the standard deviation (.std) instead.
pima[:, :8] = pima[:, :8] / pima[:, :8].var(axis=0)
```
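A quick check (a sketch, not part of the original) that the centering step did what we expect:

```python
# After centering, every feature column should have mean ~ 0.
print(pima[:, :8].mean(axis=0).round(6))
```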
4. Split into training and test sets
```python
trainin = pima[::2, :8]
testin = pima[1::2, :8]
traintgt = pima[::2, 8:9]
testtgt = pima[1::2, 8:9]
```
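The even-indexed rows go to training and the odd-indexed rows to testing, so each half gets 384 of the 768 samples. A quick shape check:

```python
trainin.shape, traintgt.shape, testin.shape, testtgt.shape
# ((384, 8), (384, 1), (384, 8), (384, 1))
```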
Define the model
```python
class Perceptron:

    def __init__(self, inputs, targets):
        # Set the network size.
        # The neuron dimension must match the dimension of the input vectors.
        if np.ndim(inputs) > 1:
            self.nIn = np.shape(inputs)[1]
        else:
            self.nIn = 1
        # The number of neurons must match the dimension of the target vectors.
        if np.ndim(targets) > 1:
            self.nOut = np.shape(targets)[1]
        else:
            self.nOut = 1
        # Number of input samples
        self.nData = np.shape(inputs)[0]
        # Initialize the network; the +1 accounts for the bias term
        self.weights = np.random.rand(self.nIn + 1, self.nOut) * 0.1 - 0.05

    def train(self, inputs, targets, eta, epoch):
        """Training phase."""
        # To match the bias weight w0, append a constant -1 input to every sample
        inputs = np.concatenate((inputs, -np.ones((self.nData, 1))), axis=1)
        for n in range(epoch):
            self.activations = self.forward(inputs)
            self.weights -= eta * np.dot(np.transpose(inputs), self.activations - targets)
        return self.weights

    def forward(self, inputs):
        """Forward pass of the network."""
        # Compute the scores
        activations = np.dot(inputs, self.weights)
        # Threshold: fire if the score is positive
        return np.where(activations > 0, 1, 0)

    def confusion_matrix(self, inputs, targets):
        # Compute the confusion matrix
        # (use the actual row count so this also works when test size != train size)
        inputs = np.concatenate((inputs, -np.ones((np.shape(inputs)[0], 1))), axis=1)
        outputs = np.dot(inputs, self.weights)
        nClasses = np.shape(targets)[1]
        if nClasses == 1:
            nClasses = 2
            outputs = np.where(outputs > 0, 1, 0)  # fixed: > 0, matching forward()
        else:
            outputs = np.argmax(outputs, 1)
            targets = np.argmax(targets, 1)
        cm = np.zeros((nClasses, nClasses))
        for i in range(nClasses):
            for j in range(nClasses):
                cm[i, j] = np.sum(np.where(outputs == i, 1, 0) * np.where(targets == j, 1, 0))
        print(cm)
        print(np.trace(cm) / np.sum(cm))
```
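For reference, the update inside `train` is the batch form of the perceptron learning rule. Writing \(X\) for the input matrix (including the extra \(-1\) bias column), \(Y\) for the targets and \(\hat{Y} = \mathrm{forward}(X)\) for the thresholded activations, each epoch performs

\[
W \leftarrow W - \eta\, X^{\top} (\hat{Y} - Y),
\]

which accumulates the per-sample updates \(w \leftarrow w - \eta\,(\hat{y} - y)\,x\) over the whole training set at once.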
print("Output after preprocessing of data") p = Perceptron(trainin,traintgt) p.train(trainin,traintgt,0.15,10000) p.confusion_matrix(testin,testtgt)
```
Output after preprocessing of data
[[182.  47.]
 [ 69.  86.]]
0.6979166666666666
```
Training a perceptron on this case gives fairly poor results; the example is included here only as a demonstration of the algorithm.
Finally, let's look at an example that uses the perceptron algorithm to recognize MNIST handwritten digits. The code draws on a kernel from Kaggle.
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
```python
train = pd.read_csv(r"DataSets\Digit_Recognizer\train.csv", engine="python")
test = pd.read_csv(r"DataSets\Digit_Recognizer\test.csv", engine="python")
```
print("Training set has {0[0]} rows and {0[1]} columns".format(train.shape)) print("Test set has {0[0]} rows and {0[1]} columns".format(test.shape))
```
Training set has 42000 rows and 785 columns
Test set has 28000 rows and 784 columns
```
We create `trainlabels` with size (42000,), the training set `traindata` with size (42000, 784), and `weights` with size (10, 784). That last shape may be a little harder to grasp. Recall that a weight vector describes one neuron: 784 is the input dimension, so each neuron connected to a 784-dimensional sample must itself hold 784 weights. At the same time, a single neuron can produce only one output, while in digit recognition we want ten scores per input sample, from which we pick the most probable digit. So we need 10 neurons, and that is where (10, 784) comes from; a shape check after the code below makes this concrete.
```python
trainlabels = train.label
trainlabels.shape
```
```
(42000,)
```
```python
traindata = np.asmatrix(train.loc[:, "pixel0":])
traindata.shape
```
```
(42000, 784)
```
```python
weights = np.zeros((10, 784))
weights.shape
```
```
(10, 784)
```
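As the promised sanity check on these shapes (a sketch, not part of the original kernel), one vectorized forward pass over the whole training set confirms that every sample yields ten scores, one per digit:

```python
# (42000, 784) @ (784, 10) -> (42000, 10): one score per class per sample
scores = np.asarray(traindata) @ weights.T
print(scores.shape)  # (42000, 10)
```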
Let's first look at one sample to get a feel for the data. Note that the original data has been flattened into a 784-dimensional array; we need to reshape it back into a 28×28 image.
```python
# Take an arbitrary row from the matrix
samplerow = traindata[123:124]
# Reshape it back to 28x28
samplerow = np.reshape(samplerow, (28, 28))
plt.imshow(samplerow, cmap="hot")
```
Now we loop over the training set for a number of epochs and keep an eye on the error-rate curve.
```python
# A list to record the error rate of each training epoch
errors = []
epochs = 20
for epoch in range(epochs):
    err = 0
    # For every sample (i.e., every row of the matrix)
    for i, data in enumerate(traindata):
        # A list to record each neuron's output
        output = []
        # Dot every neuron's weights with the sample and record the score
        for w in weights:
            output.append(np.dot(data, w))
        # Simply take the neuron with the largest output as the best guess
        guess = np.argmax(output)
        # The ground truth is the corresponding entry of the label list
        actual = trainlabels[i]
        # If the guess differs from the truth, the sample is misclassified
        # and the weight vectors need updating
        if guess != actual:
            weights[guess] = weights[guess] - data
            weights[actual] = weights[actual] + data
            err += 1
    # After a full pass over the 42000 samples: error rate = errors / samples
    errors.append(err / 42000)
```
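The inner loop over neurons can also be collapsed into a single matrix-vector product (a sketch of an equivalent alternative; the result of `np.argmax` is the same):

```python
# weights has shape (10, 784) and the flattened row has shape (784,),
# so one product yields all ten neuron outputs at once.
scores = weights @ np.asarray(data).ravel()
guess = int(np.argmax(scores))
```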
```python
x = list(range(20))
plt.plot(x, errors)
```
The plot shows that by around 15 epochs the error rate is already trending upward: the model is starting to overfit.
The perceptron is such a simple algorithm that it is hard to use in real-world scenarios. The three examples here are all meant as hands-on practice: implementing the algorithm yourself to get a feel for it. A slightly experienced reader may wonder why the Scikit-Learn package was not used; in fact I have other plans for it, namely a series of notes that read the Scikit-Learn source code alongside the algorithms. Given my own limits I may not get to the heart of it, but I will do my best. The next note will cover the principles of the Multi-Layer-Perceptron algorithm, where we will easily see that even the simple perceptron gains a large boost in classification power once a hidden layer is added. I also plan to write a note on the sklearn perceptron source code when time permits. Questions and discussion are welcome in the comments.