Python Implementation of the Perceptron: Primal and Dual Forms

The goal of perceptron learning is to find a separating hyperplane that completely divides the positive and negative instances of the training data set.
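For reference, this is the rule the code below implements: a point x = (x1, x2) is classified by sign(w⋅x + b), so the separating hyperplane is w⋅x + b = 0. A training point (xi, yi) counts as misclassified when yi⋅(w⋅xi + b) ≤ 0 (the code checks this through the sign function), and on each mistake the stochastic update is w ← w + lr⋅yi⋅xi, b ← b + lr⋅yi.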

 

Perceptron: Primal Form

from __future__ import division
import random
import numpy as np
import matplotlib.pyplot as plt  


def sign(v):
    if v>=0:
        return 1
    else:
        return -1

def train(train_num, train_datas, lr):
    # Primal-form perceptron: stochastic updates on randomly chosen samples.
    w = [0, 0]
    b = 0
    for i in range(train_num):
        # pick a random training sample (x1, x2, y)
        x = random.choice(train_datas)
        x1, x2, y = x
        # if the sample is misclassified, move the hyperplane towards it
        if y * sign(w[0]*x1 + w[1]*x2 + b) <= 0:
            w[0] += lr*y*x1
            w[1] += lr*y*x2
            b += lr*y
    return w, b

def plot_points(train_datas, w, b):
    plt.figure()
    # draw the learned separating line w[0]*x1 + w[1]*x2 + b = 0
    x1 = np.linspace(0, 8, 100)
    x2 = (-b - w[0]*x1) / w[1]
    plt.plot(x1, x2, color='r', label='y1 data')
    # scatter the samples: dots for positive points, crosses for negative points
    datas_len = len(train_datas)
    for i in range(datas_len):
        if train_datas[i][-1] == 1:
            plt.scatter(train_datas[i][0], train_datas[i][1], s=50)
        else:
            plt.scatter(train_datas[i][0], train_datas[i][1], marker='x', s=50)
    plt.show()


if __name__=='__main__':
    train_data1 = [[1, 3, 1], [2, 2, 1], [3, 8, 1], [2, 6, 1]]  # positive samples
    train_data2 = [[2, 1, -1], [4, 1, -1], [6, 2, -1], [7, 3, -1]]  # negative samples
    train_datas = train_data1 + train_data2  # full sample set
    w,b=train(train_num=50,train_datas=train_datas,lr=0.01)
    plot_points(train_datas,w,b)
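    # Optional addition (not in the original post): classify a new point with the
    # learned w and b; the point (3, 6) is just an illustrative value.
    pred = sign(w[0]*3 + w[1]*6 + b)
    print('prediction for (3, 6):', pred)  # +1 means the positive side of the hyperplane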

 

Perceptron: Dual Form Learning

An intuitive way to put it: "when the primal problem is hard to solve, or too expensive to solve, we turn to solving its dual problem instead."

Here, what the dual form changes is that learning w and b becomes learning α and b. In the primal form, w has to be updated on every iteration in which a point is misclassified; in the dual form, when a point (xi, yi) is misclassified we only need to update its corresponding αi, and w can then be recovered in a single pass at the end, since w = Σi αi⋅yi⋅xi.
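Concretely, this is the rule the dual-form code below implements: a point (xi, yi) is treated as misclassified when yi⋅(Σj αj⋅yj⋅(xj⋅xi) + b) ≤ 0; on a mistake only αi ← αi + lr and b ← b + lr⋅yi are updated, so each αi ends up equal to lr times the number of updates that point i has triggered.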

 

In addition, xj⋅xi only ever appears as an inner product, so we can precompute the Gram matrix of x and store it; during the actual training, the value of xj⋅xi is then obtained by a simple table lookup. This makes the program easier to optimize and speeds up the computation.

In effect, we keep updating the coefficients in front of xj⋅xi instead of recomputing the vector products from scratch each time, as the primal form does.
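As a small illustration of the lookup idea, not part of the original post (it uses the feature parts of the first two positive samples and the first negative sample from the data set below):

import numpy as np

X = np.array([[1, 3], [2, 2], [2, 1]])  # feature parts of three sample points
gram = X @ X.T                          # gram[i, j] equals np.dot(X[i], X[j])
print(gram[0, 1])                       # 1*2 + 3*2 = 8, read straight from the table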

from __future__ import division
import random
import numpy as np
import matplotlib.pyplot as plt  


def sign(v):
    if v>=0:
        return 1
    else:
        return -1

def train(train_num, train_datas, lr):
    # Dual-form perceptron: learn alpha and b, then recover w at the end.
    w = 0.0
    b = 0
    datas_len = len(train_datas)
    alpha = [0 for i in range(datas_len)]
    train_array = np.array(train_datas)
    # precompute the Gram matrix: gram[i, j] = xi . xj
    gram = np.matmul(train_array[:, 0:-1], train_array[:, 0:-1].T)
    for idx in range(train_num):
        tmp = 0
        i = random.randint(0, datas_len-1)
        yi = train_array[i, -1]
        # decision value for sample i: sum_j alpha_j * y_j * (xj . xi) + b
        for j in range(datas_len):
            tmp += alpha[j] * train_array[j, -1] * gram[i, j]
        tmp += b
        # misclassified: only alpha_i and b are updated
        if yi * tmp <= 0:
            alpha[i] = alpha[i] + lr
            b = b + lr * yi
    # recover w = sum_i alpha_i * y_i * xi (accumulated onto the scalar 0.0 by broadcasting)
    for i in range(datas_len):
        w += alpha[i] * train_array[i, 0:-1] * train_array[i, -1]
    return w, b, alpha, gram

def plot_points(train_datas, w, b):
    plt.figure()
    # draw the learned separating line; the small 1e-10 avoids division by zero when w[1] is 0
    x1 = np.linspace(0, 8, 100)
    x2 = (-b - w[0]*x1) / (w[1] + 1e-10)
    plt.plot(x1, x2, color='r', label='y1 data')
    # scatter the samples: dots for positive points, crosses for negative points
    datas_len = len(train_datas)
    for i in range(datas_len):
        if train_datas[i][-1] == 1:
            plt.scatter(train_datas[i][0], train_datas[i][1], s=50)
        else:
            plt.scatter(train_datas[i][0], train_datas[i][1], marker='x', s=50)
    plt.show()

if __name__=='__main__':
    train_data1 = [[1, 3, 1], [2, 2, 1], [3, 8, 1], [2, 6, 1]]  # positive samples
    train_data2 = [[2, 1, -1], [4, 1, -1], [6, 2, -1], [7, 3, -1]]  # negative samples
    train_datas = train_data1 + train_data2  # full sample set
    w,b,alpha,gram=train(train_num=500,train_datas=train_datas,lr=0.01)
    plot_points(train_datas,w,b)
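    # Optional sanity check (not in the original post): count how many training
    # points the learned hyperplane still misclassifies; 0 is expected once the
    # random updates above have converged.
    arr = np.array(train_datas)
    errors = sum(1 for x1, x2, y in arr if y * (w[0]*x1 + w[1]*x2 + b) <= 0)
    print('misclassified points:', errors)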

 

Reference: http://blog.csdn.net/winter_evening/article/details/70196040
