A support vector machine constructs a hyperplane, or a set of hyperplanes, in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks. Intuitively, the farther the decision boundary lies from the nearest training points, the better, since a larger margin reduces the classifier's generalization error.
We call the SVC class from sklearn.svm to classify the MNIST dataset and report the overall classification accuracy. Two preprocessing methods are compared (binarizing each feature to 0 or 1, versus scaling each feature into the [0, 1] interval), which give different results, and two kernel functions are tried (the Gaussian RBF kernel and the polynomial kernel). In the experiment, the training set and test set are each split into 10 equal parts, and the overall accuracy is averaged over the 10 parts, which makes the result more reliable and objective. The penalty parameter C can be varied to observe the effect, and curves plotted for comparison; C = 100 gives comparatively good results.
# Task: compare the results of different kernels and plot curves to visualize the differences
import struct
import time

import numpy as np
from sklearn.svm import SVC  # C-Support Vector Classification
def read_image(file_name):
    # Read the whole file in binary mode
    file_handle = open(file_name, "rb")
    file_content = file_handle.read()
    offset = 0
    head = struct.unpack_from('>IIII', file_content, offset)  # first 4 big-endian ints: magic, count, rows, cols
    offset += struct.calcsize('>IIII')
    imgNum = head[1]  # number of images
    rows = head[2]    # image height
    cols = head[3]    # image width
    images = np.empty((imgNum, 784))  # np.empty allocates without initializing, the fastest way to create an array
    image_size = rows * cols          # pixels per image
    fmt = '>' + str(image_size) + 'B'  # format string for one image
    for i in range(imgNum):
        images[i] = np.array(struct.unpack_from(fmt, file_content, offset))
        # images[i] = np.array(struct.unpack_from(fmt, file_content, offset)).reshape((rows, cols))
        offset += struct.calcsize(fmt)
    return images
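As a minimal sketch of the IDX layout that read_image parses, the same struct calls can be exercised on a tiny fake in-memory file (an illustration, not real MNIST data):

```python
import struct

# Build a tiny fake IDX image blob: magic number 2051, 2 images of 2x2 pixels
header = struct.pack('>IIII', 2051, 2, 2, 2)
pixels = bytes(range(8))  # 2 images * 4 pixels each
blob = header + pixels

# Parse it the same way read_image does
magic, num, rows, cols = struct.unpack_from('>IIII', blob, 0)
offset = struct.calcsize('>IIII')
fmt = '>' + str(rows * cols) + 'B'
first_image = struct.unpack_from(fmt, blob, offset)
second_image = struct.unpack_from(fmt, blob, offset + struct.calcsize(fmt))
```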
# Read the labels
def read_label(file_name):
    file_handle = open(file_name, "rb")  # open in binary mode
    file_content = file_handle.read()    # read into a buffer
    head = struct.unpack_from('>II', file_content, 0)  # first 2 big-endian ints: magic, count
    offset = struct.calcsize('>II')
    labelNum = head[1]  # number of labels
    # print(labelNum)
    bitsString = '>' + str(labelNum) + 'B'  # format string, e.g. '>60000B'
    label = struct.unpack_from(bitsString, file_content, offset)  # unpack the label data as a tuple
    return np.array(label)
def normalize(data):  # binarize pixels into a 0/1 distribution
    m = data.shape[0]
    n = np.array(data).shape[1]
    for i in range(m):
        for j in range(n):
            if data[i, j] != 0:
                data[i, j] = 1
            else:
                data[i, j] = 0
    return data
# An alternative normalization: scale each feature into the [0, 1] interval
def normalize_new(data):
    m = data.shape[0]
    n = np.array(data).shape[1]
    for i in range(m):
        for j in range(n):
            data[i, j] = float(data[i, j]) / 255
    return data
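The two element-wise loops above can be replaced by vectorized NumPy operations, which are far faster on a 60000×784 array; a loop-free sketch of both preprocessing methods:

```python
import numpy as np

data = np.array([[0.0, 128.0, 255.0],
                 [64.0, 0.0, 255.0]])

# Loop-free equivalent of normalize(): any non-zero pixel becomes 1
binary = (data != 0).astype(data.dtype)

# Loop-free equivalent of normalize_new(): scale pixels into [0, 1]
scaled = data / 255.0
```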
def loadDataSet():
    train_x_filename = "train-images-idx3-ubyte"
    train_y_filename = "train-labels-idx1-ubyte"
    test_x_filename = "t10k-images-idx3-ubyte"
    test_y_filename = "t10k-labels-idx1-ubyte"
    train_x = read_image(train_x_filename)  # 60000 x 784 matrix
    train_y = read_label(train_y_filename)  # 60000 x 1
    test_x = read_image(test_x_filename)    # 10000 x 784
    test_y = read_label(test_y_filename)    # 10000 x 1
    # Compare the two preprocessing methods by uncommenting one of them:
    # train_x = normalize(train_x)
    # test_x = normalize(test_x)
    # train_x = normalize_new(train_x)
    # test_x = normalize_new(test_x)
    return train_x, test_x, train_y, test_y
if __name__ == '__main__':
    classNum = 10
    score_train = 0.0
    score = 0.0
    temp = 0.0
    temp_train = 0.0
    print("Start reading data...")
    time1 = time.time()
    train_x, test_x, train_y, test_y = loadDataSet()
    time2 = time.time()
    print("read data cost", time2 - time1, "second")
    print("Start training data...")
    # clf = SVC(C=1.0, kernel='poly')  # polynomial kernel
    clf = SVC(C=0.01, kernel='rbf')  # Gaussian (RBF) kernel
    # Each class is roughly equally represented in every batch of 6000 samples,
    # so the data is simply split into consecutive batches
    for i in range(classNum):
        # Note: fit() retrains from scratch, so each iteration fits an independent model on its batch
        clf.fit(train_x[i*6000:(i+1)*6000, :], train_y[i*6000:(i+1)*6000])
        temp = clf.score(test_x[i*1000:(i+1)*1000, :], test_y[i*1000:(i+1)*1000])
        # print(temp)
        temp_train = clf.score(train_x[i*6000:(i+1)*6000, :], train_y[i*6000:(i+1)*6000])
        print(temp_train)
        score += temp / classNum
        score_train += temp_train / classNum
    time3 = time.time()
    print("test score:{:.6f}".format(score))
    print("train score:{:.6f}".format(score_train))
    print("train data cost", time3 - time2, "second")
Experimental results: statistics for different kernel functions and C values on the binarized (normalize) features. The results are shown in the table below:
| Parameters | Result (binarized features) |
| --- | --- |
| {"C": 1, "kernel": "poly"} | {"accuracy": 0.4312, "train time": 558.61} |
| {"C": 1, "kernel": "rbf"} | {"accuracy": 0.9212, "train time": 163.15} |
| {"C": 10, "kernel": "poly"} | {"accuracy": 0.8802, "train time": 277.78} |
| {"C": 10, "kernel": "rbf"} | {"accuracy": 0.9354, "train time": 96.07} |
| {"C": 100, "kernel": "poly"} | {"accuracy": 0.9427, "train time": 146.43} |
| {"C": 100, "kernel": "rbf"} | {"accuracy": 0.9324, "train time": 163.99} |
| {"C": 1000, "kernel": "poly"} | {"accuracy": 0.9519, "train time": 132.59} |
| {"C": 1000, "kernel": "rbf"} | {"accuracy": 0.9325, "train time": 97.54} |
| {"C": 10000, "kernel": "poly"} | {"accuracy": 0.9518, "train time": 115.35} |
| {"C": 10000, "kernel": "rbf"} | {"accuracy": 0.9325, "train time": 115.77} |
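To compare the kernels as the task at the top of the script asks, the table can be loaded into a dict and the best setting selected (a plot of accuracy versus C, one curve per kernel, can be drawn from the same dict with matplotlib). A sketch using only the numbers above:

```python
# Test accuracies from the table, keyed by (C, kernel)
results = {
    (1, 'poly'): 0.4312,     (1, 'rbf'): 0.9212,
    (10, 'poly'): 0.8802,    (10, 'rbf'): 0.9354,
    (100, 'poly'): 0.9427,   (100, 'rbf'): 0.9324,
    (1000, 'poly'): 0.9519,  (1000, 'rbf'): 0.9325,
    (10000, 'poly'): 0.9518, (10000, 'rbf'): 0.9325,
}

# Best (C, kernel) pair by test accuracy
best = max(results, key=results.get)
```

Note that the RBF kernel is nearly insensitive to C beyond 10, while the polynomial kernel keeps improving until about C = 1000.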
To optimize the experiment further, PCA (principal component analysis) can be applied, improving both accuracy and speed. The code is as follows: result screenshot:
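The author's PCA code and result screenshot are not reproduced here; as a minimal sketch of the idea (the random stand-in data and the 50-component count are illustrative assumptions, not the author's settings):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(100, 784)  # stand-in for flattened MNIST images

# Project the 784 pixel features onto 50 principal components;
# the SVC would then be trained on X_reduced instead of X
pca = PCA(n_components=50)
X_reduced = pca.fit_transform(X)
```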