CIFAR-10 and CIFAR-100 are both labeled datasets, and both are subsets of the larger 80 Million Tiny Images collection. This experiment uses CIFAR-10, which contains 60000 32*32 color images divided into 10 classes of 6000 images each. 50000 of the images are for training, organized into 5 training batches of 10000 images each; the remaining 10000 form a single test batch. The test batch holds exactly 1000 images drawn at random from each of the 10 classes; whatever remains is shuffled into the training batches. Note that an individual training batch therefore need not contain equal numbers of each class, but taken together the training batches hold exactly 5000 images per class.
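This batch layout is what the official Python version of CIFAR-10 ships with: each batch file is a pickled dictionary whose b'data' entry is a 10000x3072 uint8 array (one flattened 32*32*3 image per row) and whose b'labels' entry is a list of 10000 class indices. A minimal loading sketch, assuming the archive was extracted to cifar-10-batches-py (a hypothetical path; the experiment below actually uses .mat files instead):

import pickle
import numpy as np

def load_cifar_batch(path):
    # The official Python batches are pickled with byte-string keys,
    # hence encoding='bytes' under Python 3.
    with open(path, 'rb') as f:
        batch = pickle.load(f, encoding='bytes')
    data = batch[b'data']                 # (10000, 3072) uint8
    labels = np.array(batch[b'labels'])   # (10000,)
    return data, labels

data, labels = load_cifar_batch('cifar-10-batches-py/data_batch_1')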
The figure below illustrates the 10 classes, showing 10 random images from each:
My dataset consists of three files: the training set train_data, the test set test_data, and the label names labels_name. labels_name contains 5 classes: 'airplane', 'automobile', 'bird', 'cat', 'deer'. Here I classify the first three, 'airplane', 'automobile', and 'bird' (i.e. the samples with labels 1, 2, and 3).
Extensive earlier testing showed that minimum-error-rate Bayesian decision classifies these images best when PCA retains enough components for a cumulative explained variance ratio of 0.79.
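For reference, the decision rule the code evaluates is the standard Gaussian quadratic discriminant (the notation here is mine; it matches the g1/g2/g3 expressions in the code):

$$g_i(x) = -\tfrac{1}{2}(x-\mu_i)^{T}\Sigma_i^{-1}(x-\mu_i) - \tfrac{1}{2}\ln\lvert\Sigma_i\rvert + \ln P(\omega_i)$$

where \mu_i and \Sigma_i are the mean and covariance of class \omega_i estimated from the PCA-reduced training features, and a test sample x is assigned to the class with the largest g_i(x). The code follows: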
# CIFAR-10 dataset: 60000 32*32 color images in 10 classes, 6000 per class; 50000 training images and 10000 test images.
import scipy.io

train_data = scipy.io.loadmat("F:\\模式識別\\最小錯誤率的貝葉斯決策進行圖像分類\\data\\train_data.mat")
print(type(train_data))
print(train_data.keys())
print(train_data.values())
print(len(train_data['Data']))
# Data vector length of a single image: 32*32*3 = 3072
# Memory footprint = 3072*4*9968 ≈ 116 MB, assuming an integer takes 4 bytes
print(len(train_data['Data'][0]))
print(train_data)
x = train_data['Data']
y = train_data['Label']
print(y)
print(len(y))
print(y.shape)
print(y.flatten().shape)

# labels_name: 5 labels in total: airplane, automobile, bird, cat, deer
import scipy.io

labels_name = scipy.io.loadmat("F:\\模式識別\\最小錯誤率的貝葉斯決策進行圖像分類\\data\\labels_name.mat")
print(type(labels_name))
print(labels_name)
print(len(labels_name))

# test_data: 5000 images in total, 5 classes, 1000 images per class
import scipy.io

test_data = scipy.io.loadmat("F:\\模式識別\\最小錯誤率的貝葉斯決策進行圖像分類\\data\\test_data.mat")
print(test_data['Label'])
print(test_data['Data'])
print(len(test_data['Label']))
datatest = test_data['Data']
labeltest = test_data['Label']
print(datatest.shape)
print(labeltest.shape)

# Collect the indices of the three classes of interest (labels 1, 2, 3).
test_index = []
for i in range(len(labeltest)):
    if labeltest[i] == 1 or labeltest[i] == 2 or labeltest[i] == 3:
        test_index.append(i)
#print(test_index)
# Keep only the first 3000 test labels; this relies on the test set being
# ordered by class, so that samples 0-2999 are exactly classes 1, 2 and 3.
labeltest = test_data['Label'][:3000]
#print(labeltest)
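If the test set were not ordered by class, the slice above would silently pick the wrong samples. A more defensive alternative, as a sketch (np.isin is standard numpy; datatest_sel and labeltest_sel are hypothetical names, not used below):

import numpy as np

mask = np.isin(test_data['Label'].ravel(), [1, 2, 3])
datatest_sel = test_data['Data'][mask]            # only classes 1-3
labeltest_sel = test_data['Label'].ravel()[mask]  # matching labels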
print(x)
print(x.shape)
print(type(x))

from sklearn.decomposition import PCA

# Keep enough principal components to reach a cumulative explained
# variance ratio of 0.79.
pca = PCA(n_components=0.79)

# Fit the PCA model on the training data
pca.fit(x)
x_new = pca.transform(x)
print("Explained variance ratio of each retained component:", pca.explained_variance_ratio_)
print("Number of principal components after reduction:", pca.n_components_)
print(x_new)

# Collect the training-set indices of each of the three classes.
index_1 = []
index_2 = []
index_3 = []
for i in range(len(y)):
    if y[i] == 1:
        index_1.append(i)
    elif y[i] == 2:
        index_2.append(i)
    elif y[i] == 3:
        index_3.append(i)
index_num = [len(index_1), len(index_2), len(index_3)]
print(len(index_1))
print(len(index_2))
print(len(index_3))
print(index_num)

import numpy as np

class1_feature = []
class2_feature = []
class3_feature = []
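As an aside, the index-gathering loop above can be written with numpy boolean indexing; a minimal equivalent sketch:

labels = y.flatten()
index_1 = np.where(labels == 1)[0]
index_2 = np.where(labels == 2)[0]
index_3 = np.where(labels == 3)[0]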
# Gather the PCA-reduced feature vectors of each class.
for i in index_1:
    class1_feature.append(x_new[i])
print(len(class1_feature))
for i in index_2:
    class2_feature.append(x_new[i])
print(len(class2_feature))
for i in index_3:
    class3_feature.append(x_new[i])
print(len(class3_feature))

# Estimate the parameters of the class-conditional density of class 1
class1_feature = np.mat(class1_feature)
print(class1_feature.shape)
miu1 = []
sigma1 = []
# Loop over the retained components (30 of them in this run); using
# pca.n_components_ avoids hard-coding the dimension.
for i in range(pca.n_components_):
    miu = class1_feature[:, i].sum() / len(index_1)
    miu1.append(miu)
    class1_feature[:, i] = class1_feature[:, i] - miu  # center the column
# Maximum-likelihood covariance estimate of class 1.
sigma1 = (class1_feature.T * class1_feature) / len(index_1)
print(miu1)
print(sigma1)
print(sigma1.shape)

# Estimate the parameters of the class-conditional density of class 2
class2_feature = np.mat(class2_feature)
miu2 = []
sigma2 = []
for i in range(pca.n_components_):
    miu = class2_feature[:, i].sum() / len(index_2)
    miu2.append(miu)
    class2_feature[:, i] = class2_feature[:, i] - miu
sigma2 = (class2_feature.T * class2_feature) / len(index_2)
print(miu2)
print(sigma2)
print(sigma2.shape)

# Estimate the parameters of the class-conditional density of class 3
class3_feature = np.mat(class3_feature)
miu3 = []
sigma3 = []
for i in range(pca.n_components_):
    miu = class3_feature[:, i].sum() / len(index_3)
    miu3.append(miu)
    class3_feature[:, i] = class3_feature[:, i] - miu
sigma3 = (class3_feature.T * class3_feature) / len(index_3)
print(miu3)
print(sigma3)
print(sigma3.shape)
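The three estimation blocks above repeat the same computation. An equivalent, more compact formulation on plain numpy arrays, as a sketch (estimate_gaussian is a hypothetical helper, not part of the original code):

def estimate_gaussian(features):
    # features: (n_samples, n_dims) array of PCA-reduced vectors of one class.
    features = np.asarray(features)
    mu = features.mean(axis=0)
    centered = features - mu
    # Maximum-likelihood covariance (divides by n, matching the code above).
    sigma = centered.T @ centered / len(features)
    return mu, sigma

# e.g. miu1, sigma1 = estimate_gaussian(x_new[index_1])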
# Compute the prior probabilities of the three classes.
# (Dividing by the full training size scales all three priors by the same
# factor, which only shifts every g_i by the same constant, so the
# decision is unaffected.)
prior_index1 = len(index_1) / len(y)
prior_index2 = len(index_2) / len(y)
prior_index3 = len(index_3) / len(y)
print(prior_index1)
print(prior_index2)
print(prior_index3)

import math

# Project the test data with the PCA model fitted on the training data
x_test = pca.transform(datatest)
print(x_test)
print(x_test.shape)
print(x_test[0])
#print((np.mat(x_test[0]-miu1))*sigma1.I*(np.mat(x_test[0]-miu1).T))
#print(((np.mat(x_test[0]-miu1))*sigma1.I*(np.mat(x_test[0]-miu1).T))[0,0])
# Evaluate the three discriminants for each of the first 3000 test samples
# and assign each sample to the class with the largest g_i.
predict_label = []
for i in range(3000):
    g1 = -0.5*((np.mat(x_test[i]-miu1))*sigma1.I*(np.mat(x_test[i]-miu1).T))[0, 0] - 0.5*math.log(np.linalg.det(sigma1)) + math.log(prior_index1)
    g2 = -0.5*((np.mat(x_test[i]-miu2))*sigma2.I*(np.mat(x_test[i]-miu2).T))[0, 0] - 0.5*math.log(np.linalg.det(sigma2)) + math.log(prior_index2)
    g3 = -0.5*((np.mat(x_test[i]-miu3))*sigma3.I*(np.mat(x_test[i]-miu3).T))[0, 0] - 0.5*math.log(np.linalg.det(sigma3)) + math.log(prior_index3)
    if g1 > g2:
        pred = 1 if g1 > g3 else 3
    else:
        pred = 2 if g2 > g3 else 3
    predict_label.append(pred)

from sklearn.metrics import accuracy_score
print(accuracy_score(labeltest, predict_label))  # accuracy_score(y_true, y_pred)
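To see where the remaining errors fall among the three classes, a confusion matrix is a quick check; a minimal sketch using sklearn (reuses labeltest and predict_label from above):

from sklearn.metrics import confusion_matrix

# Rows are true classes (1, 2, 3), columns are predicted classes.
print(confusion_matrix(labeltest.ravel(), predict_label, labels=[1, 2, 3]))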
As you can see, the classification accuracy reaches 73%, which is roughly the ceiling for minimum-error-rate Bayesian decision applied to image classification.