From this we can see that the boundary separating the two values of y is:

\[ \theta^T \cdot x_b = 0 \]

This expression defines a straight line: the decision boundary. When a new sample arrives, multiply it by the trained \( \theta \); whether the result is greater than 0 determines which class the sample belongs to.
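Concretely, that prediction rule can be sketched as follows (the `theta` values and samples below are made-up numbers for illustration, not output of the trained model):

```python
import numpy as np

def predict(theta, X):
    # Prepend a column of ones so that theta_0 serves as the intercept term in x_b
    X_b = np.hstack([np.ones((len(X), 1)), X])
    # theta^T . x_b >= 0 -> class 1, otherwise class 0
    return (X_b.dot(theta) >= 0).astype(int)

theta = np.array([-3.0, 1.0, 1.0])      # hypothetical trained parameters
X = np.array([[1.0, 1.0], [2.0, 2.0]])  # two new samples
print(predict(theta, X))                # [0 1]
```

The sample (1, 1) gives \(-3 + 1 + 1 = -1 < 0\), so class 0; the sample (2, 2) gives \(-3 + 2 + 2 = 1 \ge 0\), so class 1.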
If a sample has two features \(x_1, x_2\), the decision boundary satisfies \(\theta_0 + \theta_1 \cdot x_1 + \theta_2 \cdot x_2 = 0\), which gives \(x_2 = \frac{-\theta_0 - \theta_1 \cdot x_1}{\theta_2}\).
```python
# Express x2 in terms of x1 along the decision boundary
def x2(x1):
    return (-logic_reg.interception_ - logic_reg.coef_[0] * x1) / logic_reg.coef_[1]

x1_plot = numpy.linspace(4, 8, 1000)
x2_plot = x2(x1_plot)

pyplot.scatter(X[y == 0, 0], X[y == 0, 1], color='red')
pyplot.scatter(X[y == 1, 0], X[y == 1, 1], color='blue')
pyplot.plot(x1_plot, x2_plot)
pyplot.show()
```
To see an irregular decision boundary, we can instead classify every point of the feature domain (keeping 2 features so that, for visualization, the domain is a rectangular region).

Define a function that plots the predicted class of every point in that region:
```python
def plot_decision_boundary(model, axis):
    # Build a dense grid over the rectangle axis = [x0_min, x0_max, x1_min, x1_max]
    x0, x1 = numpy.meshgrid(
        numpy.linspace(axis[0], axis[1], int((axis[1] - axis[0]) * 100)),
        numpy.linspace(axis[2], axis[3], int((axis[3] - axis[2]) * 100))
    )
    x_new = numpy.c_[x0.ravel(), x1.ravel()]

    # Predict a class for every grid point, then reshape back to the grid
    y_predict = model.predict(x_new)
    zz = y_predict.reshape(x0.shape)

    from matplotlib.colors import ListedColormap
    custom_cmap = ListedColormap(['#EF9A9A', '#FFF59D', '#90CAF9'])
    pyplot.contourf(x0, x1, zz, cmap=custom_cmap)
```
Plot the decision boundary of logistic regression:
```python
plot_decision_boundary(logic_reg, axis=[4, 7.5, 1.5, 4.5])
pyplot.scatter(X[y == 0, 0], X[y == 0, 1], color='blue')
pyplot.scatter(X[y == 1, 0], X[y == 1, 1], color='red')
pyplot.show()
```
Plot the decision boundary of the k-nearest neighbors algorithm:
```python
from mylib import KNN

knn_clf_all = KNN.KNNClassifier(k=3)
knn_clf_all.fit(iris.data[:, :2], iris.target)

plot_decision_boundary(knn_clf_all, axis=[4, 8, 1.5, 4.5])
pyplot.scatter(iris.data[iris.target == 0, 0], iris.data[iris.target == 0, 1])
pyplot.scatter(iris.data[iris.target == 1, 0], iris.data[iris.target == 1, 1])
pyplot.scatter(iris.data[iris.target == 2, 0], iris.data[iris.target == 2, 1])
pyplot.show()
```
Decision boundaries of multiclass (3-class) k-nearest neighbors:

With k = 3:
With k = 50:
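The k = 50 boundary is produced with exactly the same steps as the k = 3 case, only with a larger `k`. A minimal, self-contained sketch, substituting scikit-learn's `KNeighborsClassifier` for the custom `mylib` class:

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()

# A larger k averages over more neighbors, so the boundary becomes smoother
knn_clf_50 = KNeighborsClassifier(n_neighbors=50)
knn_clf_50.fit(iris.data[:, :2], iris.target)

# Reuse plot_decision_boundary defined above, then overlay the scatter plots:
# plot_decision_boundary(knn_clf_50, axis=[4, 8, 1.5, 4.5])
# pyplot.scatter(...); pyplot.show()
print(knn_clf_50.score(iris.data[:, :2], iris.target))
```

Because only two of the four iris features are used, training accuracy drops noticeably as k grows, which is visible as the smoother (more biased) boundary.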