Decision Trees

  • Introduction

A decision tree is a predictive model: it repeatedly splits coordinate data to find the decision boundaries, and the sequence of splits is drawn as a tree.

In machine learning, a decision tree learning algorithm uses the computer to find those decision boundaries automatically from the data.

Each split represents one decision, and many decisions together form the decision tree. By stacking up a sequence of simple linear splits, a decision tree can turn simple linear decision surfaces into a non-linear decision surface.

 

  • Basic Idea

A tree is a structure made up of nodes and edges. The key terms are: root node, parent node, child node, and leaf node.

Parent and child nodes are relative: a child node is produced by splitting its parent according to some rule, and the child then acts as a new parent and continues to split until no further split is possible. The root node is the node without a parent, i.e., the initial split node; a leaf node is a node without children, as shown in the figure below:

[Figure: a tree structure illustrating root, parent, child, and leaf nodes]

A decision tree makes decisions with a tree structure like the one above: every non-leaf node holds a test condition, and every leaf node holds a conclusion. Starting from the root node, a conclusion is reached after a series of tests.

 

An example

As shown in the figure, we use a decision tree to separate two classes of sample points.

Looking along the X axis first, the sample points change abruptly at X = 3, so we take X = 3 as the first decision and make one split:

Looking along the Y axis next, the two classes can be separated at Y = 4 and Y = 2, which gives two more splits:

After these splits the sample points fall into four regions, two per class, and nothing can be split any further. This splitting process is the decision tree:

[Figure: the four regions produced by the splits and the corresponding decision tree]
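To make these decisions concrete, here is a minimal hand-written sketch of the same three splits. The function name classify_point and the labels "A"/"B" are illustrative, since the actual class assignment of each region depends on the figure:

def classify_point(x, y):
    # first decision: split on X = 3
    if x < 3:
        # second decision (left half): split on Y = 4
        return "A" if y < 4 else "B"
    else:
        # third decision (right half): split on Y = 2
        return "B" if y < 2 else "A"

Stacking these three linear tests is exactly what produces the non-linear, rectangular decision regions described above.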

 

 

  • Entropy

What entropy is for: it controls where the decision tree makes a decision, i.e., under what condition the data gets split.

Definition: entropy is a measure of impurity in a bunch of examples.

Building a decision tree means finding split points on the variables that produce subsets as pure as possible; making a decision with the tree is a recursive repetition of that process.

Entropy describes how mixed the data is: the larger the entropy, the more mixed the data and the lower its purity; conversely, the smaller the entropy, the less mixed the data and the higher its purity. The entropy formula is:

                  H = -\sum_{i} p_i \log_2 p_i

where p_i is the fraction of samples belonging to class i. Take binary classification as an example: if the two classes have equal counts, the node's purity is lowest and the entropy equals 1; if all of a node's data belongs to the same class, the node's purity is highest and the entropy equals 0.

For binary classification (with the base-2 logarithm above), the maximum entropy is 1 and the minimum is 0.
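As a quick check of the formula, here is a minimal sketch in plain Python; the helper name entropy is ours, not part of the course code:

from math import log2
from collections import Counter

def entropy(labels):
    """Base-2 entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

print(entropy([0, 0, 1, 1]))    # evenly mixed binary node -> 1.0
print(entropy([1, 1, 1, 1]))    # pure node -> 0.0 (may print as -0.0)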

 

  • Information Gain

Information gain measures how the data's complexity changes between the node before the split and the child nodes after it. The formula is:

                  \text{Gain} = H(\text{parent}) - \sum_{k} \frac{N_k}{N} H(\text{child}_k)

Here Gain is the information gain of the split: the entropy before the split minus the size-weighted sum of the child nodes' entropies. The larger the gain, the more the split reduces the entropy and the more effective the classification.
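Building on the entropy helper sketched above, a minimal illustrative information-gain function for one candidate split; the size-weighting of the children follows the standard ID3 definition:

def information_gain(parent_labels, children_labels):
    """Entropy of the parent minus the size-weighted entropy of its children."""
    n = len(parent_labels)
    weighted = sum(len(ch) / n * entropy(ch) for ch in children_labels)
    return entropy(parent_labels) - weighted

# a perfect split of an evenly mixed binary node gains the full 1 bit
print(information_gain([0, 0, 1, 1], [[0, 0], [1, 1]]))    # -> 1.0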

 

  • Bias and Variance

A high-bias machine learning algorithm practically ignores the training data: it has almost no capacity to learn anything from it. That is bias.

The other extreme is high variance: the model can only reproduce what it has seen before, and it reacts very poorly to situations that never appeared in the training data.

The crux of machine learning is tuning parameters to balance bias against variance, so that the algorithm has some ability to generalize while still staying open to the training data and adjusting the model to fit it.

 

  • Code Implementation

Environment: macOS Mojave 10.14.3

Python 3.7.0

Library: scikit-learn 0.19.2

 

Official sklearn.tree documentation: https://scikit-learn.org/stable/modules/tree.html

>>> from sklearn import tree
>>> X = [[0, 0], [1, 1]]    # two sample points
>>> Y = [0, 1]              # their class labels
>>> clf = tree.DecisionTreeClassifier()    # build the classifier
>>> clf = clf.fit(X, Y)
>>> clf.predict([[2., 2.]])    # predict a new point
array([1])                     # the new point is classified as label 1

 

Main.py  main program

import sys
from class_vis import prettyPicture, output_image
from prep_terrain_data import makeTerrainData

import matplotlib.pyplot as plt
import numpy as np
import pylab as pl
from classifyDT import classify

features_train, labels_train, features_test, labels_test = makeTerrainData()



### the classify() function in classifyDT is where the magic
### happens--fill in this function in the file 'classifyDT.py'!
clf = classify(features_train, labels_train)



#### grader code, do not modify below this line

prettyPicture(clf, features_test, labels_test)
accuracy = clf.score(features_test, labels_test)

# output_image("test.png", "png", open("test.png", "rb").read())
print (accuracy)
acc = accuracy    ### you fill this in!

 

classifyDT.py  decision tree classification

def classify(features_train, labels_train):
    
    ### your code goes here--should return a trained decision tree classifier
    from sklearn.tree import DecisionTreeClassifier
    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(features_train, labels_train)
    
    
    return clf

 

prep_terrain_data.py  generates the training points

import random


def makeTerrainData(n_points=1000):
###############################################################################
### make the toy dataset
    random.seed(42)
    grade = [random.random() for ii in range(0,n_points)]
    bumpy = [random.random() for ii in range(0,n_points)]
    error = [random.random() for ii in range(0,n_points)]
    y = [round(grade[ii]*bumpy[ii]+0.3+0.1*error[ii]) for ii in range(0,n_points)]
    for ii in range(0, len(y)):
        if grade[ii]>0.8 or bumpy[ii]>0.8:
            y[ii] = 1.0

### split into train/test sets
    X = [[gg, ss] for gg, ss in zip(grade, bumpy)]
    split = int(0.75*n_points)
    X_train = X[0:split]
    X_test  = X[split:]
    y_train = y[0:split]
    y_test  = y[split:]

    grade_sig = [X_train[ii][0] for ii in range(0, len(X_train)) if y_train[ii]==0]
    bumpy_sig = [X_train[ii][1] for ii in range(0, len(X_train)) if y_train[ii]==0]
    grade_bkg = [X_train[ii][0] for ii in range(0, len(X_train)) if y_train[ii]==1]
    bumpy_bkg = [X_train[ii][1] for ii in range(0, len(X_train)) if y_train[ii]==1]

#    training_data = {"fast":{"grade":grade_sig, "bumpiness":bumpy_sig}
#            , "slow":{"grade":grade_bkg, "bumpiness":bumpy_bkg}}


    grade_sig = [X_test[ii][0] for ii in range(0, len(X_test)) if y_test[ii]==0]
    bumpy_sig = [X_test[ii][1] for ii in range(0, len(X_test)) if y_test[ii]==0]
    grade_bkg = [X_test[ii][0] for ii in range(0, len(X_test)) if y_test[ii]==1]
    bumpy_bkg = [X_test[ii][1] for ii in range(0, len(X_test)) if y_test[ii]==1]

    test_data = {"fast":{"grade":grade_sig, "bumpiness":bumpy_sig}
            , "slow":{"grade":grade_bkg, "bumpiness":bumpy_bkg}}

    return X_train, y_train, X_test, y_test
#    return training_data, test_data

 

class_vis.py  plotting and saving the figure

import warnings
warnings.filterwarnings("ignore")

import matplotlib 
matplotlib.use('agg')

import matplotlib.pyplot as plt
import pylab as pl
import numpy as np

#import numpy as np
#import matplotlib.pyplot as plt
#plt.ioff()

def prettyPicture(clf, X_test, y_test):
    x_min = 0.0; x_max = 1.0
    y_min = 0.0; y_max = 1.0

    # Plot the decision boundary. For that, we will assign a color to each
    # point in the mesh [x_min, x_max] x [y_min, y_max].
    h = .01  # step size in the mesh
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])

    # Put the result into a color plot
    Z = Z.reshape(xx.shape)
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())

    plt.pcolormesh(xx, yy, Z, cmap=pl.cm.seismic)

    # Plot also the test points
    grade_sig = [X_test[ii][0] for ii in range(0, len(X_test)) if y_test[ii]==0]
    bumpy_sig = [X_test[ii][1] for ii in range(0, len(X_test)) if y_test[ii]==0]
    grade_bkg = [X_test[ii][0] for ii in range(0, len(X_test)) if y_test[ii]==1]
    bumpy_bkg = [X_test[ii][1] for ii in range(0, len(X_test)) if y_test[ii]==1]

    plt.scatter(grade_sig, bumpy_sig, color = "b", label="fast")
    plt.scatter(grade_bkg, bumpy_bkg, color = "r", label="slow")
    plt.legend()
    plt.xlabel("bumpiness")
    plt.ylabel("grade")

    plt.savefig("test.png")

 

The resulting accuracy is 90.8%.

The long, narrow regions in the plot are signs of overfitting.

 

  • Decision Tree Parameters

min_samples_split is the lower limit on the number of samples a node must contain before it can be split; the default is 2.

For every node at the bottom layer of the tree, min_samples_split decides whether splitting should continue: it is the minimum number of samples a node needs in order to be split further.

acc_min_samples.py  comparing min_samples_split values

import sys
from class_vis import prettyPicture
from prep_terrain_data import makeTerrainData

import matplotlib.pyplot as plt
import numpy as np
import pylab as pl

features_train, labels_train, features_test, labels_test = makeTerrainData()



########################## DECISION TREE #################################


### your code goes here--now create 2 decision tree classifiers,
### one with min_samples_split=2 and one with min_samples_split=50
### compute the accuracies on the testing data and store
### the accuracy numbers to acc_min_samples_split_2 and
### acc_min_samples_split_50, respectively


from sklearn.tree import DecisionTreeClassifier
clf1 = DecisionTreeClassifier(min_samples_split=2)
clf2 = DecisionTreeClassifier(min_samples_split=50)

clf1.fit(features_train, labels_train)
clf2.fit(features_train, labels_train)

acc_min_samples_split_2 = clf1.score(features_test, labels_test)
acc_min_samples_split_50 = clf2.score(features_test, labels_test)


print (acc_min_samples_split_2)
print (acc_min_samples_split_50)

# choose one of the two
prettyPicture(clf1, features_test, labels_test)
# prettyPicture(clf2, features_test, labels_test)

 

 

The figures above use min_samples_split = 2 and min_samples_split = 50, respectively.

The resulting accuracies are 90.8% and 91.2%, respectively.

 

  • Strengths and Weaknesses of Decision Trees

Easy to use and easy to interpret.

Prone to overfitting, especially on data with a large number of features: a complex decision tree can overfit the data, so careful parameter tuning is needed to avoid it (a tree whose nodes each hold a single data point is almost certainly overfitted).
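As a sketch of the kind of tuning meant here: min_samples_split, min_samples_leaf, and max_depth are real DecisionTreeClassifier parameters, but the specific values below are illustrative, not tuned for this dataset:

from sklearn.tree import DecisionTreeClassifier

# keep nodes and leaves from shrinking down to single data points
clf = DecisionTreeClassifier(min_samples_split=50,   # do not split small nodes
                             min_samples_leaf=10,    # every leaf keeps >= 10 samples
                             max_depth=5)            # cap the number of stacked decisions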
