Implementing a CART Decision Tree in Python


Preface

  The CART algorithm, short for Classification And Regression Tree, uses the Gini index as its splitting criterion (always choosing the feature and split with the smallest Gini index), and is a practical classification algorithm.


1. The CART Decision Tree Algorithm

  The main idea is to pick several attributes of a dataset as features. For each feature we propose a split condition and use it to divide a node into two child nodes; each child node is split again on another feature, until a node's Gini value meets the requirement, at which point its impurity is small enough and we consider it successfully classified. Repeating this procedure yields a decision tree built from a number of nodes, in which every leaf node is a classification result.

  The Gini value of a node is computed as

$$Gini(D) = 1 - \sum_{k=1}^{K} p_k^2$$

where $p_k$ is the fraction of the node's samples that belong to class $k$. To compute the Gini value of a split, take the weighted average over the two child nodes it produces:

$$Gini(D, A) = \frac{|D_1|}{|D|}\,Gini(D_1) + \frac{|D_2|}{|D|}\,Gini(D_2)$$
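
As a quick numeric check: a node holding 2, 3 and 5 samples of the three classes has

$$Gini = 1 - 0.2^2 - 0.3^2 - 0.5^2 = 0.62$$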
  With the Gini computation and the basic idea of the tree clear, we can move on to the concrete implementation. This article does not use the sklearn library; if you only want to use the algorithm and do not care how it is actually implemented, you can stop reading here.

2. Python Implementation

The implementation breaks down into six main steps; a compact, self-contained sketch of the whole recursion follows the list:

  1. Find the best attribute to split on.
  2. Create the decision tree node.
  3. Split the parent node, computing the Gini value of the left and right child nodes.
  4. One way to compute the Gini value of a split: sort the dataset by the attribute's values, take the midpoint of each pair of adjacent values in turn as the split condition, and compute the Gini value under that split; after the full scan, the condition with the smallest Gini value becomes the attribute's best split.
  5. If a child node's Gini value is below the threshold, treat it as a leaf and stop splitting in that direction. If it is above the threshold, the node is not pure enough and must be split again, using a different attribute next time.
  6. Recurse to obtain the complete decision tree.
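
Before the real implementation, here is a compact, self-contained sketch of these six steps on toy data. The row layout (tuples ending in the class label), the helper names and the 0.1 Gini threshold are illustrative assumptions; the article's actual functions follow below.

from collections import Counter

def gini(rows):
    # Gini value of a list of rows whose last element is the class label
    n = len(rows)
    return 1.0 - sum((k / n) ** 2 for k in Counter(r[-1] for r in rows).values())

def build(rows, cols, limit=0.1):
    # recursively split `rows` on the columns in `cols` until a branch's Gini <= limit
    best = None                                        # (weighted Gini, column, threshold, left, right)
    for c in cols:                                     # steps 1 and 4: try every midpoint of every column
        vals = sorted(set(r[c] for r in rows))
        for a, b in zip(vals, vals[1:]):
            t = (a + b) / 2.0
            left = [r for r in rows if r[c] <= t]
            right = [r for r in rows if r[c] > t]
            g = len(left) / len(rows) * gini(left) + len(right) / len(rows) * gini(right)
            if best is None or g < best[0]:
                best = (g, c, t, left, right)
    if best is None:                                   # no usable split: fall back to the majority class
        return Counter(r[-1] for r in rows).most_common(1)[0][0]
    g, c, t, left, right = best                        # step 3: split on the best condition found
    node = {}
    for side, subset in (('Y', left), ('N', right)):
        rest = [x for x in cols if x != c]
        if gini(subset) <= limit or not rest:          # step 5: pure enough (or out of columns) -> leaf
            node[side] = Counter(r[-1] for r in subset).most_common(1)[0][0]
        else:
            node[side] = build(subset, rest, limit)    # step 6: recurse
    return {'%d<=%.3f' % (c, t): node}

rows = [(1.0, 'a'), (2.0, 'a'), (9.0, 'b'), (10.0, 'b')]
print(build(rows, cols=[0]))   # {'0<=5.500': {'Y': 'a', 'N': 'b'}}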

Five functions do most of the work:

  • calcGini(dataSet)   # compute a node's Gini value
  • splitDataSet(dataSet, n, value, type)  # split the dataset on a given condition
  • FindBestFeature(dataSet)  # choose the best feature to split the dataset on, i.e. return the best feature's index and the Gini index of each column of the input dataset
  • createTree(dataSet, features, decisionTree)  # build the decision tree. Input: training set D and feature set A; output: decision tree T
  • testTree(dataSet)  # run the test data through the tree and report a confusion matrix

1. Computing a node's Gini value

def calcGini(dataSet):
    numTotal = dataSet.shape[0]            # total number of rows in this dataset
    length = len(dataSet[0])               # total number of columns; the class label sits in the last one
    frequent_0 = 0.0                       # occurrence counts of the three classes
    frequent_1 = 0.0
    frequent_2 = 0.0
    for i in range(0, numTotal):           # labels are strings because the whole array was read in as text
        if dataSet[i][length-1] == '0.0':
            frequent_0 += 1
        elif dataSet[i][length-1] == '1.0':
            frequent_1 += 1
        elif dataSet[i][length-1] == '2.0':
            frequent_2 += 1
    gini = 1 - (frequent_0/numTotal)**2 - (frequent_1/numTotal)**2 - (frequent_2/numTotal)**2
    return gini
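
A quick sanity check of calcGini (a hypothetical toy array; the class label is the string in the last column):

toy = np.array([['a', '0.0'], ['b', '0.0'], ['c', '1.0'], ['d', '2.0']])
print(calcGini(toy))   # 1 - (2/4)**2 - (1/4)**2 - (1/4)**2 = 0.625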

2. Splitting the dataset

def splitDataSet(dataSet, n, value, type):
    subDataSet = []
    numTotal = dataSet.shape[0]            # total number of rows in this dataset
    if type == 1:                          # type==1 keeps the rows where column n <= value
        for i in range(0, numTotal):
            if float(dataSet[i][n]) <= value:
                subDataSet.append(dataSet[i])
    elif type == 2:                        # type==2 keeps the rows where column n > value
        for i in range(0, numTotal):
            if float(dataSet[i][n]) > value:
                subDataSet.append(dataSet[i])
    subDataSet = np.array(subDataSet)      # convert back to an ndarray

    return subDataSet, len(subDataSet)
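
Continuing with a hypothetical array: the two type values return the two sides of the same split.

rows = np.array([['1', '0.2', '0.0'], ['2', '0.8', '1.0'], ['3', '0.5', '0.0']])
left, leftLen = splitDataSet(rows, 1, 0.5, 1)    # the rows whose column 1 <= 0.5 (two of them)
right, rightLen = splitDataSet(rows, 1, 0.5, 2)  # the row whose column 1 > 0.5 (one)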

3. Choosing the best feature

def FindBestFeature(dataSet):
    numTotal = dataSet.shape[0]            # total number of rows in this dataset
    numFeatures = len(dataSet[0]) - 2      # number of feature columns (column 0 and the final label column are excluded)
    bestFeature = -1                       # will hold the best feature column
    columnFeaGini = {}                     # best Gini(D, A) found for each feature column
    for i in range(1, numFeatures+1):      # iterate over the x feature columns; i is the feature index
        featList = list(dataSet[:, i])     # all values in this column, as a list
        featListSort = [float(x) for x in featList]
        featListSort.sort()                # sort the feature values
        FeaGinis = []
        FeaGiniv = []
        for j in range(0, len(featListSort)-1):    # j indexes adjacent pairs of sorted values
            value = (featListSort[j]+featListSort[j+1])/2  # candidate threshold: midpoint of the adjacent pair
            subDataSet1, sublen1 = splitDataSet(dataSet, i, value, 1)  # rows with feature i <= value
            subDataSet2, sublen2 = splitDataSet(dataSet, i, value, 2)  # rows with feature i > value
            feaGini = (sublen1/numTotal) * calcGini(subDataSet1) + (sublen2/numTotal) * calcGini(subDataSet2)  # weighted Gini of this split
            FeaGinis.append(feaGini)       # Gini values of all candidate splits for this feature
            FeaGiniv.append(value)         # the corresponding thresholds

        columnFeaGini['%d_%f' % (i, FeaGiniv[FeaGinis.index(min(FeaGinis))])] = min(FeaGinis)    # keep this feature's smallest Gini value
    bestFeature = min(columnFeaGini, key=columnFeaGini.get)  # the 'column_threshold' key with the smallest Gini index
    return bestFeature, columnFeaGini
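
The returned key encodes both the column index and the threshold; createTree below recovers them with split('_'). For example (the concrete key value is illustrative):

best, allGinis = FindBestFeature(trainingDataSet)
print(best)                        # e.g. '4_0.001444'
col = int(best.split('_')[0])      # feature column 4
thr = float(best.split('_')[1])    # threshold 0.001444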

4. Building the decision tree

def createTree(dataSet, features, decisionTree):

    if len(features) > 2:           # features not yet used up (column 0 and the label always remain)
        bestFeature, columnFeaGini = FindBestFeature(dataSet)
        bestFeatureLabel = features[int(bestFeature.split('_')[0])]  # name of the best feature
        NodeName = bestFeatureLabel + '\n' + '<=' + bestFeature.split('_')[1]    # node caption
        decisionTree = {NodeName: {}}   # grow the tree, with bestFeature (smallest Gini index) as this node
    else:
        return decisionTree

    LeftSet, LeftSet_len = splitDataSet(dataSet, int(bestFeature.split('_')[0]), float(bestFeature.split('_')[1]), 1)
    RightSet, RightSet_len = splitDataSet(dataSet, int(bestFeature.split('_')[0]), float(bestFeature.split('_')[1]), 2)
    del features[int(bestFeature.split('_')[0])]          # this feature has been consumed by the node; drop it before building the subtrees

    if calcGini(LeftSet) <= 0.1 or len(features) == 2:    # pure enough, or no features left: make a leaf
        L_lables_grp = dict(Counter(LeftSet[:, -1]))
        L_leaf = max(L_lables_grp, key=L_lables_grp.get)  # the majority class after the split becomes the leaf's label
        decisionTree[NodeName]['Y'] = L_leaf              # value of the left (Y) leaf
    elif calcGini(LeftSet) > 0.1:
        dataSetNew = np.delete(LeftSet, int(bestFeature.split('_')[0]), axis=1)  # drop the used x column; keep splitting on the remaining ones
        L_subFeatures = features[:]
        decisionTree[NodeName]['Y'] = createTree(dataSetNew, L_subFeatures, {})   # recurse to build the left subtree

    if calcGini(RightSet) <= 0.1 or len(features) == 2:   # pure enough, or no features left: make a leaf
        R_lables_grp = dict(Counter(RightSet[:, -1]))
        R_leaf = max(R_lables_grp, key=R_lables_grp.get)  # the majority class after the split becomes the leaf's label
        decisionTree[NodeName]['N'] = R_leaf              # value of the right (N) leaf
    elif calcGini(RightSet) > 0.1:
        dataSetNew = np.delete(RightSet, int(bestFeature.split('_')[0]), axis=1)  # drop the used x column; keep splitting on the remaining ones
        R_subFeatures = features[:]
        decisionTree[NodeName]['N'] = createTree(dataSetNew, R_subFeatures, {})  # recurse to build the right subtree

    return decisionTree
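
The returned nested dict can also be used directly for prediction. Below is a sketch of such a walker (a hypothetical helper, not part of the article's code; it assumes the 'name\n<=threshold' caption format built above):

def classify(tree, featureNames, row):
    if not isinstance(tree, dict):                 # leaves are plain label strings such as '0.0'
        return tree
    caption = list(tree.keys())[0]
    name, cond = caption.split('\n')               # caption format: "<feature name>\n<=<threshold>"
    threshold = float(cond[2:])                    # strip the leading '<='
    col = featureNames.index(name)                 # map the feature name back to its column
    branch = 'Y' if float(row[col]) <= threshold else 'N'
    return classify(tree[caption][branch], featureNames, row)

featureNames must be the original, full header row (a copy of list(trainingData[0]) taken before createTree mutates it), since the lookup goes by feature name rather than by the shifted column indices.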

5. Testing the decision tree

def testTree(dataSet):
    numTotal = dataSet.shape[0]  # total number of rows in this dataset
    testmemory = []
    label = dataSet[:,-1]
    TP = 0
    FP = 0
    TN = 0
    FN = 0
    for i in range(0,numTotal):                            # walk the trained tree (from Section 3), hard-coded
        if float(dataSet[i][4]) <= 0.001444:               # standard deviation
            if float(dataSet[i][1]) <= 0.01022:            # mean
                if float(dataSet[i][6]) <= -0.589019:      # kurtosis
                    testmemory.append('0.0')
                else:
                    if float(dataSet[i][3]) <= -0.001811:          # quartile deviation
                        if float(dataSet[i][2]) <= -0.000026:      # median
                            testmemory.append('0.0')
                        else:
                            testmemory.append('2.0')
                    else:
                        if float(dataSet[i][2]) <= 0.007687:       # median
                            if float(dataSet[i][5]) <= 0.452516:   # skewness
                                testmemory.append('0.0')
                            else:
                                testmemory.append('0.0')
                        else:
                            testmemory.append('2.0')
            else:
                testmemory.append('2.0')
        else:
            if float(dataSet[i][3]) <= -0.013691:              # quartile deviation
                testmemory.append('1.0')
            else:
                if float(dataSet[i][5]) <= 1.462280:           # skewness
                    if float(dataSet[i][6]) <= -1.034223:      # kurtosis
                        if float(dataSet[i][1]) <= 0.009173:   # mean
                            if float(dataSet[i][2]) <= -0.004193:  # median
                                testmemory.append('2.0')
                            else:
                                testmemory.append('2.0')
                        else:
                            testmemory.append('0.0')
                    else:
                        testmemory.append('2.0')
                else:
                    if float(dataSet[i][1]) <= -0.023631:      # mean
                        testmemory.append('2.0')
                    else:
                        testmemory.append('1.0')

    for i in range(0, numTotal):      # treat class '1.0' as positive and everything else as negative
        if (testmemory[i] == '1.0') and (label[i] == '1.0'):
            TP += 1
        elif (testmemory[i] == '1.0') and (label[i] != '1.0'):
            FP += 1
        elif (testmemory[i] != '1.0') and (label[i] != '1.0'):
            TN += 1
        elif (testmemory[i] != '1.0') and (label[i] == '1.0'):
            FN += 1

    print('TP:%d' % TP)    # true positives
    print('FP:%d' % FP)    # false positives
    print('TN:%d' % TN)    # true negatives
    print('FN:%d' % FN)    # false negatives

    cm = confusion_matrix(label, testmemory, labels=["0.0", "1.0", "2.0"])  # confusion_matrix comes from sklearn.metrics
    plt.rc('figure', figsize=(5, 5))
    plt.matshow(cm, cmap=plt.cm.cool)  # background colormap
    plt.colorbar()  # color scale
    # annotate every cell with its count
    for x in range(len(cm)):
        for y in range(len(cm)):
            plt.annotate(cm[x, y], xy=(y, x), horizontalalignment='center', verticalalignment='center')
    plt.ylabel('True Label')
    plt.xlabel('Predicted Label')
    plt.title('decision_tree')
    plt.savefig(r'confusion_matrix')

6. Visualizing the decision tree

The visualization code is lifted almost verbatim from Chapter 3 of Machine Learning in Action.

matplotlib.rcParams['font.family']='SimHei'  # render Chinese characters correctly
plt.rcParams['axes.unicode_minus']=False  # render minus signs correctly

decisionNode = dict(boxstyle="sawtooth", fc="0.8")
leafNode = dict(boxstyle="round4", fc="0.8")
arrow_args = dict(arrowstyle="<-")

def getNumLeafs(myTree):
    numLeafs = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':  # a dict child is an internal node; anything else is a leaf
            numLeafs += getNumLeafs(secondDict[key])
        else:
            numLeafs += 1
    return numLeafs

def getTreeDepth(myTree):
    maxDepth = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':  # a dict child is an internal node; anything else is a leaf
            thisDepth = 1 + getTreeDepth(secondDict[key])
        else:
            thisDepth = 1
        if thisDepth > maxDepth: maxDepth = thisDepth
    return maxDepth

def plotNode(nodeTxt, centerPt, parentPt, nodeType):
    createPlot.ax1.annotate(nodeTxt, xy=parentPt, xycoords='axes fraction',
                            xytext=centerPt, textcoords='axes fraction',
                            va="center", ha="center", bbox=nodeType, arrowprops=arrow_args)

def plotMidText(cntrPt, parentPt, txtString):
    xMid = (parentPt[0] - cntrPt[0]) / 2.0 + cntrPt[0]
    yMid = (parentPt[1] - cntrPt[1]) / 2.0 + cntrPt[1]
    createPlot.ax1.text(xMid, yMid, txtString, va="center", ha="center", rotation=30)

def plotTree(myTree, parentPt, nodeTxt):  # the first key tells you which feature was split on
    numLeafs = getNumLeafs(myTree)  # this determines the x width of this tree
    # depth = getTreeDepth(myTree)
    firstStr = list(myTree.keys())[0]  # the text label for this node
    cntrPt = (plotTree.xOff + (1.0 + float(numLeafs)) / 2.0 / plotTree.totalW, plotTree.yOff)
    plotMidText(cntrPt, parentPt, nodeTxt)
    plotNode(firstStr, cntrPt, parentPt, decisionNode)
    secondDict = myTree[firstStr]
    plotTree.yOff = plotTree.yOff - 1.0 / plotTree.totalD
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':  # a dict child is an internal node: recurse into it
            plotTree(secondDict[key], cntrPt, str(key))
        else:  # it's a leaf node: draw it
            plotTree.xOff = plotTree.xOff + 1.0 / plotTree.totalW
            plotNode(secondDict[key], (plotTree.xOff, plotTree.yOff), cntrPt, leafNode)
            plotMidText((plotTree.xOff, plotTree.yOff), cntrPt, str(key))
    plotTree.yOff = plotTree.yOff + 1.0 / plotTree.totalD

def createPlot(myTree):
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)  # no ticks
    # createPlot.ax1 = plt.subplot(111, frameon=False)  # ticks for demo purposes
    plotTree.totalW = float(getNumLeafs(myTree))
    plotTree.totalD = float(getTreeDepth(myTree))
    plotTree.xOff = -0.5 / plotTree.totalW
    plotTree.yOff = 1.0
    plotTree(myTree, (0.5, 1.0), '')
    plt.show()

7. Main program

trainingData, testingData = read_xslx(r'e:/Table/機器學習/1109/attribute_113.xlsx')
features = list(trainingData[0])          # the header row of x, i.e. the feature names
trainingDataSet = trainingData[1:]        # training set

bestFeature, columnFeaGini = FindBestFeature(trainingDataSet)
decisionTree = {}
decisiontree = createTree(trainingDataSet, features, decisionTree)  # build the decision tree (a CART classification tree)
print('CART classification tree:\n', decisiontree)
testTree(testingData)
createPlot(decisiontree)

Full code for the CART classification tree

# -*- coding: utf-8 -*-   # so that Chinese characters in the file are handled correctly
#########################################################################

""" Created on Mon Nov 16 21:26:00 2020 @author: ixobgenw 代碼功能描述: (1)計算結點GINI值 (2)分離數據集 (3)選擇最好的特徵 (4)生成決策樹 (5)測試決策樹 """
#####################################################################

import xlrd
import numpy as np
from collections import Counter
import matplotlib.pyplot as plt
import matplotlib
from sklearn.metrics import confusion_matrix   # used by testTree


# visualization code
####################################################################################################################
matplotlib.rcParams['font.family']='SimHei'  # render Chinese characters correctly
plt.rcParams['axes.unicode_minus']=False  # render minus signs correctly

decisionNode = dict(boxstyle="sawtooth", fc="0.8")
leafNode = dict(boxstyle="round4", fc="0.8")
arrow_args = dict(arrowstyle="<-")

def getNumLeafs(myTree):
    numLeafs = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':  # a dict child is an internal node; anything else is a leaf
            numLeafs += getNumLeafs(secondDict[key])
        else:
            numLeafs += 1
    return numLeafs

def getTreeDepth(myTree):
    maxDepth = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':  # a dict child is an internal node; anything else is a leaf
            thisDepth = 1 + getTreeDepth(secondDict[key])
        else:
            thisDepth = 1
        if thisDepth > maxDepth: maxDepth = thisDepth
    return maxDepth

def plotNode(nodeTxt, centerPt, parentPt, nodeType):
    createPlot.ax1.annotate(nodeTxt, xy=parentPt, xycoords='axes fraction',
                            xytext=centerPt, textcoords='axes fraction',
                            va="center", ha="center", bbox=nodeType, arrowprops=arrow_args)

def plotMidText(cntrPt, parentPt, txtString):
    xMid = (parentPt[0] - cntrPt[0]) / 2.0 + cntrPt[0]
    yMid = (parentPt[1] - cntrPt[1]) / 2.0 + cntrPt[1]
    createPlot.ax1.text(xMid, yMid, txtString, va="center", ha="center", rotation=30)

def plotTree(myTree, parentPt, nodeTxt):  # the first key tells you which feature was split on
    numLeafs = getNumLeafs(myTree)  # this determines the x width of this tree
    # depth = getTreeDepth(myTree)
    firstStr = list(myTree.keys())[0]  # the text label for this node
    cntrPt = (plotTree.xOff + (1.0 + float(numLeafs)) / 2.0 / plotTree.totalW, plotTree.yOff)
    plotMidText(cntrPt, parentPt, nodeTxt)
    plotNode(firstStr, cntrPt, parentPt, decisionNode)
    secondDict = myTree[firstStr]
    plotTree.yOff = plotTree.yOff - 1.0 / plotTree.totalD
    for key in secondDict.keys():
        if type(secondDict[key]).__name__ == 'dict':  # a dict child is an internal node: recurse into it
            plotTree(secondDict[key], cntrPt, str(key))
        else:  # it's a leaf node: draw it
            plotTree.xOff = plotTree.xOff + 1.0 / plotTree.totalW
            plotNode(secondDict[key], (plotTree.xOff, plotTree.yOff), cntrPt, leafNode)
            plotMidText((plotTree.xOff, plotTree.yOff), cntrPt, str(key))
    plotTree.yOff = plotTree.yOff + 1.0 / plotTree.totalD

def createPlot(myTree):
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)  # no ticks
    # createPlot.ax1 = plt.subplot(111, frameon=False)  # ticks for demo purposes
    plotTree.totalW = float(getNumLeafs(myTree))
    plotTree.totalD = float(getTreeDepth(myTree))
    plotTree.xOff = -0.5 / plotTree.totalW
    plotTree.yOff = 1.0
    plotTree(myTree, (0.5, 1.0), '')
    plt.show()
####################################################################################################################

# read the Excel file: the first 70% of rows become the training set, the last 30% the test set
####################################################################################################################
def read_xslx(xslx_path):

    trainingdata = []                      # start with empty lists
    testingdata = []
    data = xlrd.open_workbook(xslx_path)   # open the workbook
    table = data.sheet_by_index(0)         # get a sheet by index; 0 is the first sheet

    for i in range(int(0.7*table.nrows)):  # table.nrows is the total number of rows
        line = table.row_values(i)         # read one row into the list `line`
        trainingdata.append(line)          # append it to trainingdata, a 2-D list
    trainingdata = np.array(trainingdata)  # turn the 2-D list into an ndarray

    for i in range(int(0.7*table.nrows), int(table.nrows)):  # the remaining 30% of the rows
        line = table.row_values(i)         # read one row into the list `line`
        testingdata.append(line)           # append it to testingdata, a 2-D list
    testingdata = np.array(testingdata)    # turn the 2-D list into an ndarray

    return trainingdata, testingdata
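
# NOTE (environment assumption): xlrd 2.x removed .xlsx support, so this loader
# requires xlrd < 2.0; on a newer setup, pandas.read_excel(xslx_path), backed by
# openpyxl, is an alternative way to load the sheet.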
####################################################################################################################

# compute a node's Gini value
####################################################################################################################
def calcGini(dataSet):
    numTotal = dataSet.shape[0]            # total number of rows in this dataset
    length = len(dataSet[0])               # total number of columns; the class label sits in the last one
    frequent_0 = 0.0                       # occurrence counts of the three classes
    frequent_1 = 0.0
    frequent_2 = 0.0
    for i in range(0, numTotal):           # labels are strings because the whole array was read in as text
        if dataSet[i][length-1] == '0.0':
            frequent_0 += 1
        elif dataSet[i][length-1] == '1.0':
            frequent_1 += 1
        elif dataSet[i][length-1] == '2.0':
            frequent_2 += 1
    gini = 1 - (frequent_0/numTotal)**2 - (frequent_1/numTotal)**2 - (frequent_2/numTotal)**2
    return gini
####################################################################################################################

# split the dataset on a given condition
####################################################################################################################
def splitDataSet(dataSet, n, value, type):
    subDataSet = []
    numTotal = dataSet.shape[0]            # total number of rows in this dataset
    if type == 1:                          # type==1 keeps the rows where column n <= value
        for i in range(0, numTotal):
            if float(dataSet[i][n]) <= value:
                subDataSet.append(dataSet[i])
    elif type == 2:                        # type==2 keeps the rows where column n > value
        for i in range(0, numTotal):
            if float(dataSet[i][n]) > value:
                subDataSet.append(dataSet[i])
    subDataSet = np.array(subDataSet)      # convert back to an ndarray

    return subDataSet, len(subDataSet)
#################################################################################################################### 

# choose the best feature to split the dataset on, i.e. return the best feature's index and the Gini index of each column of the input dataset
####################################################################################################################
def FindBestFeature(dataSet):
    numTotal = dataSet.shape[0]            # total number of rows in this dataset
    numFeatures = len(dataSet[0]) - 2      # number of feature columns (column 0 and the final label column are excluded)
    bestFeature = -1                       # will hold the best feature column
    columnFeaGini = {}                     # best Gini(D, A) found for each feature column
    for i in range(1, numFeatures+1):      # iterate over the x feature columns; i is the feature index
        featList = list(dataSet[:, i])     # all values in this column, as a list
        featListSort = [float(x) for x in featList]
        featListSort.sort()                # sort the feature values
        FeaGinis = []
        FeaGiniv = []
        for j in range(0, len(featListSort)-1):    # j indexes adjacent pairs of sorted values
            value = (featListSort[j]+featListSort[j+1])/2  # candidate threshold: midpoint of the adjacent pair
            subDataSet1, sublen1 = splitDataSet(dataSet, i, value, 1)  # rows with feature i <= value
            subDataSet2, sublen2 = splitDataSet(dataSet, i, value, 2)  # rows with feature i > value
            feaGini = (sublen1/numTotal) * calcGini(subDataSet1) + (sublen2/numTotal) * calcGini(subDataSet2)  # weighted Gini of this split
            FeaGinis.append(feaGini)       # Gini values of all candidate splits for this feature
            FeaGiniv.append(value)         # the corresponding thresholds

        columnFeaGini['%d_%f' % (i, FeaGiniv[FeaGinis.index(min(FeaGinis))])] = min(FeaGinis)    # keep this feature's smallest Gini value
    bestFeature = min(columnFeaGini, key=columnFeaGini.get)  # the 'column_threshold' key with the smallest Gini index
    return bestFeature, columnFeaGini
####################################################################################################################

# build the decision tree. Input: training set D and feature set A; output: decision tree T
####################################################################################################################
def createTree(dataSet, features, decisionTree):

    if len(features) > 2:           # features not yet used up (column 0 and the label always remain)
        bestFeature, columnFeaGini = FindBestFeature(dataSet)
        bestFeatureLabel = features[int(bestFeature.split('_')[0])]  # name of the best feature
        NodeName = bestFeatureLabel + '\n' + '<=' + bestFeature.split('_')[1]    # node caption
        decisionTree = {NodeName: {}}   # grow the tree, with bestFeature (smallest Gini index) as this node
    else:
        return decisionTree

    LeftSet, LeftSet_len = splitDataSet(dataSet, int(bestFeature.split('_')[0]), float(bestFeature.split('_')[1]), 1)
    RightSet, RightSet_len = splitDataSet(dataSet, int(bestFeature.split('_')[0]), float(bestFeature.split('_')[1]), 2)
    del features[int(bestFeature.split('_')[0])]          # this feature has been consumed by the node; drop it before building the subtrees

    if calcGini(LeftSet) <= 0.1 or len(features) == 2:    # pure enough, or no features left: make a leaf
        L_lables_grp = dict(Counter(LeftSet[:, -1]))
        L_leaf = max(L_lables_grp, key=L_lables_grp.get)  # the majority class after the split becomes the leaf's label
        decisionTree[NodeName]['Y'] = L_leaf              # value of the left (Y) leaf
    elif calcGini(LeftSet) > 0.1:
        dataSetNew = np.delete(LeftSet, int(bestFeature.split('_')[0]), axis=1)  # drop the used x column; keep splitting on the remaining ones
        L_subFeatures = features[:]
        decisionTree[NodeName]['Y'] = createTree(dataSetNew, L_subFeatures, {})   # recurse to build the left subtree

    if calcGini(RightSet) <= 0.1 or len(features) == 2:   # pure enough, or no features left: make a leaf
        R_lables_grp = dict(Counter(RightSet[:, -1]))
        R_leaf = max(R_lables_grp, key=R_lables_grp.get)  # the majority class after the split becomes the leaf's label
        decisionTree[NodeName]['N'] = R_leaf              # value of the right (N) leaf
    elif calcGini(RightSet) > 0.1:
        dataSetNew = np.delete(RightSet, int(bestFeature.split('_')[0]), axis=1)  # drop the used x column; keep splitting on the remaining ones
        R_subFeatures = features[:]
        decisionTree[NodeName]['N'] = createTree(dataSetNew, R_subFeatures, {})  # recurse to build the right subtree

    return decisionTree
####################################################################################################################

# run the test set through the tree and report the results
####################################################################################################################
def testTree(dataSet):
    numTotal = dataSet.shape[0]  # total number of rows in this dataset
    testmemory = []
    label = dataSet[:,-1]
    TP = 0
    FP = 0
    TN = 0
    FN = 0
    for i in range(0,numTotal):                            # walk the trained tree (from Section 3), hard-coded
        if float(dataSet[i][4]) <= 0.001444:               # standard deviation
            if float(dataSet[i][1]) <= 0.01022:            # mean
                if float(dataSet[i][6]) <= -0.589019:      # kurtosis
                    testmemory.append('0.0')
                else:
                    if float(dataSet[i][3]) <= -0.001811:          # quartile deviation
                        if float(dataSet[i][2]) <= -0.000026:      # median
                            testmemory.append('0.0')
                        else:
                            testmemory.append('2.0')
                    else:
                        if float(dataSet[i][2]) <= 0.007687:       # median
                            if float(dataSet[i][5]) <= 0.452516:   # skewness
                                testmemory.append('0.0')
                            else:
                                testmemory.append('0.0')
                        else:
                            testmemory.append('2.0')
            else:
                testmemory.append('2.0')
        else:
            if float(dataSet[i][3]) <= -0.013691:              # quartile deviation
                testmemory.append('1.0')
            else:
                if float(dataSet[i][5]) <= 1.462280:           # skewness
                    if float(dataSet[i][6]) <= -1.034223:      # kurtosis
                        if float(dataSet[i][1]) <= 0.009173:   # mean
                            if float(dataSet[i][2]) <= -0.004193:  # median
                                testmemory.append('2.0')
                            else:
                                testmemory.append('2.0')
                        else:
                            testmemory.append('0.0')
                    else:
                        testmemory.append('2.0')
                else:
                    if float(dataSet[i][1]) <= -0.023631:      # mean
                        testmemory.append('2.0')
                    else:
                        testmemory.append('1.0')

    for i in range(0, numTotal):      # treat class '1.0' as positive and everything else as negative
        if (testmemory[i] == '1.0') and (label[i] == '1.0'):
            TP += 1
        elif (testmemory[i] == '1.0') and (label[i] != '1.0'):
            FP += 1
        elif (testmemory[i] != '1.0') and (label[i] != '1.0'):
            TN += 1
        elif (testmemory[i] != '1.0') and (label[i] == '1.0'):
            FN += 1

    print('TP:%d' % TP)    # true positives
    print('FP:%d' % FP)    # false positives
    print('TN:%d' % TN)    # true negatives
    print('FN:%d' % FN)    # false negatives

    cm = confusion_matrix(label, testmemory, labels=["0.0", "1.0", "2.0"])  # from sklearn.metrics
    plt.rc('figure', figsize=(5, 5))
    plt.matshow(cm, cmap=plt.cm.cool)  # background colormap
    plt.colorbar()  # color scale
    # annotate every cell with its count
    for x in range(len(cm)):
        for y in range(len(cm)):
            plt.annotate(cm[x, y], xy=(y, x), horizontalalignment='center', verticalalignment='center')
    plt.ylabel('True Label')
    plt.xlabel('Predicted Label')
    plt.title('decision_tree')
    plt.savefig(r'confusion_matrix')
####################################################################################################################

trainingData, testingData = read_xslx(r'e:/Table/機器學習/1109/attribute_113.xlsx')
features = list(trainingData[0])          # the header row of x, i.e. the feature names
trainingDataSet = trainingData[1:]        # training set

bestFeature, columnFeaGini = FindBestFeature(trainingDataSet)
decisionTree = {}
decisiontree = createTree(trainingDataSet, features, decisionTree)  # build the decision tree (a CART classification tree)
print('CART classification tree:\n', decisiontree)
testTree(testingData)
createPlot(decisiontree)

3. Results

CART classification tree:
{'標準差\n<=0.001444': {'Y': {'均值\n<=0.010220': {'Y': {'峯度\n<=-0.589019': {'Y': '0.0', 'N': {'四分位差\n<=-0.001811': {'Y': {'中位數\n<=-0.000026': {'Y': '0.0', 'N': '2.0'}}, 'N': {'中位數\n<=0.007687': {'Y': {'偏度\n<=0.452516': {'Y': '0.0', 'N': '0.0'}}, 'N': '2.0'}}}}}}, 'N': '2.0'}}, 'N': {'四分位差\n<=-0.013691': {'Y': '1.0', 'N': {'偏度\n<=1.462280': {'Y': {'峯度\n<=-1.034223': {'Y': {'均值\n<=0.009173': {'Y': {'中位數\n<=-0.004193': {'Y': '2.0', 'N': '2.0'}}, 'N': '0.0'}}, 'N': '2.0'}}, 'N': {'均值\n<=-0.023631': {'Y': '2.0', 'N': '1.0'}}}}}}}}

Confusion matrix:
If label "1" is treated as one class and "0" and "2" together as another, the results are:
TP:13
FP:0
TN:74
FN:3
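
From these counts, precision = 13 / (13 + 0) = 1.00, recall = 13 / (13 + 3) ≈ 0.81, and overall accuracy = (13 + 74) / 90 ≈ 0.97.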





If every label is treated as its own class, the confusion matrix is:
(confusion-matrix figure)

Summary

  Before relying on a wheel, it is worth building one yourself to get a feel for it. The above is the complete CART classification tree. The material essentially follows the machine learning course the author took at BIT, and part of the approach comes from the blog post https://blog.csdn.net/weixin_43383558/article/details/84303339. This is a beginner's piece, so if you find mistakes, please point them out in the comments; suggestions for improvement are equally welcome.