The randomized logistic regression model from the last post uncovers linear correlations between the independent and dependent variables; decision trees and neural networks are used to screen variables whose relationships are non-linear.
# -*- coding: utf-8 -*-
import pandas as pd

inputfile = '../data/sales_data.xls'
data = pd.read_excel(inputfile, index_col = u'序號')

# Map the categorical labels: '好' (good), '是' (yes), '高' (high) become 1; everything else becomes -1
data[data == u'好'] = 1
data[data == u'是'] = 1
data[data == u'高'] = 1
data[data != 1] = -1

x = data.iloc[:, :3].astype(int)   # first three columns are the independent variables
y = data.iloc[:, 3].astype(int)    # fourth column (index 3) is the dependent variable, cast to integer

from sklearn.tree import DecisionTreeClassifier as DTC
dtc = DTC(criterion='entropy')     # split on information gain (entropy criterion)
dtc.fit(x, y)                      # train the model

# Training done; export the tree structure for visualisation
from sklearn.tree import export_graphviz
with open("tree.dot", 'w') as f:
    export_graphviz(dtc, feature_names = list(x.columns), out_file = f)
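The exported tree.dot file can be rendered with Graphviz (for example `dot -Tpng tree.dot -o tree.png` on the command line). If Graphviz is not installed, the fitted tree can also be drawn directly. A minimal sketch, assuming matplotlib is available and the dtc and x objects from the script above are in scope:

# Sketch: visualise the trained tree without Graphviz, using sklearn.tree.plot_tree.
import matplotlib.pyplot as plt
from sklearn.tree import plot_tree

fig, ax = plt.subplots(figsize=(12, 8))
plot_tree(dtc, feature_names=list(x.columns), class_names=['-1', '1'], filled=True, ax=ax)
fig.savefig('tree.png')   # save the rendered tree to an image file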
The code above uses the ID3 approach among decision-tree algorithms (based on information entropy), choosing splits so that the entropy of the partitioned dataset is minimized. The C4.5 decision-tree algorithm splits the dataset using the information gain ratio, while the CART decision-tree algorithm splits it using the Gini index.
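To make the split criteria concrete, here is a small sketch computing the information entropy used by ID3 (criterion='entropy') and the Gini index used by CART (criterion='gini'); the label list is hypothetical and purely for illustration:

# Sketch: the impurity measures behind the split criteria.
from collections import Counter
import math

def entropy(labels):
    # Information entropy H = -sum(p * log2(p))
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    # Gini index G = 1 - sum(p^2)
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

labels = [1, 1, 1, -1, -1]   # hypothetical class labels
print(entropy(labels))        # ≈ 0.971
print(gini(labels))           # = 0.48

A split is scored by how much it reduces the chosen impurity measure over the resulting subsets, and the tree greedily picks the split with the largest reduction.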