The other day I stumbled across a data-analysis competition site (sofasofa). I had studied some machine learning but had never put it into practice in a competition, so out of curiosity I signed up.
Task: estimating football players' market value

Competition overview
This is a practice competition, aimed at data-science newcomers for self-practice, self-improvement, and exchanging ideas.
Practice window: 2018-03-05 to 2020-03-05
Task type: regression
Background: every football player has a price tag on the transfer market. The goal of this exercise is to predict a player's market value from his attributes and ability ratings.
From this description it is easy to tell that this is a regression problem. Of course, before predicting anything, the first thing to do is look at the format and content of the data (there are too many fields to list one by one; you can view them all on the competition site — here is a simple screenshot):
After getting a rough feel for the format and size of the data, and having no practical experience, I went with my gut and naively assumed the following fields would matter most:
| Field | Meaning |
| --- | --- |
| club | The club the player belongs to. Already encoded. |
| league | The league the player plays in. Already encoded. |
| potential | The player's potential. Numeric. |
| international_reputation | International reputation. Numeric. |
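Before trusting any field, it is worth counting its missing values. A minimal sketch with pandas (a toy frame standing in for train.csv; the real data has far more rows and columns):

```python
import pandas as pd
import numpy as np

# toy frame standing in for train.csv
df = pd.DataFrame({
    'club': [1, 2, 3],
    'league': [1, 1, 2],
    'potential': [80.0, 85.0, np.nan],
})

missing = df.isnull().sum()  # NaN count per column
print(missing)
```

Columns with a count of zero, like `club` and `league` here, can go straight into the model; the others need imputation first.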
Conveniently, none of these fields happens to have missing values, so I was pleased: I could feed them straight into an XGBoost model. For details on using XGBoost, see the XGBoost docs and the official XGBoost Parameters page. So I started coding. Here is my first version:
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @File  : soccer_value.py
# @Author: Huangqinjian
# @Date  : 2018/3/22

import pandas as pd
import numpy as np
import xgboost as xgb


def loadDataset(filePath):
    return pd.read_csv(filepath_or_buffer=filePath)


def featureSet(data):
    XList = []
    for row in range(0, len(data)):
        tmp_list = []
        tmp_list.append(data.iloc[row]['club'])
        tmp_list.append(data.iloc[row]['league'])
        tmp_list.append(data.iloc[row]['potential'])
        tmp_list.append(data.iloc[row]['international_reputation'])
        XList.append(tmp_list)
    yList = data.y.values
    return XList, yList


def loadTestData(filePath):
    data = pd.read_csv(filepath_or_buffer=filePath)
    XList = []
    for row in range(0, len(data)):
        tmp_list = []
        tmp_list.append(data.iloc[row]['club'])
        tmp_list.append(data.iloc[row]['league'])
        tmp_list.append(data.iloc[row]['potential'])
        tmp_list.append(data.iloc[row]['international_reputation'])
        XList.append(tmp_list)
    return XList


def trainandTest(X_train, y_train, X_test):
    # train the XGBoost model
    model = xgb.XGBRegressor(max_depth=5, learning_rate=0.1, n_estimators=160,
                             silent=False, objective='reg:gamma')
    model.fit(X_train, y_train)

    # predict on the test set
    ans = model.predict(X_test)

    # write the submission file (test-set ids run from 10441 to 17440)
    id_list = np.arange(10441, 17441)
    data_arr = [[int(id_list[row]), ans[row]] for row in range(len(ans))]
    pd_data = pd.DataFrame(np.array(data_arr), columns=['id', 'y'])
    pd_data.to_csv('submit.csv', index=None)

    # to display feature importances:
    # from xgboost import plot_importance
    # import matplotlib.pyplot as plt
    # plot_importance(model)
    # plt.show()


if __name__ == '__main__':
    trainFilePath = 'dataset/soccer/train.csv'
    testFilePath = 'dataset/soccer/test.csv'
    data = loadDataset(trainFilePath)
    X_train, y_train = featureSet(data)
    X_test = loadTestData(testFilePath)
    trainandTest(X_train, y_train, X_test)
```
Then I submitted the resulting submit.csv to the site. The MAE came out at 106.6977, rank 24/28 — far from ideal. That was expected, though, since I had done essentially no feature engineering.
Naturally I wasn't satisfied, and kept wondering how to improve the accuracy. Then it occurred to me to use scikit-learn: it ships a feature-selection module, sklearn.feature_selection, which offers several methods:
My first idea was to use univariate feature selection to pick out the features most correlated with the target. According to the official docs, several scoring functions are available for measuring the dependence between variables:
Since this competition is a regression problem, I chose the f_regression scoring function. (At first I wasn't paying attention and mistakenly used chi2, a scoring function for classification problems, so the program kept erroring out — exhausting!)
Parameters of f_regression:

```python
sklearn.feature_selection.f_regression(X, y, center=True)
```

- X: an array of shape (n_samples, n_features) — one row per training sample, one column per feature
- y: a one-dimensional array of length n_samples
- returns: the F statistic and the p-value for each feature
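To see what f_regression returns, here is a minimal, self-contained sketch on synthetic data (the data itself is made up for illustration):

```python
import numpy as np
from sklearn.feature_selection import f_regression

rng = np.random.RandomState(0)
X = rng.rand(200, 3)                        # three candidate features
y = 5.0 * X[:, 0] + 0.1 * rng.rand(200)     # only the first feature drives y

F, pval = f_regression(X, y)
# the informative feature gets a far larger F statistic and a tiny p-value
print(F)
print(pval)
```

A larger F (equivalently, a smaller p-value) means a stronger linear dependence between that feature and the target, which is exactly the ranking used below to select features.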
不過在進行這個操做以前,咱們還有一個重大的任務要完成,那就是對於空值的處理!幸運的是scikit中也有專門的模塊能夠處理這個問題:Imputation of missing values
Parameters of sklearn.preprocessing.Imputer:

```python
sklearn.preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0, verbose=0, copy=True)
```

- strategy: the fill strategy for missing values (default 'mean', i.e. fill with the mean of the column the value sits in)
- axis: defaults to 0 (impute along columns)

For the remaining parameters, see the sklearn.preprocessing.Imputer documentation.
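An aside for anyone running this today: Imputer was deprecated in scikit-learn 0.20 and removed in 0.22, in favour of sklearn.impute.SimpleImputer, which does the same column-mean filling. A minimal sketch of the idea:

```python
import numpy as np
from sklearn.impute import SimpleImputer  # modern replacement for sklearn.preprocessing.Imputer

X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [7.0, 6.0]])

imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
X_filled = imputer.fit_transform(X)
# the NaN in column 0 is replaced by that column's mean: (1.0 + 7.0) / 2 = 4.0
print(X_filled)
```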
With all that in mind, I preprocessed the data as follows:
```python
from sklearn.feature_selection import f_regression
from sklearn.preprocessing import Imputer

# fill missing values in the ten position-rating columns with the column mean
imputer = Imputer(missing_values='NaN', strategy='mean', axis=0)
imputer.fit(data.loc[:, 'rw':'lb'])
x_new = imputer.transform(data.loc[:, 'rw':'lb'])

XList = []
yList = []
for row in range(0, len(x_new)):
    XList.append(list(x_new[row][0:10]))
    yList.append(data.iloc[row]['y'])

F = f_regression(XList, yList)
print(len(F))
print(F)
```
The output:
```
2
(array([2531.07587725, 1166.63303449, 2891.97789543, 2531.07587725,
        2786.75491791, 2891.62686404, 3682.42649607, 1394.46743196,
         531.08672792, 1166.63303449]),
 array([0.00000000e+000, 1.74675421e-242, 0.00000000e+000, 0.00000000e+000,
        0.00000000e+000, 0.00000000e+000, 0.00000000e+000, 1.37584507e-286,
        1.15614152e-114, 1.74675421e-242]))
```
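Rather than eyeballing the array, the top features can be picked programmatically with np.argsort. The column order below is my assumption about what the 'rw':'lb' slice covers — it is consistent with the six features selected next, but check it against your own data:

```python
import numpy as np

# F statistics printed above, paired with the assumed column order of the 'rw':'lb' slice
cols = ['rw', 'rb', 'st', 'lw', 'cf', 'cam', 'cm', 'cdm', 'cb', 'lb']
F = np.array([2531.08, 1166.63, 2891.98, 2531.08, 2786.75, 2891.63,
              3682.43, 1394.47, 531.09, 1166.63])

# indices of the six largest F values, mapped back to column names
top6 = [cols[i] for i in np.argsort(F)[::-1][:6]]
print(top6)
```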
Based on these results I picked the features with relatively large F values — rw, st, lw, cf, cam, cm — and added them to the model. Here is the improved code:
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @File  : soccer_value.py
# @Author: Huangqinjian
# @Date  : 2018/3/22

import pandas as pd
import numpy as np
import xgboost as xgb
from sklearn.preprocessing import Imputer


def buildFeatures(data):
    # fill missing values in the six selected position ratings with the column mean
    imputer = Imputer(missing_values='NaN', strategy='mean', axis=0)
    imputer.fit(data.loc[:, ['rw', 'st', 'lw', 'cf', 'cam', 'cm']])
    x_new = imputer.transform(data.loc[:, ['rw', 'st', 'lw', 'cf', 'cam', 'cm']])

    XList = []
    for row in range(0, len(data)):
        tmp_list = []
        for col in ['club', 'league', 'potential', 'international_reputation',
                    'pac', 'sho', 'pas', 'dri', 'def', 'phy', 'skill_moves']:
            tmp_list.append(data.iloc[row][col])
        tmp_list.extend(x_new[row][0:6])
        XList.append(tmp_list)
    return XList


def featureSet(data):
    return buildFeatures(data), data.y.values


def loadTestData(filePath):
    data = pd.read_csv(filepath_or_buffer=filePath)
    return buildFeatures(data)


def trainandTest(X_train, y_train, X_test):
    # train the XGBoost model
    model = xgb.XGBRegressor(max_depth=5, learning_rate=0.1, n_estimators=160,
                             silent=False, objective='reg:gamma')
    model.fit(X_train, y_train)

    # predict on the test set and write the submission file
    ans = model.predict(X_test)
    id_list = np.arange(10441, 17441)
    data_arr = [[int(id_list[row]), ans[row]] for row in range(len(ans))]
    pd_data = pd.DataFrame(np.array(data_arr), columns=['id', 'y'])
    pd_data.to_csv('submit.csv', index=None)


if __name__ == '__main__':
    data = pd.read_csv('dataset/soccer/train.csv')
    X_train, y_train = featureSet(data)
    X_test = loadTestData('dataset/soccer/test.csv')
    trainandTest(X_train, y_train, X_test)
```
I submitted again. This time the MAE was 42.1227, rank 16/28. A big improvement, but there is still a gap to first place — more work needed.
Next, let's deal with two more fields:
Since work_rate_att and work_rate_def are categorical labels, they have to be encoded before going into the model. The function we need is sklearn.preprocessing.LabelEncoder:
```python
le = preprocessing.LabelEncoder()
le.fit(['Low', 'Medium', 'High'])
att_label = le.transform(data.work_rate_att.values)
# print(att_label)
def_label = le.transform(data.work_rate_def.values)
# print(def_label)
```
Of course, you can also handle discrete feature variables directly with pandas; for details see: one-hot encoding with pandas get_dummies. Incidentally, scikit-learn has a method for this as well: sklearn.preprocessing.OneHotEncoder.
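For comparison, a one-hot version of the same work-rate field with pd.get_dummies might look like this (toy data for illustration):

```python
import pandas as pd

df = pd.DataFrame({'work_rate_att': ['High', 'Low', 'Medium', 'High']})

# one 0/1 column per category, named att_High, att_Low, att_Medium
dummies = pd.get_dummies(df['work_rate_att'], prefix='att')
print(dummies)
```

Unlike LabelEncoder, this does not impose an artificial ordering on the categories, at the cost of one extra column per category.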
The adjusted code:
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @File  : soccer_value.py
# @Author: Huangqinjian
# @Date  : 2018/3/22

import pandas as pd
import numpy as np
import xgboost as xgb
from sklearn import preprocessing
from sklearn.preprocessing import Imputer


def buildFeatures(data):
    # fill missing values in the six selected position ratings with the column mean
    imputer = Imputer(missing_values='NaN', strategy='mean', axis=0)
    imputer.fit(data.loc[:, ['rw', 'st', 'lw', 'cf', 'cam', 'cm']])
    x_new = imputer.transform(data.loc[:, ['rw', 'st', 'lw', 'cf', 'cam', 'cm']])

    # encode the two work-rate labels (Low / Medium / High)
    le = preprocessing.LabelEncoder()
    le.fit(['Low', 'Medium', 'High'])
    att_label = le.transform(data.work_rate_att.values)
    def_label = le.transform(data.work_rate_def.values)

    XList = []
    for row in range(0, len(data)):
        tmp_list = []
        for col in ['club', 'league', 'potential', 'international_reputation',
                    'pac', 'sho', 'pas', 'dri', 'def', 'phy', 'skill_moves']:
            tmp_list.append(data.iloc[row][col])
        tmp_list.extend(x_new[row][0:6])
        tmp_list.append(att_label[row])
        tmp_list.append(def_label[row])
        XList.append(tmp_list)
    return XList


def featureSet(data):
    return buildFeatures(data), data.y.values


def loadTestData(filePath):
    data = pd.read_csv(filepath_or_buffer=filePath)
    return buildFeatures(data)


def trainandTest(X_train, y_train, X_test):
    # train the XGBoost model, now with more, slower-learning trees
    model = xgb.XGBRegressor(max_depth=6, learning_rate=0.05, n_estimators=500,
                             silent=False, objective='reg:gamma')
    model.fit(X_train, y_train)

    # predict on the test set and write the submission file
    ans = model.predict(X_test)
    id_list = np.arange(10441, 17441)
    data_arr = [[int(id_list[row]), ans[row]] for row in range(len(ans))]
    pd_data = pd.DataFrame(np.array(data_arr), columns=['id', 'y'])
    pd_data.to_csv('submit.csv', index=None)


if __name__ == '__main__':
    data = pd.read_csv('dataset/soccer/train.csv')
    X_train, y_train = featureSet(data)
    X_test = loadTestData('dataset/soccer/test.csv')
    trainandTest(X_train, y_train, X_test)
```
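One way to iterate faster than submitting every version to the site: hold out part of the training data and compute the MAE locally with train_test_split. A sketch of the idea on synthetic data, using DecisionTreeRegressor as a lightweight stand-in for the XGBRegressor above (note the modern import path is sklearn.model_selection, not the old sklearn.cross_validation):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor  # stand-in; swap in xgb.XGBRegressor here

rng = np.random.RandomState(42)
X = rng.rand(200, 4)
y = 100.0 * X[:, 0] + 50.0 * X[:, 1] + rng.rand(200)

# hold out 20% of the training rows as a local validation set
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = DecisionTreeRegressor(max_depth=5, random_state=42)
model.fit(X_tr, y_tr)
mae = mean_absolute_error(y_val, model.predict(X_val))
print(mae)
```

The local MAE will not match the leaderboard exactly, but it moves in the same direction, which is enough to compare feature sets and hyperparameters cheaply.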
此次只提升到了40.8686。暫時想不到提升的方法了,還請大神多多賜教!
For more content, feel free to follow my personal WeChat public account.