I'm new to hyperparameter tuning and have been using LightGBM quite heavily lately. Since I couldn't find a concrete tuning walkthrough across the various blogs, I exported my own tuning notebook to markdown, in the hope that we can learn from each other.
In fact, for tree-based models the tuning process is largely the same. It generally goes through the following steps:

1. Fix a relatively high learning rate and determine the number of boosting iterations (n_estimators).
2. Tune the main tree-structure parameters: max_depth and num_leaves.
3. Tune min_data_in_leaf and min_sum_hessian_in_leaf to control overfitting.
4. Tune the sampling parameters: feature_fraction, bagging_fraction and bagging_freq.
5. Tune the regularization parameters lambda_l1 and lambda_l2.
6. Lower the learning rate and increase the number of iterations for the final model.
The tuning example below follows these steps. The dataset is roughly (4400+, 1000+), all numerical features, and the metric is root mean squared error (RMSE).
(PS: one complaint anyway: LightGBM really has too many parameter aliases, and it gets confusing when different names mean the same thing. Below I separate synonymous parameters with / to make them easier to read.)
Whatever the case, let's first fix the learning rate at a relatively high value, here learning_rate = 0.1, and then settle the type of estimator, boosting/boost/boosting_type; the default, gbdt, is normally the one to pick.
To determine the number of estimators, that is, the number of boosting iterations (in other words the number of residual trees), the parameter is n_estimators/num_iterations/num_round/num_boost_round. We can first set it to a fairly large value and then read the optimal iteration count off the cross-validation results, as the code below shows.
Before doing that, the other important parameters need initial values. The exact values are not important; they only exist so that the remaining parameters can be pinned down one group at a time. The initial values are given below.
The following parameters are dictated by the requirements of the specific project:
```
'boosting_type'/'boosting': 'gbdt'
'objective': 'regression'
'metric': 'rmse'
```
The following are the initial values I chose; pick whatever suits your own situation:
```
'max_depth': 6                               # depends on the problem; my dataset is not large, so a moderate value; anything in 4-10 works
'num_leaves': 50                             # LightGBM grows leaf-wise; the official advice is to keep this below 2^max_depth
'subsample'/'bagging_fraction': 0.8          # row (data) sampling
'colsample_bytree'/'feature_fraction': 0.8   # feature sampling
```
Below I use LightGBM's cv function to demonstrate:
```python
import lightgbm as lgb

params = {
    'boosting_type': 'gbdt',
    'objective': 'regression',
    'learning_rate': 0.1,
    'num_leaves': 50,
    'max_depth': 6,
    'subsample': 0.8,
    'colsample_bytree': 0.8,
}

data_train = lgb.Dataset(df_train, y_train, silent=True)
cv_results = lgb.cv(
    params, data_train, num_boost_round=1000, nfold=5, stratified=False,
    shuffle=True, metrics='rmse', early_stopping_rounds=50, verbose_eval=50,
    show_stdv=True, seed=0)

print('best n_estimators:', len(cv_results['rmse-mean']))
print('best cv score:', cv_results['rmse-mean'][-1])
```
```
[50]    cv_agg's rmse: 1.38497 + 0.0202823
best n_estimators: 43
best cv score: 1.3838664241
```
Because my dataset is not large, the optimal number of iterations at a learning rate of 0.1 is only 43. From here on we plug (learning_rate = 0.1, n_estimators = 43) into the tuning of the remaining parameters. Still, if your hardware allows it, a smaller learning rate is generally the better choice.
These are the most important parameters for improving accuracy.
- max_depth: sets the depth of the trees; the deeper the tree, the more likely it is to overfit.
- num_leaves: because LightGBM uses a leaf-wise algorithm, tree complexity is controlled through num_leaves rather than max_depth. The rough correspondence is num_leaves = 2^(max_depth), but the value you set should stay below 2^(max_depth), otherwise the model may overfit (see the quick check below).
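For instance, with the initial max_depth = 6 the cap is 2^6 = 64 leaves, which is why the initial num_leaves of 50 sits below it. A quick sanity check of my own:

```python
max_depth = 6
num_leaves = 50

# For a leaf-wise learner, num_leaves should stay below 2**max_depth
assert num_leaves < 2 ** max_depth, 'num_leaves is too large for this max_depth'
```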
We can also tune these two parameters together: first a coarse search, then a finer one.
Here we bring in GridSearchCV() from sklearn for the search. For whatever reason, this function is remarkably memory-hungry, time-consuming, and exhausting.
```python
from sklearn.model_selection import GridSearchCV

# Build the sklearn wrapper of LightGBM, using the (learning_rate, n_estimators) chosen above
model_lgb = lgb.LGBMRegressor(objective='regression', num_leaves=50,
                              learning_rate=0.1, n_estimators=43, max_depth=6,
                              metric='rmse', bagging_fraction=0.8, feature_fraction=0.8)

params_test1 = {
    'max_depth': range(3, 8, 2),
    'num_leaves': range(50, 170, 30)
}
gsearch1 = GridSearchCV(estimator=model_lgb, param_grid=params_test1,
                        scoring='neg_mean_squared_error', cv=5, verbose=1, n_jobs=4)
```
```python
gsearch1.fit(df_train, y_train)
gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_
```
```
Fitting 5 folds for each of 12 candidates, totalling 60 fits
[Parallel(n_jobs=4)]: Done  42 tasks      | elapsed:  2.0min
[Parallel(n_jobs=4)]: Done  60 out of  60 | elapsed:  3.1min finished

([mean: -1.88629, std: 0.13750, params: {'max_depth': 3, 'num_leaves': 50},
  mean: -1.88629, std: 0.13750, params: {'max_depth': 3, 'num_leaves': 80},
  mean: -1.88629, std: 0.13750, params: {'max_depth': 3, 'num_leaves': 110},
  mean: -1.88629, std: 0.13750, params: {'max_depth': 3, 'num_leaves': 140},
  mean: -1.86917, std: 0.12590, params: {'max_depth': 5, 'num_leaves': 50},
  mean: -1.86917, std: 0.12590, params: {'max_depth': 5, 'num_leaves': 80},
  mean: -1.86917, std: 0.12590, params: {'max_depth': 5, 'num_leaves': 110},
  mean: -1.86917, std: 0.12590, params: {'max_depth': 5, 'num_leaves': 140},
  mean: -1.89254, std: 0.10904, params: {'max_depth': 7, 'num_leaves': 50},
  mean: -1.86024, std: 0.11364, params: {'max_depth': 7, 'num_leaves': 80},
  mean: -1.86024, std: 0.11364, params: {'max_depth': 7, 'num_leaves': 110},
  mean: -1.86024, std: 0.11364, params: {'max_depth': 7, 'num_leaves': 140}],
 {'max_depth': 7, 'num_leaves': 80},
 -1.8602436718814157)
```
We ran 12 parameter combinations here; the best result is max_depth = 7 with num_leaves = 80, at a score of -1.860.
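A side note: grid_scores_ has been removed from newer versions of scikit-learn. If your version no longer has it, the same information is available through cv_results_; a minimal sketch, reusing the gsearch1 object from above:

```python
import pandas as pd

# Equivalent view of the grid-search results on newer scikit-learn versions
cv_table = pd.DataFrame(gsearch1.cv_results_)[['params', 'mean_test_score', 'std_test_score']]
print(cv_table.sort_values('mean_test_score', ascending=False).head())
print(gsearch1.best_params_, gsearch1.best_score_)
```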
One thing has to be pointed out here: every scoring function in sklearn's model evaluation follows the convention that higher return values are better than lower return values. My metric, however, is the root mean squared error (rmse), where lower is better, so sklearn provides neg_mean_squared_error, which returns the negative of the mean squared error; for MSE this means the closer the negative score is to zero, the better.

So the best score of -1.860 converts to an RMSE of np.sqrt(-(-1.860)) = 1.364, which is clearly better than the score from step 1.
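In code, the conversion is just the following (a small illustration of my own, reusing the gsearch1 object from above):

```python
import numpy as np

# best_score_ is the negative MSE reported by GridSearchCV;
# flip the sign and take the square root to recover the RMSE
rmse = np.sqrt(-gsearch1.best_score_)
print(rmse)  # roughly 1.364
```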
With that, we carry this step's optimum into step 3. Strictly speaking, this was only a coarse search; to squeeze out a better result you can try several more values of max_depth around 7 and of num_leaves around 80. Don't shy away from the extra work, tedious as it is.
```python
params_test2 = {
    'max_depth': [6, 7, 8],
    'num_leaves': [68, 74, 80, 86, 92]
}
gsearch2 = GridSearchCV(estimator=model_lgb, param_grid=params_test2,
                        scoring='neg_mean_squared_error', cv=5, verbose=1, n_jobs=4)
gsearch2.fit(df_train, y_train)
gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_
```
```
Fitting 5 folds for each of 15 candidates, totalling 75 fits
[Parallel(n_jobs=4)]: Done  42 tasks      | elapsed:  2.8min
[Parallel(n_jobs=4)]: Done  75 out of  75 | elapsed:  5.1min finished

([mean: -1.87506, std: 0.11369, params: {'max_depth': 6, 'num_leaves': 68},
  mean: -1.87506, std: 0.11369, params: {'max_depth': 6, 'num_leaves': 74},
  mean: -1.87506, std: 0.11369, params: {'max_depth': 6, 'num_leaves': 80},
  mean: -1.87506, std: 0.11369, params: {'max_depth': 6, 'num_leaves': 86},
  mean: -1.87506, std: 0.11369, params: {'max_depth': 6, 'num_leaves': 92},
  mean: -1.86024, std: 0.11364, params: {'max_depth': 7, 'num_leaves': 68},
  mean: -1.86024, std: 0.11364, params: {'max_depth': 7, 'num_leaves': 74},
  mean: -1.86024, std: 0.11364, params: {'max_depth': 7, 'num_leaves': 80},
  mean: -1.86024, std: 0.11364, params: {'max_depth': 7, 'num_leaves': 86},
  mean: -1.86024, std: 0.11364, params: {'max_depth': 7, 'num_leaves': 92},
  mean: -1.88197, std: 0.11295, params: {'max_depth': 8, 'num_leaves': 68},
  mean: -1.89117, std: 0.12686, params: {'max_depth': 8, 'num_leaves': 74},
  mean: -1.86390, std: 0.12259, params: {'max_depth': 8, 'num_leaves': 80},
  mean: -1.86733, std: 0.12159, params: {'max_depth': 8, 'num_leaves': 86},
  mean: -1.86665, std: 0.12174, params: {'max_depth': 8, 'num_leaves': 92}],
 {'max_depth': 7, 'num_leaves': 68},
 -1.8602436718814157)
```
So max_depth = 7 holds up; looking at the details, though, at max_depth = 7 the number of leaves has no effect on the score at all.
Which brings us to reducing overfitting.
- min_data_in_leaf is an important parameter, also called min_child_samples; its value depends on the number of training samples and on num_leaves. Setting it large avoids growing overly deep trees, but may cause underfitting.
- min_sum_hessian_in_leaf: also called min_child_weight, it is the minimum sum of hessians in one leaf required to allow a split; higher values potentially decrease overfitting. Quite a mouthful.
We proceed in exactly the same way as above:
```python
params_test3 = {
    'min_child_samples': [18, 19, 20, 21, 22],
    'min_child_weight': [0.001, 0.002]
}
model_lgb = lgb.LGBMRegressor(objective='regression', num_leaves=80,
                              learning_rate=0.1, n_estimators=43, max_depth=7,
                              metric='rmse', bagging_fraction=0.8, feature_fraction=0.8)
gsearch3 = GridSearchCV(estimator=model_lgb, param_grid=params_test3,
                        scoring='neg_mean_squared_error', cv=5, verbose=1, n_jobs=4)
gsearch3.fit(df_train, y_train)
gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_
```
```
Fitting 5 folds for each of 10 candidates, totalling 50 fits
[Parallel(n_jobs=4)]: Done  42 tasks      | elapsed:  2.9min
[Parallel(n_jobs=4)]: Done  50 out of  50 | elapsed:  3.3min finished

([mean: -1.88057, std: 0.13948, params: {'min_child_samples': 18, 'min_child_weight': 0.001},
  mean: -1.88057, std: 0.13948, params: {'min_child_samples': 18, 'min_child_weight': 0.002},
  mean: -1.88365, std: 0.13650, params: {'min_child_samples': 19, 'min_child_weight': 0.001},
  mean: -1.88365, std: 0.13650, params: {'min_child_samples': 19, 'min_child_weight': 0.002},
  mean: -1.86024, std: 0.11364, params: {'min_child_samples': 20, 'min_child_weight': 0.001},
  mean: -1.86024, std: 0.11364, params: {'min_child_samples': 20, 'min_child_weight': 0.002},
  mean: -1.86980, std: 0.14251, params: {'min_child_samples': 21, 'min_child_weight': 0.001},
  mean: -1.86980, std: 0.14251, params: {'min_child_samples': 21, 'min_child_weight': 0.002},
  mean: -1.86750, std: 0.13898, params: {'min_child_samples': 22, 'min_child_weight': 0.001},
  mean: -1.86750, std: 0.13898, params: {'min_child_samples': 22, 'min_child_weight': 0.002}],
 {'min_child_samples': 20, 'min_child_weight': 0.001},
 -1.8602436718814157)
```
This is the result after a coarse pass followed by a finer one. The optimum for min_data_in_leaf is 20, and min_sum_hessian_in_leaf has almost no effect on the final score. Also, the score did not improve after this round of tuning, which simply means the previous defaults, 20 and 0.001, were already optimal.
The next two parameters are both there to reduce overfitting.
feature_fraction performs feature sub-sampling; it can be used both to prevent overfitting and to speed up training.
bagging_fraction and bagging_freq must be set together. bagging_fraction is the counterpart of subsample, i.e. row sampling; it makes bagging run faster while also reducing overfitting. bagging_freq, which defaults to 0, is the bagging frequency: 0 means bagging is disabled, and k means bagging is performed every k iterations.
Different parameters, same method.
```python
params_test4 = {
    'feature_fraction': [0.5, 0.6, 0.7, 0.8, 0.9],
    'bagging_fraction': [0.6, 0.7, 0.8, 0.9, 1.0]
}
model_lgb = lgb.LGBMRegressor(objective='regression', num_leaves=80,
                              learning_rate=0.1, n_estimators=43, max_depth=7,
                              metric='rmse', bagging_freq=5, min_child_samples=20)
gsearch4 = GridSearchCV(estimator=model_lgb, param_grid=params_test4,
                        scoring='neg_mean_squared_error', cv=5, verbose=1, n_jobs=4)
gsearch4.fit(df_train, y_train)
gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_
```
```
Fitting 5 folds for each of 25 candidates, totalling 125 fits
[Parallel(n_jobs=4)]: Done  42 tasks      | elapsed:  2.6min
[Parallel(n_jobs=4)]: Done 125 out of 125 | elapsed:  7.1min finished

([mean: -1.90447, std: 0.15841, params: {'bagging_fraction': 0.6, 'feature_fraction': 0.5},
  mean: -1.90846, std: 0.13925, params: {'bagging_fraction': 0.6, 'feature_fraction': 0.6},
  mean: -1.91695, std: 0.14121, params: {'bagging_fraction': 0.6, 'feature_fraction': 0.7},
  mean: -1.90115, std: 0.12625, params: {'bagging_fraction': 0.6, 'feature_fraction': 0.8},
  mean: -1.92586, std: 0.15220, params: {'bagging_fraction': 0.6, 'feature_fraction': 0.9},
  mean: -1.88031, std: 0.17157, params: {'bagging_fraction': 0.7, 'feature_fraction': 0.5},
  mean: -1.89513, std: 0.13718, params: {'bagging_fraction': 0.7, 'feature_fraction': 0.6},
  mean: -1.88845, std: 0.13864, params: {'bagging_fraction': 0.7, 'feature_fraction': 0.7},
  mean: -1.89297, std: 0.12374, params: {'bagging_fraction': 0.7, 'feature_fraction': 0.8},
  mean: -1.89432, std: 0.14353, params: {'bagging_fraction': 0.7, 'feature_fraction': 0.9},
  mean: -1.88088, std: 0.14247, params: {'bagging_fraction': 0.8, 'feature_fraction': 0.5},
  mean: -1.90080, std: 0.13174, params: {'bagging_fraction': 0.8, 'feature_fraction': 0.6},
  mean: -1.88364, std: 0.14732, params: {'bagging_fraction': 0.8, 'feature_fraction': 0.7},
  mean: -1.88987, std: 0.13344, params: {'bagging_fraction': 0.8, 'feature_fraction': 0.8},
  mean: -1.87752, std: 0.14802, params: {'bagging_fraction': 0.8, 'feature_fraction': 0.9},
  mean: -1.88348, std: 0.13925, params: {'bagging_fraction': 0.9, 'feature_fraction': 0.5},
  mean: -1.87472, std: 0.13301, params: {'bagging_fraction': 0.9, 'feature_fraction': 0.6},
  mean: -1.88656, std: 0.12241, params: {'bagging_fraction': 0.9, 'feature_fraction': 0.7},
  mean: -1.89029, std: 0.10776, params: {'bagging_fraction': 0.9, 'feature_fraction': 0.8},
  mean: -1.88719, std: 0.11915, params: {'bagging_fraction': 0.9, 'feature_fraction': 0.9},
  mean: -1.86170, std: 0.12544, params: {'bagging_fraction': 1.0, 'feature_fraction': 0.5},
  mean: -1.87334, std: 0.13099, params: {'bagging_fraction': 1.0, 'feature_fraction': 0.6},
  mean: -1.85412, std: 0.12698, params: {'bagging_fraction': 1.0, 'feature_fraction': 0.7},
  mean: -1.86024, std: 0.11364, params: {'bagging_fraction': 1.0, 'feature_fraction': 0.8},
  mean: -1.87266, std: 0.12271, params: {'bagging_fraction': 1.0, 'feature_fraction': 0.9}],
 {'bagging_fraction': 1.0, 'feature_fraction': 0.7},
 -1.8541224387666373)
```
From this we can see that the ideal values for bagging_fraction and feature_fraction are 1.0 and 0.7 respectively. One important reason is that my sample count is quite small (4000+) while the number of features is large (1000+). So we now take a smaller step size and search feature_fraction more finely.
```python
params_test5 = {
    'feature_fraction': [0.62, 0.65, 0.68, 0.7, 0.72, 0.75, 0.78]
}
model_lgb = lgb.LGBMRegressor(objective='regression', num_leaves=80,
                              learning_rate=0.1, n_estimators=43, max_depth=7,
                              metric='rmse', min_child_samples=20)
gsearch5 = GridSearchCV(estimator=model_lgb, param_grid=params_test5,
                        scoring='neg_mean_squared_error', cv=5, verbose=1, n_jobs=4)
gsearch5.fit(df_train, y_train)
gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_
```
```
Fitting 5 folds for each of 7 candidates, totalling 35 fits
[Parallel(n_jobs=4)]: Done  35 out of  35 | elapsed:  2.3min finished

([mean: -1.86696, std: 0.12658, params: {'feature_fraction': 0.62},
  mean: -1.88337, std: 0.13215, params: {'feature_fraction': 0.65},
  mean: -1.87282, std: 0.13193, params: {'feature_fraction': 0.68},
  mean: -1.85412, std: 0.12698, params: {'feature_fraction': 0.7},
  mean: -1.88235, std: 0.12682, params: {'feature_fraction': 0.72},
  mean: -1.86329, std: 0.12757, params: {'feature_fraction': 0.75},
  mean: -1.87943, std: 0.12107, params: {'feature_fraction': 0.78}],
 {'feature_fraction': 0.7},
 -1.8541224387666373)
```
Alright, feature_fraction it is: 0.7.
The regularization parameters lambda_l1 (reg_alpha) and lambda_l2 (reg_lambda) are, unsurprisingly, for reducing overfitting; they correspond to L1 and L2 regularization respectively. Let's give these two a try as well.
```python
params_test6 = {
    'reg_alpha': [0, 0.001, 0.01, 0.03, 0.08, 0.3, 0.5],
    'reg_lambda': [0, 0.001, 0.01, 0.03, 0.08, 0.3, 0.5]
}
model_lgb = lgb.LGBMRegressor(objective='regression', num_leaves=80,
                              learning_rate=0.1, n_estimators=43, max_depth=7,
                              metric='rmse', min_child_samples=20, feature_fraction=0.7)
gsearch6 = GridSearchCV(estimator=model_lgb, param_grid=params_test6,
                        scoring='neg_mean_squared_error', cv=5, verbose=1, n_jobs=4)
gsearch6.fit(df_train, y_train)
gsearch6.grid_scores_, gsearch6.best_params_, gsearch6.best_score_
```
```
Fitting 5 folds for each of 49 candidates, totalling 245 fits
[Parallel(n_jobs=4)]: Done  42 tasks      | elapsed:  2.8min
[Parallel(n_jobs=4)]: Done 192 tasks      | elapsed: 10.6min
[Parallel(n_jobs=4)]: Done 245 out of 245 | elapsed: 13.3min finished

([mean: -1.85412, std: 0.12698, params: {'reg_alpha': 0, 'reg_lambda': 0},
  mean: -1.85990, std: 0.13296, params: {'reg_alpha': 0, 'reg_lambda': 0.001},
  mean: -1.86367, std: 0.13634, params: {'reg_alpha': 0, 'reg_lambda': 0.01},
  mean: -1.86787, std: 0.13881, params: {'reg_alpha': 0, 'reg_lambda': 0.03},
  mean: -1.87099, std: 0.12476, params: {'reg_alpha': 0, 'reg_lambda': 0.08},
  mean: -1.87670, std: 0.11849, params: {'reg_alpha': 0, 'reg_lambda': 0.3},
  mean: -1.88278, std: 0.13064, params: {'reg_alpha': 0, 'reg_lambda': 0.5},
  mean: -1.86190, std: 0.13613, params: {'reg_alpha': 0.001, 'reg_lambda': 0},
  mean: -1.86190, std: 0.13613, params: {'reg_alpha': 0.001, 'reg_lambda': 0.001},
  mean: -1.86515, std: 0.14116, params: {'reg_alpha': 0.001, 'reg_lambda': 0.01},
  mean: -1.86908, std: 0.13668, params: {'reg_alpha': 0.001, 'reg_lambda': 0.03},
  mean: -1.86852, std: 0.12289, params: {'reg_alpha': 0.001, 'reg_lambda': 0.08},
  mean: -1.88076, std: 0.11710, params: {'reg_alpha': 0.001, 'reg_lambda': 0.3},
  mean: -1.88278, std: 0.13064, params: {'reg_alpha': 0.001, 'reg_lambda': 0.5},
  mean: -1.87480, std: 0.13889, params: {'reg_alpha': 0.01, 'reg_lambda': 0},
  mean: -1.87284, std: 0.14138, params: {'reg_alpha': 0.01, 'reg_lambda': 0.001},
  mean: -1.86030, std: 0.13332, params: {'reg_alpha': 0.01, 'reg_lambda': 0.01},
  mean: -1.86695, std: 0.12587, params: {'reg_alpha': 0.01, 'reg_lambda': 0.03},
  mean: -1.87415, std: 0.13100, params: {'reg_alpha': 0.01, 'reg_lambda': 0.08},
  mean: -1.88543, std: 0.13195, params: {'reg_alpha': 0.01, 'reg_lambda': 0.3},
  mean: -1.88076, std: 0.13502, params: {'reg_alpha': 0.01, 'reg_lambda': 0.5},
  mean: -1.87729, std: 0.12533, params: {'reg_alpha': 0.03, 'reg_lambda': 0},
  mean: -1.87435, std: 0.12034, params: {'reg_alpha': 0.03, 'reg_lambda': 0.001},
  mean: -1.87513, std: 0.12579, params: {'reg_alpha': 0.03, 'reg_lambda': 0.01},
  mean: -1.88116, std: 0.12218, params: {'reg_alpha': 0.03, 'reg_lambda': 0.03},
  mean: -1.88052, std: 0.13585, params: {'reg_alpha': 0.03, 'reg_lambda': 0.08},
  mean: -1.87565, std: 0.12200, params: {'reg_alpha': 0.03, 'reg_lambda': 0.3},
  mean: -1.87935, std: 0.13817, params: {'reg_alpha': 0.03, 'reg_lambda': 0.5},
  mean: -1.87774, std: 0.12477, params: {'reg_alpha': 0.08, 'reg_lambda': 0},
  mean: -1.87774, std: 0.12477, params: {'reg_alpha': 0.08, 'reg_lambda': 0.001},
  mean: -1.87911, std: 0.12027, params: {'reg_alpha': 0.08, 'reg_lambda': 0.01},
  mean: -1.86978, std: 0.12478, params: {'reg_alpha': 0.08, 'reg_lambda': 0.03},
  mean: -1.87217, std: 0.12159, params: {'reg_alpha': 0.08, 'reg_lambda': 0.08},
  mean: -1.87573, std: 0.14137, params: {'reg_alpha': 0.08, 'reg_lambda': 0.3},
  mean: -1.85969, std: 0.13109, params: {'reg_alpha': 0.08, 'reg_lambda': 0.5},
  mean: -1.87632, std: 0.12398, params: {'reg_alpha': 0.3, 'reg_lambda': 0},
  mean: -1.86995, std: 0.12651, params: {'reg_alpha': 0.3, 'reg_lambda': 0.001},
  mean: -1.86380, std: 0.12793, params: {'reg_alpha': 0.3, 'reg_lambda': 0.01},
  mean: -1.87577, std: 0.13002, params: {'reg_alpha': 0.3, 'reg_lambda': 0.03},
  mean: -1.87402, std: 0.13496, params: {'reg_alpha': 0.3, 'reg_lambda': 0.08},
  mean: -1.87032, std: 0.12504, params: {'reg_alpha': 0.3, 'reg_lambda': 0.3},
  mean: -1.88329, std: 0.13237, params: {'reg_alpha': 0.3, 'reg_lambda': 0.5},
  mean: -1.87196, std: 0.13099, params: {'reg_alpha': 0.5, 'reg_lambda': 0},
  mean: -1.87196, std: 0.13099, params: {'reg_alpha': 0.5, 'reg_lambda': 0.001},
  mean: -1.88222, std: 0.14735, params: {'reg_alpha': 0.5, 'reg_lambda': 0.01},
  mean: -1.86618, std: 0.14006, params: {'reg_alpha': 0.5, 'reg_lambda': 0.03},
  mean: -1.88579, std: 0.12398, params: {'reg_alpha': 0.5, 'reg_lambda': 0.08},
  mean: -1.88297, std: 0.12307, params: {'reg_alpha': 0.5, 'reg_lambda': 0.3},
  mean: -1.88148, std: 0.12622, params: {'reg_alpha': 0.5, 'reg_lambda': 0.5}],
 {'reg_alpha': 0, 'reg_lambda': 0},
 -1.8541224387666373)
```
Ha, it turns out I was gilding the lily: the best result keeps both at 0.
The higher learning rate earlier was chosen so that things would converge faster, but accuracy is certainly better with a slow-and-steady approach. So finally, we use a lower learning rate together with more trees (n_estimators) to train on the data and see whether the score can be pushed any further.
We can now go back to LightGBM's cv function and plug in the parameters tuned above.
```python
params = {
    'boosting_type': 'gbdt',
    'objective': 'regression',
    'learning_rate': 0.005,
    'num_leaves': 80,
    'max_depth': 7,
    'min_data_in_leaf': 20,
    'subsample': 1,
    'colsample_bytree': 0.7,
}

data_train = lgb.Dataset(df_train, y_train, silent=True)
cv_results = lgb.cv(
    params, data_train, num_boost_round=10000, nfold=5, stratified=False,
    shuffle=True, metrics='rmse', early_stopping_rounds=50, verbose_eval=100,
    show_stdv=True)

print('best n_estimators:', len(cv_results['rmse-mean']))
print('best cv score:', cv_results['rmse-mean'][-1])
```
```
[100]   cv_agg's rmse: 1.52939 + 0.0261756
[200]   cv_agg's rmse: 1.43535 + 0.0187243
[300]   cv_agg's rmse: 1.39584 + 0.0157521
[400]   cv_agg's rmse: 1.37935 + 0.0157429
[500]   cv_agg's rmse: 1.37313 + 0.0164503
[600]   cv_agg's rmse: 1.37081 + 0.0172752
[700]   cv_agg's rmse: 1.36942 + 0.0177888
[800]   cv_agg's rmse: 1.36854 + 0.0180575
[900]   cv_agg's rmse: 1.36817 + 0.0188776
[1000]  cv_agg's rmse: 1.36796 + 0.0190279
[1100]  cv_agg's rmse: 1.36783 + 0.0195969
best n_estimators: 1079
best cv score: 1.36772351783
```
That is roughly the whole process. There are of course more advanced methods, but this basic way of tuning a GBM model is still worth knowing. If you spot any problems, please point them out.
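To wrap things up, here is a minimal sketch of my own for fitting one final model with the tuned parameters and the iteration count found by cv; it assumes the same df_train/y_train as above, and df_test is a hypothetical hold-out set:

```python
# Sketch only: a final LGBMRegressor built from the parameters tuned above
final_model = lgb.LGBMRegressor(
    objective='regression',
    learning_rate=0.005,
    n_estimators=1079,       # best iteration count from the final cv run
    num_leaves=80,
    max_depth=7,
    min_child_samples=20,    # alias of min_data_in_leaf
    subsample=1.0,           # alias of bagging_fraction
    colsample_bytree=0.7,    # alias of feature_fraction
)
final_model.fit(df_train, y_train)
# y_pred = final_model.predict(df_test)  # df_test is a hypothetical hold-out set
```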