from http://blog.csdn.net/zc02051126/article/details/46771793
This post walks through the XGBoost Python module, covering:
* Building and installing the Python module
* The data interface
* Setting parameters
* Training a model
* Early stopping
* Prediction
A walkthrough Python example for the UCI Mushroom dataset is provided.
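Before diving in, here is a minimal end-to-end sketch in that spirit. It is illustrative only: the file names agaricus.txt.train and agaricus.txt.test are assumptions (XGBoost ships demo files like these for the mushroom dataset under demo/data).

import xgboost as xgb

# Assumed demo files in libsvm format (see demo/data in the XGBoost repo)
dtrain = xgb.DMatrix('agaricus.txt.train')
dtest = xgb.DMatrix('agaricus.txt.test')

param = {'max_depth': 2, 'eta': 1, 'objective': 'binary:logistic'}
bst = xgb.train(param, dtrain, num_boost_round=10)
preds = bst.predict(dtest)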
First build the C++ version of XGBoost, then change into the wrappers folder under the source root and run the following script to install the Python module:
python setup.py install
Once installation completes, import the XGBoost Python module as follows:
import xgboost as xgb
XGBoost can load text data in libsvm format, as well as NumPy 2-D arrays and XGBoost binary cache files. Loaded data is stored in a DMatrix object.
dtrain = xgb.DMatrix('train.svm.txt')
dtest = xgb.DMatrix('test.svm.buffer')
To build a DMatrix object from a NumPy 2-D array:

import numpy as np

data = np.random.rand(5, 10)  # 5 entities, each contains 10 features
label = np.random.randint(2, size=5)  # binary target
dtrain = xgb.DMatrix(data, label=label)
To convert scipy.sparse data to the DMatrix format:

import scipy.sparse

csr = scipy.sparse.csr_matrix((dat, (row, col)))
dtrain = xgb.DMatrix(csr)
Saving a DMatrix in XGBoost's binary format speeds up loading the next time:

dtrain = xgb.DMatrix('train.svm.txt')
dtrain.save_binary('train.buffer')
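The saved buffer can then be reloaded the same way as any other supported format (dtrain2 below is just an illustrative name):

dtrain2 = xgb.DMatrix('train.buffer')  # loads the binary cache written above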
Missing values in a DMatrix can be marked with a sentinel value:

dtrain = xgb.DMatrix(data, label=label, missing=-999.0)
Per-instance weights can be set in the same way:

w = np.random.rand(5, 1)
dtrain = xgb.DMatrix(data, label=label, missing=-999.0, weight=w)
XGBoost stores parameters as key-value pairs, e.g.:
* Booster (base learner) parameters
param = {'bst:max_depth': 2, 'bst:eta': 1, 'silent': 1, 'objective': 'binary:logistic'}
param['nthread'] = 4
plst = list(param.items())
plst += [('eval_metric', 'auc')]  # multiple eval metrics can be handled this way
plst += [('eval_metric', 'ams@0')]
* Evaluation data, used to watch performance during training

evallist = [(dtest, 'eval'), (dtrain, 'train')]
With a parameter list and data ready, you can train the model.
* Training
num_round = 10
bst = xgb.train(plst, dtrain, num_round, evallist)
After training, the model can be saved:

bst.save_model('0001.model')
The model and its feature map can also be dumped to a text file:

bst.dump_model('dump.raw.txt')  # dump model
bst.dump_model('dump.raw.txt', 'featmap.txt')  # dump model with feature map
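For reference, a feature map file is plain text with one feature per line in the form feature_index feature_name feature_type, where the type is i (indicator/binary), q (quantitative), or int (integer). The feature names below are hypothetical:

0 cap-shape=bell i
1 odor=none i
2 stalk-length q
3 ring-number int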
A saved model can be loaded back later:

bst = xgb.Booster({'nthread': 4})  # init model
bst.load_model('model.bin')  # load model
* Early stopping

If you have validation data, you can use early stopping to find the optimal number of boosting rounds. Early stopping requires at least one dataset in the evals parameter. If there is more than one, the last one is used.
train(..., evals=evals, early_stopping_rounds=10)
The model will train until the validation score stops improving. Validation error needs to decrease at least once every early_stopping_rounds rounds to continue training.
If early stopping occurs, the model will have two additional fields: bst.best_score and bst.best_iteration. Note that train() will return the model from the last iteration, not the best one.
This works with both metrics to minimize (RMSE, log loss, etc.) and to maximize (MAP, NDCG, AUC).
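Putting this together, a minimal early-stopping sketch (parameter values are illustrative; dtrain and dtest are the DMatrix objects built earlier):

param = {'max_depth': 2, 'eta': 0.3, 'objective': 'binary:logistic', 'eval_metric': 'auc'}
evallist = [(dtest, 'eval'), (dtrain, 'train')]
bst = xgb.train(param, dtrain, 1000, evals=evallist, early_stopping_rounds=10)
print(bst.best_score, bst.best_iteration)  # best validation score and its round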
* Prediction
After training or loading a model and preparing the data, you can run prediction.
data = np.random.rand(7, 10)  # 7 entities, each contains 10 features
dtest = xgb.DMatrix(data, missing=-999.0)
ypred = bst.predict(dtest)
If early stopping is enabled during training, you can predict with the best iteration.
ypred = bst.predict(dtest, ntree_limit=bst.best_iteration)