Reference: http://cloga.info/python/2014/02/07/classify_use_Sklearn/html
Here I use pandas to load the dataset. The data is Kaggle's Titanic dataset; download train.csv.

```python
import pandas as pd

df = pd.read_csv('train.csv')
df = df.fillna(0)  # replace all missing values with 0
df.head()
```
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22 | 1 | 0 | A/5 21171 | 7.2500 | 0 | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26 | 0 | 0 | STON/O2. 3101282 | 7.9250 | 0 | S |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35 | 1 | 0 | 113803 | 53.1000 | C123 | S |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35 | 0 | 0 | 373450 | 8.0500 | 0 | S |

5 rows × 12 columns
```python
len(df)
# 891
```
As you can see, the training set has 891 records and 12 columns (one of which, Survived, is the classification target). Split the data into a feature DataFrame and a target array.

```python
exc_cols = ['PassengerId', 'Survived', 'Name']
cols = [c for c in df.columns if c not in exc_cols]
x = df.loc[:, cols]   # .ix was removed from pandas; use .loc
y = df['Survived'].values
For efficiency, scikit-learn works on numeric (floating-point) feature arrays, so categorical features have to be encoded as vectors. scikit-learn provides the DictVectorizer class for this; it accepts records in the form of a list of dicts, so we first convert the DataFrame with pandas' to_dict method.

```python
from sklearn.feature_extraction import DictVectorizer

v = DictVectorizer()
# orient='records' gives one dict per row (older pandas called this keyword outtype)
x = v.fit_transform(x.to_dict(orient='records')).toarray()
```
Let's compare one record's original information with its vectorized form.

```python
print('Vectorized:', x[10])
# inverse_transform expects a 2-D array in modern scikit-learn
print('Unvectorized:', v.inverse_transform(x[10:11]))
```

```
Vectorized: [ 4.  0.  0. ...,  0.  0.  0.]
Unvectorized: [{'Fare': 16.699999999999999, 'Name=Sandstrom, Miss. Marguerite Rut': 1.0, 'Embarked=S': 1.0, 'Age': 4.0, 'Sex=female': 1.0, 'Parch': 1.0, 'Pclass': 3.0, 'Ticket=PP 9549': 1.0, 'Cabin=G6': 1.0, 'SibSp': 1.0, 'PassengerId': 11.0}]
```
If the class labels were strings as well, they would additionally need to be converted with LabelEncoder.
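A minimal sketch of that conversion; the string labels below are invented for illustration (the Titanic target is already 0/1):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
labels = ['died', 'survived', 'survived', 'died']  # hypothetical string labels
y_encoded = le.fit_transform(labels)               # maps each class to an integer code
print(y_encoded)                                   # e.g. [0 1 1 0]
print(le.inverse_transform(y_encoded))             # recovers the original strings
```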
Split the data into a training set and a test set.

```python
# train_test_split moved from sklearn.cross_validation to sklearn.model_selection
from sklearn.model_selection import train_test_split

data_train, data_test, target_train, target_test = train_test_split(x, y)
len(data_train)  # 668
len(data_test)   # 223
```
By default, 25% of the data is held out as the test set. At this point, the data for training and testing is ready.
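The split ratio and shuffling can also be set explicitly; a small sketch on toy data (the arrays and seed here are made up for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)   # 10 toy samples, 2 features each
y = np.array([0, 1] * 5)           # toy binary labels

# test_size=0.25 reproduces the default 75/25 split; random_state fixes the shuffle
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)
print(len(X_tr), len(X_te))  # 7 3
```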
Every scikit-learn estimator follows the same pattern (pseudocode):

```python
model = EstimatorObject()
model.fit(dataset.data, dataset.target)  # dataset.data: feature matrix, dataset.target: labels
model.predict(dataset.data)
```
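Concretely, the same pattern with a real estimator; the iris dataset is used here only as a stand-in for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

dataset = load_iris()
model = GaussianNB()
model.fit(dataset.data, dataset.target)  # learn from features and labels
pred = model.predict(dataset.data)       # predict labels for a feature matrix
print(pred[:5])
```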
Here I compare naive Bayes, decision trees, random forests, and SVMs.

```python
import datetime

from sklearn import model_selection  # replaces the removed cross_validation module
from sklearn.naive_bayes import GaussianNB
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm

estimators = {}
estimators['bayes'] = GaussianNB()
estimators['tree'] = tree.DecisionTreeClassifier()
estimators['forest_100'] = RandomForestClassifier(n_estimators=100)
estimators['forest_10'] = RandomForestClassifier(n_estimators=10)
estimators['svm_c_rbf'] = svm.SVC()
estimators['svm_c_linear'] = svm.SVC(kernel='linear')
estimators['svm_linear'] = svm.LinearSVC()
estimators['svm_nusvc'] = svm.NuSVC()
```
The code above defines the estimator for each model. Next, fit each one and measure its accuracy and running time.

```python
for k in estimators.keys():
    start_time = datetime.datetime.now()
    print('----%s----' % k)
    estimators[k] = estimators[k].fit(data_train, target_train)
    pred = estimators[k].predict(data_test)
    print("%s Score: %0.2f" % (k, estimators[k].score(data_test, target_test)))
    scores = model_selection.cross_val_score(estimators[k], data_test, target_test, cv=5)
    print("%s Cross Avg. Score: %0.2f (+/- %0.2f)" % (k, scores.mean(), scores.std() * 2))
    end_time = datetime.datetime.now()
    time_spend = end_time - start_time
    print("%s Time: %0.2f" % (k, time_spend.total_seconds()))
```
```
----svm_c_rbf----
svm_c_rbf Score: 0.63
svm_c_rbf Cross Avg. Score: 0.54 (+/- 0.18)
svm_c_rbf Time: 1.67
----tree----
tree Score: 0.81
tree Cross Avg. Score: 0.75 (+/- 0.09)
tree Time: 0.90
----forest_10----
forest_10 Score: 0.83
forest_10 Cross Avg. Score: 0.80 (+/- 0.10)
forest_10 Time: 0.56
----forest_100----
forest_100 Score: 0.84
forest_100 Cross Avg. Score: 0.80 (+/- 0.14)
forest_100 Time: 5.38
----svm_linear----
svm_linear Score: 0.74
svm_linear Cross Avg. Score: 0.65 (+/- 0.18)
svm_linear Time: 0.15
----svm_nusvc----
svm_nusvc Score: 0.63
svm_nusvc Cross Avg. Score: 0.55 (+/- 0.21)
svm_nusvc Time: 1.62
----bayes----
bayes Score: 0.44
bayes Cross Avg. Score: 0.47 (+/- 0.07)
bayes Time: 0.16
----svm_c_linear----
svm_c_linear Score: 0.83
svm_c_linear Cross Avg. Score: 0.79 (+/- 0.14)
svm_c_linear Time: 465.57
```
Prediction accuracy is measured here both with each estimator's score method and with cross-validation.
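For classifiers, score is simply the mean accuracy on the given data, while cross_val_score refits the model on several folds; a sketch on perfectly separable toy data (invented for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# toy, perfectly separable data: values < 1.5 are class 0, the rest class 1
X = np.array([[0], [1], [2], [3]] * 5, dtype=float)
y = np.array([0, 0, 1, 1] * 5)

clf = DecisionTreeClassifier().fit(X, y)
acc = clf.score(X, y)                      # mean accuracy on (X, y)
cv_scores = cross_val_score(clf, X, y, cv=5)  # accuracy per held-out fold
print(acc, cv_scores.mean())
```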
Notice that the more accurate algorithms also tend to cost more time; the random forest gives the best accuracy-to-cost trade-off here. Let's run the models on Kaggle's test.csv.

```python
test = pd.read_csv('test.csv')
test = test.fillna(0)
test_d = test.to_dict(orient='records')  # orient, not the old outtype keyword
test_vec = v.transform(test_d).toarray()
```
Note that the test data must go through the same fitted DictVectorizer (transform, not fit_transform), so it is encoded with the same columns as the training data.
```python
for k in estimators.keys():
    estimators[k] = estimators[k].fit(x, y)
    pred = estimators[k].predict(test_vec)
    test['Survived'] = pred
    # to_csv's old cols keyword is now columns; Kaggle expects PassengerId first
    test.to_csv(k + '.csv', columns=['PassengerId', 'Survived'], index=False)
```

That's it; go submit your results to Kaggle!