This is a translated blog post; the original is linked here. It is one of the few introductions to scikit-learn I have seen that is both concise and comprehensive, and it is especially well suited to beginners. I am translating it here; readers comfortable with English can go straight to the original.
Most people who use Python for data science will have heard of scikit-learn, the open-source Python library that implements a wide range of algorithms for machine learning, data preprocessing, cross-validation, and visualization, all behind a very usable interface.
That is why DataCamp (the original site) created this summary for people who have already started learning the Python library but still lack a concise, handy reference. (The original calls it a "cheat sheet"; I translate it as "summary", which feels a bit more positive.) And even if you have no idea how to use scikit-learn at all, this summary will get you acquainted with the basics quickly so you can get started right away.
You will find that scikit-learn is an absolute godsend when you work on machine learning problems.
This scikit-learn summary walks through the basic steps for getting a machine learning algorithm working quickly: loading data, preprocessing it, creating a model and fitting it to the data, validating the model, and tuning parameters to make it better.
In short, this summary uses example code to kick-start your data science project, so that you can immediately create, validate, and tune models. (The original also offers a PDF download with roughly the same content.) Here is a quick end-to-end example first:
>>> from sklearn import neighbors, datasets, preprocessing
>>> from sklearn.model_selection import train_test_split   # sklearn.cross_validation in old scikit-learn versions
>>> from sklearn.metrics import accuracy_score
>>> iris = datasets.load_iris()                             # the classic iris dataset
>>> X, y = iris.data[:, :2], iris.target                    # keep only the first two features
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=33)
>>> scaler = preprocessing.StandardScaler().fit(X_train)
>>> X_train = scaler.transform(X_train)
>>> X_test = scaler.transform(X_test)
>>> knn = neighbors.KNeighborsClassifier(n_neighbors=5)
>>> knn.fit(X_train, y_train)
>>> y_pred = knn.predict(X_test)
>>> accuracy_score(y_test, y_pred)
(Note: don't worry if you cannot follow this yet; it is just a small example, and each step is explained in detail below.)
Your data needs to be numeric and stored as NumPy arrays or SciPy sparse matrices. Other types that can be converted to numeric arrays, such as pandas DataFrames, are also accepted.
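For example, a pandas DataFrame with numeric columns converts directly; a minimal sketch (the DataFrame contents here are made up for illustration):

>>> import pandas as pd
>>> df = pd.DataFrame({'height': [1.62, 1.75, 1.80],
...                    'weight': [55.0, 70.0, 82.5]})   # hypothetical numeric data
>>> X = df.to_numpy()                                   # a plain float64 NumPy array, ready for scikit-learn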
>>> import numpy as np
>>> X = np.random.random((10, 5))                        # 10 samples, 5 features
>>> y = np.array(['M','M','F','F','M','F','M','M','F','F'])   # one label per sample
>>> X[X < 0.7] = 0                                       # zero out the smaller entries
>>> from sklearn.preprocessing import StandardScaler
>>> scaler = StandardScaler().fit(X_train)               # standardization: zero mean, unit variance per feature
>>> standardized_X = scaler.transform(X_train)
>>> standardized_X_test = scaler.transform(X_test)
>>> from sklearn.preprocessing import Normalizer
>>> scaler = Normalizer().fit(X_train)                   # normalization: rescale each sample to unit norm
>>> normalized_X = scaler.transform(X_train)
>>> normalized_X_test = scaler.transform(X_test)
>>> from sklearn.preprocessing import Binarizer
>>> binarizer = Binarizer(threshold=0.0).fit(X)
>>> binary_X = binarizer.transform(X)
>>> from sklearn.preprocessing import LabelEncoder
>>> enc = LabelEncoder()
>>> y = enc.fit_transform(y)                             # string labels 'F'/'M' become integers 0/1
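If you later need the original string labels back, LabelEncoder can invert the mapping:

>>> enc.inverse_transform(y)                             # back to the original 'F'/'M' strings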
>>> from sklearn.impute import SimpleImputer             # sklearn.preprocessing.Imputer in old versions
>>> imp = SimpleImputer(missing_values=0, strategy='mean')   # column-wise mean imputation (the old axis=0)
>>> imp.fit_transform(X_train)
>>> from sklearn.preprocessing import PolynomialFeatures
>>> poly = PolynomialFeatures(5)                         # all polynomial combinations up to degree 5
>>> poly.fit_transform(X)
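To see what this expansion actually produces, here is a small worked sketch at degree 2 (the two-feature input is made up for readability):

>>> import numpy as np
>>> from sklearn.preprocessing import PolynomialFeatures
>>> demo = np.array([[2.0, 3.0]])                        # one sample: a=2, b=3
>>> PolynomialFeatures(degree=2).fit_transform(demo)     # columns are [1, a, b, a**2, a*b, b**2]
array([[1., 2., 3., 4., 6., 9.]])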
>>> from sklearn.model_selection import train_test_split   # sklearn.cross_validation in old versions
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
>>> from sklearn.linear_model import LinearRegression
>>> lr = LinearRegression()    # the original's normalize=True option was removed in recent scikit-learn; standardize features beforehand instead
>>> from sklearn.svm import SVC
>>> svc = SVC(kernel='linear')
>>> from sklearn.naive_bayes import GaussianNB
>>> gnb = GaussianNB()
>>> from sklearn import neighbors
>>> knn = neighbors.KNeighborsClassifier(n_neighbors=5)
>>> from sklearn.decomposition import PCA
>>> pca = PCA(n_components=0.95)                         # keep enough components to explain 95% of the variance
>>> from sklearn.cluster import KMeans
>>> k_means = KMeans(n_clusters=3, random_state=0)
>>> lr.fit(X, y)                                         # supervised: fit on features and labels
>>> knn.fit(X_train, y_train)
>>> svc.fit(X_train, y_train)
>>> k_means.fit(X_train)                                 # unsupervised: fit on features only
>>> pca_model = pca.fit_transform(X_train)               # fit and transform in one step
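To check how much variance the retained components actually explain, PCA exposes an attribute after fitting (a quick sketch, not in the original):

>>> print(pca.explained_variance_ratio_)                 # variance share of each kept component
>>> print(pca.explained_variance_ratio_.sum())           # at least 0.95, given n_components=0.95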
>>> y_pred = svc.predict(np.random.random((2,5)))        # predict labels for new samples
>>> y_pred = lr.predict(X_test)
>>> y_pred = knn.predict_proba(X_test)                   # class probabilities instead of hard labels
>>> y_pred = k_means.predict(X_test)                     # nearest cluster for each sample
>>> knn.score(X_test, y_test)                            # the estimator's built-in accuracy
>>> from sklearn.metrics import accuracy_score
>>> accuracy_score(y_test, y_pred)                       # the same metric, computed from predictions

>>> from sklearn.metrics import classification_report
>>> print(classification_report(y_test, y_pred))         # precision, recall, F1 and support per class

>>> from sklearn.metrics import confusion_matrix
>>> print(confusion_matrix(y_test, y_pred))
>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2]
>>> mean_absolute_error(y_true, y_pred)                  # y_pred must have the same length as y_true

>>> from sklearn.metrics import mean_squared_error
>>> mean_squared_error(y_test, y_pred)

>>> from sklearn.metrics import r2_score
>>> r2_score(y_true, y_pred)
>>> from sklearn.metrics import adjusted_rand_score
>>> adjusted_rand_score(y_true, y_pred)

>>> from sklearn.metrics import homogeneity_score
>>> homogeneity_score(y_true, y_pred)

>>> from sklearn.metrics import v_measure_score
>>> v_measure_score(y_true, y_pred)
>>> from sklearn.model_selection import cross_val_score   # sklearn.cross_validation in old versions
>>> print(cross_val_score(knn, X_train, y_train, cv=4))   # 4-fold cross-validation scores
>>> print(cross_val_score(lr, X, y, cv=2))
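If you need control over how the folds are drawn, cv also accepts a splitter object instead of an integer; a minimal sketch (the KFold settings here are illustrative):

>>> from sklearn.model_selection import KFold
>>> kf = KFold(n_splits=4, shuffle=True, random_state=0)  # shuffled 4-fold splitter
>>> print(cross_val_score(knn, X_train, y_train, cv=kf))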
>>> from sklearn.model_selection import GridSearchCV      # sklearn.grid_search in old versions
>>> params = {"n_neighbors": np.arange(1, 3), "metric": ["euclidean", "cityblock"]}
>>> grid = GridSearchCV(estimator=knn, param_grid=params)  # try every parameter combination
>>> grid.fit(X_train, y_train)
>>> print(grid.best_score_)
>>> print(grid.best_estimator_.n_neighbors)
>>> from sklearn.model_selection import RandomizedSearchCV
>>> params = {"n_neighbors": range(1, 5), "weights": ["uniform", "distance"]}
>>> rsearch = RandomizedSearchCV(estimator=knn, param_distributions=params,
...                              cv=4, n_iter=8, random_state=5)   # sample 8 random combinations
>>> rsearch.fit(X_train, y_train)
>>> print(rsearch.best_score_)
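By default both search classes refit the best parameter combination on the full training set, so the fitted search object can be used directly for prediction; a quick sketch:

>>> best_knn = rsearch.best_estimator_                    # the refitted best model
>>> y_pred = rsearch.predict(X_test)                      # equivalent to best_knn.predict(X_test)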
After working through the examples above, you can learn more from our scikit-learn tutorial for beginners. You can also pick up matplotlib to visualize your data.
And don't miss the follow-up tutorials: the Bokeh cheat sheet, the Pandas cheat sheet, or the Python cheat sheet for data science.