[Feature] Final pipeline: custom transformers

Video: https://www.youtube.com/watch?v=BFaadIqWlAg

Code: https://github.com/jem1031/pandas-pipelines-custom-transformers

 

 

A toddler-level model


1. Model training

After some simple preprocessing, make predictions using just one "attribute" and see how it does.

#%%
import pandas as pd
import numpy as np
import os

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import Pipeline

# SET UP

# Read in data
# source: https://data.seattle.gov/Permitting/Special-Events-Permits/dm95-f8w5
data_folder = '../data/'
data_file = 'Special_Events_Permits_2016.csv'
data_file_path = os.path.join(data_folder, data_file)
print("debug: data_file_path is {}".format(data_file_path))
df = pd.read_csv(data_file_path)

# Set aside 25% as test data
df_train, df_test = train_test_split(df, random_state=4321)

# Take a look
df_train.head()

#%%
# SIMPLE MODEL

# Binarize string feature
y_train = np.where(df_train.permit_status == 'Complete', 1, 0)
y_test  = np.where(df_test.permit_status == 'Complete', 1, 0)

print(y_train[:5])
print(y_test[:5])

# Fill missing values; this single column is the only feature used to train this model
X_train_1 = df_train[['attendance']].fillna(value=0)
X_test_1  = df_test[['attendance']].fillna(value=0)

print(X_train_1[:5])
print(X_test_1[:5])

#%%
# Fit model
model_1 = LogisticRegression(random_state=5678)
model_1.fit(X_train_1, y_train)

 

2. Model evaluation

Evaluation metric: ROC AUC

(1) Get the binarized classification results;

(2) Get the predicted class probabilities.

y_pred_train_1 = model_1.predict(X_train_1)
print("y_pred_train_1 is {}".format(y_pred_train_1))
p_pred_train_1 = model_1.predict_proba(X_train_1)[:, 1]
print("p_pred_train_1 is {}".format(p_pred_train_1))

# Evaluate model
# baseline: always predict the average
p_baseline_test = [y_train.mean()]*len(y_test)
auc_baseline = roc_auc_score(y_test, p_baseline_test)
print(auc_baseline)  # 0.5

#######################################################
y_pred_test_1 = model_1.predict(X_test_1)
print("y_pred_test_1 is {}".format(y_pred_test_1))
p_pred_test_1 = model_1.predict_proba(X_test_1)[:, 1]
print("p_pred_test_1 is {}".format(p_pred_test_1))

# Evaluate model
auc_test_1 = roc_auc_score(y_test, p_pred_test_1)
print(auc_test_1)  # 0.576553672316

 

Ref: Understanding the ROC and AUC evaluation metrics, with a Python implementation

With FPR on the x-axis and TPR on the y-axis, the ROC curve is the line connecting all the (FPR, TPR) points obtained as the classification threshold is varied.

The red line is the ROC of random guessing; the closer the curve is to the top-left corner, the better the classifier.

AUC (Area Under Curve) is simply the area under the ROC curve.

With so many evaluation metrics already available, why use ROC and AUC at all?

Because the ROC curve has a very useful property: when the distribution of positive and negative samples in the test set shifts, the ROC curve stays unchanged.
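
To see where those (FPR, TPR) points come from, here is a small sketch using sklearn.metrics.roc_curve, reusing y_test and p_pred_test_1 from the model above:

import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

# (FPR, TPR) pairs for every threshold on the test-set probabilities
fpr, tpr, thresholds = roc_curve(y_test, p_pred_test_1)

plt.plot(fpr, tpr, label='model_1')
plt.plot([0, 1], [0, 1], 'r--', label='random guess')  # the "red line"
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()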

 

Evaluation metric: R²

The coefficient of determination (R² score) measures how well the model predicts: the proportion of the variance in the true values that the predictions explain.

The closer the predictions are to the true values, the larger R², with a maximum of 1. A model with an R² of 0 does no better than simply predicting with the mean (a mean-only model).

 

Ref: [Machine Learning from Scratch 12] MSE, RMSE, R2_score

Since different datasets have different scales, models are hard to compare through the scale-dependent metrics above (MSE, RMSE, etc.); instead, take a third party as a reference (the mean model), compute the R² value against it, and models become comparable.

R2_score < 0: the numerator exceeds the denominator, i.e. the trained model's error is larger than the error of the mean model, so the model is actually worse than just predicting the mean. When this happens, it is usually because the relationship is not linear but we mistakenly used a linear model, producing large errors.
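
Concretely, R² = 1 − SS_res/SS_tot, where SS_res is the model's squared error and SS_tot is the mean model's. A minimal sketch on toy numbers (the data is illustrative only):

import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

# R^2 = 1 - SS_res / SS_tot
ss_res = ((y_true - y_pred) ** 2).sum()         # model error
ss_tot = ((y_true - y_true.mean()) ** 2).sum()  # mean-model error
print(1 - ss_res / ss_tot)       # 0.9486...
print(r2_score(y_true, y_pred))  # same value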

Evaluation metric: residuals

The larger the variance of the residuals, the less stable the model.

import numpy as np
from sklearn.datasets import load_boston
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as CK
from sklearn.model_selection import cross_val_predict

boston = load_boston()
boston_X = boston.data
boston_y = boston.target
train_set = np.random.choice([True, False], len(boston_y),p=[.75, .25])
# boolean index, handy for picking the needed rows out of the dataset

mixed_kernel = CK(1.0, (1e-4, 1e4)) * RBF(10, (1e-4, 1e4))
gpr = GaussianProcessRegressor(alpha=5, n_restarts_optimizer=20, kernel = mixed_kernel) 
gpr.fit(boston_X[train_set], boston_y[train_set])
test_preds = gpr.predict(boston_X[~train_set])
from matplotlib import pyplot as plt
f, ax = plt.subplots(figsize=(10, 7), nrows=3)
f.tight_layout()


ax[0].plot(range(len(test_preds)), test_preds,           label='Predicted Values')
ax[0].plot(range(len(test_preds)), boston_y[~train_set], label='Actual Values')
ax[0].set_title("Predicted vs Actuals")
# ax[0].legend(loc='best')

# residual plot
residual = test_preds - boston_y[~train_set]

ax[1].plot(range(len(test_preds)), residual)
ax[1].set_title("Plotted Residuals")

ax[2].hist(residual)
ax[2].set_title("Histogram of Residuals")

Result: (figure omitted; three panels: "Predicted vs Actuals", "Plotted Residuals", "Histogram of Residuals")

Improving the model


A first look at the data

1. What to consider when cleaning the data

Ref: [Pandas] 03 - DataFrame

    • Inspect a column, glance at a row [step 1]
    • Visualize a column of data [step 1]
    • Grouped statistics [step 3]
    • Resampling [step 3]

 

Ref: [Feature] Preprocessing tutorial

    • Statistical distribution of each feature [step 1]
    • Missing data [step 2]
    • Linear relationships between features [step 1]

 

2. What if a feature has too many missing values?

Consider dropping the feature; see the counts and the sketch below.

park_cts = df_train.event_location_park.value_counts(dropna=False)
print(park_cts)
# NaN                                    364
# Magnuson Park                            8
# Gas Works Park                           5
# Occidental Park                          3
# Greenlake Park                           2
# Volunteer Park                           2
# Seattle Center                           1
# Seward Park                              1
# Anchor Park                              1
# Madison Park                             1
# OTHER                                    1
# Myrtle Edwards Park                      1
# Martin Luther King Jr Memorial Park      1
# Hamilton Viewpoint Park                  1
# Ballard Commons Park                     1
# Lake Union Park                          1
# Judkins Park                             1
# Bell Street Park                         1
# Comments:
# - about 90% missing values
# - could be new values in test data
# - Note: there are 400+ parks in Seattle
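
Acting on that is a one-liner; a sketch, with the column name taken from the counts above:

# ~90% missing, plus likely unseen park names in the test data: drop the feature
df_train = df_train.drop(columns=['event_location_park'])
df_test  = df_test.drop(columns=['event_location_park'])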

 

3. What if the values are numerous and scattered?

For high-cardinality features like this one, similar values can be grouped into buckets, i.e. resampled; see the sketch after the counts below.

org_cts = df_train.organization.value_counts(dropna=False)
Red Carpet Valet                                             44
Seattle Sounders FC                                          19
Butler Valet                                                 15
Seafair                                                       9
Fuel Sports Eats and Beats                                    6
CBS Seattle                                                   5
Pro-Motion Events, Inc.                                       5
Madison Park Business Association                             4
Rejuvenation                                                  4
Fremont Arts Council                                          4
The U District Partnership                                    4
Seattle Department of Transportation                          4
University of Washington Rowing                               4
Upper Left                                                    3
Seattle Symphony                                              3
Argosy Cruises                                                3
The Corson Building                                           3
Waterways Cruises                                             3
Run for Good Racing Co./5 Focus                               3
Seattle Symphony/Benaroya Hall                                3
West Seattle Junction Association                             3
University of Washington Husky Marching Band                  3
Pro-Motion Events, Inc                                        2
Northwest Yacht Brokers Association                           2
Seattle Yacht Club                                            2
Café Campagne                                                 2
HONK! Fest West                                               2
Umoja Fest                                                    2
Ethiopians in Seattle                                         2
Emerald City Pet Rescue                                       2
                                                             ..
Fizz Events, LLC                                              1
Wing Luke Museum of the Asian Pacific American Experience     1
Independent Event Solutions                                   1
Vulcan Inc.                                                   1
City of Seattle/Animal Shelter                                1
GO LONG SR520 Floating Bridge Run                             1
The Queen AnneCamber of Commerce                              1
Greenwood Knights                                             1
Alki Art Fair                                                 1
Fizz Events LLC                                               1
Sea Deli, Inc                                                 1
Rotary Foundation of West Seattle                             1
Seattle Buddhist Church                                       1
TUNE                                                          1
AMERICAN CANCER SOCIETY, INC.                                 1
CWD Group, Inc.                                               1
Beacon Arts                                                   1
Southwest Seattle Historical Society                          1
Northwest Museum of Legends and Lore                          1
magnolia chamber of commerce                                  1
Ram Racing                                                    1
Seattle Events A Non-Profit Corporation                       1
Sound Transit                                                 1
Piranha Blonde Interactive                                    1
City of Seattle Parks and Recreation Department               1
El Centro de La Raza                                          1
Northwest Hope and Healing Foundation                         1
Orswell Events                                                1
Lifelong                                                      1
NaN                                                           1
Name: organization, Length: 245, dtype: int64
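
One possible sketch of the grouping idea (the threshold of 2 is an arbitrary choice, not from the original talk): lump every organization that appears only once in the training data into an 'Other' bucket before encoding.

# group rare, high-cardinality values into one bucket
org_cts = df_train.organization.value_counts()
common_orgs = org_cts[org_cts >= 2].index
df_train['organization_grouped'] = np.where(
    df_train.organization.isin(common_orgs),
    df_train.organization, 'Other')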

 

4. What if there are too many outliers?

The "Theil–Sen estimator" is one strategy, though that falls under the choice of ML estimator.

For details, see: [AI] Deep Math - Bayes
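
A minimal sketch of why it helps, using scikit-learn's TheilSenRegressor on toy data (illustrative only): the estimator aggregates fits over subsets via medians, so a modest fraction of outliers barely moves the slope.

import numpy as np
from sklearn.linear_model import LinearRegression, TheilSenRegressor

rng = np.random.RandomState(42)
X = rng.uniform(0, 10, size=(100, 1))
y = 2 * X.ravel() + rng.normal(size=100)
y[X.ravel() > 9] += 50  # inject outliers at large x

print(LinearRegression().fit(X, y).coef_)                  # slope pulled away from 2
print(TheilSenRegressor(random_state=42).fit(X, y).coef_)  # stays close to 2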

 

 

Cleaning the data

1. Standardize feature names

# Switch column names to lower_case_with_underscores
import re

def standardize_name(cname):
    cname = re.sub(r'[-\.]', ' ', cname)
    cname = cname.strip().lower()
    cname = re.sub(r'\s+', '_', cname)
    return cname

print(df_raw.columns)
df_raw.columns = df_raw.columns.map(standardize_name)
print(df_raw.columns)
Index(['Application Date', 'Permit Status', 'Permit Type', 'Event Category',
       'Event Sub-Category', 'Name of Event', 'Year-Month-App.',
       'Event Start Date', 'Event End Date', 'Event Location - Park',
       'Event Location - Neighborhood', 'Council District', 'Precinct',
       'Organization', 'Attendance'],
      dtype='object')
Index(['application_date', 'permit_status', 'permit_type', 'event_category',
       'event_sub_category', 'name_of_event', 'year_month_app',
       'event_start_date', 'event_end_date', 'event_location_park',
       'event_location_neighborhood', 'council_district', 'precinct',
       'organization', 'attendance'],
      dtype='object')

 

2. Split the data

Splitting by time is a fairly common approach.

# Filter to 2016 events
df_raw['event_start_date1'] = pd.to_datetime(df_raw.event_start_date)
df = df_raw[np.logical_and(df_raw.event_start_date1 >= '2016-01-01',
                           df_raw.event_start_date1 <= '2016-12-31')]
df = df.drop('event_start_date1', axis=1)

# Export data
data_file = 'Special_Events_Permits_2016.csv'
df.to_csv(data_folder + data_file, index=False)

 

 

Feature selection

As a hands-on exercise in feature selection, you can add some random features as noise and check whether your selection method filters them out; a sketch follows.
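
A minimal sketch of the exercise (it assumes the X_train_2 and y_train produced in the pipeline section below; the RandomForest ranking is just one convenient choice): any real feature whose importance falls below the pure-noise column's is suspect.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# append a pure-noise column; useful features should outrank it
rng = np.random.RandomState(0)
X_noisy = np.column_stack([X_train_2, rng.normal(size=len(X_train_2))])

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_noisy, y_train)
print(rf.feature_importances_[-1])   # importance of the noise column
print(rf.feature_importances_[:-1])  # real features, compared against the noise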

 

 

The workflow model

1. Organizing transforms with FeatureUnion

>>> from sklearn.pipeline import FeatureUnion
>>> from sklearn.preprocessing import Imputer  # SimpleImputer (sklearn.impute) in newer versions
>>> feature_union = FeatureUnion([
...     ('fill_avg',  Imputer(strategy='mean')),
...     ('fill_mid',  Imputer(strategy='median')),
...     ('fill_freq', Imputer(strategy='most_frequent'))
... ])

>>> X_train = feature_union.fit_transform(X_train_raw)
>>> X_test  = feature_union.transform(X_test_raw)

 

2. Building custom transformers

A table holds many features; "qualitative" (categorical) and "quantitative" (numeric) features can be handled separately and in parallel, along the following lines.

# Preprocessing with a Pipeline
pipeline = Pipeline([
    ('features', DFFeatureUnion([
        ('categoricals', Pipeline([
            ('extract', ColumnExtractor(CAT_FEATS)),
            ('dummy',   DummyTransformer())
        ])),
        ('numerics', Pipeline([
            ('extract',   ColumnExtractor(NUM_FEATS)),
            ('zero_fill', ZeroFillTransformer()),
            ('log',       Log1pTransformer())
        ]))
    ])),
    ('scale', DFStandardScaler())
])

The fixed recipe: inherit from TransformerMixin, then implement the fit and transform methods.

from sklearn.base import TransformerMixin
from sklearn.feature_extraction import DictVectorizer

class DummyTransformer(TransformerMixin):

    def __init__(self):
        self.dv = None

    def fit(self, X, y=None):
        # assumes all columns of X are strings
        Xdict = X.to_dict('records')
        self.dv = DictVectorizer(sparse=False)
        self.dv.fit(Xdict)
        return self

    def transform(self, X):
        # assumes X is a DataFrame
        Xdict = X.to_dict('records')
        Xt   = self.dv.transform(Xdict)
        cols = self.dv.get_feature_names()
        Xdum = pd.DataFrame(Xt, index=X.index, columns=cols)
        # drop column indicating NaNs
        nan_cols = [c for c in cols if '=' not in c]
        Xdum = Xdum.drop(nan_cols, axis=1)
        return Xdum
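
The other transformers named in the pipeline follow the same recipe. Minimal sketches of ColumnExtractor, ZeroFillTransformer, and Log1pTransformer, written to be consistent with how they are used above (the full versions are in the linked repo):

class ColumnExtractor(TransformerMixin):
    # select a fixed subset of DataFrame columns

    def __init__(self, cols):
        self.cols = cols

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X[self.cols]

class ZeroFillTransformer(TransformerMixin):
    # replace NaNs with 0

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X.fillna(value=0)

class Log1pTransformer(TransformerMixin):
    # log(1 + x), safe for zero-valued counts

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return np.log1p(X)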

 

Key points

處理 "定性特徵" 的套路。

Ref: pandas.DataFrame.to_dict() explained in detail

Ref: Feature extraction with DictVectorizer
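
The pattern in a standalone nutshell (toy data for illustration): to_dict('records') turns each row into a dict, and DictVectorizer one-hot encodes the string values.

import pandas as pd
from sklearn.feature_extraction import DictVectorizer

X = pd.DataFrame({'color': ['red', 'blue', 'red']})
records = X.to_dict('records')  # [{'color': 'red'}, {'color': 'blue'}, {'color': 'red'}]

dv = DictVectorizer(sparse=False)
print(dv.fit_transform(records))  # [[0. 1.] [1. 0.] [0. 1.]]
print(dv.get_feature_names())     # ['color=blue', 'color=red']; get_feature_names_out() in newer sklearn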

 

3. Feature union

sklearn's FeatureUnion assumes numpy arrays by default, but everything here is a DataFrame, so a small custom version is all that's needed.

from functools import reduce

class DFFeatureUnion(TransformerMixin):
    # FeatureUnion but for pandas DataFrames

    def __init__(self, transformer_list):
        self.transformer_list = transformer_list

    def fit(self, X, y=None):
        # fit each transformer; the results themselves are not needed here
        for (_, t) in self.transformer_list:
            t.fit(X, y)
        return self

    def transform(self, X):
        # here the results are needed: they get merged in a reduce step
        Xts = [t.transform(X) for _, t in self.transformer_list]
        Xunion = reduce(lambda X1, X2: pd.merge(X1, X2, left_index=True, right_index=True), Xts)
        return Xunion

 

4. Train and test the model

As shown below, the test result improves noticeably (AUC 0.577 → 0.705).

pipeline.fit(df_train)
X_train_2 = pipeline.transform(df_train)
X_test_2  = pipeline.transform(df_test)

# Fit model
model_2 = LogisticRegression(random_state=5678)
model_2.fit(X_train_2, y_train)
y_pred_train_2 = model_2.predict(X_train_2)
p_pred_train_2 = model_2.predict_proba(X_train_2)[:, 1]

# Evaluate model
p_pred_test_2 = model_2.predict_proba(X_test_2)[:, 1]
auc_test_2 = roc_auc_score(y_test, p_pred_test_2)
print(auc_test_2)  # 0.70508474576

 

 

Overfitting

Still more features lead to overfitting: as shown below, performance actually drops.

# Preprocessing with a Pipeline
pipeline3 = Pipeline([
    ('features', DFFeatureUnion([
        ('dates', Pipeline([
            ('extract',  ColumnExtractor(DATE_FEATS)),  # bring in date-related features
            ('to_date',  DateFormatter()),
            ('diffs',    DateDiffer()),
            ('mid_fill', DFImputer(strategy='median'))
        ])),
        ('categoricals', Pipeline([
            ('extract',  ColumnExtractor(CAT_FEATS)),
            ('dummy',    DummyTransformer())
        ])),
        ('multi_labels', Pipeline([
            ('extract',     ColumnExtractor(MULTI_FEATS)),
            ('multi_dummy', MultiEncoder(sep=';'))
        ])),
        ('numerics', Pipeline([
            ('extract',   ColumnExtractor(NUM_FEATS)),
            ('zero_fill', ZeroFillTransformer()),
            ('log',       Log1pTransformer())
        ]))
    ])),
    ('scale', DFStandardScaler())
])
pipeline3.fit(df_train)
X_train_3 = pipeline3.transform(df_train)
X_test_3  = pipeline3.transform(df_test)

# Fit model
model_3 = LogisticRegression(random_state=5678)
model_3.fit(X_train_3, y_train)
y_pred_train_3 = model_3.predict(X_train_3)
p_pred_train_3 = model_3.predict_proba(X_train_3)[:, 1]

# Evaluate model
p_pred_test_3 = model_3.predict_proba(X_test_3)[:, 1]
auc_test_3 = roc_auc_score(y_test, p_pred_test_3)
print(auc_test_3)  # 0.680790960452
# too many features -> starting to overfit

 

End.
