Getting Started with Data Mining: Used-Car Price Prediction as an Example
Author: 張傑
Steps of Data Mining
- Data Analysis
- Feature Engineering
- Feature Selection
- Model Building
- Model Deployment
1. Data Analysis
For the data analysis step, the following points need to be explored (a minimal pandas sketch follows the list):
1) Missing values
2) All the numerical variables
3) Distribution of the numerical variables
4) Categorical variables
5) Cardinality of the categorical variables
6) Outliers
7) Relationship between the independent features and the dependent feature (the sale price)
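As a minimal sketch of these checks, assuming a combined DataFrame df with a numeric target column price (the helper name quick_eda is illustrative, not part of the competition code):

import pandas as pd

def quick_eda(df, target='price'):
    # 1) missing values per column
    print(df.isnull().sum().sort_values(ascending=False))
    # 2)-3) numerical variables and their distributions
    num_cols = df.select_dtypes(include='number').columns
    print(df[num_cols].describe())
    # 4)-5) categorical variables and their cardinality
    cat_cols = df.select_dtypes(exclude='number').columns
    print(df[cat_cols].nunique())
    # 6) a simple outlier check via the IQR rule
    q1, q3 = df[num_cols].quantile(0.25), df[num_cols].quantile(0.75)
    outliers = ((df[num_cols] < q1 - 1.5 * (q3 - q1)) | (df[num_cols] > q3 + 1.5 * (q3 - q1))).sum()
    print(outliers)
    # 7) correlation of the numerical features with the target
    if target in df.columns:
        print(df[num_cols].corr()[target].sort_values(ascending=False))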
2. Feature Engineering
Data and features determine the upper bound of machine learning; models and algorithms merely approach that bound. So what exactly is feature engineering? As the name suggests, it is essentially an engineering activity whose goal is to extract as much useful information as possible from the raw data, in the form of features, for algorithms and models to use.
Feature engineering typically has to deal with the following issues:
- Features are not on the same scale: their units and ranges differ, so they cannot be compared directly;
- Qualitative (categorical) features cannot be used directly: some algorithms and models only accept quantitative inputs, so qualitative features must be converted into quantitative ones. The simplest way is to assign a number to each category, but this is too arbitrary and adds tuning work. The usual approach is dummy (one-hot) encoding: if a feature has N possible categories and the original value is the i-th category, the i-th expanded feature is set to 1 and all the others to 0. Compared with assigning numbers directly, dummy encoding needs no extra tuning, and for linear models it can produce non-linear effects;
- Missing values: they need to be imputed;
- Low information utilisation: different algorithms and models exploit the information in the data differently. As mentioned above, dummy-encoding qualitative features lets a linear model capture non-linearity; similarly, adding polynomial terms of quantitative variables, or applying other transformations, can achieve non-linear effects.
In particular, key issues such as missing-value handling, outlier handling, data normalisation and data encoding deserve special attention; a small sketch of these basic transformations is given below.
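As a toy illustration (not the competition pipeline; the columns power and fuelType are made up for the example), the sketch below shows imputation, scaling and dummy encoding with pandas and scikit-learn:

import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({'power': [75, 150, None, 60],
                   'fuelType': ['gas', 'diesel', 'gas', 'electric']})

# missing values: impute a numeric column with its median
df['power'] = df['power'].fillna(df['power'].median())

# scaling: bring numeric features onto a comparable scale
df['power_scaled'] = StandardScaler().fit_transform(df[['power']])

# dummy (one-hot) encoding: one binary column per category
df = pd.concat([df, pd.get_dummies(df['fuelType'], prefix='fuelType')], axis=1)
print(df)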
3. Feature Selection
Feature selection means keeping the feature variables that actually improve the model's performance. Machine learning and statistical methods can be used to pick the most relevant features; a minimal sketch of two common approaches follows.
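A minimal sketch, using a univariate statistical filter and a model-based importance ranking on synthetic data (this is an illustration with scikit-learn, not part of the original notebook):

from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=0)

# filter method: keep the k features with the strongest univariate F-score
selector = SelectKBest(f_regression, k=5).fit(X, y)
print(selector.get_support(indices=True))

# embedded method: rank features by tree-based importance
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(rf.feature_importances_.argsort()[::-1][:5])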
4. Model Building
For the modelling part, either classical machine learning models or deep learning models can be chosen; in particular, model ensembling often gives surprisingly good results, as in the simple averaging sketch below.
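As a hedged illustration of the simplest form of ensembling, a weighted average of the predictions of two regressors on synthetic data (the models and equal weights are arbitrary choices, not the competition setup):

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error

X, y = make_regression(n_samples=500, n_features=20, noise=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

gbr = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
rf = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# simple weighted average of the two model predictions
blend = 0.5 * gbr.predict(X_val) + 0.5 * rf.predict(X_val)
for name, pred in [('gbr', gbr.predict(X_val)), ('rf', rf.predict(X_val)), ('blend', blend)]:
    print(name, mean_absolute_error(y_val, pred))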
Competition Analysis
Competition Data
The task is to predict the transaction price of used cars. The data come from the used-car trading records of a trading platform; the full dataset contains more than 400,000 records with 31 columns of variables, 15 of which are anonymised. To keep the competition fair, 150,000 records are sampled as the training set and 50,000 as the test set, and fields such as name, model, brand and regionCode are desensitised. Data link: [https://tianchi.aliyun.com/competition/entrance/231784/introduction]
Evaluation Metric
The evaluation metric is MAE (Mean Absolute Error), i.e. the average of the absolute differences between the predicted and true prices.
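For completeness, MAE can be computed with scikit-learn; the numbers below are toy values, not competition data:

from sklearn.metrics import mean_absolute_error

y_true = [3000, 15000, 8200]
y_pred = [2800, 15600, 8000]
# MAE = (|3000-2800| + |15000-15600| + |8200-8000|) / 3 = 1000 / 3
print(mean_absolute_error(y_true, y_pred))  # 333.33...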
Importing the Basic Modules
# basic tools
import numpy as np
import pandas as pd
import warnings
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.special import jn
from IPython.display import display, clear_output
import time
from tqdm import tqdm
import itertools
warnings.filterwarnings('ignore')
%matplotlib inline

## model prediction
from sklearn import linear_model
from sklearn import preprocessing
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

## dimensionality reduction
from sklearn.decomposition import PCA, FastICA, FactorAnalysis, SparsePCA

## parameter search and evaluation
from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold, train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error
import scipy.signal as signal
Data Analysis and Feature Engineering
def reduce_mem_usage(df):
    """ iterate through all the columns of a dataframe and modify the data type
        to reduce memory usage.
    """
    start_mem = df.memory_usage().sum() / 1024**2  # in MB
    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))

    for col in df.columns:
        col_type = df[col].dtype

        if col_type != object:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                # downcast integers to the smallest type that holds the value range
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)
            else:
                # downcast floats in the same way
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
        else:
            df[col] = df[col].astype('category')

    end_mem = df.memory_usage().sum() / 1024**2
    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    return df
Train_data = reduce_mem_usage(pd.read_csv('used_car_train_20200313.csv', sep=' '))
Test_data = reduce_mem_usage(pd.read_csv('used_car_testB_20200421.csv', sep=' '))
## print the shape of the data
print('Train data shape:', Train_data.shape)
print('TestB data shape:', Test_data.shape)
# concatenate the training and test sets
concat_data = pd.concat([Train_data, Test_data])
concat_data.isnull().sum()
Here we find that bodyType, fuelType and gearbox have many missing values, model is missing in one row, and price is the target, so it needs no extra handling.
Analysing the anonymous v-series features and the non-anonymous features
The anonymous variables contain only numerical information and deserve extra attention, so the variables are split into anonymous and non-anonymous groups and analysed separately.
concat_data.columns
First extract the non-anonymous variables for analysis and randomly sample 10 rows.
concat_data[['bodyType', 'brand', 'creatDate', 'fuelType', 'gearbox', 'kilometer', 'model', 'name', 'notRepairedDamage', 'offerType', 'power', 'regDate', 'regionCode', 'seller']].sample(10)
concat_data[['bodyType', 'brand', 'creatDate', 'fuelType', 'gearbox', 'kilometer', 'model', 'name', 'notRepairedDamage', 'offerType', 'power', 'regDate', 'regionCode', 'seller']].describe()
Here we find that the notRepairedDamage column contains the abnormal value "-", which is replaced with the mode.
concat_data['notRepairedDamage'].value_counts()
concat_data['notRepairedDamage'] = concat_data['notRepairedDamage'].replace('-',0).astype('float16')
Next, continue with the anonymous variables.
concat_data[['v_0', 'v_1', 'v_2', 'v_3', 'v_4', 'v_5', 'v_6', 'v_7', 'v_8', 'v_9', 'v_10', 'v_11', 'v_12', 'v_13', 'v_14']].sample(10)
concat_data[['v_0', 'v_1', 'v_2', 'v_3', 'v_4', 'v_5', 'v_6', 'v_7', 'v_8', 'v_9', 'v_10', 'v_11', 'v_12', 'v_13', 'v_14']].describe()
For the missing values, simply fill them with the mode for now. After filling, the data no longer contain missing values.
concat_data = concat_data.fillna(concat_data.mode().iloc[0, :])
print('concat_data shape:', concat_data.shape)
concat_data.isnull().sum()
One-hot encoding of the discrete variables
For a feature with m possible values, one-hot encoding turns it into m binary features (for example, a grade feature with the values good/medium/poor becomes 100, 010, 001). These binary features are mutually exclusive and only one is active at a time, so the data become sparse. The main benefits are:
- it avoids the problem that many classifiers cannot handle categorical attributes directly;
- to some extent it also expands the feature space.
Reference: [https://www.cnblogs.com/zongfa/p/9305657.html]
The distribution of the values can be visualised with df.value_counts().plot.bar; here a small plt.bar helper is used.
def plot_discrete_bar(data):
    cnt = data.value_counts()
    p1 = plt.bar(cnt.index, height=list(cnt), width=0.8)
    for x, y in zip(cnt.index, list(cnt)):
        plt.text(x + 0.05, y + 0.05, '%.2f' % y, ha='center', va='bottom')
clo_list = ['bodyType', 'fuelType', 'gearbox', 'notRepairedDamage']
i = 1
fig = plt.figure(figsize=(8, 8))
for col in clo_list:
    plt.subplot(2, 2, i)
    plot_discrete_bar(concat_data[col])
    i = i + 1
One-hot encoding is applied to the features with few categories; after encoding, the number of features grows from 31 to 50.
one_hot_list = ['gearbox', 'notRepairedDamage', 'bodyType', 'fuelType']
for col in one_hot_list:
    one_hot = pd.get_dummies(concat_data[col])
    one_hot.columns = [col + '_' + str(i) for i in range(len(one_hot.columns))]
    concat_data = pd.concat([concat_data, one_hot], axis=1)
Here we find that although seller and offerType should be binary, their distributions are almost entirely concentrated on a single value, so they can simply be dropped.
concat_data['seller'].value_counts()
concat_data['offerType'].value_counts()
concat_data.drop(['offerType','seller'],axis=1,inplace=True)
對於匿名變量來講,但願能更多的使用到這裏的數值信息,經過選取若干個非匿名變量和匿名變量進行加法和乘法的數值操做,擴展數據的特徵
# pairwise sums of the anonymous features
for i in ['v_' + str(t) for t in range(14)]:
    for j in ['v_' + str(k) for k in range(int(i[2:]) + 1, 15)]:
        concat_data[str(i) + '+' + str(j)] = concat_data[str(i)] + concat_data[str(j)]

# products between selected non-anonymous features and the anonymous features
for i in ['model', 'brand', 'bodyType', 'fuelType', 'gearbox', 'power', 'kilometer', 'notRepairedDamage', 'regionCode']:
    for j in ['v_' + str(k) for k in range(14)]:
        concat_data[str(i) + '*' + str(j)] = concat_data[i] * concat_data[j]

concat_data.shape
Processing the date data
Dates are also important data with concrete real-world meaning. Here we first extract the year, month and day from the dates, and then analyse each date field.
# normalise the date format, e.g. 20160404 becomes 2016-4-4; months run from 1 to 12
# (records with month 00 are mapped to month 1)
def date_proc(x):
    m = int(x[4:6])
    if m == 0:
        m = 1
    return x[:4] + '-' + str(m) + '-' + x[6:]

# extract year / month / day / day-of-week from the date columns
def date_transform(df, fea_col):
    for f in tqdm(fea_col):
        df[f] = pd.to_datetime(df[f].astype('str').apply(date_proc))
        df[f + '_year'] = df[f].dt.year
        df[f + '_month'] = df[f].dt.month
        df[f + '_day'] = df[f].dt.day
        df[f + '_dayofweek'] = df[f].dt.dayofweek
    return df
# extract the date information
date_cols = ['regDate', 'creatDate']
concat_data = date_transform(concat_data, date_cols)
Continue to use the date data to construct further features. Consider var = data['creatDate'] - data['regDate']: var is the number of days between the car's registration date and the listing (creation) date, which indirectly reflects how long the car has been in use; generally, price is inversely related to usage time. Note that some dates are malformed, so errors='coerce' is needed.
data = concat_data.copy()
# number of days the car has been in use
data['used_time1'] = (pd.to_datetime(data['creatDate'], format='%Y%m%d', errors='coerce') -
                      pd.to_datetime(data['regDate'], format='%Y%m%d', errors='coerce')).dt.days
# pd.datetime has been removed from pandas, so pd.Timestamp.now() is used here
data['used_time2'] = (pd.Timestamp.now() -
                      pd.to_datetime(data['regDate'], format='%Y%m%d', errors='coerce')).dt.days
data['used_time3'] = (pd.Timestamp.now() -
                      pd.to_datetime(data['creatDate'], format='%Y%m%d', errors='coerce')).dt.days
# bucketing: cut continuous values into num_bins equal-width intervals
def cut_group(df, cols, num_bins=50):
    for col in cols:
        all_range = int(df[col].max() - df[col].min())
        # bin edges spanning [min, max]
        bins = [df[col].min() + i * all_range / num_bins for i in range(num_bins + 1)]
        df[col + '_bin'] = pd.cut(df[col], bins, labels=False)  # bin with pd.cut
    return df

# bucket the usage-time features
cut_cols = ['used_time1', 'used_time2', 'used_time3']
data = cut_group(data, cut_cols, 50)
# bucket kilometer as well
data = cut_group(data, ['kilometer'], 10)
Process the year and month fields, again using one-hot encoding.
data['creatDate_year'].value_counts()
data['creatDate_month'].value_counts() # data['regDate_year'].value_counts()
# one-hot encode the date features with few categories
one_hot_list = ['creatDate_year', 'creatDate_month', 'regDate_month', 'regDate_year']
for col in one_hot_list:
    one_hot = pd.get_dummies(data[col])
    one_hot.columns = [col + '_' + str(i) for i in range(len(one_hot.columns))]
    data = pd.concat([data, one_hot], axis=1)
# drop SaleID, which carries no predictive information
data.drop(['SaleID'], axis=1, inplace=True)
增長特徵數量
增長特徵的數量能夠從數理統計的角度出發,經過選取一些變量的數理特性來增長數據維度
# count encoding: augment a category with its frequency
def count_coding(df, fea_col):
    for f in fea_col:
        df[f + '_count'] = df[f].map(df[f].value_counts())
    return df
# count-encode the main categorical and date-derived features
count_list = ['model', 'brand', 'regionCode', 'bodyType', 'fuelType', 'name',
              'regDate_year', 'regDate_month', 'regDate_day', 'regDate_dayofweek',
              'creatDate_month', 'creatDate_day', 'creatDate_dayofweek', 'kilometer']
data = count_coding(data, count_list)
Draw a heat map to analyse the correlation between the anonymous variables and price; v_0, v_8 and v_12 show relatively high correlation.
temp = Train_data[['v_0', 'v_1', 'v_2', 'v_3', 'v_4', 'v_5', 'v_6', 'v_7', 'v_8', 'v_9',
                   'v_10', 'v_11', 'v_12', 'v_13', 'v_14', 'price']]
# Zoomed heatmap, correlation matrix
sns.set(rc={'figure.figsize': (8, 6)})
correlation_matrix = temp.corr()
k = 8  # number of variables for heatmap
cols = correlation_matrix.nlargest(k, 'price')['price'].index
cm = np.corrcoef(temp[cols].values.T)
sns.set(font_scale=1.25)
hm = sns.heatmap(cm, cbar=True, annot=True, square=True, fmt='.2f',
                 annot_kws={'size': 10}, yticklabels=cols.values, xticklabels=cols.values)
plt.show()
# cross statistics: describe each categorical feature with statistics of the numerical features
def cross_cat_num(df, num_col, cat_col):
    for f1 in tqdm(cat_col):          # loop over the categorical features
        g = df.groupby(f1, as_index=False)
        for f2 in tqdm(num_col):      # loop over the numerical features
            feat = g.agg(**{
                '{}_{}_max'.format(f1, f2): (f2, 'max'),
                '{}_{}_min'.format(f1, f2): (f2, 'min'),
                '{}_{}_median'.format(f1, f2): (f2, 'median'),
            })
            df = df.merge(feat, on=f1, how='left')
    return df
# describe the categorical features with numerical statistics, using the anonymous
# features most correlated with price
cross_cat = ['model', 'brand', 'regDate_year']
cross_num = ['v_0', 'v_3', 'v_4', 'v_8', 'v_12', 'power']
data = cross_cat_num(data, cross_num, cross_cat)  # first-order crosses
Splitting the dataset
## select the feature columns
numerical_cols = data.columns
feature_cols = [col for col in numerical_cols if col not in ['price']]

## build the training and test samples from the feature columns and the label column
X_data = data.iloc[:len(Train_data), :][feature_cols]
Y_data = Train_data['price']
X_test = data.iloc[len(Train_data):, :][feature_cols]
print("X_data: ", X_data.shape)
print("X_test: ", X_test.shape)
Mean encoding: preprocessing for high-cardinality categorical features
The cardinality of a categorical feature is the number of distinct values it can take. For high-cardinality categorical features, the preprocessing methods above often give unsatisfactory results.
Examples of high-cardinality categorical features: IP addresses, e-mail domains, city names, home addresses, streets, product IDs.
The main reasons:
- LabelEncoder encodes a high-cardinality feature into a single column, but each integer carries a distinct meaning and the result is not linearly separable with respect to y. A simple model easily underfits and cannot fully capture the differences between categories; a complex model easily overfits elsewhere.
- OneHotEncoder on a high-cardinality feature inevitably produces a sparse matrix with tens of thousands of columns, which consumes a lot of memory and training time unless the algorithm is specifically optimised for it (e.g. SVM).
Therefore we can try mean encoding: within a Bayesian framework, the target variable is used to determine, in a supervised way, the most suitable encoding for the categorical feature. This is also a common score-boosting technique in Kaggle competitions. Reference: [https://blog.csdn.net/juzexia/article/details/78581462]
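To make the idea concrete before the full class below, a tiny sketch of the smoothing used in mean encoding: the encoded value blends the global prior mean with the per-category mean, weighted by the category count (the weight function mirrors the exponential form in the class, with illustrative constants k=2, f=1; the toy brand/price data are made up):

import numpy as np
import pandas as pd

df = pd.DataFrame({'brand': ['a', 'a', 'a', 'b', 'c'],
                   'price': [100, 120, 110, 300, 50]})

prior = df['price'].mean()                        # global mean, used as the prior
stats = df.groupby('brand')['price'].agg(['mean', 'size'])
lam = 1 / (1 + np.exp((stats['size'] - 2) / 1))   # more samples -> smaller weight on the prior

# blended encoding: lam * prior + (1 - lam) * per-category mean
stats['brand_pred'] = lam * prior + (1 - lam) * stats['mean']
print(stats)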
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold, KFold
from itertools import product


class MeanEncoder:
    def __init__(self, categorical_features, n_splits=10, target_type='classification', prior_weight_func=None):
        """
        :param categorical_features: list of str, the name of the categorical columns to encode
        :param n_splits: the number of splits used in mean encoding
        :param target_type: str, 'regression' or 'classification'
        :param prior_weight_func: a function that takes in the number of observations, and outputs prior weight
            when a dict is passed, the default exponential decay function will be used:
            k: the number of observations needed for the posterior to be weighted equally as the prior
            f: larger f --> smaller slope
        """
        self.categorical_features = categorical_features
        self.n_splits = n_splits
        self.learned_stats = {}

        if target_type == 'classification':
            self.target_type = target_type
            self.target_values = []
        else:
            self.target_type = 'regression'
            self.target_values = None

        if isinstance(prior_weight_func, dict):
            self.prior_weight_func = eval('lambda x: 1 / (1 + np.exp((x - k) / f))', dict(prior_weight_func, np=np))
        elif callable(prior_weight_func):
            self.prior_weight_func = prior_weight_func
        else:
            self.prior_weight_func = lambda x: 1 / (1 + np.exp((x - 2) / 1))

    @staticmethod
    def mean_encode_subroutine(X_train, y_train, X_test, variable, target, prior_weight_func):
        X_train = X_train[[variable]].copy()
        X_test = X_test[[variable]].copy()

        if target is not None:
            nf_name = '{}_pred_{}'.format(variable, target)
            X_train['pred_temp'] = (y_train == target).astype(int)  # classification
        else:
            nf_name = '{}_pred'.format(variable)
            X_train['pred_temp'] = y_train  # regression
        prior = X_train['pred_temp'].mean()

        # per-category mean and count, blended with the prior
        col_avg_y = X_train.groupby(by=variable, axis=0)['pred_temp'].agg(mean='mean', beta='size')
        col_avg_y['beta'] = prior_weight_func(col_avg_y['beta'])
        col_avg_y[nf_name] = col_avg_y['beta'] * prior + (1 - col_avg_y['beta']) * col_avg_y['mean']
        col_avg_y.drop(['beta', 'mean'], axis=1, inplace=True)

        nf_train = X_train.join(col_avg_y, on=variable)[nf_name].values
        nf_test = X_test.join(col_avg_y, on=variable).fillna(prior, inplace=False)[nf_name].values
        return nf_train, nf_test, prior, col_avg_y

    def fit_transform(self, X, y):
        """
        :param X: pandas DataFrame, n_samples * n_features
        :param y: pandas Series or numpy array, n_samples
        :return X_new: the transformed pandas DataFrame containing mean-encoded categorical features
        """
        X_new = X.copy()
        if self.target_type == 'classification':
            skf = StratifiedKFold(self.n_splits)
        else:
            skf = KFold(self.n_splits)

        if self.target_type == 'classification':
            self.target_values = sorted(set(y))
            self.learned_stats = {'{}_pred_{}'.format(variable, target): [] for variable, target in
                                  product(self.categorical_features, self.target_values)}
            for variable, target in product(self.categorical_features, self.target_values):
                nf_name = '{}_pred_{}'.format(variable, target)
                X_new.loc[:, nf_name] = np.nan
                for large_ind, small_ind in skf.split(y, y):
                    nf_large, nf_small, prior, col_avg_y = MeanEncoder.mean_encode_subroutine(
                        X_new.iloc[large_ind], y.iloc[large_ind], X_new.iloc[small_ind], variable, target,
                        self.prior_weight_func)
                    X_new.iloc[small_ind, -1] = nf_small
                    self.learned_stats[nf_name].append((prior, col_avg_y))
        else:
            self.learned_stats = {'{}_pred'.format(variable): [] for variable in self.categorical_features}
            for variable in self.categorical_features:
                nf_name = '{}_pred'.format(variable)
                X_new.loc[:, nf_name] = np.nan
                for large_ind, small_ind in skf.split(y, y):
                    nf_large, nf_small, prior, col_avg_y = MeanEncoder.mean_encode_subroutine(
                        X_new.iloc[large_ind], y.iloc[large_ind], X_new.iloc[small_ind], variable, None,
                        self.prior_weight_func)
                    X_new.iloc[small_ind, -1] = nf_small
                    self.learned_stats[nf_name].append((prior, col_avg_y))
        return X_new

    def transform(self, X):
        """
        :param X: pandas DataFrame, n_samples * n_features
        :return X_new: the transformed pandas DataFrame containing mean-encoded categorical features
        """
        X_new = X.copy()

        if self.target_type == 'classification':
            for variable, target in product(self.categorical_features, self.target_values):
                nf_name = '{}_pred_{}'.format(variable, target)
                X_new[nf_name] = 0
                for prior, col_avg_y in self.learned_stats[nf_name]:
                    X_new[nf_name] += X_new[[variable]].join(col_avg_y, on=variable).fillna(prior, inplace=False)[nf_name]
                X_new[nf_name] /= self.n_splits
        else:
            for variable in self.categorical_features:
                nf_name = '{}_pred'.format(variable)
                X_new[nf_name] = 0
                for prior, col_avg_y in self.learned_stats[nf_name]:
                    X_new[nf_name] += X_new[[variable]].join(col_avg_y, on=variable).fillna(prior, inplace=False)[nf_name]
                X_new[nf_name] /= self.n_splits
        return X_new
# high-cardinality categorical features: name (listing name), brand (car brand), regionCode (region code)
class_list = ['model', 'brand', 'name', 'regionCode'] + date_cols  # date_cols = ['regDate', 'creatDate']
MeanEnocodeFeature = class_list                                    # features to mean-encode
ME = MeanEncoder(MeanEnocodeFeature, target_type='regression')     # instantiate the mean encoder
X_data = ME.fit_transform(X_data, Y_data)                          # fit and transform on the training X and y
X_test = ME.transform(X_test)                                      # encode the test set
X_data['price'] = Train_data['price']
from sklearn.model_selection import KFold

# target encoding: in a regression setting there are more choices than plain mean encoding,
# e.g. standard-deviation encoding, median encoding, and so on
enc_cols = []
stats_default_dict = {
    'max': X_data['price'].max(),
    'min': X_data['price'].min(),
    'median': X_data['price'].median(),
    'mean': X_data['price'].mean(),
    'sum': X_data['price'].sum(),
    'std': X_data['price'].std(),
    'skew': X_data['price'].skew(),
    'kurt': X_data['price'].kurt(),
    'mad': X_data['price'].mad()
}
### for now, use these three encodings
enc_stats = ['max', 'min', 'mean']
skf = KFold(n_splits=10, shuffle=True, random_state=42)
for f in tqdm(['regionCode', 'brand', 'regDate_year', 'creatDate_year', 'kilometer', 'model']):
    enc_dict = {}
    for stat in enc_stats:
        enc_dict['{}_target_{}'.format(f, stat)] = stat
        X_data['{}_target_{}'.format(f, stat)] = 0
        X_test['{}_target_{}'.format(f, stat)] = 0
        enc_cols.append('{}_target_{}'.format(f, stat))
    for i, (trn_idx, val_idx) in enumerate(skf.split(X_data, Y_data)):
        trn_x, val_x = X_data.iloc[trn_idx].reset_index(drop=True), X_data.iloc[val_idx].reset_index(drop=True)
        enc_df = trn_x.groupby(f, as_index=False)['price'].agg(**enc_dict)
        val_x = val_x[[f]].merge(enc_df, on=f, how='left')
        test_x = X_test[[f]].merge(enc_df, on=f, how='left')
        for stat in enc_stats:
            val_x['{}_target_{}'.format(f, stat)] = val_x['{}_target_{}'.format(f, stat)].fillna(stats_default_dict[stat])
            test_x['{}_target_{}'.format(f, stat)] = test_x['{}_target_{}'.format(f, stat)].fillna(stats_default_dict[stat])
            X_data.loc[val_idx, '{}_target_{}'.format(f, stat)] = val_x['{}_target_{}'.format(f, stat)].values
            X_test['{}_target_{}'.format(f, stat)] += test_x['{}_target_{}'.format(f, stat)].values / skf.n_splits
drop_list = ['regDate', 'creatDate', 'brand_power_min', 'regDate_year_power_min']
x_train = X_data.drop(drop_list + ['price'], axis=1)
x_test = X_test.drop(drop_list, axis=1)
x_train.shape
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
Scale the data with MinMaxScaler, then reduce the dimensionality with PCA.
from sklearn.preprocessing import MinMaxScaler

# feature normalisation
min_max_scaler = MinMaxScaler()
min_max_scaler.fit(pd.concat([x_train, x_test]).values)
all_data = min_max_scaler.transform(pd.concat([x_train, x_test]).values)
print(all_data.shape)
from sklearn import decomposition
pca = decomposition.PCA(n_components=400)
all_pca = pca.fit_transform(all_data)
X_pca = all_pca[:len(x_train)]
test = all_pca[len(x_train):]
y = Train_data['price'].values
print(all_pca.shape)
Model Selection
Here Keras is used to build a basic neural network; the architecture is a fully connected network.
import keras
from keras.layers import Conv1D, Activation, MaxPool1D, Flatten, Dense
from keras.layers import Input, Dense, Concatenate, Reshape, Dropout, Add

def NN_model(input_dim):
    init = keras.initializers.glorot_uniform(seed=1)
    model = keras.models.Sequential()
    model.add(Dense(units=300, input_dim=input_dim, kernel_initializer=init, activation='softplus'))
    # model.add(Dropout(0.2))
    model.add(Dense(units=300, kernel_initializer=init, activation='softplus'))
    # model.add(Dropout(0.2))
    model.add(Dense(units=64, kernel_initializer=init, activation='softplus'))
    model.add(Dense(units=32, kernel_initializer=init, activation='softplus'))
    model.add(Dense(units=8, kernel_initializer=init, activation='softplus'))
    model.add(Dense(units=1))
    return model
from keras.callbacks import Callback, EarlyStopping

class Metric(Callback):
    def __init__(self, model, callbacks, data):
        super().__init__()
        self.model = model
        self.callbacks = callbacks
        self.data = data

    def on_train_begin(self, logs=None):
        for callback in self.callbacks:
            callback.on_train_begin(logs)

    def on_train_end(self, logs=None):
        for callback in self.callbacks:
            callback.on_train_end(logs)

    def on_epoch_end(self, batch, logs=None):
        # MAE on the training fold
        X_train, y_train = self.data[0][0], self.data[0][1]
        y_pred3 = self.model.predict(X_train)
        y_pred = np.zeros((len(y_pred3), ))
        y_true = np.zeros((len(y_pred3), ))
        for i in range(len(y_pred3)):
            y_pred[i] = y_pred3[i]
        for i in range(len(y_pred3)):
            y_true[i] = y_train[i]
        trn_s = mean_absolute_error(y_true, y_pred)
        logs['trn_score'] = trn_s

        # MAE on the validation fold
        X_val, y_val = self.data[1][0], self.data[1][1]
        y_pred3 = self.model.predict(X_val)
        y_pred = np.zeros((len(y_pred3), ))
        y_true = np.zeros((len(y_pred3), ))
        for i in range(len(y_pred3)):
            y_pred[i] = y_pred3[i]
        for i in range(len(y_pred3)):
            y_true[i] = y_val[i]
        val_s = mean_absolute_error(y_true, y_pred)
        logs['val_score'] = val_s
        print('trn_score', trn_s, 'val_score', val_s)

        for callback in self.callbacks:
            callback.on_epoch_end(batch, logs)
import keras.backend as K
from keras.callbacks import LearningRateScheduler

def scheduler(epoch):
    # every 20 epochs, halve the learning rate
    if epoch % 20 == 0 and epoch != 0:
        lr = K.get_value(model.optimizer.lr)
        K.set_value(model.optimizer.lr, lr * 0.5)
        print("lr changed to {}".format(lr * 0.5))
    return K.get_value(model.optimizer.lr)

reduce_lr = LearningRateScheduler(scheduler)
# model.fit(train_x, train_y, batch_size=32, epochs=5, callbacks=[reduce_lr])
n_splits = 5
kf = KFold(n_splits=n_splits, shuffle=True)

import keras

b_size = 2000
max_epochs = 145
oof_pred = np.zeros((len(X_pca), ))

sub = pd.read_csv('used_car_testB_20200421.csv', sep=' ')[['SaleID']].copy()
sub['price'] = 0

avg_mae = 0
for fold, (trn_idx, val_idx) in enumerate(kf.split(X_pca, y)):
    print('fold:', fold)
    X_train, y_train = X_pca[trn_idx], y[trn_idx]
    X_val, y_val = X_pca[val_idx], y[val_idx]

    model = NN_model(X_train.shape[1])
    simple_adam = keras.optimizers.Adam(lr=0.01)
    model.compile(loss='mae', optimizer=simple_adam, metrics=['mae'])

    es = EarlyStopping(monitor='val_score', patience=10, verbose=0, mode='min',
                       restore_best_weights=True)
    es.set_model(model)
    metric = Metric(model, [es], [(X_train, y_train), (X_val, y_val)])

    model.fit(X_train, y_train, batch_size=b_size, epochs=max_epochs,
              validation_data=[X_val, y_val],
              callbacks=[reduce_lr], shuffle=True, verbose=0)

    # out-of-fold predictions and test-set averaging across folds
    y_pred3 = model.predict(X_val)
    y_pred = np.zeros((len(y_pred3), ))
    sub['price'] += model.predict(test).reshape(-1, ) / n_splits
    for i in range(len(y_pred3)):
        y_pred[i] = y_pred3[i]

    oof_pred[val_idx] = y_pred
    val_mae = mean_absolute_error(y[val_idx], y_pred)
    avg_mae += val_mae / n_splits
    print()
    print('val_mae is:{}'.format(val_mae))
    print()

mean_absolute_error(y, oof_pred)