In many machine learning tasks, features are not always continuous values; they may be categorical.
For example, consider the following three features:
["male", "female"] ["from Europe", "from US", "from Asia"] ["uses Firefox", "uses Chrome", "uses Safari", "uses Internet Explorer"]
It would be much more efficient to represent these features as numbers. For example:
["male", "from US", "uses Internet Explorer"] 表示爲[0, 1, 3] ["female", "from Asia", "uses Chrome"]表示爲[1, 2, 1]
However, even after converting to such a numeric representation, the data cannot be fed directly into our classifiers. An integer encoding like this cannot be used directly, because with continuous-valued input an estimator will assume the categories are ordered, when in fact they are not (browser types, for example, have no natural ordering).
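The usual fix, discussed throughout the rest of this post, is one-hot encoding. A minimal sketch with scikit-learn's OneHotEncoder, using just the two example records above (the column order in the output depends on the categories learned during fit):

from sklearn.preprocessing import OneHotEncoder

# Each row is one sample: [gender, region, browser]
X = [["male", "from US", "uses Internet Explorer"],
     ["female", "from Asia", "uses Chrome"]]

enc = OneHotEncoder()
enc.fit(X)

# Every categorical value seen during fit becomes its own 0/1 column
print(enc.transform([["female", "from Asia", "uses Chrome"]]).toarray())
# [[1. 0. 1. 0. 1. 0.]]  (female, from Asia, uses Chrome each activate one column)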
1. Why do we binarize categorical features?
We binarize the categorical input so that it can be thought of as a vector in Euclidean space (we call this embedding the vector in Euclidean space). One-hot encoding maps the values of a discrete feature into Euclidean space: each value of the discrete feature corresponds to a point in that space.
2. Why do we embed the feature vectors in the Euclidean space?
Because many algorithms for classification/regression/clustering etc. require computing distances between features or similarities between features. And many definitions of distances and similarities are defined over features in Euclidean space. So we would like our features to lie in the Euclidean space as well.
Mapping discrete features into Euclidean space via one-hot encoding matters because, in regression, classification, clustering and other machine learning algorithms, computing distances or similarities between features is essential, and the distance and similarity measures we commonly use (cosine similarity, for instance) are defined in Euclidean space.
3. Why does embedding the feature vector in Euclidean space require us to binarize categorical features?
Let us take an example of a dataset with just one feature (say job_type) and let us say it takes three values: 1, 2, 3.
Now, let us take three feature vectors x_1 = (1), x_2 = (2), x_3 = (3). What is the Euclidean distance between x_1 and x_2, x_2 and x_3, and x_1 and x_3? d(x_1, x_2) = 1, d(x_2, x_3) = 1, d(x_1, x_3) = 2. This says the distance between job type 1 and job type 2 is smaller than the distance between job type 1 and job type 3. Does this make sense? Can we even rationally define a proper distance between different job types? In many cases of categorical features, we cannot properly define a distance between the different values that the feature takes. In such cases, isn't it fair to assume that all values of the categorical feature are equally far away from each other?
Now, let us see what happens when we binarize the same feature vectors. Then x_1 = (1, 0, 0), x_2 = (0, 1, 0), x_3 = (0, 0, 1). What are the distances between them now? They are all sqrt(2). So, essentially, when we binarize the input, we implicitly state that all values of the categorical feature are equally far away from each other.
Using one-hot encoding for discrete features does indeed make distance computations more reasonable. For example, take a discrete feature for job type with three possible values. Without one-hot encoding the representations are x_1 = (1), x_2 = (2), x_3 = (3), and the distances are d(x_1, x_2) = 1, d(x_2, x_3) = 1, d(x_1, x_3) = 2. Does that really mean jobs x_1 and x_3 are less similar to each other? Clearly the distances computed from this representation are not reasonable. With one-hot encoding we get x_1 = (1, 0, 0), x_2 = (0, 1, 0), x_3 = (0, 0, 1), and the distance between any two jobs is sqrt(2), i.e. every pair of jobs is equally far apart, which is much more sensible.
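A quick numerical check of these distances (a minimal NumPy sketch; the vectors are the toy job_type encodings from the example above):

import numpy as np

# Integer encoding: unequal distances imply a spurious ordering between jobs
x1, x2, x3 = np.array([1.0]), np.array([2.0]), np.array([3.0])
print(np.linalg.norm(x1 - x2), np.linalg.norm(x2 - x3), np.linalg.norm(x1 - x3))
# 1.0 1.0 2.0

# One-hot encoding: every pair of jobs is equally far apart
x1, x2, x3 = np.eye(3)  # rows (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(np.linalg.norm(x1 - x2), np.linalg.norm(x2 - x3), np.linalg.norm(x1 - x3))
# 1.4142135623730951 for all three pairs, i.e. sqrt(2)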
4. About the original question
Note that our reason for binarizing categorical features is independent of the number of values the categorical feature takes, so yes, even if the categorical feature takes 1000 values, we would still prefer to binarize.
5. Are there cases when we can avoid doing binarization?
Cases where one-hot encoding is unnecessary:
Yes. As we figured out earlier, the reason we binarize is that we want some meaningful distance relationship between the different values. As long as there is some meaningful distance relationship, we can avoid binarizing the categorical feature. For example, suppose you are building a classifier to decide whether a webpage is an important entity page (a page important to a particular entity), and you have the rank of the webpage in the search results for that entity as a feature. Then 1] the rank feature is categorical, and 2] rank 1 and rank 2 are clearly closer to each other than rank 1 and rank 3, so the rank feature defines a meaningful distance relationship, and in this case we don't have to binarize the categorical rank feature.
More generally, if you can cluster the categorical values into disjoint subsets such that the subsets have a meaningful distance relationship amongst them, then you don't have to binarize fully; instead you can split only over these clusters. For example, if a categorical feature has 1000 values but you can split these 1000 values into 2 groups of, say, 400 and 600, and within each group the values have a meaningful distance relationship, then instead of fully binarizing you can just add 2 features, one for each cluster, and that should be fine.
The point of one-hot encoding a discrete feature is to make distance computations more reasonable. If the feature is discrete but distances between its values can already be computed sensibly without one-hot encoding, then one-hot encoding is unnecessary. For example, if a discrete feature has 1000 values that can be split into two groups of 400 and 600, and distances are well defined both between the two groups and within each group, then there is no need for one-hot encoding.
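One possible reading of this grouping idea, as a hypothetical sketch (the group_of mapping and the within-group codes below are invented for illustration; in practice they would come from domain knowledge about which values are comparable):

import pandas as pd

# Hypothetical mapping: raw categorical value -> (group, meaningful within-group code)
group_of = {
    "v1": ("A", 1), "v2": ("A", 2),   # group A: values with a meaningful ordering
    "v3": ("B", 1), "v4": ("B", 3),   # group B: likewise
}

df = pd.DataFrame({"cat": ["v1", "v3", "v2", "v4"]})

# One feature per group: the within-group code when the value belongs to that group, else 0
df["group_A"] = df["cat"].map(lambda v: group_of[v][1] if group_of[v][0] == "A" else 0)
df["group_B"] = df["cat"].map(lambda v: group_of[v][1] if group_of[v][0] == "B" else 0)
print(df)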
After one-hot encoding, each dimension of the encoded feature can be treated as a continuous feature. It can therefore be normalized the same way continuous features are, for example scaled to [-1, 1] or standardized to zero mean and unit variance.
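For example, a minimal sketch with scikit-learn's scalers (the small one-hot matrix here is purely illustrative):

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

onehot = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1],
                   [1, 0, 0]], dtype=float)

# Standardize each one-hot column to zero mean and unit variance
print(StandardScaler().fit_transform(onehot))

# Or scale each column to a fixed range such as [-1, 1]
print(MinMaxScaler(feature_range=(-1, 1)).fit_transform(onehot))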
In some cases feature normalization is not needed:
It depends on your ML algorithm: some methods require almost no effort to normalize features, or can handle both continuous and discrete features directly, such as tree-based methods (C4.5, CART, random forests, bagging, boosting). But most parametric models (generalized linear models, neural networks, SVMs, etc.) and methods based on distance metrics (KNN, kernel methods, etc.) need careful preprocessing to achieve good results. Standard approaches include binarizing all categorical features, scaling all continuous features to zero mean and unit variance, and so on.
Tree-based methods (random forests, bagging, boosting, etc.) do not need feature normalization; parametric models and distance-based models do.
對於決策樹來講,one-hot的本質是增長樹的深度
Tree models dynamically generate a mechanism similar to one-hot + feature crossing as they are built:
1. One feature, or a combination of several features, is ultimately encoded by a leaf node; one-hot encoding can be understood as splitting a feature into independent events (three of them in the example above).
2. A decision tree has no notion of the magnitude of a feature, only of which part of the feature's distribution a value falls into.
One possible way to solve the problem described above is one-hot encoding, also known as 1-of-N encoding: N states are encoded with an N-bit state register, each state has its own register bit, and at any given time only one bit is active. For example:
Natural state codes: 000, 001, 010, 011, 100, 101. One-hot codes: 000001, 000010, 000100, 001000, 010000, 100000.
Another way to see it: if a feature has m possible values, one-hot encoding turns it into m binary features (e.g. a grade feature with values good, medium, poor becomes 100, 010, 001). These features are mutually exclusive, with only one active at a time, so the data becomes sparse.
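For instance, the grade feature can be expanded with pandas (a minimal sketch; get_dummies creates one indicator column per distinct value):

import pandas as pd

df = pd.DataFrame({"grade": ["good", "medium", "poor", "good"]})
print(pd.get_dummies(df, columns=["grade"]))
# Produces the indicator columns grade_good, grade_medium and grade_poor,
# with exactly one of them active in each row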
The main benefits of this are:
1. It solves the problem that classifiers cannot handle categorical attributes directly.
2. To some extent it also expands the feature set.
In the Kaggle Titanic problem, the port of embarkation takes three values, represented in the data as S, C and Q.
Since these three values have no inherent relationship to one another, they could simply be encoded as 0, 1, 2. In theory the distances between the three values should all be equal, but with that encoding their Euclidean distances are not. So we use one-hot encoding instead; the Python code is as follows:
Filling in the data:
def dataPreprocess(df):
    df.loc[df['Sex'] == 'male', 'Sex'] = 0
    df.loc[df['Sex'] == 'female', 'Sex'] = 1
    # Two Embarked values are missing, so fill them in first
    df['Embarked'] = df['Embarked'].fillna('S')
    # Some Age values are missing; fill them with the median
    df['Age'] = df['Age'].fillna(df['Age'].median())
    df.loc[df['Embarked'] == 'S', 'Embarked'] = 0
    df.loc[df['Embarked'] == 'C', 'Embarked'] = 1
    df.loc[df['Embarked'] == 'Q', 'Embarked'] = 2
    # Bucket Fare into a coarse ordinal feature
    df['NewFare'] = df['Fare']
    df.loc[(df.Fare < 40), 'NewFare'] = 0
    df.loc[((df.Fare >= 40) & (df.Fare < 100)), 'NewFare'] = 1
    df.loc[((df.Fare >= 100) & (df.Fare < 150)), 'NewFare'] = 2
    df.loc[((df.Fare >= 150) & (df.Fare < 200)), 'NewFare'] = 3
    df.loc[(df.Fare >= 200), 'NewFare'] = 4
    return df
Encoding the 'Embarked' attribute with one-hot encoding:
from sklearn.preprocessing import OneHotEncoder

def data_process_onehot(df):
    # copy_df = df.copy()
    train_Embarked = df["Embarked"].values.reshape(-1, 1)
    # sparse=False returns a dense array (newer scikit-learn versions call this argument sparse_output)
    onehot_encoder = OneHotEncoder(sparse=False)
    train_OneHotEncoded = onehot_encoder.fit_transform(train_Embarked)
    df["EmbarkedS"] = train_OneHotEncoded[:, 0]
    df["EmbarkedC"] = train_OneHotEncoded[:, 1]
    df["EmbarkedQ"] = train_OneHotEncoded[:, 2]
    return df
The result after encoding: three new 0/1 columns (EmbarkedS, EmbarkedC, EmbarkedQ) are appended to the DataFrame.
The complete data processing pipeline:
data_train = ReadData.readSourceData()
data_train = dataPreprocess(data_train)
data_train = data_process_onehot(data_train)
precent = linearRegression(data_train)