Linear regression in Python, following the Watermelon Book (*Machine Learning*, Zhou Zhihua)
Given a training set $D=\{(\boldsymbol x_1, y_1), (\boldsymbol x_2, y_2), \ldots, (\boldsymbol x_n, y_n)\}$, where $\boldsymbol {x_i} = (x_{i1};x_{i2}; \ldots; x_{id})$ and $y_i\in \mathbb{R}$, linear regression tries to learn a linear model that predicts the real-valued output label as accurately as possible.
Linear regression tries to learn $$f(x_i)=wx_i+b,\quad \text{such that } f(x_i)\simeq y_i$$
The least squares method estimates the parameters $w$ and $b$ by minimizing the squared error between the predicted and true values, i.e.:
$$\eqalign{
(w^*,b^*) & = \mathop {\arg \min }\limits_{(w,b)} \sum\limits_{i = 1}^n {{{(f({x_i}) - {y_i})}^2}} \cr
& = \mathop {\arg \min }\limits_{(w,b)} \sum\limits_{i = 1}^n {{{({y_i} - w{x_i} - b)}^2}} \cr}$$
The process of finding the $w$ and $b$ that minimize ${E_{(w,b)}} = \sum\limits_{i = 1}^n {{{({y_i} - w{x_i} - b)}^2}}$ is called the least-squares "parameter estimation" of the linear regression model.
Taking the partial derivatives of $E_{(w,b)}$ with respect to $w$ and $b$ gives
$$\eqalign{
& {{\partial E_{(w,b)}} \over {\partial w}} = 2\left( {w\sum\limits_{i = 1}^n {x_i^2 - \sum\limits_{i = 1}^n {({y_i} - b){x_i}} } } \right) \cr
& {{\partial E_{(w,b)}} \over {\partial b}} = 2\left( {nb - \sum\limits_{i = 1}^n {({y_i} - w{x_i})} } \right) \cr}$$
Setting both expressions to zero yields the closed-form solution for the optimal $w$ and $b$:
$$\eqalign{
w & = {{\sum\limits_{i = 1}^n {{y_i}({x_i} - \bar x)} } \over {\sum\limits_{i = 1}^n {x_i^2 - {\textstyle{1 \over n}}{{\left( {\sum\limits_{i = 1}^n {{x_i}} } \right)}^2}} }} \cr
b & = {\textstyle{1 \over n}}\sum\limits_{i = 1}^n {({y_i} - w{x_i})} \cr}$$
where $$\bar x = {\textstyle{1 \over n}}\sum\limits_{i = 1}^n {{x_i}} $$ is the mean of $x$.
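As a quick illustration, the two closed-form expressions translate directly into NumPy. The toy data below is made up purely for this example:

```python
import numpy as np

# Made-up 1-D data: y is roughly 3x + 2 plus noise (for illustration only)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3 * x + 2 + rng.normal(scale=0.5, size=50)

# Closed-form least-squares estimates, following the two formulas above
x_bar = x.mean()
w = np.sum(y * (x - x_bar)) / (np.sum(x ** 2) - np.sum(x) ** 2 / len(x))
b = np.mean(y - w * x)
print(w, b)  # should come out close to 3 and 2
```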
Given a dataset $D$ whose samples are described by $d$ attributes, we now try to learn
$f(\boldsymbol {x_i}) = \boldsymbol {w}^T\boldsymbol {x_i} + b$, such that $f(\boldsymbol {x_i}) \simeq {y_i}$.
This is called "multivariate linear regression" (multivariate linear regression).
Similarly, the least squares method can be used to estimate $\boldsymbol w$ and $b$. For convenience, we absorb $\boldsymbol w$ and $b$ into the vector form $\hat {\boldsymbol w} = (\boldsymbol w;b)$.
Correspondingly, the dataset $D$ is represented as an $m \times (d+1)$ matrix ${\bf{X}}$, in which each row corresponds to one example: the first $d$ elements of a row are the example's $d$ attribute values, and the last element is fixed to 1, i.e.
$${\bf{X}} = \left( {\matrix{
{{x_{11}}} & \cdots & {{x_{1d}}} & 1 \cr
{{x_{21}}} & \cdots & {{x_{2d}}} & 1 \cr
\vdots & \ddots & \vdots & \vdots \cr
{{x_{m1}}} & \cdots & {{x_{md}}} & 1 \cr } } \right) = \left( {\matrix{
{\boldsymbol x_1^T} & 1 \cr
{\boldsymbol x_2^T} & 1 \cr
\vdots & \vdots \cr
{\boldsymbol x_m^T} & 1 \cr } } \right)$$
Writing the labels in vector form as well, $\boldsymbol{y}=(y_1;y_2;\ldots;y_m)$, we have
$$ \hat {\boldsymbol w}^* = \mathop {\arg \min }\limits_{\hat {\boldsymbol w}} {(\boldsymbol y - {\bf{X}}\hat {\boldsymbol w})^T}(\boldsymbol y - {\bf{X}}\hat {\boldsymbol w})$$
Let ${E_{\hat {\boldsymbol w}}} = {(\boldsymbol y - {\bf{X}}\hat {\boldsymbol w})^T}(\boldsymbol y - {\bf{X}}\hat {\boldsymbol w})$. Differentiating with respect to $\hat {\boldsymbol w}$ gives:
$${{\partial {E_{\hat w}}} \over {\partial \hat {\boldsymbol w}}} = 2{{\bf{X}}^T}({\bf{X}}\hat w - \boldsymbol y)$$
Setting this derivative to zero gives the closed-form solution for the optimal $\hat {\boldsymbol w}$. When ${{\bf{X}}^T}{\bf{X}}$ is a full-rank or positive-definite matrix, we obtain:
$$\boldsymbol {\hat w^*} = {({{\bf{X}}^T}{\bf{X}})^{ - 1}}{{\bf{X}}^T}\boldsymbol {y}$$
Letting $\hat {\boldsymbol x}_i = (\boldsymbol{x_i};1)$, the final learned multivariate linear regression model is
$$f({\hat {\boldsymbol x}_i}) = \hat {\boldsymbol x}_i^T{({{\bf{X}}^T}{\bf{X}})^{ - 1}}{{\bf{X}}^T}\boldsymbol y$$
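As a sanity check of this closed-form expression, here is a minimal NumPy sketch; the helper name `fit_multivariate`, the synthetic data, and the use of `np.linalg.pinv` are illustrative choices of ours, not part of the text:

```python
import numpy as np

def fit_multivariate(X, y):
    """Closed-form solution (X^T X)^{-1} X^T y on the augmented data matrix."""
    X_hat = np.c_[X, np.ones(X.shape[0])]      # append the constant column of 1s
    # pinv instead of inv, so the sketch also behaves when X^T X is not full rank
    return np.linalg.pinv(X_hat.T @ X_hat) @ X_hat.T @ y

# Made-up data: y = X @ [1, -2, 0.5] + 4 plus a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 4.0 + rng.normal(scale=0.1, size=100)
print(fit_multivariate(X, y))                  # approximately [1, -2, 0.5, 4]
```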
Linear regression assumes that the mapping from the input space to the output space is linear, but many real-world problems are nonlinear. To broaden its range of application, we can apply a nonlinear transformation to the linear regression prediction so that it approximates the true values; the resulting models are collectively called generalized linear models (generalized linear model):
$$y = {g^{ - 1}}(\boldsymbol{w^T}\boldsymbol x + b)$$
where the function $g(\cdot)$ is called the "link function"; it must be continuous and sufficiently smooth.
When the link function is $g(\cdot)=\ln(\cdot)$, the model is called log-linear regression, i.e.
$$\eqalign{
& \ln y = \boldsymbol{w^T}\boldsymbol x + b \cr
& y = e^{\boldsymbol{w^T}\boldsymbol x + b} \cr}$$
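Since the log-linear model is just ordinary linear regression applied to $\ln y$, it can be fit with the same normal equation, assuming all targets are strictly positive. A minimal sketch on made-up data:

```python
import numpy as np

# Made-up strictly positive targets: ln y = X @ [0.8, -0.5] + 1
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.exp(X @ np.array([0.8, -0.5]) + 1.0)

# Fit ln y by ordinary least squares, then undo the link with exp
X_hat = np.c_[X, np.ones(X.shape[0])]
w_hat = np.linalg.pinv(X_hat.T @ X_hat) @ X_hat.T @ np.log(y)
y_pred = np.exp(X_hat @ w_hat)     # predictions back on the original scale of y
print(w_hat)                       # approximately [0.8, -0.5, 1.0]
```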
So far we have only dealt with regression tasks; what if the task is classification?
Under the generalized linear model above, performing classification comes down to finding a suitable function $g(\cdot)$. To keep things simple, consider first a binary classification task whose output labels are $y \in \{0,1\}$. The linear model produces a real-valued prediction $z=\boldsymbol{w^T}\boldsymbol {x} + b$, so we need to convert the real value $z$ into a 0/1 value. The ideal choice would be the "unit step function":
$$
y=\begin{cases}
0, & z<0 \\
0.5, & z=0 \\
1, & z>0
\end{cases}
$$
That is, a prediction $z$ greater than zero is judged a positive example, one less than zero a negative example, and a prediction exactly at the critical value zero may be assigned either label, as shown in the figure.
As the figure shows, the unit step function is discontinuous, so it cannot be used directly as the link function $g(\cdot)$. We therefore look for a surrogate that approximates the unit step function to some degree while being continuous and monotonically differentiable around the critical point.
The logistic function (log-odds function) is exactly such a commonly used surrogate:
$$y = \frac{1}{{1 + {e^{ - (\boldsymbol{w^T}\boldsymbol x + b)}}}}$$
As its graph shows, the curve is S-shaped; functions with this S-shaped form are called "sigmoid functions", and the logistic function is their most important representative.
A model that uses the logistic function to approximate the log odds of the true label in this way is called "logit regression", more commonly known as "logistic regression". Although its name contains "regression", it is in fact a classification method.
Logistic regression has many advantages; for example, it models the class probability directly without assuming any particular distribution for the data.
If we regard $y$ as the probability that sample $\boldsymbol x$ is a positive example, then clearly:
$$p(y=1|\boldsymbol x) = \frac{{{e^{\boldsymbol {w^T}\boldsymbol x + b}}}}{{1 + {e^{\boldsymbol {w^T}\boldsymbol x + b}}}}$$
$$p(y=0|\boldsymbol x) = \frac{1}{{1 + {e^{\boldsymbol {w^T}\boldsymbol x + b}}}}$$
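For concreteness, the two probabilities can be evaluated directly; the function below (with its made-up example values) is only an illustration, not part of the implementation that follows:

```python
import numpy as np

def posterior(x, w, b):
    """Return (p(y=1|x), p(y=0|x)) for a single sample, following the two expressions above."""
    z = np.dot(w, x) + b
    p1 = 1 / (1 + np.exp(-z))      # equivalent to e^z / (1 + e^z)
    return p1, 1 - p1

print(posterior(np.array([1.0, 2.0]), np.array([0.5, -0.3]), 0.1))   # (0.5, 0.5) here, since z = 0
```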
We can then estimate $\boldsymbol w$ and $b$ by "maximum likelihood". Given the dataset $\{(\boldsymbol {x_i},{y_i})\}_{i = 1}^n$, the "log-likelihood" is:
$$l(\boldsymbol w,b) = \sum\limits_{i = 1}^n {\ln p({y_i}|\boldsymbol {x_i};\boldsymbol w,b)} $$
Let
$$\eqalign{
& \boldsymbol \beta = (\boldsymbol w;b),\hat {\boldsymbol x} = (\boldsymbol x;1) \cr
& {\boldsymbol \beta ^T}\hat {\boldsymbol x} = \boldsymbol{w^T}\boldsymbol x + b \cr
& {p_1}(\boldsymbol {\hat x};\boldsymbol \beta ) = p(y = 1|\boldsymbol {\hat x};\boldsymbol \beta ) \cr
& {p_0}(\boldsymbol {\hat x};\boldsymbol \beta ) = p(y = 0|\boldsymbol {\hat x};\boldsymbol \beta ) \cr} $$
then the likelihood term can be rewritten as:
$$p({y_i}|\boldsymbol {x_i};\boldsymbol w,b) = {y_i}{p_1}({\boldsymbol {\hat x_i}};\boldsymbol \beta ) + (1 - {y_i}){p_0}(\boldsymbol {\hat x_i};\boldsymbol \beta )$$
Substituting this into the log-likelihood:
$$\eqalign{
l(\beta ) & = \sum\limits_{i = 1}^n {\ln \left( {y_i}{p_1}({\hat x_i};\beta ) + (1 - {y_i}){p_0}({\hat x_i};\beta ) \right)} \cr
& = \sum\limits_{i = 1}^n {\ln \left( {{y_i}\frac{{{e^{{\beta ^T}{\hat x_i}}}}}{{1 + {e^{{\beta ^T}{\hat x_i}}}}} + (1 - {y_i})\frac{1}{{1 + {e^{{\beta ^T}{\hat x_i}}}}}} \right)} \cr
& = \sum\limits_{i = 1}^n {\ln \left( {\frac{{{y_i}{e^{{\beta ^T}{\hat x_i}}} + (1 - {y_i})}}{{1 + {e^{{\beta ^T}{\hat x_i}}}}}} \right)} \cr
& = \sum\limits_{i = 1}^n {\ln \left( {\frac{{{y_i}({e^{{\beta ^T}{\hat x_i}}} - 1) + 1}}{{1 + {e^{{\beta ^T}{\hat x_i}}}}}} \right)} \cr
& = \sum\limits_{i = 1}^n {\left( {\ln ({y_i}({e^{{\beta ^T}{\hat x_i}}} - 1) + 1) - \ln (1 + {e^{{\beta ^T}{\hat x_i}}})} \right)} \cr
& = \begin{cases}
\sum\limits_{i = 1}^n { - \ln (1 + {e^{{\beta ^T}{\hat x_i}}})}, & y_i = 0 \\
\sum\limits_{i = 1}^n {({\beta ^T}{\hat x_i} - \ln (1 + {e^{{\beta ^T}{\hat x_i}}}))}, & y_i = 1
\end{cases} \cr} $$
Since $y_i \in \{0, 1\}$, maximizing $l(\boldsymbol w,b)$ is equivalent to minimizing
$$l(\boldsymbol \beta ) = \sum\limits_{i = 1}^n {(-y_i\boldsymbol{\beta ^T}\boldsymbol {\hat x_i} + \ln (1 + {e^{{\beta ^T}\hat x_i}}))} $$
This can then be solved with a numerical optimization algorithm, i.e.
$$\beta ^* = \mathop {\arg \min }\limits_\beta l(\beta )$$
To summarize, the model is
$$y = \frac{1}{{1 + {e^{ - z}}}}$$
the loss function (to be minimized) is:
$$l(\boldsymbol \beta ) = \sum\limits_{i = 1}^n {(-y_i\boldsymbol{\beta ^T}\boldsymbol {\hat x_i} + \ln (1 + {e^{{\beta ^T}\hat x_i}}))} $$
and we solve
$$\beta ^* = \mathop {\arg \min }\limits_\beta l(\beta )$$
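Before turning to the optimizer, here is a minimal NumPy sketch of this loss; using `np.logaddexp(0, z)` to evaluate $\ln(1+e^z)$ without overflow is our own implementation detail, not something discussed above:

```python
import numpy as np

def logistic_loss(beta, X_hat, y):
    """l(beta) = sum_i( -y_i * beta^T x_hat_i + ln(1 + exp(beta^T x_hat_i)) )."""
    z = X_hat @ beta
    # np.logaddexp(0, z) computes ln(1 + e^z) in a numerically stable way
    return np.sum(-y * z + np.logaddexp(0, z))

# Tiny made-up check: at beta = 0 every term equals ln 2
X_hat = np.array([[1.0, 2.0, 1.0], [0.5, -1.0, 1.0]])   # last column is the constant 1
y = np.array([1, 0])
print(logistic_loss(np.zeros(3), X_hat, y))              # 2 * ln 2 ≈ 1.386
```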
Taking Newton's method as an example, the update rule for the solution at iteration $t+1$ is
$$\boldsymbol \beta ^{t+1}=\boldsymbol \beta ^t-\left ( \frac{\partial ^2l(\boldsymbol \beta)}{\partial {\boldsymbol \beta}\partial {\boldsymbol \beta} ^T} \right )^{-1}\frac{\partial l(\boldsymbol \beta)}{\partial \boldsymbol \beta}$$
where the first- and second-order derivatives with respect to $\boldsymbol \beta$ are
$$\frac{\partial l(\beta)}{\partial \beta} = -\sum ^n_{i=1}\hat x_i(y_i-p_1(\hat x_i;\beta))$$
$$\frac{\partial ^2l(\beta)}{\partial \beta\partial \beta ^T}=\sum ^n_{i=1}\hat x_i\hat x_i^Tp_1(\hat x_i;\beta)(1-p_1(\hat x_i; \beta))$$
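Both derivatives vectorize cleanly. The sketch below performs a single Newton update following these two formulas; it is an illustrative, vectorized alternative, not the loop-based implementation used in the experiment below:

```python
import numpy as np

def newton_step(beta, X_hat, y):
    """One Newton iteration for l(beta), using the gradient and Hessian given above."""
    p1 = 1 / (1 + np.exp(-(X_hat @ beta)))               # p_1(x_hat_i; beta) for every sample
    grad = -X_hat.T @ (y - p1)                           # first-order derivative
    hess = (X_hat * (p1 * (1 - p1))[:, None]).T @ X_hat  # second-order derivative
    return beta - np.linalg.solve(hess, grad)            # beta^{t+1}
```

Iterating `newton_step` until $\boldsymbol \beta$ stops changing gives the maximum-likelihood estimate; if the Hessian becomes singular (which can happen on linearly separable data), a damped step or plain gradient descent is a common fallback.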
```python
import numpy as np
import pandas as pd
```
```python
data = pd.read_csv("../data/irisdata.txt")

# Keep only two of the three classes for the binary classification task
data = data[data['name'] != 'Iris-setosa']
data['name'].value_counts()
```
```
Iris-versicolor    50
Iris-virginica     50
Name: name, dtype: int64
```
```python
# Separate the labels and map them to numeric values
y = data['name']
y[y == 'Iris-versicolor'] = 1
y[y == 'Iris-virginica'] = 0
X = data.drop('name', axis=1)
```
```python
# Split into training and test sets
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)
```
```python
class LogisticRegressionClassifier:
    def __init__(self, max_iter):
        self.max_iter = max_iter

    def sigmoid(self, z):
        return 1 / (1 + np.exp(-z))

    def fit(self, X, y):
        # Initialize the parameters: one row of beta per sample; every row receives
        # the same update below, and the rows are averaged at prediction time
        self.beta = np.random.normal(size=(X.shape[0], X.shape[1] + 1))
        self.X_hat = np.c_[X, np.ones(X.shape[0])]   # append a column of 1s to the data
        self.loss_function(X, y)                     # print the loss before training
        for j in range(self.max_iter):               # iterative optimization
            pd1 = 0                                  # first-order derivative
            for i in range(len(y)):
                pd1 -= self.X_hat[i] * (y[i] - self.sigmoid(np.dot(self.beta[i].T, self.X_hat[i])))
            pd2 = 0                                  # second-order term (a scalar curvature approximation)
            for i in range(len(y)):
                p1 = self.sigmoid(self.beta[i].T.dot(self.X_hat[i]))
                pd2 += self.X_hat[i].dot(self.X_hat[i].T.dot(p1 * (1 - p1)))
            self.beta = self.beta - (1 / pd2) * pd1  # Newton-style update of beta
        self.loss_function(X, y)                     # print the loss after training

    def loss_function(self, X, y):
        # Evaluate loss = sum_i( -y_i * beta^T x_hat_i + ln(1 + exp(beta^T x_hat_i)) )
        loss = 0
        for i in range(len(y)):
            z = np.dot(self.beta[i].T, self.X_hat[i])
            loss += -y[i] * z + np.log(1 + np.exp(z))
        print(loss)

    def predict(self, X):
        y = []                                       # store the predictions
        X = np.c_[X, np.ones(X.shape[0])]            # append a column of 1s to the data
        for i in range(X.shape[0]):
            # Probability that the sample is a positive example
            odds = self.sigmoid(np.mean(self.beta, axis=0).T.dot(X[i]))
            if odds >= 0.5:
                y.append(1)
            else:
                y.append(0)
        return y
```
```python
clf = LogisticRegressionClassifier(10000)
clf.fit(X_train.values, y_train.values)
```
```
187.27577618364464
38.2785420108109
```
```python
y_pred = clf.predict(X_test.values)

# Accuracy on the test set
sum(y_pred == y_test) / len(y_test)
```
```
0.96
```