Looking back, I realized I still hadn't finished some chapters of Li Hang (李航)'s 《統計學習方法》 (Statistical Learning Methods), so here is another quick post to record them.
0 - The logistic distribution
As in 《統計學習方法》, let X be a continuous random variable. X follows the logistic distribution if it has the following distribution function and density function:
\[F(x) = P(X \leq x)=\frac{1}{1+e^{-(x-\mu)/\gamma}}\]
\[f(x) = F'(x) = \frac{e^{-(x-\mu)/\gamma}}{\gamma\left(1+e^{-(x-\mu)/\gamma}\right)^2}\]
where \(\mu\) is the location parameter and \(\gamma>0\) is the scale parameter. The logistic distribution function is an S-shaped curve, symmetric about the point \((\mu,\frac{1}{2})\), i.e.:
\[F(-x+\mu)-\frac{1}{2} = -F(x+\mu)+\frac{1}{2}\]
The smaller the parameter \(\gamma\), the more the curve is squeezed toward the center and the faster it rises near \(\mu\).
Figure 0.1: the logistic density function and distribution function
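As a quick sanity check, here is a minimal numpy sketch (the helper names `logistic_cdf` / `logistic_pdf` are my own) that evaluates \(F\) and \(f\), verifies the symmetry about \((\mu,\frac{1}{2})\), and compares the density against a numerical derivative of the distribution function:

```python
import numpy as np

def logistic_cdf(x, mu=0.0, gamma=1.0):
    """Distribution function F(x) of the logistic distribution."""
    return 1.0 / (1.0 + np.exp(-(x - mu) / gamma))

def logistic_pdf(x, mu=0.0, gamma=1.0):
    """Density f(x) = F'(x); note the squared denominator and the 1/gamma factor."""
    z = np.exp(-(x - mu) / gamma)
    return z / (gamma * (1.0 + z) ** 2)

mu, gamma = 1.0, 0.5
x = np.linspace(-4, 6, 1001)

# symmetry about (mu, 1/2): F(mu - t) - 1/2 == -(F(mu + t) - 1/2)
t = 2.3
assert np.isclose(logistic_cdf(mu - t, mu, gamma) - 0.5,
                  -(logistic_cdf(mu + t, mu, gamma) - 0.5))

# f should match the numerical derivative of F
num_deriv = np.gradient(logistic_cdf(x, mu, gamma), x)
assert np.allclose(num_deriv, logistic_pdf(x, mu, gamma), atol=1e-4)
```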
1 - Binomial logistic regression
What we usually call logistic regression is the binomial logistic regression described here, which takes the following form:
\[h_\theta(\bf x)=g(\theta^T\bf x) = \frac{1}{1+e^{-\theta^T\bf x }}\]
This function is called the logistic function and is also known as the sigmoid function, where \(x_i\in{\bf R}^n,\ y_i\in\{0,1\}\), and the following holds:
\(P(y=1|\bf x;\theta) = h_\theta(\bf x)\)
\(P(y=0|\bf x;\theta) =1- h_\theta(\bf x)\)
\(\log\frac{P(y=1|\bf x;\theta) }{1-P(y=1|\bf x;\theta) }=\theta^T\bf x\)
That is, written compactly:
\[p(y|\bf x; \theta) = (h_\theta({\bf x}))^y(1-h_\theta(\bf x))^{1-y}\]
Given \(m\) training samples, the model parameters are estimated by maximum likelihood:
\[\begin{eqnarray}L(\theta) &=&\prod_{i=1}^mp(y^{(i)}|x^{(i)};\theta)\\ &=&\prod_{i=1}^m(h_\theta(x^{(i)}))^{y^{(i)}}(1-h_\theta(\bf x^{(i)}))^{1-y^{(i)}} \end{eqnarray}\]
Taking the log to obtain the log-likelihood:
\[\begin{eqnarray}\it l(\theta) &=&\log L(\theta)\\ &=&\sum_{i=1}^my^{(i)}\log h(x^{(i)})+(1-y^{(i)})\log (1-h(x^{(i)})) \end{eqnarray}\]
The derivative of the sigmoid function is \(g'(z) = g(z)(1-g(z))\). Assuming \(m=1\) (i.e., the stochastic gradient setting, where we update on a single sample), differentiating the expression above with respect to \(\theta_j\) gives:
\[\frac{\partial\, {\it l}(\theta)}{\partial \theta_j} = \left(y - h_\theta({\bf x})\right)x_j\]
Note: the expression above is the single-sample gradient, taken with respect to the \(j\)-th parameter (a scalar), so it only involves the \(j\)-th component \(x_j\) of the input sample \(x\).
The update for the parameter vector \(\theta\) is then:
\(\theta:=\theta+\alpha\nabla_\theta\it l(\theta)\)
Note: the formula uses a plus sign rather than a minus sign because here we are maximizing the log-likelihood, not minimizing a loss.
After enough iterations the model converges and we obtain the final parameters.
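Below is a minimal sketch of this training loop, assuming plain numpy, batch gradient ascent over all \(m\) samples (rather than the single-sample update above), a fixed learning rate, and my own helper names:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, n_iter=1000):
    """Maximize the log-likelihood l(theta) by gradient *ascent*.

    X: (m, n) design matrix (append a column of ones for a bias),
    y: (m,) labels in {0, 1}.
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iter):
        h = sigmoid(X @ theta)        # h_theta(x^{(i)}) for all samples
        grad = X.T @ (y - h)          # gradient of l(theta), see the derivative above
        theta += lr / m * grad        # plus sign: we are maximizing
    return theta

# toy usage: two Gaussian blobs, with a bias column appended
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=-2, size=(50, 2)),
               rng.normal(loc=+2, size=(50, 2))])
X = np.hstack([X, np.ones((100, 1))])          # bias term
y = np.r_[np.zeros(50), np.ones(50)]
theta = fit_logistic(X, y)
pred = (sigmoid(X @ theta) >= 0.5).astype(int)
print("train accuracy:", (pred == y).mean())
```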
2 - Multinomial logistic regression
Suppose the discrete random variable \(Y\) takes values in \(\{1,2,\dots,K\}\). The multinomial logistic regression model is then:
\[P(Y=k|x) = \frac{e^{\theta_k\cdot \bf x}}{1+\sum_{j=1}^{K-1}e^{\theta_j\cdot \bf x}},\quad k=1,2,\dots,K-1\]
and the probability of the \(K\)-th class is:
\[P(Y=K|x) = \frac{1}{1+\sum_{j=1}^{K-1}e^{\theta_j\cdot \bf x}}\]
Here \(x\in{\bf R}^{n+1},\theta_k\in {\bf R}^{n+1}\), i.e., a bias term is included (the input is augmented with a constant 1).
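A small sketch of these two formulas (my own names; \(\theta\) stored as a \((K-1)\times(n+1)\) matrix, with class \(K\) as the reference class whose score is implicitly 0):

```python
import numpy as np

def multinomial_probs(theta, x):
    """theta: (K-1, n+1) parameter matrix, x: (n+1,) input with a trailing 1 as bias.
    Returns the K class probabilities P(Y=k|x), k = 1..K."""
    scores = np.exp(theta @ x)                     # e^{theta_k . x}, k = 1..K-1
    denom = 1.0 + scores.sum()                     # 1 + sum_j e^{theta_j . x}
    return np.append(scores / denom, 1.0 / denom)  # last entry is P(Y=K|x)

theta = np.array([[0.5, -1.0, 0.2],
                  [1.5,  0.3, -0.7]])              # K = 3 classes, n = 2 features
x = np.array([1.0, 2.0, 1.0])                      # last component is the bias 1
p = multinomial_probs(theta, x)
print(p, p.sum())                                  # the probabilities sum to 1
```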
3 - softmax
The cost function of the logistic regression model is:
\[J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)}\log h_\theta({\bf x}^{(i)})+(1-y^{(i)})\log (1-h_\theta({\bf x}^{(i)})) \right]\]
Softmax handles the multi-class case, i.e., \(y^{(i)}\in \{1,2,\dots,K\}\). For a given input vector \(\bf x\), the model outputs a probability \(p(y=j|\bf x)\) for every class:
\[h_\theta({\bf x}) = \begin{bmatrix} p(y=1|{\bf x};\theta)\\ p(y=2|{\bf x};\theta)\\ \vdots\\ p(y=K|{\bf x};\theta) \end{bmatrix} = \frac{1}{\sum_{l=1}^{K}e^{\theta_l^T{\bf x}}}\begin{bmatrix} e^{\theta_1^T{\bf x}}\\ e^{\theta_2^T{\bf x}}\\ \vdots\\ e^{\theta_K^T{\bf x}} \end{bmatrix}\]
where \(\theta_1,\theta_2,\dots,\theta_K \in {\bf R}^{n+1}\) are the model parameters, and the denominator normalizes the distribution so that all the probabilities sum to 1.
The softmax cost function is therefore:
\[\begin{eqnarray}J(\theta) &=& -\frac{1}{m}\left[\sum_{i=1}^{m} \sum_{j=1}^K1\{y^{(i)}=j\}\log \frac{e^{\theta_j^T{\bf x}^{(i)}}}{\sum_{l=1}^Ke^{\theta_l^T{\bf x}^{(i)}}}\right]\\ &=& -\frac{1}{m}\left[\sum_{i=1}^{m} \sum_{j=1}^K1\{y^{(i)}=j\}\left[\log {e^{\theta_j^T{\bf x}^{(i)}}}-\log{\sum_{l=1}^Ke^{\theta_l^T{\bf x}^{(i)}}}\right]\right] \end{eqnarray}\]
where \(p(y^{(i)}=j|{\bf x}^{(i)};\theta)=\frac{e^{\theta_j^T{\bf x}^{(i)}}}{\sum_{l=1}^Ke^{\theta_l^T{\bf x}^{(i)}}}\). The derivative of this cost function with respect to the \(j\)-th parameter vector is:
\[\begin{eqnarray}\nabla_{\theta_j}J(\theta) &=&-\frac{1}{m}\sum_{i=1}^{m}\left[1\{y^{(i)}=j\}{\bf x}^{(i)}-\sum_{k=1}^K1\{y^{(i)}=k\}\frac{e^{\theta_j^T{\bf x}^{(i)}}{\bf x}^{(i)}}{\sum_{l=1}^Ke^{\theta_l^T{\bf x}^{(i)}}}\right]\\ &=&-\frac{1}{m}\sum_{i=1}^{m}\left[1\{y^{(i)}=j\}{\bf x}^{(i)}-\frac{e^{\theta_j^T{\bf x}^{(i)}}{\bf x}^{(i)}}{\sum_{l=1}^Ke^{\theta_l^T{\bf x}^{(i)}}}\right]\\ &=&-\frac{1}{m}\sum_{i=1}^{m}{\bf x}^{(i)}\left(1\{y^{(i)}=j\}-\frac{e^{\theta_j^T{\bf x}^{(i)}}}{\sum_{l=1}^Ke^{\theta_l^T{\bf x}^{(i)}}}\right)\\ &=&-\frac{1}{m}\sum_{i=1}^{m}{\bf x}^{(i)}\left[1\{y^{(i)}=j\} - p(y^{(i)}=j|{\bf x}^{(i)};\theta)\right] \end{eqnarray}\]
Note: when differentiating with respect to \(\theta_j\), the term \(\theta_k^T{\bf x}^{(i)}\) has a nonzero derivative only for \(k=j\), so the rest of the sum over classes drops out of the first term; the log-sum-exp term appears for every class, but since \(\sum_{k=1}^K1\{y^{(i)}=k\}=1\), the indicator disappears from the second term.
Note: \(\theta_j\) here differs from the logistic regression section; here it is a whole parameter vector rather than a scalar component.
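Putting the cost and the gradient together, here is a hedged numpy sketch (my own names `softmax_probs` / `softmax_cost_grad`; \(\Theta\) stored as a \(K\times(n+1)\) matrix, labels coded \(1,\dots,K\), plus a standard max-subtraction for numerical stability that the text does not discuss). A finite-difference check is a convenient way to confirm the derivative derived above:

```python
import numpy as np

def softmax_probs(Theta, X):
    """Theta: (K, n+1), X: (m, n+1). Returns an (m, K) matrix of p(y=j|x;theta)."""
    scores = X @ Theta.T
    scores -= scores.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)

def softmax_cost_grad(Theta, X, y):
    """Cost J(theta) and its gradient; y holds labels in {1,...,K}."""
    m, K = X.shape[0], Theta.shape[0]
    P = softmax_probs(Theta, X)
    Y = np.eye(K)[y - 1]                          # one-hot encoding of 1{y^{(i)} = j}
    J = -np.sum(Y * np.log(P)) / m
    grad = -(X.T @ (Y - P)).T / m                 # row j: -1/m sum_i x_i (1{y_i=j} - p_ij)
    return J, grad

# finite-difference check of one gradient entry
rng = np.random.default_rng(1)
X = np.hstack([rng.normal(size=(20, 3)), np.ones((20, 1))])
y = rng.integers(1, 4, size=20)                   # labels in {1, 2, 3}
Theta = rng.normal(size=(3, 4))
J, grad = softmax_cost_grad(Theta, X, y)
eps = 1e-6
Tp, Tm = Theta.copy(), Theta.copy()
Tp[1, 2] += eps; Tm[1, 2] -= eps
num = (softmax_cost_grad(Tp, X, y)[0] - softmax_cost_grad(Tm, X, y)[0]) / (2 * eps)
print(abs(num - grad[1, 2]))                      # should be essentially zero
```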
4 - The relationship between softmax and logistic regression
Writing logistic regression in the following form:
\[\begin{eqnarray}J(\theta) &=& -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)}\log h_\theta({\bf x}^{(i)})+(1-y^{(i)})\log (1-h_\theta({\bf x}^{(i)})) \right]\\ &=& -\frac{1}{m}\left[\sum_{i=1}^{m} \sum_{j=0}^11\{y^{(i)}=j\}\log p(y^{(i)}=j|{\bf x}^{(i)};\theta)\right] \end{eqnarray}\]
it can be seen that when \(K=2\), softmax reduces to the logistic regression model.
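More concretely, with \(K=2\) one can subtract \(\theta_2\) from both parameter vectors without changing the probabilities, leaving the single vector \(\theta=\theta_1-\theta_2\) and exactly the sigmoid hypothesis. A tiny numeric check (parameter values chosen arbitrarily):

```python
import numpy as np

theta1 = np.array([0.4, -1.2, 0.3])
theta2 = np.array([-0.5, 0.7, 0.1])
x = np.array([2.0, 1.0, 1.0])                     # trailing 1 as bias

# K = 2 softmax probability of class 1
scores = np.array([theta1 @ x, theta2 @ x])
p_softmax = np.exp(scores[0]) / np.exp(scores).sum()

# logistic regression with theta = theta1 - theta2
p_logistic = 1.0 / (1.0 + np.exp(-(theta1 - theta2) @ x))

print(p_softmax, p_logistic)                      # identical up to floating point
assert np.isclose(p_softmax, p_logistic)
```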
References:
[1] 李航, 《統計學習方法》 (Statistical Learning Methods)
[2] 周志華, 《機器學習》 (Machine Learning)
[3] Andrew Ng, CS229 Lecture Notes
[4] UFLDL Tutorial
[5] Foundations of Machine Learning