Machine Learning: Gradient Descent

Notation:

m = number of training examples

n = number of features

x = "input" variables / features

y = "output" variable / "target" variable

\((x^{(i)},y^{(i)})\) = the i-th training example

\(h_\theta\) = hypothesis (fitting) function

1. Gradient Descent (the main method)

where \(h_\theta(x)=\theta_0+\theta_1x_1+\dots+\theta_nx_n=\sum_{i=0}^{n}\theta_ix_i=\theta^Tx\) (taking \(x_0=1\)).

Suppose the loss function is \(J(\theta)=\frac{1}{2}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})^2\); the goal is to minimize \(J(\theta)\).

Main idea: initialize \(\theta\) (e.g. \(\theta=\vec{0}\)), then keep changing \(\theta\) to reduce \(J(\theta)\), until a minimum is reached.


Gradient descent:

With a single training example, the update for the i-th parameter is \(\theta_i:=\theta_i-\alpha\frac{\partial }{\partial \theta_i}J(\theta)=\theta_i-\alpha(h_\theta(x)-y)x_i\), since \(\frac{\partial}{\partial\theta_i}h_\theta(x)=x_i\).

Repeat until convergence:

{

\(\theta_i:=\theta_i-\alpha\sum_{j=1}^{m}(h_\theta(x^{(j)})-y^{(j)})x_i^{(j)}\) , (for every i)

}
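The batch update above can be sketched in Python. This is a minimal illustration, not code from the original post; the function name and the toy data are my own assumptions.

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=0.01, iters=2000):
    """Batch gradient descent for linear regression (hypothetical sketch).

    X : (m, n+1) design matrix whose first column is all ones (x_0 = 1).
    y : (m,) target vector.
    Implements theta_i := theta_i - alpha * sum_j (h(x^(j)) - y^(j)) * x_i^(j)
    for every i at once, using the full training set per update.
    """
    theta = np.zeros(X.shape[1])      # initialize theta to the zero vector
    for _ in range(iters):
        residual = X @ theta - y      # h_theta(x^(j)) - y^(j), all j at once
        theta -= alpha * (X.T @ residual)
    return theta

# Toy data generated from y = 1 + 2x, so theta should approach [1, 2].
X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])
y = np.array([1., 3., 5., 7.])
theta = batch_gradient_descent(X, y)
```

Because every update sums over all m examples, each iteration is expensive on large datasets; the variants below trade exactness per step for cheaper updates.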

Matrix form (more compact):

Repeat until convergence:

{

\(\theta:=\theta -\alpha\nabla_\theta J\)

}

If \(A\in\mathbb{R}^{n\times n}\):

​ tr(A) = \(\sum_{i=1}^nA_{ii}\) , the trace of A

\(J(\theta)=\frac{1}{2}(X\theta - \vec{y})^T(X\theta - \vec{y})\)

\(\nabla_\theta J=\frac{1}{2}\nabla_\theta (\theta^TX^TX\theta-\theta^TX^Ty-y^TX\theta+y^Ty) =X^TX\theta-X^Ty\)
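Setting this gradient to zero gives \(X^TX\theta=X^Ty\), the normal equations, which can be solved in closed form. A quick numerical sketch (the toy data is assumed, not from the post):

```python
import numpy as np

# Toy design matrix (first column is x_0 = 1) and targets from y = 1 + 2x.
X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])
y = np.array([1., 3., 5., 7.])

# Solve X^T X theta = X^T y directly rather than inverting X^T X.
theta = np.linalg.solve(X.T @ X, X.T @ y)   # approaches [1., 2.]

# At this theta the gradient X^T X theta - X^T y vanishes (up to rounding).
grad = X.T @ X @ theta - X.T @ y
```

This closed-form solution is what iterative gradient descent converges toward on this convex objective.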

Note

Gradient descent is guaranteed to find the global optimum only when the objective function is convex.

2. Stochastic Gradient Descent (SGD)

Repeat until convergence:

{

​ for j = 1 to m {

\(\theta_i:=\theta_i-\alpha(h_\theta(x^{(j)})-y^{(j)})x_i^{(j)}\) , for every i

​ }

}
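The SGD loop above, sketched in Python on the same toy data as before (a hypothetical illustration; shuffling each epoch is a common refinement, whereas the text simply iterates j = 1..m in order):

```python
import numpy as np

def sgd(X, y, alpha=0.05, epochs=200, seed=0):
    """Stochastic gradient descent sketch: one example per parameter update."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        for j in rng.permutation(len(y)):      # visit examples in random order
            residual = X[j] @ theta - y[j]     # h_theta(x^(j)) - y^(j)
            theta -= alpha * residual * X[j]   # updates every theta_i at once
    return theta

X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])  # x_0 = 1 column
y = np.array([1., 3., 5., 7.])                          # y = 1 + 2x
theta = sgd(X, y)
```

Each update costs O(n) instead of O(mn), which is why SGD scales to large datasets despite its noisier trajectory.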

Notes:

1. Training is fast, since each iteration uses only a single example;

2. The solution may not be optimal, since each gradient direction is determined by a single example;

3. Convergence is not smooth, since the iteration direction fluctuates heavily.

3. Mini-batch Gradient Descent

Repeat until convergence:

{

​ for k = 1 to m/b {

\(\theta_i:=\theta_i-\alpha\sum_{j=(k-1)b+1}^{kb}(h_\theta(x^{(j)})-y^{(j)})x_i^{(j)}\) , for every i, where b is the mini-batch size

​ }

}
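The mini-batch loop can be sketched as follows (a hypothetical illustration on the same toy data; batch size and step size are my own choices). Setting b = 1 recovers SGD, and b = m recovers batch gradient descent:

```python
import numpy as np

def minibatch_gd(X, y, b=2, alpha=0.1, epochs=200):
    """Mini-batch gradient descent sketch: each update sums the
    gradient over a small batch of b consecutive examples."""
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        for start in range(0, len(y), b):
            Xb, yb = X[start:start + b], y[start:start + b]
            residual = Xb @ theta - yb           # errors on this batch
            theta -= alpha * (Xb.T @ residual)   # gradient summed over batch
    return theta

X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])  # x_0 = 1 column
y = np.array([1., 3., 5., 7.])                          # y = 1 + 2x
theta = minibatch_gd(X, y)
```

Mini-batches average out much of SGD's noise while keeping per-update cost far below a full pass over the data, and they vectorize well on modern hardware.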

Note:

This is the variant most often used in machine learning practice.

Reference:

http://www.javashuo.com/article/p-cnkzgeoo-eg.html
