Derivation of the Backpropagation Algorithm

Reposted from: https://www.zhihu.com/question/24827633/answer/91489990

(Figure from the original post: a 2-2-2 network with inputs i_1, i_2, hidden nodes h_1, h_2, output nodes o_1, o_2, weights w_1 through w_8, and biases b_1, b_2.)

1. Forward propagation

For node h_1, its net input net_{h_1} is:

net_{h_1}=w_1\times i_1+w_2\times i_2+b_1\times 1
Applying the sigmoid function to net_{h_1} then gives the output of node h_1:
out_{h_1}=\frac{1}{1+e^{-net_{h_1}}}
Similarly, we obtain the outputs out_{h_2}, out_{o_1}, and out_{o_2} of nodes h_2, o_1, and o_2.
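As a minimal Python sketch of this forward pass, assuming the usual 2-2-2 layout with weights w_1 through w_8 and biases b_1, b_2 (all numeric values below are made up for illustration and are not taken from the original post):

```python
import math

def sigmoid(x):
    """Logistic activation 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative values only.
i1, i2 = 0.05, 0.10                       # inputs
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30   # input -> hidden weights
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55   # hidden -> output weights
b1, b2 = 0.35, 0.60                       # hidden-layer and output-layer biases

# Hidden layer: weighted sum, then sigmoid.
net_h1 = w1 * i1 + w2 * i2 + b1 * 1
net_h2 = w3 * i1 + w4 * i2 + b1 * 1
out_h1, out_h2 = sigmoid(net_h1), sigmoid(net_h2)

# Output layer: same pattern, fed by the hidden outputs.
net_o1 = w5 * out_h1 + w6 * out_h2 + b2 * 1
net_o2 = w7 * out_h1 + w8 * out_h2 + b2 * 1
out_o1, out_o2 = sigmoid(net_o1), sigmoid(net_o2)
```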

2. Error

Once the outputs are obtained, the output error of the whole network can be written as:
E_{total}=\sum\frac{1}{2}(target-output)^2
Here output refers to the values out_{o_1} and out_{o_2} just computed by forward propagation, and target is the target value of node o_1 or o_2; E_{total} measures the discrepancy between the two.
This E_{total} can also be viewed as the cost function, except that the regularization term ( \sum{w_i^2} ) used to prevent overfitting is omitted here.
Expanding it gives
E_{total}=E_{o_1}+E_{o_2}=\frac{1}{2}(target_{o_1}-out_{o_1})^2+\frac{1}{2}(target_{o_2}-out_{o_2})^2
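Continuing the Python sketch above, the squared-error cost for hypothetical targets looks like this (the target values are again illustrative):

```python
# Illustrative target values for nodes o1 and o2.
target_o1, target_o2 = 0.01, 0.99

E_o1 = 0.5 * (target_o1 - out_o1) ** 2
E_o2 = 0.5 * (target_o2 - out_o2) ** 2
E_total = E_o1 + E_o2  # squared-error cost; no regularization term here
```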

3. Backpropagation

3.1. Output-layer weight w_5

To adjust w_5 by gradient descent, we need \frac{\partial {E_{total}}}{\partial {w_5}}. By the chain rule:
\frac{\partial {E_{total}}}{\partial {w_5}}=\frac{\partial {E_{total}}}{\partial {out_{o_1}}}\frac{\partial {out_{o_1}}}{\partial {net_{o_1}}}\frac{\partial {net_{o_1}}}{\partial {w_5}}
This corresponds to the dependency chain w_5 \rightarrow net_{o_1} \rightarrow out_{o_1} \rightarrow E_{total}, illustrated by a figure in the original post.

\frac{\partial {E_{total}}}{\partial {out_{o_1}}}=\frac{\partial}{\partial {out_{o_1}}}(\frac{1}{2}(target_{o_1}-out_{o_1})^2+\frac{1}{2}(target_{o_2}-out_{o_2})^2)=-(target_{o_1}-out_{o_1})

\frac{\partial {out_{o_1}}}{\partial {net_{o_1}}}=\frac{\partial }{\partial {net_{o_1}}}\frac{1}{1+e^{-net_{o_1}}}=out_{o_1}(1-out_{o_1})
\frac{\partial {net_{o_1}}}{\partial {w_5}}=\frac{\partial}{\partial {w_5}}(w_5\times out_{h_1}+w_6\times out_{h_2}+b_2\times 1)=out_{h_1}
Multiplying the three terms above gives the gradient \frac{\partial {E_{total}}}{\partial {w_5}}, which can then be used to update the weight:
w_5^+=w_5-\eta \frac{\partial {E_{total}}}{\partial {w_5}}
Many textbooks, such as Stanford's course, denote the intermediate result \frac{\partial {E_{total}}}{\partial {net_{o_1}}}=\frac{\partial {E_{total}}}{\partial {out_{o_1}}}\frac{\partial {out_{o_1}}}{\partial {net_{o_1}}} by \delta_{o_1}, which expresses how much responsibility this node bears for the final error. Hence \frac{\partial {E_{total}}}{\partial {w_5}}=\delta_{o_1}out_{h_1}.
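In code, continuing the same sketch, the three factors and the update for w_5 look roughly like this (eta is an illustrative learning rate):

```python
eta = 0.5  # learning rate (illustrative)

# delta_o1 = dE_total/dnet_o1 = dE_total/dout_o1 * dout_o1/dnet_o1
delta_o1 = -(target_o1 - out_o1) * out_o1 * (1 - out_o1)

# dE_total/dw5 = delta_o1 * dnet_o1/dw5 = delta_o1 * out_h1
grad_w5 = delta_o1 * out_h1

# Gradient-descent update for w5.
w5_new = w5 - eta * grad_w5
```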

3.2. Hidden-layer weight w_1

To adjust w_1 by gradient descent, we need \frac{\partial {E_{total}}}{\partial {w_1}}. By the chain rule:
\frac{\partial {E_{total}}}{\partial {w_1}}=\frac{\partial {E_{total}}}{\partial {out_{h_1}}}\frac{\partial {out_{h_1}}}{\partial {net_{h_1}}}\frac{\partial {net_{h_1}}}{\partial {w_1}}

The corresponding dependency chain is illustrated by a figure in the original post.

The parameter w_1 affects net_{h_1}, which in turn affects out_{h_1}, which then affects both E_{o_1} and E_{o_2}.
Solving for each factor:

\frac{\partial {E_{total}}}{\partial {out_{h_1}}}=\frac{\partial {E_{o_1}}}{\partial {out_{h_1}}}+\frac{\partial {E_{o_2}}}{\partial {out_{h_1}}}

where

\frac{\partial {E_{o_1}}}{\partial {out_{h_1}}}=\frac{\partial {E_{o_1}}}{\partial {net_{o_1}}}\times \frac{\partial {net_{o_1}}}{\partial {out_{h_1}}}=\delta_{o_1}\times \frac{\partial {net_{o_1}}}{\partial {out_{h_1}}}=\delta_{o_1}\times \frac{\partial}{\partial {out_{h_1}}}(w_5\times out_{h_1}+w_6\times out_{h_2}+b_2\times 1)=\delta_{o_1}w_5 , where \delta_{o_1} was already computed above

\frac{\partial {E_{o_2}}}{\partial {out_{h_1}}} is computed in the same way, so we obtain
\frac{\partial {E_{total}}}{\partial {out_{h_1}}}=\delta_{o_1}w_5+\delta_{o_2}w_7
The other two factors in the chain for \frac{\partial {E_{total}}}{\partial {w_1}} are:
\frac{\partial {out_{h_1}}}{\partial {net_{h_1}}}=out_{h_1}(1-out_{h_1}) ,
\frac{\partial {net_{h_1}}}{\partial {w_1}}=\frac{\partial }{\partial {w_1}}(w_1\times i_1+w_2\times i_2+b_1\times 1)=i_1

Multiplying these together gives

\frac{\partial {E_{total}}}{\partial {w_1}}=\frac{\partial {E_{total}}}{\partial {out_{h_1}}}\frac{\partial {out_{h_1}}}{\partial {net_{h_1}}}\frac{\partial {net_{h_1}}}{\partial {w_1}}=(\delta_{o_1}w_5+\delta_{o_2}w_7)\times out_{h_1}(1-out_{h_1}) \times i_1

Once the gradient is obtained, w_1 can be updated iteratively:

w_1^+=w_1-\eta \frac{\partial{E_{total}}}{\partial{w_1}}

In the expression above we can likewise define \delta_{h_1}:

\delta_{h_1}=\frac{\partial {E_{total}}}{\partial {out_{h_1}}}\frac{\partial {out_{h_1}}}{\partial {net_{h_1}}}=(\delta_{o_1}w_5+\delta_{o_2}w_7)\times out_{h_1}(1-out_{h_1}) =(\sum_o \delta_ow_{ho})\times out_{h_1}(1-out_{h_1})

so the whole gradient can be written as

\frac{\partial {E_{total}}}{\partial {w_1}}=\delta_{h_1}\times i_1
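The hidden-layer update follows the same pattern. A sketch for w_1, continuing the code above and defining \delta_{o_2} in the same way as \delta_{o_1}:

```python
# delta_o2 mirrors delta_o1 for the second output node.
delta_o2 = -(target_o2 - out_o2) * out_o2 * (1 - out_o2)

# delta_h1 = (sum over output nodes of delta_o * w_{h1->o}) * out_h1 * (1 - out_h1)
delta_h1 = (delta_o1 * w5 + delta_o2 * w7) * out_h1 * (1 - out_h1)

# dE_total/dw1 = delta_h1 * dnet_h1/dw1 = delta_h1 * i1
grad_w1 = delta_h1 * i1

# Gradient-descent update for w1.
w1_new = w1 - eta * grad_w1
```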
