Deriving the Forward and Backward Formulas for Focal Loss

This post gives a mathematical description of the forward and backward passes of Focal Loss. It is formula-heavy, so each result is derived step by step to keep it easy to follow.

Focal Loss Forward Computation


Loss(x, class) = -\alpha_{class}\Big(1-\frac{e^{x[class]}}{\sum_j e^{x[j]}}\Big)^\gamma \log\Big(\frac{e^{x[class]}}{\sum_j e^{x[j]}}\Big) \quad (1)

where x is the input (the logits) and class is the target label.

= \alpha_{class}\Big(1-\frac{e^{x[class]}}{\sum_j e^{x[j]}}\Big)^\gamma \cdot \Big(-x[class] + \log{\sum_j e^{x[j]}}\Big) \quad (2)

= -\alpha_{class}\big(1 - softmax(x)[class]\big)^\gamma \cdot \log\big(softmax(x)[class]\big) \quad (3)

where softmax(x)[class] = \frac{e^{x[class]}}{\sum_j e^{x[j]}} = p_{class}.
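As a concrete reference, here is a minimal PyTorch sketch of the forward pass in (3). The function name focal_loss_forward and the default gamma are my own choices for illustration; alpha is treated as a scalar here, whereas the per-class \alpha_{class} in (1) would be a tensor indexed by target.

import torch
import torch.nn.functional as F

def focal_loss_forward(x, target, alpha=1.0, gamma=2.0):
    # x: (N, C) logits; target: (N,) integer class labels.
    # log_softmax computes -x[class] + log(sum_j e^{x[j]}) stably,
    # matching the expansion in (2).
    log_p = F.log_softmax(x, dim=1)                            # (N, C)
    log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)   # log p_t, shape (N,)
    pt = log_pt.exp()                                          # p_t
    return -alpha * (1.0 - pt) ** gamma * log_pt               # per-sample loss

Using log_softmax and then exponentiating, rather than calling softmax and taking its log, avoids overflow in \sum_j e^{x[j]} for large logits.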

 

Focal Loss Backward Gradient Computation


To compute the gradient of the forward formula (3), we first work out the derivative of the building block \log p_t; it relies on the derivative of p_t itself, which is derived in (5) below.

\begin{aligned} \frac{\partial}{\partial x_i} \log p_t &= \frac{1}{p_t}\cdot\frac{\partial p_t}{\partial x_i} \\ &= \frac{1}{p_t}\cdot\frac{\partial}{\partial x_i} \frac{e^{x_t}}{\sum_j{e^{x_j}}} \\ &= \begin{cases} \frac{1}{p_t}\cdot(p_t-p_t^2) = 1-p_t, & i=t \\ \frac{1}{p_t}\cdot(-p_i \cdot p_t) = -p_i, & i \neq t \end{cases} \end{aligned} \quad (4)

 

Next, compute the derivative of p_t:

\begin{aligned} \frac{\partial}{\partial x_i} p_t &= \frac{\partial}{\partial x_i} \frac{e^{x_t}}{\sum_j{e^{x_j}}} \\ &= \begin{cases} \frac{e^{x_t}\cdot \sum_j{e^{x_j}} - e^{x_t}\cdot e^{x_t}}{\sum_j{e^{x_j}} \cdot \sum_j{e^{x_j}}} = p_t - p_t^2, & i=t \\ \frac{-e^{x_t}\cdot e^{x_i}}{\sum_j{e^{x_j}} \cdot \sum_j{e^{x_j}}} = -p_i \cdot p_t, & i \neq t \end{cases} \end{aligned} \quad (5)
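Before moving on, (4) and (5) can be sanity-checked numerically. Below is a small sketch comparing the closed-form derivative of p_t from (5) against central finite differences (all variable names are illustrative):

import torch

torch.manual_seed(0)
x = torch.randn(5, dtype=torch.float64)   # logits
t = 2                                     # target index
eps = 1e-6

p = torch.softmax(x, dim=0)
analytic = -p[t] * p                      # i != t case of (5): -p_i * p_t
analytic[t] = p[t] - p[t] ** 2            # i == t case of (5): p_t - p_t^2

numeric = torch.zeros_like(x)
for i in range(len(x)):
    xp, xm = x.clone(), x.clone()
    xp[i] += eps
    xm[i] -= eps
    numeric[i] = (torch.softmax(xp, 0)[t] - torch.softmax(xm, 0)[t]) / (2 * eps)

print(torch.allclose(analytic, numeric, atol=1e-8))   # expect: True

Dividing the analytic result by p_t then reproduces the two cases of (4).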

 

With (4) and (5) in hand, we can now differentiate (3). For brevity the weighting factor \alpha_{class} is dropped below; since it does not depend on x, it only scales the gradient.

\because \quad FL(x, t) = -(1-p_t)^{\gamma}\log{p_t}

\begin{aligned} \therefore \frac{\partial{FL(x, t)}}{\partial x_i} &= -\gamma(1-p_t)^{\gamma-1} \cdot \frac{\partial (-p_t)}{\partial x_i} \cdot \log p_t - (1-p_t)^\gamma \cdot \frac{\partial \log p_t}{\partial x_i} \\ &= \gamma(1-p_t)^{\gamma-1} \cdot \log p_t \cdot \frac{\partial p_t}{\partial x_i} - (1-p_t)^\gamma \cdot \frac{\partial \log p_t}{\partial x_i} \\ &= \begin{cases} \gamma(1-p_t)^{\gamma-1} \cdot \log p_t \cdot (p_t-p_t^2) - (1-p_t)^\gamma \cdot (1-p_t), & i=t \\ \gamma(1-p_t)^{\gamma-1} \cdot \log p_t \cdot (-p_i\cdot p_t) - (1-p_t)^\gamma \cdot (-p_i), & i \neq t \end{cases} \end{aligned} \quad (6)

Substituting (4) and (5) into (6), collecting terms, and simplifying yields (7):

\therefore \frac{\partial{FL(x, t)}}{\partial x_i} = \begin{cases} -(1-p_t)^{\gamma} \cdot (1-p_t-\gamma p_t \log p_t), & i = t \\ p_i \cdot (1-p_t)^{\gamma-1} \cdot (1-p_t-\gamma p_t \log p_t), & i \neq t \end{cases} \quad (7)
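For instance, the i = t case of (7) follows from the first case of (6) by writing p_t - p_t^2 = p_t(1-p_t) and factoring out (1-p_t)^\gamma:

\gamma(1-p_t)^{\gamma}\cdot p_t \log p_t - (1-p_t)^{\gamma+1} = (1-p_t)^\gamma\big(\gamma p_t \log p_t - (1-p_t)\big) = -(1-p_t)^{\gamma}\cdot(1-p_t-\gamma p_t \log p_t)

The i \neq t case follows the same way after factoring out p_i \cdot (1-p_t)^{\gamma-1}.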

 

(7) is the final result for the backward pass of Focal Loss. When implementing Focal Loss in TensorFlow, PyTorch, or similar frameworks, (7) can be used directly for the backward computation.
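As an illustration of how (7) maps to code, here is a hypothetical PyTorch torch.autograd.Function whose backward evaluates (7) directly (the class and variable names are mine; \alpha is omitted, as in the derivation):

import torch

class FocalLossFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, target, gamma):
        # x: (N, C) logits; target: (N,) labels. Forward is (3) without alpha.
        log_p = torch.log_softmax(x, dim=1)
        log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)
        pt = log_pt.exp()
        ctx.save_for_backward(log_p.exp(), pt, log_pt, target)
        ctx.gamma = gamma
        return -(1.0 - pt) ** gamma * log_pt           # per-sample FL(x, t)

    @staticmethod
    def backward(ctx, grad_out):
        p, pt, log_pt, target = ctx.saved_tensors
        gamma = ctx.gamma
        # Shared factor (1 - p_t - gamma * p_t * log p_t) from (7).
        common = 1.0 - pt - gamma * pt * log_pt                        # (N,)
        # i != t entries of (7): p_i * (1 - p_t)^(gamma - 1) * common
        grad = p * ((1.0 - pt) ** (gamma - 1) * common).unsqueeze(1)   # (N, C)
        # i == t entries of (7): -(1 - p_t)^gamma * common
        diag = -(1.0 - pt) ** gamma * common
        grad.scatter_(1, target.unsqueeze(1), diag.unsqueeze(1))
        return grad * grad_out.unsqueeze(1), None, None

A quick check with torch.autograd.gradcheck (double-precision logits with requires_grad=True) confirms that this hand-written backward matches numerical differentiation.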
