First, a look at the fitting results over iterations (with sklearn's LinearRegression included for comparison):
As we can see, different objective functions lead to different final fitted curves. The code for each objective follows:
```python
import numpy as np

def huber_approx_obj(real, predict):
    """Pseudo-Huber objective: gradient and hessian of the smooth Huber approximation."""
    d = predict - real
    h = 1  # h is delta in the graphic
    scale = 1 + (d / h) ** 2
    scale_sqrt = np.sqrt(scale)
    grad = d / scale_sqrt
    hess = 1 / scale / scale_sqrt
    return grad, hess
```
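The post does not restate the underlying loss, but the `grad`/`hess` above match differentiating the standard pseudo-Huber loss:

$$
L_h(d) = h^2\left(\sqrt{1 + (d/h)^2} - 1\right),\qquad
\frac{\partial L_h}{\partial d} = \frac{d}{\sqrt{1 + (d/h)^2}},\qquad
\frac{\partial^2 L_h}{\partial d^2} = \left(1 + (d/h)^2\right)^{-3/2}
$$

With `scale = 1 + (d / h) ** 2`, these are exactly `grad = d / scale_sqrt` and `hess = 1 / scale / scale_sqrt`.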
The corresponding loss curve (as the prediction deviates further and further from the true value):
```python
def fair_obj(real, predict):
    """Fair loss: y = c * abs(x) - c**2 * np.log(abs(x)/c + 1)"""
    x = predict - real
    c = 1
    den = abs(x) + c
    grad = c * x / den
    hess = c * c / den ** 2
    return grad, hess
```
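As a quick check (not in the original post), differentiating the docstring formula with $x$ the residual, and writing $\operatorname{sign}(x)\,|x| = x$, confirms the code:

$$
\frac{\partial y}{\partial x} = c\,\operatorname{sign}(x)\left(1 - \frac{c}{|x|+c}\right) = \frac{c\,x}{|x|+c},\qquad
\frac{\partial^2 y}{\partial x^2} = \frac{c^2}{(|x|+c)^2}
$$

which is `grad = c * x / den` and `hess = c * c / den ** 2` with `den = abs(x) + c`.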
The well-known Log-Cosh:
```python
def log_cosh_obj(real, predict):
    x = predict - real
    grad = np.tanh(x)
    # hess = 1 / np.cosh(x)**2 -- the original division form; np.cosh overflows
    # for large |x|, so the identity sech^2(x) = 1 - tanh^2(x) is used instead
    hess = 1.0 - np.tanh(x) ** 2
    return grad, hess
```
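A small check of why the rewrite matters (the residual value here is chosen purely for illustration):

```python
import numpy as np

x = np.array([1000.0])
hess_div  = 1 / np.cosh(x) ** 2    # np.cosh overflows to inf (RuntimeWarning), then 1/inf -> 0.0
hess_tanh = 1.0 - np.tanh(x) ** 2  # tanh saturates smoothly to 1.0, no overflow on the way to 0.0
```

Both forms end up at 0.0 here, but only the division form overflows in the intermediate step; `np.tanh` never leaves $[-1, 1]$.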
And the log-cosh loss curve:
Finally, a quartic objective, `m4e`, giving the gradient and hessian of $(predict - real)^4$:

```python
def m4e(real, predict):
    """Gradient and hessian of the quartic loss (predict - real)**4."""
    grad = 4.0 * predict ** 3 - 12.0 * predict ** 2 * real + 12.0 * predict * real ** 2 - 4.0 * real ** 3
    hess = 12.0 * predict ** 2 - 24.0 * predict * real + 12.0 * real ** 2
    return grad, hess
```
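All of these objectives return a `(grad, hess)` pair, which is the shape a gradient-boosting library expects from a custom objective. The post does not show the training call; below is a minimal sketch assuming XGBoost, whose `obj` callback receives `(preds, dtrain)`, so a thin adapter is needed (the data here is synthetic, just to make the sketch runnable):

```python
import numpy as np
import xgboost as xgb

def huber_xgb(preds, dtrain):
    # adapt our (real, predict) convention to XGBoost's (preds, dtrain) callback
    return huber_approx_obj(dtrain.get_label(), preds)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain, num_boost_round=100, obj=huber_xgb)
```

Swapping in `fair_obj`, `log_cosh_obj`, or `m4e` only requires changing the function called inside the adapter.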
Appendix:
Derivation of log-cosh:
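For completeness, the standard computation:

$$
L(x) = \log(\cosh(x)),\qquad
L'(x) = \frac{\sinh(x)}{\cosh(x)} = \tanh(x),\qquad
L''(x) = \operatorname{sech}^2(x) = \frac{1}{\cosh^2(x)} = 1 - \tanh^2(x)
$$

which matches `grad = np.tanh(x)` and `hess = 1.0 - np.tanh(x) ** 2` in `log_cosh_obj`.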