Line Search and Quasi-Newton Methods

Gradient Descent

Parameter estimation for many machine learning models relies on optimization algorithms, and gradient descent is one of the simplest and most widely used of them. Gradient descent [3], also known as steepest descent, can be used to find a local minimum of a function. The idea behind gradient descent is that a function decreases fastest in the direction opposite to its gradient: as long as we move a sufficiently small distance from the current point in the negative gradient direction, the function value is guaranteed not to increase, as illustrated in Figure 1.

The idea of gradient descent can be stated mathematically as follows:
\begin{equation} b=a-\alpha \nabla f(a)\Rightarrow f(a)\geq f(b) \end{equation}
where \(f(x)\) is a differentiable function bounded below. Following this idea, if we start from \(x_0\) and at each step move a distance \(\alpha_k\) in the direction opposite to the current gradient, we obtain the sequence \(x_0,x_1,\cdots,x_n\):
\begin{equation} x_{k+1}=x_k-\alpha_k\nabla f(x_k), \quad 0\leq k\leq n \end{equation}
The corresponding function values satisfy
\begin{equation} f(x_0)\geq f(x_1)\geq f(x_2)\geq\cdots\geq f(x_n) \end{equation}
Clearly, once \(n\) is large enough, \(f(x)\) converges to a local minimum. Algorithm 1 sketches the general gradient-based optimization procedure. In Algorithm 1 we need to choose a search direction \(d_k\) satisfying
\begin{equation} f(x_k+\alpha d_k)<f(x_k)\; \forall\alpha\in (0,\epsilon] \end{equation}
The descent is fastest when \(d_k=-\nabla f(x_k)\), but any \(d_k\) satisfying \(\nabla f(x_k)^Td_k<0\) can serve as a search direction. A search direction is commonly written in the form
\begin{equation} d_k=-B_k\nabla f(x_k) \end{equation}
where \(B_k\) is a positive definite matrix. \(B_k=I\) gives steepest descent, and \(B_k=H(x_k)^{-1}\) gives Newton's method, provided the Hessian \(H(x_k)=\nabla^2f(x_k)\) is positive definite. The step size \(\alpha_k\) used to update \(x_k\) can be constant or varying. If \(\alpha_k\) is small enough, convergence is guaranteed, but the number of iterations \(n\) must then be very large before the function converges (Figure 2(a)); if \(\alpha_k\) is relatively large, the updated point may well overshoot the local optimum (Figure 2(b)). Is there a way to determine a good step size automatically? The line search methods described next provide a family of solutions.
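As a minimal illustration of the fixed-step update rule \(x_{k+1}=x_k-\alpha\nabla f(x_k)\), here is a short sketch; the function name gradient_descent and the quadratic example objective are assumptions made up for this illustration and are not used by any later code.

import numpy as np

def gradient_descent(grad,x0,alpha=0.1,eps=1e-8,max_iter=10000):
    #iterate x_{k+1}=x_k-alpha*grad(x_k) with a fixed step size
    x=x0
    for _ in range(max_iter):
        g=grad(x)
        if np.linalg.norm(g)<eps: #stop once the gradient is almost zero
            break
        x=x-alpha*g
    return x

#example:minimize f(x)=0.5*||x||^2,whose gradient is simply x
x_min=gradient_descent(lambda x:x,np.array([3.0,-4.0]),alpha=0.2)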

Line Search

Given a search direction \(d_k\), line search addresses the following problem:
\begin{equation} \alpha=\arg\;\underset{\alpha\geq 0}{\min}\,h(\alpha)=\arg\;\underset{\alpha\geq 0}{\min}\,f(x_k+\alpha d_k) \end{equation}
If \(h(\alpha)\) is a differentiable convex function, the optimal step size can be obtained analytically; for nonlinear optimization problems, an approximately optimal step size has to be found iteratively. For the problem above, a local or global optimum satisfies \(h'(\alpha)=\nabla f(x_k+\alpha d_k)^Td_k=0\). Because the angle between \(d_k\) and the gradient of \(f\) at \(x_k\) is greater than 90 degrees, we have \(h'(0)\leq 0\); hence if we can find \(\hat{\alpha}\) with \(h'(\hat{\alpha})>0\), there must exist \(\alpha^{\star}\in [0,\hat{\alpha})\) such that \(h'(\alpha^{\star})=0\). Several iterative algorithms can approximate \(\alpha^{\star}\); some typical ones are introduced below.
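All of the line search routines below work on the one-dimensional function \(h(\alpha)\). As a small sketch of that reduction (the helper names make_h and make_hprime are illustrative only and are not used by the later code, which assumes theta and d are numpy row matrices and computes the directional derivative as np.sum(dfun(...)*d.T)):

import numpy as np

def make_h(fun,theta,args,d):
    #h(a)=f(theta+a*d):the objective restricted to the search direction d
    return lambda a:fun(theta+a*d,args)

def make_hprime(dfun,theta,args,d):
    #h'(a)=f'(theta+a*d)^T d:the directional derivative along d
    return lambda a:np.sum(dfun(theta+a*d,args)*d.T)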

Bisection Search

Bisection line search [2] can be used to find a root of a function. The idea is simple: repeatedly split the current interval in half, keep the half that must contain a point with \(h'(\alpha)=0\) as the interval for the next iteration, and stop once \(h'(\alpha^{\star})\approx 0\); see Algorithm 2. Bisection search is guaranteed to converge as long as \(h'(\alpha)\) is continuous on \((0,\hat{\alpha})\) and \(h'(0)\) and \(h'(\hat{\alpha})\) have opposite signs. After \(n\) iterations, the length of the current interval \([\alpha_l,\alpha_h]\) is
\begin{equation} L=\left(\frac{1}{2}\right)^n\hat{\alpha} \end{equation}
Since the iteration continues only while \(\alpha_h-\alpha_l\geq\epsilon\), the number of iterations is bounded above by
\begin{equation} L\leq \epsilon\Rightarrow n\leq\left\lceil\log_2\left(\frac{\hat{\alpha}}{\epsilon}\right)\right\rceil \end{equation}
The Python code for bisection search is given below.

import numpy as np

def bisection(dfun,theta,args,d,low,high,maxiter=1e4):
    """
    #Functionality:find a root of h'(a)=f'(theta+a*d)^T d in the interval [low,high]
    #@Parameters
    #dfun:compute the gradient of the objective function f(x)
    #theta:parameters of the model
    #args:other variables needed to compute the value of dfun
    #d:search direction
    #[low,high]:the interval which contains the root
    #maxiter:the max number of iterations
    """
    eps=1e-6
    val_low=np.sum(dfun(theta+low*d,args)*d.T)   #h'(low)
    val_high=np.sum(dfun(theta+high*d,args)*d.T) #h'(high)
    if val_low*val_high>0: #the two endpoints must bracket a root
        raise Exception('Invalid interval!')
    iter_num=1
    while iter_num<maxiter:
        mid=(low+high)/2
        val_mid=np.sum(dfun(theta+mid*d,args)*d.T) #h'(mid)
        if abs(val_mid)<eps or abs(high-low)<eps:
            return mid
        elif val_mid*val_low>0: #the root lies in [mid,high]
            low=mid
            val_low=val_mid
        else:                   #the root lies in [low,mid]
            high=mid
        iter_num+=1
    return (low+high)/2 #return the midpoint if the iteration budget is exhausted
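A small usage example may help; the quadratic objective, its gradient and the bracketing interval below are assumptions made up for illustration. Minimizing \(f(\theta)=\frac{1}{2}\|\theta\|^2\) from \(\theta_0=(4,3)\) along the steepest descent direction, the minimizer of \(h(\alpha)\) is exactly \(\alpha=1\), which bisection recovers:

import numpy as np
import numpy.matlib as matlib

def dfun(theta,args): #gradient of f(theta)=0.5*||theta||^2
    return theta

theta0=matlib.matrix([[4.0,3.0]]) #row vector,as assumed by the routines in this post
d=-dfun(theta0,None)              #steepest descent direction
alpha=bisection(dfun,theta0,None,d,low=0.0,high=2.0)
print(alpha) #approximately 1.0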

Backtracking

Backtracking line search [1] computes an acceptable step size along the search direction based on the Armijo condition. The basic idea is to start from a fairly large estimate of the step size and shrink it iteratively until the step produces a decrease of \(f(x_k+\alpha d_k)\) relative to \(f(x_k)\) that is at least as large as expected, i.e., until the Armijo condition holds. The Armijo condition (see Figure 3) is
\begin{equation} f(x_k+\alpha d_k)\leq f(x_k)+c_1\alpha f'(x_k)^Td_k \end{equation}
where \(f:\mathbb{R}^n\rightarrow\mathbb{R}\), \(c_1\in(0,1)\), \(\alpha\) is the step size, and \(d_k\in\mathbb{R}^n\) is a search direction satisfying \(f'(x_k)^Td_k<0\). The Armijo condition alone, however, is not enough to produce a good step size: as we saw for gradient descent, any sufficiently small \(\alpha\) satisfies it. The usual strategy is therefore to start from a large step size and shrink it at a rate \(\tau\in(0,1)\) until the Armijo condition is met, so that the chosen step is not too small; see Algorithm 3. The bisection search described earlier aims for a near-optimal step size with \(h'(\alpha)\approx 0\), whereas backtracking relaxes the requirement and only asks that the step produce a sufficiently large decrease of the function value. The former may save a few evaluations of the search direction but spends considerable effort on the optimal step size; the latter settles for a reasonable step size at the cost of computing the search direction a few more times. Next, we prove the convergence of backtracking line search under the Armijo condition [6]. Since \(h'(0)=f'(x_k)^Td_k<0\) and \(0<c_1<1\), we have
\begin{equation} h'(0)<c_1h'(0)<0 \end{equation}
By the definition of the derivative, combined with the inequality above,
\begin{equation} h'(0)=\lim_{\alpha\rightarrow 0}\frac{h(\alpha)-h(0)}{\alpha}=\lim_{\alpha\rightarrow 0}\frac{f(x_k+\alpha d_k)-f(x_k)}{\alpha}<c_1h'(0) \end{equation}
Therefore there exists a step size \(\hat{\alpha}>0\) such that for every \(\alpha\in(0,\hat{\alpha})\)
\begin{equation} \frac{f(x_k+\alpha d_k)-f(x_k)}{\alpha}<c_1f'(x_k)^Td_k \end{equation}
that is, \(\forall \alpha\in(0,\hat{\alpha}),\; f(x_k+\alpha d_k)<f(x_k)+c_1\alpha f'(x_k)^Td_k\). The Python code for backtracking line search under the Armijo condition follows.

def ArmijoBacktrack(fun,dfun,theta,args,d,stepsize=1,tau=0.5,c1=1e-3):
    """
    #Functionality:find an acceptable stepsize via backtracking under the Armijo condition
    #@Parameters
    #fun:compute the value of the objective function
    #dfun:compute the gradient of the objective function
    #theta:a vector of parameters of the model
    #args:other variables needed by fun and dfun
    #d:search direction
    #stepsize:initial step size
    #tau:shrinking rate of the stepsize
    #c1:sufficient decrease parameter
    """
    slope=np.sum(dfun(theta,args)*d.T) #h'(0)=f'(theta)^T d
    obj_old=fun(theta,args)            #h(0)=f(theta)
    theta_new=theta+stepsize*d
    obj_new=fun(theta_new,args)
    while obj_new>obj_old+c1*stepsize*slope: #shrink until the Armijo condition holds
        stepsize*=tau
        theta_new=theta+stepsize*d
        obj_new=fun(theta_new,args)
    return stepsize

Interpolation

The convergence rate of backtracking under the Armijo condition cannot be guaranteed, especially when many backtracking steps are needed before the step size falls into the acceptable interval. If instead we use the function values and derivatives already available to fit a polynomial (interpolation) [12,6,5,9] and take the minimizer of that polynomial as the next trial step size, a suitable step size can be found much more efficiently. Suppose we only have the function value \(f(x_k)\) and the derivative \(f'(x_k)\) at \(x_k\), and the first trial step size is \(\alpha_0\). If \(\alpha_0\) does not satisfy the condition, we can construct from this information a quadratic approximation \(h_q(\alpha)\):
\begin{equation} h_q(\alpha)=\left(\frac{h(\alpha_0)-h(0)-\alpha_0h'(0)}{\alpha_0^2}\right)\alpha^2+h'(0)\alpha+h(0) \end{equation}
Note that this quadratic satisfies \(h_q(0)=h(0)\), \(h_q'(0)=h'(0)\) and \(h_q(\alpha_0)=h(\alpha_0)\), as shown in Figure 4(a). The next trial step size is the minimizer of \(h_q(\alpha)\):
\begin{equation} \alpha_1=\frac{h'(0)\alpha_0^2}{2[h(0)+h'(0)\alpha_0-h(\alpha_0)]} \end{equation}
If \(\alpha_1\) still does not satisfy the condition, we repeat the process until an acceptable step size is found. If quadratic interpolation is used throughout the line search, it is best to take \(c_1\in(0,0.5]\). Why? A quick argument: if \(\alpha_0\) violates the Armijo condition, there must be a smaller step size satisfying it, so the interpolated step should satisfy \(\alpha_1<\alpha_0\). Combining the violation of the Armijo condition at \(\alpha_0\) with \(\alpha_1<\alpha_0\) yields \(c_1\leq 0.5\). If we have already tried several step sizes but only use the information from the last one to build the quadratic, we waste computation; we can use the information from several step sizes to build a more informative and more accurate interpolant. When derivatives are more expensive to compute than function values, evaluations of \(h'(\alpha)\) should be avoided as much as possible; the following cubic interpolant \(h_c(\alpha)\), shown in Figure 4(b), only uses \(h'(0)\):
\begin{equation} h_c(\alpha)=a\alpha^3+b\alpha^2+h'(0)\alpha+h(0) \end{equation}
where
\begin{equation} \left[\begin{array}{c} a\\ b \end{array}\right] =\frac{1}{\alpha_{i-1}^2\alpha_i^2(\alpha_i-\alpha_{i-1})} \left[\begin{array}{cc} \alpha_{i-1}^2 & -\alpha_i^2\\ -\alpha_{i-1}^3 & \alpha_i^3 \end{array}\right] \left[\begin{array}{c} h(\alpha_i)-h(0)-h'(0)\alpha_i\\ h(\alpha_{i-1})-h(0)-h'(0)\alpha_{i-1} \end{array}\right] \end{equation}
Setting the derivative of \(h_c(\alpha)\) to zero gives the minimizer \(\alpha_{i+1}\in[0,\alpha_i]\):
\begin{equation} \alpha_{i+1}=\frac{-b+\sqrt{b^2-3ah'(0)}}{3a} \end{equation}
This process is repeated with the cubic interpolant until the step size satisfies the condition. If \(a=0\) occurs, the cubic degenerates to a quadratic; an implementation needs to handle this case. During the process, if \(\alpha_i\) becomes too small or \(\alpha_{i-1}\) and \(\alpha_i\) get too close, we reset \(\alpha_i=\alpha_{i-1}/2\); this safeguard keeps the next step size from becoming too small [6,5]. Why does it help? 1) Since \(\alpha_{i+1}\in[0,\alpha_i]\), a very small \(\alpha_i\) forces \(\alpha_{i+1}\) to be very small as well; 2) when \(\alpha_{i-1}\) and \(\alpha_i\) are too close, the coefficients \(a\) and \(b\) blow up, and the expression for \(\alpha_{i+1}\) then gives \(\alpha_{i+1}\approx 0\). In many cases, however, the derivative (or a good approximation of it) comes almost for free once the function value has been computed, which allows us to interpolate with the more accurate cubic Hermite polynomial [6], shown in Figure 4(c):
\begin{equation} \begin{array}{rl} H_3(\alpha)=&\left[1+2\frac{\alpha_i-\alpha}{\alpha_i-\alpha_{i-1}}\right]\left[\frac{\alpha-\alpha_{i-1}}{\alpha_i-\alpha_{i-1}}\right]^2h(\alpha_{i})\\ &+\left[1+2\frac{\alpha-\alpha_{i-1}}{\alpha_i-\alpha_{i-1}}\right]\left[\frac{\alpha_{i}-\alpha}{\alpha_i-\alpha_{i-1}}\right]^2h(\alpha_{i-1})\\ &+(\alpha-\alpha_i)\left[\frac{\alpha-\alpha_{i-1}}{\alpha_i-\alpha_{i-1}}\right]^2h'(\alpha_i)\\ &+(\alpha-\alpha_{i-1})\left[\frac{\alpha_i-\alpha}{\alpha_i-\alpha_{i-1}}\right]^2h'(\alpha_{i-1}) \end{array} \end{equation}
The cubic Hermite polynomial satisfies \(H_3(\alpha_{i-1})=h(\alpha_{i-1})\), \(H_3(\alpha_{i})=h(\alpha_{i})\), \(H_3'(\alpha_{i-1})=h'(\alpha_{i-1})\) and \(H_3'(\alpha_{i})=h'(\alpha_{i})\). The minimum of \(H_3(\alpha)\) can only be attained at an endpoint or at an interior stationary point; setting \(H_3'(\alpha)=0\) gives the next step size:
\begin{equation} \alpha_{i+1}=\alpha_i-(\alpha_i-\alpha_{i-1})\left[\frac{h'(\alpha_i)+d_2-d_1}{h'(\alpha_i)-h'(\alpha_{i-1})+2d_2}\right] \end{equation}
where
\begin{equation} d_1=h'(\alpha_i)+h'(\alpha_{i-1})-3\left[\frac{h(\alpha_i)-h(\alpha_{i-1})}{\alpha_i-\alpha_{i-1}}\right] \end{equation}
\begin{equation} d_2=\mathrm{sign}(\alpha_i-\alpha_{i-1})\sqrt{d_1^2-h'(\alpha_{i-1})h'(\alpha_i)} \end{equation}
The Python code for quadratic and cubic interpolation is given below.

import numpy as np
import numpy.matlib as matlib

def quadraticInterpolation(a,h,h0,g0):
    """
    #Functionality:approximate h(a) with a quadratic function and return its minimizer
    #@Parameters
    #a:current stepsize
    #h:h(a)=f(x_k+a*d)
    #h0:h(0)=f(x_k)
    #g0:h'(0)=f'(x_k)^T d
    """
    numerator=g0*a**2
    denominator=2*(g0*a+h0-h)
    if abs(denominator)<1e-12: #indicates that a is almost 0
        return a
    return numerator/denominator

def cubicInterpolation(a0,h0,a1,h1,h,g):
    """
    #Functionality:approximate h(a) with a cubic function and return its minimizer
    #This version avoids evaluating h'(a),suitable when derivatives are more expensive than function values
    #@Parameters
    #a0,a1:stepsizes of the previous two iterations
    #h0:h(a0)
    #h1:h(a1)
    #h:h(0)=f(x_k)
    #g:h'(0)=f'(x_k)^T d
    """
    mat=matlib.matrix([[a0**2,-a1**2],[-a0**3,a1**3]])
    vec=matlib.matrix([[h1-h-g*a1],[h0-h-g*a0]])
    ab=mat*vec/(a0**2*a1**2*(a1-a0))
    a=ab[0,0]
    b=ab[1,0]
    if abs(a)<1e-12: #a=0:the cubic degenerates to a quadratic
        return -g/(2*b)
    return (-b+np.sqrt(b**2-3*a*g))/(3*a)

def cubicInterpolationHermite(a0,h0,g0,a1,h1,g1):
    """
    #Functionality:approximate h(a) with a cubic Hermite polynomial and return its minimizer
    #This version reuses derivatives,suitable when derivatives are cheap relative to function values
    #@Parameters
    #a0,a1:stepsizes of the previous two iterations
    #h0:h(a0)
    #g0:h'(a0)
    #h1:h(a1)
    #g1:h'(a1)
    """
    d1=g0+g1-3*(h1-h0)/(a1-a0)
    d2=np.sign(a1-a0)*np.sqrt(d1**2-g0*g1)
    res=a1-(a1-a0)*(g1+d2-d1)/(g1-g0+2*d2)
    return res

The line search algorithm based on the Armijo condition is described below [4]; the corresponding Python code for the Armijo line search is as follows:

def ArmijoLineSearch(fun,dfun,theta,args,d,a0=1,c1=1e-3,a_min=1e-7,max_iter=1e5):
    """
    #Functionality:line search under the Armijo condition with quadratic and cubic interpolation
    #@Parameters
    #fun:objective function
    #dfun:compute the gradient of fun
    #theta:a vector of parameters of the model
    #args:other variables needed by fun and dfun
    #d:search direction
    #a0:initial stepsize
    #c1:constant used in the Armijo condition
    #a_min:minimum stepsize
    #max_iter:maximum number of iterations
    """
    eps=1e-6
    c1=min(c1,0.5) #c1 should be <=0.5 when quadratic interpolation is used
    a_pre=0
    a_cur=a0
    f_val=fun(theta,args)              #h(0)=f(x)
    g_val=np.sum(dfun(theta,args)*d.T) #h'(0)=f'(x)^T d
    h_pre=f_val
    g_pre=g_val
    k=0
    while a_cur>a_min and k<max_iter:
        h_cur=fun(theta+a_cur*d,args)              #h(a_cur)
        g_cur=np.sum(dfun(theta+a_cur*d,args)*d.T) #h'(a_cur)
        if h_cur<=f_val+c1*a_cur*g_val: #the Armijo condition is met
            return a_cur
        if not k: #k=0:only one trial point,use quadratic interpolation
            a_new=quadraticInterpolation(a_cur,h_cur,f_val,g_val)
        else:     #k>0:two trial points,use cubic Hermite interpolation
            a_new=cubicInterpolationHermite(a_pre,h_pre,g_pre,a_cur,h_cur,g_cur)
        if abs(a_new-a_cur)<eps or abs(a_new)<eps: #safeguard procedure
            a_new=a_cur/2
        a_pre=a_cur
        a_cur=a_new
        h_pre=h_cur
        g_pre=g_cur
        k+=1
    return a_min #failed search

Wolfe Search

As mentioned above, a step size chosen by the Armijo condition alone (without the backtracking strategy) may be too small. To rule out these tiny steps, we add a curvature condition (see Figure 5):
\begin{equation} h'(\alpha)=f'(x_k+\alpha d_k)^Td_k\geq c_2f'(x_k)^Td_k \end{equation}
where \(c_2\in(c_1,1)\) and \(c_1\) is the constant in the Armijo condition. When \(h'(\alpha)\) is a small negative number or even positive, the angle between the gradient at \(x_k+\alpha d_k\) and the search direction is close to 90 degrees; moving further along \(d_k\) can no longer decrease the function value noticeably, so we can stop searching along \(d_k\). Otherwise there is still plenty of room to decrease the function value, and we can keep moving forward. The Armijo condition and the curvature condition together are called the Wolfe conditions [5]:
\begin{equation} \left\lbrace \begin{array}{rl} f(x_k+\alpha d_k)&\leq f(x_k)+c_1\alpha f'(x_k)^Td_k\\ f'(x_k+\alpha d_k)^Td_k&\geq c_2f'(x_k)^Td_k \end{array} \right. \end{equation}
where \(0<c_1<c_2<1\). As Figure 6 shows, a step size satisfying the Wolfe conditions may still be far from a minimizer of \(h(\alpha)\). We can modify the curvature condition so that the step size falls into a broad neighborhood of a minimizer of \(h(\alpha)\); the strong Wolfe conditions constrain the step size \(\alpha\) as follows:
\begin{equation} \left\lbrace \begin{array}{rl} f(x_k+\alpha d_k)&\leq f(x_k)+c_1\alpha f'(x_k)^Td_k\\ |f'(x_k+\alpha d_k)^Td_k|&\leq c_2|f'(x_k)^Td_k| \end{array} \right. \end{equation}
The strong Wolfe conditions no longer allow \(h'(\alpha)\) to be a large positive number, which excludes regions far from a stationary point. Does a step size satisfying the strong Wolfe conditions always exist? Assume \(h(\alpha)=f(x_k+\alpha d_k)\) is continuously differentiable and bounded below over the whole domain \(\alpha>0\). Since \(0<c_1<1\), the line \(l(\alpha)=f(x_k)+\alpha c_1f'(x_k)^Td_k\) must intersect \(h(\alpha)\) at least once. Let \(\alpha'\) be the smallest such intersection; then
\begin{equation} f(x_k+\alpha'd_k)=f(x_k)+\alpha'c_1f'(x_k)^Td_k \end{equation}
and every step size \(\alpha\in(0,\alpha')\) satisfies the Armijo condition. By the mean value theorem, there exists \(\alpha''\in(0,\alpha')\) such that
\begin{equation} f(x_k+\alpha'd_k)-f(x_k)=\alpha'f'(x_k+\alpha''d_k)^Td_k \end{equation}
Combining the two equalities above with \(0<c_1<c_2\) and \(f'(x_k)^Td_k<0\), we obtain
\begin{equation} f'(x_k+\alpha''d_k)^Td_k=c_1f'(x_k)^Td_k>c_2f'(x_k)^Td_k \end{equation}
Hence \(\alpha''\) satisfies the strong Wolfe conditions. If \(h(\alpha)\) is reasonably smooth, a small neighborhood around \(\alpha''\) also satisfies them. A line search that uses the strong Wolfe conditions can locate step sizes closer to a minimizer more precisely, and such searches are widely used today. A line search based on the strong Wolfe conditions has two phases: the first phase starts from an initial step size and keeps increasing it until it finds either a step size satisfying the strong Wolfe conditions or an interval containing one; the second phase, given such an interval, keeps shrinking it until a step size satisfying the strong Wolfe conditions is found. The line search based on the strong Wolfe conditions is described in Algorithm 5.
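Before turning to Algorithm 5, here is a minimal sketch that makes the two strong Wolfe inequalities concrete; the helper name strongWolfeSatisfied is an assumption for illustration, and it follows the row-vector convention np.sum(grad*d.T) used by the code in this post:

import numpy as np

def strongWolfeSatisfied(fun,dfun,theta,args,d,a,c1=1e-4,c2=0.9):
    #h(0)=f(x_k) and h'(0)=f'(x_k)^T d
    h0=fun(theta,args)
    g0=np.sum(dfun(theta,args)*d.T)
    #h(a) and h'(a) at the candidate step size
    ha=fun(theta+a*d,args)
    ga=np.sum(dfun(theta+a*d,args)*d.T)
    armijo=ha<=h0+c1*a*g0         #sufficient decrease condition
    curvature=abs(ga)<=c2*abs(g0) #strong curvature condition
    return armijo and curvature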

In Algorithm 5, \(\alpha_{new}\) is updated because the interval \((\alpha_{pre},\alpha_{cur})\) contains no step size satisfying the Wolfe conditions, so a step size \(\alpha_{new}\) larger than \(\alpha_{cur}\) must be chosen. In the algorithm, \(\alpha_{new}\) is computed by quadratic interpolation, which is why \(0<c_1<0.5\) is required. Other choices also work, for example multiplying \(\alpha_{cur}\) by a constant greater than 1, as long as an interval containing a Wolfe step size is found quickly. The trial step sizes \(\alpha_{cur}\) of this algorithm are therefore increasing; once an interval containing a step size satisfying the Wolfe conditions is found, the \(zoom\) function is called immediately to shrink the interval and return a step size satisfying the Wolfe conditions. From the logic of the algorithm we can infer that \(\alpha_{pre}\) satisfies the Armijo condition, violates the curvature condition, and has a negative derivative. These three facts imply that \(\alpha_{pre}\) lies on the decreasing part of the curve to the left of the interval of Wolfe step sizes, so it suffices for \(\alpha_{cur}\) to lie to the right of that interval. How do we decide whether the interval \((\alpha_{pre},\alpha_{cur})\) contains a step size satisfying the Wolfe conditions? The following are three sufficient conditions for \(\alpha_{cur}\) to lie to the right of that interval:

  1. \(\alpha_{cur}\) violates the Armijo condition;
  2. \(h(\alpha_{cur})\geq h(\alpha_{pre})\);
  3. \(h'(\alpha_{cur})\geq 0\)

This is easy to see from Figure 7, where the possible positions of \(\alpha_{pre}\) and \(\alpha_{cur}\) are marked with red and green points respectively, and the blue numbered circles indicate which of the conditions \(\alpha_{cur}\) satisfies. The Python code for line search based on the Wolfe conditions is as follows:

def WolfeLineSearch(fun,dfun,theta,args,d,a0=1,c1=1e-4,c2=0.9,a_min=1e-7,max_iter=1e5):
    """
    #Functionality:find a stepsize satisfying the strong Wolfe conditions
    #@Parameters
    #fun:objective function
    #dfun:compute the gradient of fun
    #theta:a vector of parameters of the model
    #args:other variables needed by fun and dfun
    #d:search direction
    #a0:initial stepsize
    #c1:constant used in the Armijo condition
    #c2:constant used in the curvature condition
    #a_min:minimum stepsize
    #max_iter:maximum number of iterations
    """
    eps=1e-16
    c1=min(c1,0.5)
    a_pre=0
    a_cur=a0
    f_val=fun(theta,args)              #h(0)=f(x)
    g_val=np.sum(dfun(theta,args)*d.T) #h'(0)=f'(x)^T d
    h_pre=f_val
    k=0
    while k<max_iter and abs(a_cur-a_pre)>=eps:
        h_cur=fun(theta+a_cur*d,args) #h(a_cur)=f(x+ad)
        if h_cur>f_val+c1*a_cur*g_val or (h_cur>=h_pre and k>0):
            #the interval (a_pre,a_cur) brackets a Wolfe stepsize
            return zoom(fun,dfun,theta,args,d,a_pre,a_cur,c1,c2)
        g_cur=np.sum(dfun(theta+a_cur*d,args)*d.T) #h'(a_cur)
        if abs(g_cur)<=-c2*g_val: #the strong Wolfe conditions hold
            return a_cur
        if g_cur>=0: #a minimizer of h has been passed
            return zoom(fun,dfun,theta,args,d,a_pre,a_cur,c1,c2)
        a_new=quadraticInterpolation(a_cur,h_cur,f_val,g_val)
        a_pre=a_cur
        a_cur=a_new
        h_pre=h_cur
        k+=1
    return a_min

The zoom function is described in Algorithm 6. It takes a search interval \([\alpha_{low},\alpha_{high}]\) with \(\alpha_{low}<\alpha_{high}\). The zoom function here differs slightly from the one in [5], but its logic is simpler and clearer. From the conditions under which zoom is called in Algorithm 5, we know that \(\alpha_{low}\) must satisfy the Armijo condition and lie to the left of every step size satisfying the Wolfe conditions. We first take the midpoint of \([\alpha_{low},\alpha_{high}]\) as the next trial step size \(\alpha_{new}\); if it happens to satisfy the Wolfe conditions, it is returned directly. If \(\alpha_{new}\) violates the Armijo condition or \(h(\alpha_{new})>h(\alpha_{low})\), the interval \([\alpha_{low},\alpha_{new}]\) clearly contains a step size satisfying the Wolfe conditions, so \(\alpha_{high}\) is replaced by \(\alpha_{new}\) to shorten the interval. Otherwise \(\alpha_{new}\) also satisfies the Armijo condition: if \(h'(\alpha_{new})>0\), both \(\alpha_{new}\) and \(\alpha_{high}\) lie to the right of the interval of Wolfe step sizes, so \(\alpha_{new}\) replaces \(\alpha_{high}\); otherwise \(\alpha_{new}\) replaces \(\alpha_{low}\). This iteration keeps shrinking the interval until a step size satisfying the Wolfe conditions is found. If the search fails within the allowed number of iterations, the function returns \(\alpha_{low}\), which is guaranteed to satisfy the Armijo condition.

The Python code for the zoom function is as follows:

def zoom(fun,dfun,theta,args,d,a_low,a_high,c1=1e-3,c2=0.9,max_iter=1e4):
    """
    #Functionality:shrink the interval [a_low,a_high] to find a stepsize satisfying the strong Wolfe conditions
    #@Parameters
    #fun:objective function
    #dfun:compute the gradient of fun
    #theta:a vector of parameters of the model
    #args:other variables needed by fun and dfun
    #d:search direction
    #[a_low,a_high]:interval containing a stepsize satisfying the Wolfe conditions
    #c1:constant used in the Armijo condition
    #c2:constant used in the curvature condition
    #max_iter:maximum number of iterations
    """
    if a_low>a_high:
        print('low:%f,high:%f'%(a_low,a_high))
        raise Exception('Invalid interval of stepsize in zoom procedure')
    eps=1e-16
    h=fun(theta,args)              #h(0)=f(x)
    g=np.sum(dfun(theta,args)*d.T) #h'(0)=f'(x)^T d
    k=0
    h_low=fun(theta+a_low*d,args)
    h_high=fun(theta+a_high*d,args)
    if h_low>h+c1*a_low*g:
        raise Exception('Left endpoint violates Armijo condition in zoom procedure')
    while k<max_iter and abs(a_high-a_low)>=eps:
        a_new=(a_low+a_high)/2 #bisect the current interval
        h_new=fun(theta+a_new*d,args)
        if h_new>h+c1*a_new*g or h_new>h_low: #a_new lies to the right of the Wolfe interval
            a_high=a_new
            h_high=h_new
        else:
            g_new=np.sum(dfun(theta+a_new*d,args)*d.T)
            if abs(g_new)<=-c2*g: #the strong Wolfe conditions hold
                return a_new
            if g_new*(a_high-a_low)>=0: #equivalent to g_new>=0 since a_low<a_high
                a_high=a_new
                h_high=h_new
            else:
                a_low=a_new
                h_low=h_new
        k+=1
    return a_low #a_low is guaranteed to satisfy the Armijo condition

Newton's Method

Newton's method [8] finds a root of a function iteratively: starting from an initial point, it repeatedly approximates \(f(x)\) at the current point \(x_k\) by its tangent and takes the intersection of that tangent with the \(x\)-axis as the next iterate \(x_{k+1}\), until an approximate solution of \(f(x)=0\) is found. Newton's method can also be applied to the optimization of a twice-differentiable function \(f(x)\). At \(x_k\) we approximate \(f\) by its second-order Taylor expansion:
\begin{equation} f(x_{k}+\triangle x)\approx f(x_k)+f'(x_k)^T\triangle x+\frac{1}{2}{\triangle x}^TB_k\triangle x \end{equation}
Our goal now is to find, near \(x_k\), the \(\triangle x\) that minimizes \(f(x)\). Differentiating the expansion with respect to \(\triangle x\) gives the following linear approximation of \(f'(x)\) at \(x_{k+1}=x_k+\triangle x\):
\begin{equation} f'(x_{k+1})=f'(x_k)+B_k(x_{k+1}-x_k) \end{equation}
where \(B_k=\nabla^2f(x_k)\) is the Hessian matrix of \(f(x)\) at \(x_k\). Since extrema generally satisfy \(f'(x)=0\), setting \(f'(x_{k+1})=0\) and simplifying yields the iteration
\begin{equation} x_{k+1}=x_k-B_k^{-1}f'(x_k) \end{equation}
Newton's method converges very quickly; for a quadratic function it reaches the optimum in a single step. When used for optimization, however, it requires computing the first derivative, the second derivative, and the inverse of the Hessian, which is expensive. Moreover, the function may not be differentiable, the Hessian may not be invertible, and the iterates may cycle (i.e., \(x_{k+t}=x_k\)); all of these are obstacles to applying Newton's method. The steps of Newton's method for solving \(f'(x)=0\) are given in Algorithm 7. Of course, Newton's method can also be used to find the optimal step size: simply replace the function \(f(x)\) in Algorithm 7 with the function \(h(\alpha)\) of the step size.
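As a concrete illustration of the last remark, here is a minimal sketch of Newton's iteration applied to the one-dimensional step-size problem \(h'(\alpha)=0\); the callables hprime and hsecond, which return \(h'(\alpha)\) and \(h''(\alpha)\), are assumptions for this example and are not part of the code in this post:

def newtonStepSize(hprime,hsecond,a0=1.0,eps=1e-8,max_iter=100):
    #Newton iteration on h'(a)=0: a_{k+1}=a_k-h'(a_k)/h''(a_k)
    a=a0
    for _ in range(max_iter):
        g=hprime(a)    #h'(a)
        if abs(g)<eps: #a stationary point of h has been found
            return a
        B=hsecond(a)   #h''(a),the one-dimensional Hessian
        if abs(B)<eps: #avoid dividing by a vanishing second derivative
            break
        a=a-g/B
    return a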

Quasi-Newton Method

Quasi-Newton methods [11] find local optima of a function, i.e., stationary points where the derivative is zero. When Newton's method is used for optimization, it assumes the function can be locally approximated by a quadratic and uses first and second derivatives to locate the local optimum. A quasi-Newton method does not compute the Hessian exactly; instead, it maintains an approximate matrix \(B_k\) obtained from successive gradient vectors through the quasi-Newton (secant) condition:
\begin{equation} f'(x_{k+1})-f'(x_k)\approx B_{k+1}(x_{k+1}-x_k) \end{equation}
The quasi-Newton algorithm is outlined in Algorithm 8; its basic idea is to use the matrix \(B_k\) to compute an approximation \(d_k\) of the Newton direction. Different quasi-Newton methods differ mainly in how they update the approximate Hessian. Table 1 lists the update rules of some mainstream quasi-Newton methods, where \(s_k=x_{k+1}-x_k=-\alpha_kB_k^{-1}f'(x_k)\) and \(y_k=f'(x_{k+1})-f'(x_k)\). The most widely used quasi-Newton method is BFGS, and its limited-memory variant L-BFGS [4] is particularly popular in machine learning. BFGS stores an \(n\times n\) matrix \(H_k\) that approximates the inverse Hessian, whereas L-BFGS only stores the most recent \(m\) (usually \(m<10\)) pairs of \(n\)-dimensional update data \((s_i,y_i)\). The space complexity of L-BFGS is therefore linear in \(n\), which makes it well suited to optimization problems with a very large number of variables. The BFGS algorithm is easy to write down:

import numpy as np
import numpy.matlib as matlib

def BFGS(fun,dfun,theta,args,H=None,mode=0,eps=1e-12,max_iter=1e4):
    """
    #Functionality:find the minimum of the objective function f(x)
    #@Parameters
    #fun:objective function f(x)
    #dfun:compute the gradient of f(x)
    #theta:start vector of parameters of the model
    #args:other parameters needed by fun and dfun
    #H:initial inverse Hessian approximation
    #mode:index of the line search algorithm
    """
    x_pre=x_cur=theta
    g=dfun(x_cur,args) #gradient at the current point(row vector)
    I=matlib.eye(theta.size)
    if H is None: #initialize H as an identity matrix
        H=I
    k=0
    while k<max_iter and np.sum(np.abs(g))>eps:
        d=-g*H #search direction d=-H*f'(x) in row-vector form
        step=LineSearch(fun,dfun,x_pre,args,d,1,mode) #one of the line searches above
        x_cur=x_pre+step*d
        s=step*d                            #s_k=x_{k+1}-x_k
        y=dfun(x_cur,args)-dfun(x_pre,args) #y_k=f'(x_{k+1})-f'(x_k)
        ys=np.sum(y*s.T)                    #y_k^T s_k
        if abs(ys)<eps: #curvature information is no longer reliable
            return x_cur
        #BFGS update of the inverse Hessian approximation
        change=(ys+np.sum(y*H*y.T))*(s.T*s)/(ys**2)-(H*y.T*s+s.T*y*H)/ys
        H+=change
        g=dfun(x_cur,args)
        x_pre=x_cur
        k+=1
    return x_cur
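The BFGS code above (and the L-BFGS code below) calls a helper LineSearch that is never defined in this post; it simply dispatches to one of the line search routines implemented earlier according to mode. A minimal sketch of such a dispatcher, matching the call LineSearch(fun,dfun,theta,args,d,a0,mode) used here, might look as follows; the mapping from mode values to algorithms is an assumption:

def LineSearch(fun,dfun,theta,args,d,a0=1,mode=0):
    #dispatch to one of the line search routines defined above
    #mode=0:backtracking under the Armijo condition
    #mode=1:Armijo line search with interpolation
    #otherwise:line search under the strong Wolfe conditions
    if mode==0:
        return ArmijoBacktrack(fun,dfun,theta,args,d,stepsize=a0)
    elif mode==1:
        return ArmijoLineSearch(fun,dfun,theta,args,d,a0=a0)
    return WolfeLineSearch(fun,dfun,theta,args,d,a0=a0)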

Let us now see how the L-BFGS algorithm is constructed [10,13]. Suppose we are at iteration \(k\) (\(k\geq m\)) of the optimization, with parameters \(x_k\), gradient \(g_k=f'(x_k)\), and \(m\) stored update pairs \(s_k=x_{k+1}-x_k\) and \(y_k=g_{k+1}-g_k\). What we ultimately need is the search direction \(d_k=-H_kg_k\). Let \(V_k=(I-\rho_ky_ks_k^T)\) and \(\rho_k=1/(y_k^Ts_k)\); expanding the BFGS update rule for \(H_{k}\) from Table 1 gives
\begin{equation} \begin{array}{rl} &H_{k}g_k\\ =&V_{k-1}^TH_{k-1}V_{k-1}g_k+s_{k-1}\rho_{k-1}s_{k-1}^Tg_k\\ =&V_{k-1}^TV_{k-2}^TH_{k-2}V_{k-2}V_{k-1}g_k+V_{k-1}^Ts_{k-2}\rho_{k-2}s_{k-2}^TV_{k-1}g_k+s_{k-1}\rho_{k-1}s_{k-1}^Tg_k\\ =&(V_{k-1}^TV_{k-2}^T\cdots V_{k-m}^T)H_{k-m}(V_{k-m}\cdots V_{k-2}V_{k-1})g_k\\ &+(V_{k-1}^TV_{k-2}^T\cdots V_{k-m+1}^T)s_{k-m}\rho_{k-m}s_{k-m}^T(V_{k-m+1}\cdots V_{k-2}V_{k-1})g_k\\ &+(V_{k-1}^TV_{k-2}^T\cdots V_{k-m+2}^T)s_{k-m+1}\rho_{k-m+1}s_{k-m+1}^T(V_{k-m+2}\cdots V_{k-2}V_{k-1})g_k\\ &+ \cdots\\ &+V_{k-1}^Ts_{k-2}\rho_{k-2}s_{k-2}^TV_{k-1}g_k\\ &+ s_{k-1}\rho_{k-1}s_{k-1}^Tg_k \end{array} \end{equation}
This expression is very regular, which lays a good foundation for an iterative computation. Let \(q_0=g_k\); for \(1\leq i\leq m\) define
\begin{equation} q_i=(V_{k-i}\cdots V_{k-2}V_{k-1})g_k \end{equation}
\begin{equation} a_i=\rho_{k-i}s_{k-i}^Tq_{i-1} \end{equation}
which leads to the recursion
\begin{equation} \begin{array}{rl} q_i&=V_{k-i}q_{i-1}\\ &=q_{i-1}-\rho_{k-i}y_{k-i}s_{k-i}^Tq_{i-1}\\ &=q_{i-1}-a_{i}y_{k-i} \end{array} \end{equation}
So far we can compute the right half of every term of \(H_{k}g_k\); how do we handle the left half? Here we repeatedly factor out the leftmost common factor:
\begin{equation} H_{k}g_k=P_1=V_{k-1}^TP_{2}+s_{k-1}a_1 \end{equation}
\begin{equation} P_{2}=V_{k-2}^TP_{3}+s_{k-2}a_2 \end{equation}
Repeating this process, the pattern quickly emerges:
\begin{equation} \begin{array}{rl} P_{i}&=V_{k-i}^TP_{i+1}+s_{k-i}a_i\\ &=P_{i+1}+s_{k-i}(a_i-\rho_{k-i}y_{k-i}^TP_{i+1}) \end{array} \end{equation}
where \(P_{m+1}=H_{k-m}q_m\). Based on this analysis, Algorithm 9 computes the L-BFGS search direction. As the whole flow of Algorithm 9 shows, the current search direction is obtained with two loops of \(m\) iterations each; it needs to store the historical data \(\{s_{k-i},y_{k-i}\mid i=1,\cdots,m\}\) and the temporary values \(\{a_{i}\mid i=1,\cdots,m\}\), so both the time and the space complexity of the algorithm are \(O(mn)\). In the early iterations, when fewer than \(m\) pairs of historical data are available, we simply use whatever is available and keep adding new data in subsequent iterations; once the iteration count is at least \(m\), after each search direction is computed we replace \(s_{k-m}\) and \(y_{k-m}\) with \(s_k\) and \(y_k\) to form the new set of \(m\) historical update pairs.

Algorithm 9 requires the matrix \(H_{k-m}\). In the first iteration it is initialized to the identity matrix; in subsequent iterations \(H_{k-m}=\gamma_kI\), where
\begin{equation} \gamma_k=\frac{y_{k-1}^Ts_{k-1}}{y_{k-1}^Ty_{k-1}} \end{equation}
Moreover, on a memory-constrained system it is unrealistic to store an \(n\times n\) matrix; with this choice we only need to store the single scalar \(\gamma_k\), a simple yet effective trick [13]. Finally, here is a Python version of L-BFGS:

import copy
import numpy as np

def LBFGS(fun,dfun,theta,args,mode=0,eps=1e-12,max_iter=1e4):
    """
    #Functionality:find the minimum of the objective function f(x) with L-BFGS
    #@Parameters
    #fun:objective function f(x)
    #dfun:compute the gradient of f(x)
    #theta:start vector of parameters of the model
    #args:other parameters needed by fun and dfun
    #mode:index of the line search algorithm
    """
    x_pre=x_cur=theta
    s_arr=[] #list of s_i=x_{i+1}-x_i
    y_arr=[] #list of y_i=f'(x_{i+1})-f'(x_i)
    Hscale=1 #scalar gamma_k used to initialize H_{k-m}
    k=0
    while k<max_iter:
        g=dfun(x_cur,args)
        d=LBFGSSearchDirection(y_arr,s_arr,Hscale,-g) #two-loop recursion
        step=LineSearch(fun,dfun,x_pre,args,d,1,mode)
        s=step*d
        x_cur=x_pre+s
        y=dfun(x_cur,args)-dfun(x_pre,args)
        if np.sum(np.abs(s))<eps: #the update is too small to make further progress
            return x_cur
        x_pre=x_cur
        k+=1
        y_arr,s_arr,Hscale=LBFGSUpdate(y,s,y_arr,s_arr)
    return x_cur


def LBFGSSearchDirection(y_arr,s_arr,Hscale,g):
    """
    #Functionality:estimate the search direction with the L-BFGS two-loop recursion
    #@Parameters
    #y_arr:list of row vectors,y_arr[i]=f'(x_{i+1})-f'(x_i)
    #s_arr:list of row vectors,s_arr[i]=x_{i+1}-x_i
    #Hscale:a scalar used to initialize the inverse Hessian approximation H_{k-m}
    #g:a row vector representing -f'(x_k)
    """
    histNum=len(s_arr) #number of stored update pairs
    if not histNum:
        return g
    a_arr=[0 for i in range(histNum)]
    rho=[0 for i in range(histNum)]
    q=g
    for i in range(1,histNum+1): #first loop:from the most recent pair to the oldest
        s=s_arr[histNum-i]
        y=y_arr[histNum-i]
        rho[histNum-i]=1/np.sum(s*y.T)
        a_arr[i-1]=rho[histNum-i]*np.sum(s*q.T)
        q-=(a_arr[i-1]*y)
    P=Hscale*q #P_{m+1}=H_{k-m}*q_m
    for i in range(histNum,0,-1): #second loop:from the oldest pair to the most recent
        y=y_arr[histNum-i]
        s=s_arr[histNum-i]
        beta=rho[histNum-i]*np.sum(y*P.T)
        P+=s*(a_arr[i-1]-beta)
    return P


def LBFGSUpdate(y,s,oldy,olds,m=1e2):
    """
    #Functionality:refresh the historical update data
    #@Parameters
    #y:f'(x_{k+1})-f'(x_k)
    #s:x_{k+1}-x_k
    #oldy:[y0,y1,...],which is a list
    #olds:[s0,s1,...],which is a list
    #m:number of historical pairs to store(default:100)
    """
    eps=1e-12
    Hscale=np.sum(y*s.T)/np.sum(y*y.T) #gamma_k=y^T s/y^T y,used to initialize H_{k-m}
    if Hscale<eps: #skip the update when the curvature information is unreliable
        return oldy,olds,Hscale

    cur_m=len(oldy)
    if cur_m>=m:
        oldy.pop(0)
        olds.pop(0)
    oldy.append(copy.deepcopy(y))
    olds.append(copy.deepcopy(s))
    return oldy,olds,Hscale

References

[1] Backtracking line search. http://en.wikipedia.org/wiki/Backtracking_line_search.

[2] Bisection method. http://en.wikipedia.org/wiki/Bisection_method.

[3] Gradient descent. http://en.wikipedia.org/wiki/Gradient_descent.

[4] Limited-memory BFGS. http://en.wikipedia.org/wiki/Limited-memory_BFGS.

[5] Line search methods. http://pages.cs.wisc.edu/~ferris/cs730/chap3.pdf.

[6] Line search methods: step length selection. http://terminus.sdsu.edu/SDSU/Math693a_f2013/Lectures/06/lecture.pdf.

[7] Math 408a line search methods. https://www.math.washington.edu/~burke/crs/408/lectures/L7-line-search.pdf.

[8] Newton’s method. http://en.wikipedia.org/wiki/Newton%27s_method.

[9] Nonlinear programming algorithms. http://www.math.bme.hu/~bog/GlobOpt/Chapter5.pdf.

[10] Overview of quasi-Newton optimization methods. https://homes.cs.washington.edu/~galen/files/quasi-newton-notes.pdf.

[11] Quasi-Newton method. http://en.wikipedia.org/wiki/Quasi-Newton_method.

[12] Unconstrained minimization. http://www.ing.unitn.it/~bertolaz/2-teaching/2011-2012/AA-2011-2012-OPTIM/lezioni/slides-mND.pdf.

[13] Dong C. Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3):503-528, 1989.
