The Differential Evolution Algorithm

Overview of the Differential Evolution Algorithm

  Differential Evolution grew out of Ken Price's attempts to solve the Chebychev Polynomial fitting Problem that had been posed to him by Rainer Storn. A breakthrough happened, when Ken came up with the idea of using vector differences for perturbing the vector population. Since this seminal idea a lively discussion between Ken and Rainer and endless ruminations and computer simulations on both parts yielded many substantial improvements which make DE the versatile and robust tool it is today. The "DE community" has been growing since the early DE years of 1994 - 1996 and ever more researchers are working on and with DE. Those scientists who contributed actively to this homepage are listed at the bottom in alphabetical order. It is the strong wish of Ken and Rainer that DE will be developed further by scientists around the world and that DE may improve to help more users in their daily work. This wish is the reason why DE has not been patented in any way.

——Quoted from https://www1.icsi.berkeley.edu/~storn/code.html

  Differential Evolution grew out of Ken Price's attempts to solve the Chebyshev polynomial fitting problem posed to him by Rainer Storn. The breakthrough came when Ken Price proposed using vector differences to perturb the vector population. Building on this seminal idea, Ken Price and Rainer Storn carried out lively discussions along with continual reflection and computer simulations, which produced many substantial improvements and made DE the versatile and robust tool it is today.

New words and phrases

grow out of      to arise from; to originate from

perturb          to disturb; to agitate

seminal          of great significance; groundbreaking

rumination       deep thought; pondering

substantial      real; essential

yield            to produce; to provide

Theory of the Standard Differential Evolution Algorithm

  The DE algorithm is based on real-valued encoding. It first generates a random initial population within the feasible range of the solution space, and then produces a new generation of the population through differential mutation, crossover, and selection operations.
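  As a concrete illustration of this workflow, the following is a minimal, self-contained Python sketch of the classic DE/rand/1/bin variant. The population size NP, scale factor F, crossover rate CR, box bounds, and the sphere test function used here are illustrative assumptions rather than values from the original text.

```python
import numpy as np

def sphere(x):
    """Illustrative test objective: f(x) = sum(x_j^2), minimum at the origin."""
    return float(np.sum(x ** 2))

def de_rand_1_bin(f, bounds, NP=20, F=0.5, CR=0.9, generations=200, seed=0):
    """Minimal DE/rand/1/bin sketch: initialization, mutation, crossover, selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds                      # per-parameter lower/upper limits
    D = lo.size
    # Initialization: uniform sampling inside the box [lo, hi]
    pop = lo + rng.random((NP, D)) * (hi - lo)
    fitness = np.array([f(ind) for ind in pop])
    for _ in range(generations):
        for i in range(NP):
            # Mutation (DE/rand/1): three distinct individuals, all different from i
            r1, r2, r3 = rng.choice([k for k in range(NP) if k != i],
                                    size=3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])
            v = np.clip(v, lo, hi)       # keep the mutant inside the search space
            # Binomial crossover: mix mutant and target, keep at least one mutant gene
            cross = rng.random(D) < CR
            cross[rng.integers(D)] = True
            u = np.where(cross, v, pop[i])
            # Selection: greedy replacement if the trial vector is no worse
            fu = f(u)
            if fu <= fitness[i]:
                pop[i], fitness[i] = u, fu
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

# Example usage on a 5-dimensional sphere function
best_x, best_f = de_rand_1_bin(sphere, (np.full(5, -5.0), np.full(5, 5.0)))
print(best_x, best_f)
```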

Initial Population

  The goal of the DE algorithm is to evolve a population of NP D-dimensional parameter vectors; each such individual encodes a candidate solution to the global optimization problem.
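  Using notation common in the DE literature (the symbols below are an assumption for the purposes of this write-up), the i-th individual of generation G can be written as

$$\mathbf{x}_{i,G} = \left(x_{1,i,G},\ x_{2,i,G},\ \dots,\ x_{D,i,G}\right), \qquad i = 1, 2, \dots, NP.$$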

  The initial population should cover the whole search space as well as possible. After the minimum and maximum parameter bounds are specified, individuals are drawn from the search space according to a uniform distribution.

For example:

  In generation 0, the j-th parameter of the i-th individual is generated by the following formula:
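  A standard form of this initialization rule (the bound symbols $x_j^{\min}$ and $x_j^{\max}$ are assumed names for the specified lower and upper limits of the j-th parameter) is

$$x_{j,i,0} = x_j^{\min} + \mathrm{rand}(0,1)\cdot\left(x_j^{\max} - x_j^{\min}\right), \qquad j = 1,\dots,D,\quad i = 1,\dots,NP,$$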

  where rand(0,1) denotes a random variable uniformly distributed on the interval [0,1].

Mutation

  DE is used for multidimensional real-valued functions but does not use the gradient of the problem being optimized, which means DE does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-newton methods. DE can therefore also be used on optimization problems that are not even continuous, are noisy, change over time, etc.

——Wikipedia

  In optimization problems, DE is applied to multidimensional real-valued functions and does not make use of the gradient of the problem being optimized. This means DE does not require the optimization problem to be differentiable, unlike classical optimization methods such as gradient descent and quasi-Newton methods.

  Here I want to point out first that DE requires the objective to be a multidimensional real-valued function; in other words, a candidate solution must be a vector (an individual is a vector, and each component of the vector is a gene).

  In DE, the difference vector between individuals in the population is scaled and then added to another, distinct individual of the population to obtain the mutant vector. Different ways of generating the mutant vector give rise to a variety of mutation strategies. Among them, the equation of the DE/rand/1 strategy is:
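  This equation is commonly written as follows (the generation index G and the symbol F for the scale factor follow the usual DE conventions and are assumed here):

$$\mathbf{v}_{i,G+1} = \mathbf{x}_{r_1,G} + F\cdot\left(\mathbf{x}_{r_2,G} - \mathbf{x}_{r_3,G}\right),$$

  where r1, r2, and r3 are mutually distinct indices randomly drawn from {1, ..., NP}, all different from the target index i, and F > 0 is the scale factor applied to the difference vector.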
