Standard Differential Evolution Algorithm in MATLAB (repost)

Standard differential evolution algorithm: MATLAB implementation

MATLAB and C++ code and papers on self-adaptive differential evolution

Differential Evolution (DE)

MATLAB practice program: differential evolution (DE)

[DE] Differential evolution: principles and MATLAB code

Differential evolution algorithm

CEC2017 benchmark function calling interface

Yong Wang: http://ist.csu.edu.cn/YongWang.htm   http://www.escience.cn/people/yongwang1/index.html

Rammohan Mallipeddi, Ponnuthurai N. Suganthan, Quan-Ke Pan, Mehmet Fatih Tasgetiren, "Differential evolution algorithm with ensemble of parameters and mutation strategies": https://www.researchgate.net/publication/223921753

My graduate studies have finally begun, and the tedious enrollment chores are mostly done. On September 5 our lab held its first group meeting, and our advisor assigned each of us a research direction; as expected, I will be studying the application of intelligent optimization algorithms to solar cells.

From the 6th to the 9th I spent three days reading two English papers on the standard differential evolution algorithm. Although I briefly encountered DE while working on my undergraduate thesis, I never implemented it myself and only skimmed the algorithm flow; this time I read the pseudocode in the papers carefully and studied some code written by others. In the process I found that much of the code available online is either not encapsulated or contains bugs of one kind or another, so in the end I reimplemented it in MATLAB, adapting others' code according to my own understanding of DE. If you spot anything incorrect, please leave a comment. The two English papers and the full MATLAB code are given below.

English references on differential evolution: "Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces" and "Differential Evolution Algorithm With Strategy Adaptation for Global Numerical Optimization".

Standard differential evolution algorithm, MATLAB implementation:

1. Main function: DE_Std.m

%Implements "ALGORITHMIC DESCRIPTION OF DE" from the paper "Differential
%Evolution Algorithm With Strategy Adaptation for Global Numerical Optimization"
%@written by Zhan Qian,2015-5-24
%Objective values are computed by testFun(x,FunIndex)
%Mutant vectors are produced by mutation(X,bestX,F,mutationStrategy)
%Trial vectors are produced by crossover(X,V,CR,crossStrategy)
%mutation
%mutationStrategy=1:DE/rand/1,
%mutationStrategy=2:DE/best/1,
%mutationStrategy=3:DE/rand-to-best/1,
%mutationStrategy=4:DE/best/2,
%mutationStrategy=5:DE/rand/2.
%crossover
%crossStrategy=1:binomial crossover
%crossStrategy=2:exponential crossover
clear
maxIteration=1000;%maximum number of iterations
Generation=0;%current generation counter
Xmax=30;%upper search bound (may be replaced by a vector if needed)
Xmin=-30;%lower search bound
Dim=30;%dimension of each individual
NP=50;%population size
F=0.5;%scaling factor
CR=0.3;%crossover rate
FunIndex=3;%index of the test function; different values select different functions
mutationStrategy=1;%mutation strategy
crossStrategy=2;%crossover strategy
%%
%step1 initialization
%X represents the population
X=(Xmax-Xmin)*rand(NP,Dim)+Xmin;%row i of X is individual i; column j is dimension j
fitnessX=zeros(1,NP);%preallocate objective values
fitnessU=zeros(1,NP);
bestfitnessG=zeros(1,maxIteration);

%%
%step2 mutation, crossover, selection
while Generation<maxIteration
%find bestX
    for i=1:NP
        fitnessX(i)=testFun(X(i,:),FunIndex);%objective value of individual i
    end
    [fitnessbestX,indexbestX]=min(fitnessX);
    bestX=X(indexbestX,:);%position of the current best individual
%%
%step2.1 mutation (strategies 1-5 as listed above)
%generate a mutant vector Vi,G for every individual Xi,G; G is the generation index
    V=mutation(X,bestX,F,mutationStrategy);
%%
%step2.2 crossover (binomial or exponential, as listed above)
%generate a trial vector Ui,G for every individual Xi,G
    U=crossover(X,V,CR,crossStrategy);
%%
%step2.3 selection
    for i=1:NP
        fitnessU(i)=testFun(U(i,:),FunIndex);
        if fitnessU(i)<=fitnessX(i)
            X(i,:)=U(i,:);
            fitnessX(i)=fitnessU(i);
            if fitnessU(i)<fitnessbestX
                bestX=U(i,:);
                fitnessbestX=fitnessU(i);
            end
        end
    end
%%
    Generation=Generation+1;
    bestfitnessG(Generation)=fitnessbestX;
end

%%
%plot the convergence curve
%plot(bestfitnessG);
optValue=num2str(fitnessbestX);
Location=num2str(bestX);
disp(strcat('the optimal value','=',optValue));
disp(strcat('the best location','=',Location));
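For a compact cross-check of the same pipeline, here is an illustrative Python/NumPy sketch of DE/rand/1 with binomial crossover (the specific case mutationStrategy=1, crossStrategy=1 above); the function name `de_rand1_bin` and the seeding are my own additions, not part of the original MATLAB program:

```python
import numpy as np

def de_rand1_bin(fun, dim, bounds, np_size=50, F=0.5, CR=0.3,
                 max_iter=200, seed=0):
    """Minimal DE/rand/1/bin sketch mirroring the structure of DE_Std.m."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = lo + (hi - lo) * rng.random((np_size, dim))       # step 1: initialization
    fit = np.array([fun(x) for x in X])
    for _ in range(max_iter):
        for i in range(np_size):
            # three mutually distinct indices, all different from i
            r1, r2, r3 = rng.choice([j for j in range(np_size) if j != i],
                                    size=3, replace=False)
            v = X[r1] + F * (X[r2] - X[r3])               # step 2.1: mutation
            jrand = rng.integers(dim)
            mask = rng.random(dim) <= CR
            mask[jrand] = True                            # at least one gene from v
            u = np.where(mask, v, X[i])                   # step 2.2: binomial crossover
            fu = fun(u)
            if fu <= fit[i]:                              # step 2.3: greedy selection
                X[i], fit[i] = u, fu
    best = int(np.argmin(fit))
    return X[best], fit[best]

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    x_best, f_best = de_rand1_bin(sphere, dim=5, bounds=(-30, 30))
    print(f_best)  # should be close to 0
```

The structure is deliberately one-to-one with the MATLAB script: the same NP, F, CR, Dim quantities, and the same three-phase generation loop.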
 
 

2. Mutation: mutation.m

%To preserve diversity, the nrandI random indices used to build each new individual
%must be mutually distinct and also different from i; i.e. the nrandI randomly
%chosen parents of the i-th new individual must not include the i-th individual itself.
 
function V=mutation(X,bestX,F,mutationStrategy)
NP=size(X,1);%number of individuals (length(X) would be wrong whenever Dim>NP)
V=zeros(size(X));%preallocate
for i=1:NP
    %draw nrandI mutually distinct indices in [1,NP], none of them equal to i
    nrandI=5;
    r=randi([1,NP],1,nrandI);
    for j=1:nrandI
        equalr(j)=sum(r==r(j));
    end
    equali=sum(r==i);
    equalval=sum(equalr)+equali;
    while(equalval>nrandI) %redraw if any indices coincide with each other or with i
        r=randi([1,NP],1,nrandI);
        for j=1:nrandI
            equalr(j)=sum(r==r(j));
        end
        equali=sum(r==i);
        equalval=sum(equalr)+equali;
    end

    switch mutationStrategy
        case 1
            %mutationStrategy=1:DE/rand/1;
            V(i,:)=X(r(1),:)+F*(X(r(2),:)-X(r(3),:));
        case 2
            %mutationStrategy=2:DE/best/1;
            V(i,:)=bestX+F*(X(r(1),:)-X(r(2),:));
        case 3
            %mutationStrategy=3:DE/rand-to-best/1;
            V(i,:)=X(i,:)+F*(bestX-X(i,:))+F*(X(r(1),:)-X(r(2),:));
        case 4
            %mutationStrategy=4:DE/best/2;
            V(i,:)=bestX+F*(X(r(1),:)-X(r(2),:))+F*(X(r(3),:)-X(r(4),:));
        case 5
            %mutationStrategy=5:DE/rand/2;
            V(i,:)=X(r(1),:)+F*(X(r(2),:)-X(r(3),:))+F*(X(r(4),:)-X(r(5),:));
        otherwise
            error('unknown mutation strategy; please reset mutationStrategy');
    end
end
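The rejection loop above works, but the same guarantee can be obtained in a single draw: sample nrandI indices without replacement from the population with i excluded. An illustrative NumPy sketch (the name `pick_distinct` is mine):

```python
import numpy as np

def pick_distinct(np_size, i, n, rng=None):
    """Draw n mutually distinct indices from 0..np_size-1, all different from i."""
    rng = rng or np.random.default_rng()
    candidates = np.delete(np.arange(np_size), i)   # every index except i
    return rng.choice(candidates, size=n, replace=False)

r = pick_distinct(50, 7, 5)
print(sorted(r.tolist()))
```

The MATLAB equivalent of this idea is to take the first nrandI entries of a `randperm` over the population with index i removed.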
        

3. Crossover: crossover.m

function U=crossover(X,V,CR,crossStrategy)
[NP,Dim]=size(X);
U=zeros(NP,Dim);%preallocate
switch crossStrategy
    %crossStrategy=1:binomial crossover
    case 1
        for i=1:NP
            jRand=randi([1,Dim]);%jRand in [1,Dim]
            for j=1:Dim
                if rand<=CR||j==jRand %j==jRand guarantees at least one U(i,j)=V(i,j)
                    U(i,j)=V(i,j);
                else
                    U(i,j)=X(i,j);
                end
            end
        end
    %crossStrategy=2:exponential crossover
    case 2
        for i=1:NP
            j=randi([1,Dim]);%random start position in [1,Dim]
            L=0;
            U(i,:)=X(i,:);
            while true %copy a circular run of components from V
                U(i,j)=V(i,j);%at least one component always comes from V
                j=j+1;
                if j>Dim
                    j=1;
                end
                L=L+1;
                if rand>=CR || L>=Dim %a fresh random number is drawn at every step
                    break;
                end
            end
        end
    otherwise
        error('unknown crossover strategy; please reset crossStrategy');
end
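The two schemes differ in which genes they copy from the mutant: binomial crossover flips an independent coin per dimension, while exponential crossover copies one contiguous (circular) run of genes starting at a random position, whose length is geometrically distributed in CR. An illustrative side-by-side sketch in Python (function names are mine):

```python
import numpy as np

def binomial_crossover(x, v, CR, rng):
    """Independent per-gene choice; jrand forces at least one gene from v."""
    dim = x.size
    jrand = rng.integers(dim)
    mask = rng.random(dim) <= CR
    mask[jrand] = True
    return np.where(mask, v, x)

def exponential_crossover(x, v, CR, rng):
    """Copy a circular run of genes from v, continuing while rand < CR."""
    dim = x.size
    u = x.copy()
    j = rng.integers(dim)          # random start position
    L = 0
    while True:
        u[j] = v[j]                # at least one gene always comes from v
        j = (j + 1) % dim
        L += 1
        if rng.random() >= CR or L >= dim:
            break
    return u
```

With CR=0 the exponential variant still copies exactly one gene, and with CR close to 1 it tends to copy the whole vector; this matches the do-while structure of the MATLAB case 2 above.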
        

4. Test functions: testFun.m

function y=testFun(x,index)
%x is the parameter vector; index selects the test function
%these are generic benchmark functions and can be reused elsewhere
%contents
%  name          optimum location         optimal value
%1.Sphere        x=0                      0
%2.Camel         two global minima        about -1.0316 (Dim must be 2)
%3.Rosenbrock    x=(1,...,1)              0
switch index
    case 1 %Sphere function
        y=sum(x.^2);
    case 2 %six-hump camel-back function; Dim must be 2
        if length(x)>2
            error('the dimension of x exceeds 2');
        end
        xx=x(1);yy=x(2);
        y=(4-2.1*xx^2+xx^4/3)*xx^2+xx*yy+(-4+4*yy^2)*yy^2;
    case 3 %Rosenbrock function
        y=0;
        for i=2:length(x)
            y=y+100*(x(i)-x(i-1)^2)^2+(x(i-1)-1)^2;
        end
    otherwise
        disp('no such function, please choose another');
end
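For reference, the same three benchmark functions in Python, with their known optima as checks (the six-hump camel-back has two global minima near (±0.0898, ∓0.7126) with value about -1.0316):

```python
import numpy as np

def sphere(x):
    """Minimum 0 at x = 0."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def camel(x):
    """Six-hump camel-back; defined for 2-D inputs only."""
    x1, x2 = x
    return ((4 - 2.1 * x1**2 + x1**4 / 3) * x1**2
            + x1 * x2 + (-4 + 4 * x2**2) * x2**2)

def rosenbrock(x):
    """Minimum 0 at x = (1, ..., 1)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1) ** 2))
```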

  

 

 

First web page (of two)

Link: http://www1.osu.cz/~tvrdik/?page_id=18

 

Algorithms of global optimization and their applications (English)

 

Global Optimization, Evolutionary Algorithms and their Application to Computational Statistics

 

 

Stochastic algorithms of global optimization

In 1993 we started to deal with stochastic algorithms of global optimization. We modified the Controlled Random Search algorithm (Price 1976) by randomizing the reflection in simplex. The modified CRS algorithm was applied to estimating parameters of non-linear regression models and also in the shape optimization problem. For the testing of the stochastic algorithms a set of fourteen difficult non-linear regression tasks was collected. This collection can be downloaded here, the description of the tasks and references are in the file nlmod14.pdf (zip), data in text format are in the file models.zip.

 Differential Evolution:

Competitive Differential Evolution for Constrained Problems – CEC 2010
Source code in Matlab       download readme.txt (source code zip)

Adaptive Differential Evolution Algorithms, Handbook of Optimization, 2012
Source code in Matlab or C       download readmeHO.txt (source code zip)

 

Hybrid DE for Optimal Clustering 2015   

Manuscript of paper      download asoc2014sub_tvkr  

Matlab source code   readmeClust     download   clustan_b6e6rl_source    

 

Matlab Program Library for Box-constrained Continuous Problems (including a new adaptive stochastic algorithm for the estimation of parameters
in nonlinear regression models)  here

 

Selected papers:

 Tvrdík, J., Estimation of Regression Parameters by Controlled Random Search Algorithm (in Czech), In: Antoch, J. and Dohnal, G. (eds), Proceedings of ROBUST 94, 153-159, JČMF Praha, 1994, ISBN 80-7015-492-6

Křivý, I., Tvrdík, J., The Controlled Random Search Algorithm in Optimizing of Regression Models, Computational Statistics and Data Analysis, 20, 229-234, 1995

Haslinger J., Jedelský D., Kozubek T., Tvrdík J., Genetic and Random Search Methods in Optimal Shape Design Problems, Journal of Global Optimizations 16, 109-131, 2000

Křivý I., Tvrdík J., Stochastic Algorithms in Estimating Regression Models, COMPSTAT 1996, Proceedings in Computational Statistics (ed. A. Prat), 325-330, Physica Verlag, 1996

Tvrdík, J., Křivý, I., Simple Evolutionary Heuristics for Global Optimization, Computational Statistics and Data Analysis, 30, 345-352, 1999
download allmut99.pdf (zip)

Křivý I., Tvrdík J., Krpec R.: Stochastic Algorithms in Nonlinear Regression, Comput. Statist. Data Anal. 33, 278-290, 2000

Mišík, L., Tvrdík, J., Křivý, I. : On Convergence of a Class of Stochastic Algorithms, In: Proceedings of ROBUST 2000 (J.Antoch a G. Dohnal eds), 198-209, JČMF Praha, 2001, ISBN 80-7015-792-5
download rob00new.pdf (zip)

Tvrdík, J., Křivý, I., Mišík, L., Evolutionary Algorithm with Competing Heuristics, In: Ošmera, P. (ed.) Proceedings of MENDEL 2001, 7th International Conference on Soft Computing, 58-64, Technical University, Brno, 2001,
download mendel01.pdf (zip)

Tvrdík, J., Mišík, L., Křivý, I., Competing Heuristics in Evolutionary Algorithms, 2nd Euro-International Symposium on Computational Intelligence June 16 – 19, 2002, Košice, Slovakia, published in SINCAK et al. Intelligent Technologies-Theory and Applications. Amsterdam, The Netherlands: IOS Press, 2002. 159-165.
download kos02cln.pdf (zip)

Tvrdík, J., Křivý, I., Mišík, L., Evolutionary Algorithms with Competing Heuristics in Computational Statistics, COMPSTAT 2002, Proceedings in Computational Statistics (ed. Haerdle W., Roenz B.), 349-354, Physica Verlag, Heidelberg
download comp02.pdf (zip)

Tvrdík, J., Generalized controlled random search and competing heuristics. In MENDEL 2004, 10th International Conference on Soft Computing (Matoušek R. and Ošmera P. eds). University of Technology, Brno, 2004. 228-233.
download mendel04.pdf

Tvrdík, J. Competition and Cooperation in Evolutionary Algorithms: A Comparative Study. In MENDEL 2005, 11th International Conference on Soft Computing. Brno : University of Technology, Brno, 2005. 108-113.
download mendel05.pdf       extended CD version men05_CD.pdf

Tvrdík, J., Křivý, I., Mišík, L., Adaptive population-based search: application to estimation of nonlinear regression parameters. Computational Statistics and Data Analysis 52(2), 713-724 (2007)
download preprint CSDA-06SAS03e.pdf

Tvrdík, J. Competitive Differential Evolution. In MENDEL 2006, 12th International Conference on Soft Computing. Brno : University of Technology, Brno, 2006. 7-12.
download mendel06.pdf       extended CD version mendel06_CD.pdf

Tvrdík, J. Differential Evolution with Competitive Setting of its Control Parameters. TASK Quarterly 11, 169-179 (2007)
download reprint task07.pdf

Tvrdík, J. Adaptation in Differential Evolution: A Numerical Comparison. APPL SOFT COMPUT. 2009, 9, 1149-1155. ASOC2009_reprint.pdf

Tvrdík, J. Self-adaptive Variants of Differential Evolution with Exponential Crossover. Analele Universitatii de Vest, Timisoara.Seria Matematica-Informatica. 2009, 47, 151-168.
download reprint analele09.pdf

TVRDÍK, J., POLÁKOVÁ, R. Competitive Differential Evolution for Constrained Problems. 2010 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC). IEEE, 2010. s. 1632-1639.

POLÁKOVÁ, R., TVRDÍK, J. Various Mutation Strategies  in Enhanced  Competitive Differential Evolution for Constrained Optimization. 2011 IEEE Symposium on Differential Evolution. IEEE, 2011. s. 17-24. [2011-04-11]. ISBN 978-1-61284-070-3

TVRDÍK, J., KŘIVÝ, I. Hybrid Adaptive  Differential Evolution in  Partitional Clustering. MENDEL 2011 17th International Conference on Soft Computing. Brno: University of Technology, 2011. s. 1-8. [2011-06-15]. ISBN 978-80-214-4302-0                                                                                                                                                                               download preprint mendel11tvkr.pdf

 

 

Second web page (of two)

http://irafm.osu.cz/sago/

 

Self-adaptive Algorithms for Global Optimization – MATLAB and C++ Library Including Parallel Implementation for Many Processors

Introduction

The Matlab program library was established in the frame of project 201/05/0284 of the Czech Grant Agency at the University of Ostrava in 2006. The extension by a C++ version comes from a later period, and the parallel version of the C++ library is the result of a project supported by the European Regional Development Fund in the IT4Innovations Centre of Excellence project (CZ.1.05/1.1.00/02.0070).

Project team:

  • Josef Tvrdík, Department of Comp. Science, University of Ostrava,
  • Viktor Pavliska, Institute for Research and Applications of Fuzzy Modeling,University of Ostrava,
  • Radek Valášek, Institute for Research and Applications of Fuzzy Modeling,University of Ostrava,
  • Radka Poláková, Institute for Research and Applications of Fuzzy Modeling,University of Ostrava,
  • Hashim Habiballa, Department of Comp. Science, University of Ostrava,

The aim of this library is to make new self-adaptive stochastic algorithms accessible to potential users and to facilitate the use of self-adaptive algorithms in practical problems. Self-adaptation means that the user need not set up the control parameters by preliminary trial-and-error numerical experiments.

The library includes source codes of algorithms:

These programs are free software; you can redistribute them and/or modify them under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. The programs are distributed in the hope that they will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details, http://www.fsf.org/.

If you use a routine of this library, please cite this reference in your paper or report:

Tvrdík J., Pavliska V., Valášek, R., Poláková, R., Habiballa, H. (2014) Self-adaptive Algorithms for Global Optimization – MATLAB and C++ Library Including Parallel Implementation for Many Processors, irafm.osu.cz/sago

Contact address for Matlab routines: josef.tvrdik@osu.cz, radka.polakova@osu.cz

Contact address for C++ routines: viktor.pavliska@osu.cz, radek.valasek@osu.cz

 

 

The differential evolution algorithm (DE) is very similar to the genetic algorithm (GA). The steps of DE are given below.

 

The algorithm proceeds as follows:

  1. Initialization

  2. Mutation

  3. Crossover

  4. Selection


Test function:

Rastrigin function

f(x) = sum_{i=1}^{D} ( x_i^2 - 10*cos(2*pi*x_i) + 10 )

Global optimum: x* = (0, ..., 0), with f(x*) = 0
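As a quick sanity check, the Rastrigin function and its global optimum in Python:

```python
import numpy as np

def rastrigin(v):
    """Rastrigin: f(x) = sum(x_i^2 - 10*cos(2*pi*x_i) + 10); minimum 0 at x = 0."""
    v = np.asarray(v, dtype=float)
    return float(np.sum(v**2 - 10.0 * np.cos(2.0 * np.pi * v) + 10.0))

print(rastrigin(np.zeros(10)))  # 0.0
```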

The MATLAB code is as follows:

 

 
function DE(Gm,F0)
%Differential evolution program
%F0 is the base scaling factor; Gm is the maximum number of generations
t0 = cputime;
if nargin < 2 %use defaults when no arguments are supplied
    Gm = 10000;
    F0 = 0.5;
end
Np = 100; %population size
CR = 0.9; %crossover probability
G = 1; %generation counter
D = 10; %dimension of the problem
Gmin = zeros(1,Gm); %best value of each generation
best_x = zeros(Gm,D); %best solution of each generation
value = zeros(1,Np);
trace = zeros(Gm,2); %history of (generation, best value)

%generate the initial population
%xmin = -10; xmax = 100;%bounds including negative values
xmin = -5.12;
xmax = 5.12;

X0 = (xmax-xmin)*rand(Np,D) + xmin; %Np vectors of dimension D
XG = X0;

%%%%----no evaluation here; the termination test is simply the generation budget----%%%%

XG_next_1 = zeros(Np,D); %mutant population
XG_next_2 = zeros(Np,D); %trial population
XG_next = zeros(Np,D); %next generation

while G <= Gm
%%%%----mutation----%%%%
    for i = 1:Np
        %pick three distinct indices j, k, p
        dx = randperm(Np);
        j = dx(1);
        k = dx(2);
        p = dx(3);
        %make sure none of them equals i
        %(at most one can, since dx is a permutation)
        if j == i
            j = dx(4);
        elseif k == i
            k = dx(4);
        elseif p == i
            p = dx(4);
        end

        %time-varying scaling factor: decays from 2*F0 towards F0
        lambda = exp(1-Gm/(Gm+1-G));
        F = F0*2.^lambda;
        %the mutant comes from three random parents
        son = XG(p,:) + F*(XG(j,:) - XG(k,:));
        for d = 1:D
            if son(1,d) > xmin && son(1,d) < xmax %keep the mutant inside the bounds
                XG_next_1(i,d) = son(1,d);
            else %re-initialize out-of-bounds components at random
                XG_next_1(i,d) = (xmax - xmin)*rand(1) + xmin;
            end
        end
    end
%%%%----crossover----%%%%
    for i = 1:Np
        randx = randperm(D); %random permutation of [1,2,...,D]
        for d = 1:D
            if rand > CR && randx(1) ~= d %randx(1)==d forces at least one mutant gene
                XG_next_2(i,d) = XG(i,d);
            else
                XG_next_2(i,d) = XG_next_1(i,d);
            end
        end
    end
%%%%----selection----%%%%
    for i = 1:Np
        if f(XG_next_2(i,:)) < f(XG(i,:))
            XG_next(i,:) = XG_next_2(i,:);
        else
            XG_next(i,:) = XG(i,:);
        end
    end

    %find the minimum
    for i = 1:Np
        value(i) = f(XG_next(i,:));
    end
    [value_min,pos_min] = min(value);

    %minimum of the objective function in generation G
    Gmin(G) = value_min;
    %keep the best individual
    best_x(G,:) = XG_next(pos_min,:);

    XG = XG_next;
    trace(G,1) = G;
    trace(G,2) = value_min;
    G = G + 1;
end
[value_min,pos_min] = min(Gmin);
best_value = value_min
best_vector = best_x(pos_min,:)
fprintf('time consumed by DE: %f \n',cputime - t0);
%plot generation versus best objective value
plot(trace(:,1),trace(:,2));

function y = f(v)
    %Rastrigin function
    y = sum(v.^2 - 10.*cos(2.*pi.*v) + 10);
end

end
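The time-varying scaling factor used in the listing, F = F0 * 2^lambda with lambda = exp(1 - Gm/(Gm + 1 - G)), decays from 2*F0 in the first generation (strong exploration) towards F0 in the last (finer exploitation). A small numeric illustration (the helper name `adaptive_F` is mine):

```python
import math

def adaptive_F(G, Gm, F0=0.5):
    """Time-varying scaling factor from the listing above:
    starts at 2*F0 when G=1 and decays towards F0 as G approaches Gm."""
    lam = math.exp(1.0 - Gm / (Gm + 1.0 - G))
    return F0 * 2.0 ** lam

print(adaptive_F(1, 10000), adaptive_F(10000, 10000))  # 1.0 ... 0.5
```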

 

 

 

 

 

Results: (the convergence plot from the original post is not reproduced here)

Reposted from: http://blog.csdn.net/hehainan_86/article/details/38685231

 

 

Steps for solving a problem with DE:
(1) Set the basic parameters, including NP, F and CR.
(2) Initialize the population.
(3) Evaluate the fitness of the population.
(4) While the termination condition is not met, loop over mutation, crossover and selection in turn, until the run terminates.
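The four steps map directly onto code, one function each. This is an illustrative Python restatement of the loops earlier in the post (all names are mine), not part of either original MATLAB program:

```python
import numpy as np

rng = np.random.default_rng(42)

def initialize(np_size, dim, lo, hi):                    # step (2): init population
    return lo + (hi - lo) * rng.random((np_size, dim))

def mutate(X, F):                                        # DE/rand/1 mutation
    NP = len(X)
    V = np.empty_like(X)
    for i in range(NP):
        r = rng.choice([j for j in range(NP) if j != i], size=3, replace=False)
        V[i] = X[r[0]] + F * (X[r[1]] - X[r[2]])
    return V

def cross(X, V, CR):                                     # binomial crossover
    NP, dim = X.shape
    mask = rng.random(X.shape) <= CR
    mask[np.arange(NP), rng.integers(dim, size=NP)] = True  # one forced mutant gene
    return np.where(mask, V, X)

def select(X, U, f):                                     # step (3)+(4): greedy selection
    better = np.array([f(u) <= f(x) for u, x in zip(U, X)])
    return np.where(better[:, None], U, X)

f = lambda x: float(np.sum(x ** 2))                      # step (1): NP, F, CR set below
X = initialize(30, 5, -5.0, 5.0)
for _ in range(100):                                     # loop until the budget runs out
    X = select(X, cross(X, mutate(X, 0.5), 0.9), f)
print(min(f(x) for x in X))
```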

Flowchart of the DE algorithm: (figure not reproduced in the repost)

Reposted from: http://blog.csdn.net/misayaaaaa/article/details/54407548
