Average Convergence Rate of Evolutionary Algorithms II: Continuous Optimization

The average convergence rate (ACR) measures how fast the approximation error of an evolutionary algorithm converges to zero per generation. It is defined as the geometric average of the reduction rate of the approximation error over consecutive generations. This paper presents a theoretical analysis of the ACR in continuous optimization. The results are summarized as follows. According to its limit property, the ACR is classified into two categories: (1) linear ACR, whose limit inferior is bounded below by a positive constant, and (2) sublinear ACR, whose value converges to zero. It is then proven that the ACR is linear for evolutionary programming using positive landscape-adaptive mutation, but sublinear for evolutionary programming using landscape-invariant or zero landscape-adaptive mutation. The relationship between the ACR and the dimension of the decision space is also classified into two categories: (1) polynomial ACR, whose value is larger than the reciprocal of a polynomial function of the dimension at every generation, and (2) exponential ACR, whose value is smaller than the reciprocal of an exponential function of the dimension over an exponentially long period. It is proven that for easy problems such as linear functions, the ACR of the (1+1) adaptive random univariate search is polynomial; but for hard problems such as the deceptive function, the ACR of both the (1+1) adaptive random univariate search and evolutionary programming is exponential.
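As a reading aid, the sketch below computes the ACR from a recorded error sequence. It assumes the definition R_t = 1 - (e_t / e_0)^(1/t) from the companion work on discrete optimization, where e_k is the approximation error |f_k - f_opt| at generation k; the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def average_convergence_rate(errors):
    """Average convergence rate (ACR) after t generations.

    Assumed definition: one minus the geometric mean of the
    per-generation error reduction ratios,
        R_t = 1 - (e_t / e_0) ** (1 / t),
    where errors[k] is the approximation error |f_k - f_opt|
    at generation k.
    """
    errors = np.asarray(errors, dtype=float)
    t = len(errors) - 1          # number of generations elapsed
    if t == 0 or errors[0] == 0.0:
        return 0.0               # no measurable progress yet
    return 1.0 - (errors[t] / errors[0]) ** (1.0 / t)

# Example: geometric error decay e_k = 0.5**k yields a linear ACR of 0.5,
# i.e. the limit inferior stays above a positive constant.
errs = [0.5 ** k for k in range(11)]
print(average_convergence_rate(errs))   # -> 0.5
```

Under this definition, a linear ACR corresponds to the error shrinking by at least a constant factor per generation on average, while a sublinear ACR means that factor drifts toward 1 and the rate converges to zero.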
