An Efficient Hybrid Evolution Strategy Algorithm with Direct Search Method for Global Optimization

Noureddine Boukhari, Fatima Debbat, Nicolas Monmarché, Mohamed Slimane
DOI: 10.4018/IJOCI.2019070104

Abstract

The main purpose of this article is to demonstrate how evolution strategy optimizers can be improved by incorporating an efficient hybridization scheme with a restart strategy in order to escape local solution regions. The authors propose a hybrid (μ, λ)ES-NM algorithm based on the Nelder-Mead (NM) simplex search method and the evolution strategy (ES) algorithm for unconstrained optimization. First, a modified NM, called Adaptive Nelder-Mead (ANM), is used because it exhibits better properties than the standard NM; a self-adaptive evolution strategy is applied for better performance; and a new contraction criterion is proposed in this work. (μ, λ)ES-NM balances the global exploration of the evolution strategy algorithm with the deep exploitation of the Nelder-Mead method. The experimental results show the efficiency of the new algorithm and its ability to solve optimization problems in terms of accuracy, robustness, and adaptability.
Article Preview

Introduction

Metaheuristics, especially evolutionary algorithms, have now become capable of solving the global optimization of complex systems. In the past two decades, a sharp increase in research interest in heuristic optimization algorithms, including the Genetic Algorithm (GA), the Evolution Strategy (ES), Tabu Search (TS), Particle Swarm Optimization (PSO) (Mishra, Bisht, Singh, & Chang, 2018), and Simulated Annealing (SA), has made them dominant in the optimization research field. The reason is twofold. The first is the property of global convergence, which is of crucial importance to real-world problems (Kalsi, Kaur, & Chang, 2017). The second is that they are derivative-free algorithms and place no requirements on the objective function, such as continuity, which makes them more widely applicable to optimization problems whose objective function is computed by a ‘‘black box’’ (Gao & Han, 2010).

Recently, there has been an emergence of hybrid algorithms (Chen, et al., 2018) that combine global heuristic methods with traditional local exploitation algorithms. They are also called memetic algorithms, in which one or more local search phases are included in every evolutionary cycle, and they are considered to be both more efficient (i.e., requiring orders of magnitude fewer evaluations to find optima) and more effective (i.e., identifying higher quality solutions) than traditional evolutionary algorithms for some problem domains (Krasnogor & Smith, 2005).

To date, the evolution strategy algorithm has been successful in optimizing various continuous nonlinear functions in practice. As with Genetic Algorithms, it follows the evolution of a population of potential solutions through the same three stochastic principles: selection, recombination, and mutation. However, contrary to Genetic Algorithms, the major process is mutation, and the selection is made deterministic (Dumas, Druez, & Lecerf, 2009). It is known from the literature that, although the ES algorithm does eventually locate the desired solution, practical use of an evolutionary computation technique for multimodal optimization problems is severely limited by the high computational cost of its slow convergence rate (Smith, 1998). The convergence rate of ES is also typically slower than that of local search techniques, since ES does not rely solely on local information to determine the most promising search direction. Hybridization of ES with stochastic local search has been investigated in many studies (Coelho, et al., 2016; Coelho, et al., 2016; Freitas, Guimarães, Silva, & Souza, 2013) as an exploitation tool in continuous optimization, but only a few algorithms combining it with gradient and direct search techniques have been proposed (Kramer, 2010; Chen, Liu, & Jia, 2009; Giacobini, et al., 2003; Kramer, 2014; Kramer, 2016). In cases where no derivatives of the fitness function are available, direct search methods are useful to provide more robust convergence.
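To make the mutation-driven search and deterministic truncation selection concrete, the following is a minimal sketch of a (μ, λ)-ES with log-normal self-adaptation of the mutation step size. The sphere objective, the population sizes, and the learning rate are illustrative assumptions only, not the settings or the specific self-adaptive ES used in the paper.

```python
import numpy as np

def sphere(x):
    """Illustrative test objective (assumption, not taken from the paper)."""
    return float(np.sum(x ** 2))

def mu_lambda_es(f, dim=10, mu=5, lam=35, generations=200, seed=0):
    """Minimal (mu, lambda)-ES with one self-adapted step size per parent."""
    rng = np.random.default_rng(seed)
    tau = 1.0 / np.sqrt(dim)                     # learning rate for step-size adaptation
    xs = rng.uniform(-5.0, 5.0, size=(mu, dim))  # parent solutions
    sigmas = np.full(mu, 1.0)                    # mutation strength per parent
    best_x, best_f = None, np.inf
    for _ in range(generations):
        # Generate lambda offspring: pick a parent, mutate its step size, then its position.
        children, child_sigmas = [], []
        for _ in range(lam):
            p = rng.integers(mu)
            s = sigmas[p] * np.exp(tau * rng.standard_normal())
            children.append(xs[p] + s * rng.standard_normal(dim))
            child_sigmas.append(s)
        children = np.asarray(children)
        child_sigmas = np.asarray(child_sigmas)
        fitness = np.array([f(c) for c in children])
        # Deterministic (mu, lambda) truncation selection: keep only the mu best offspring.
        order = np.argsort(fitness)[:mu]
        xs, sigmas = children[order], child_sigmas[order]
        if fitness[order[0]] < best_f:
            best_f, best_x = fitness[order[0]], children[order[0]].copy()
    return best_x, best_f

if __name__ == "__main__":
    x, fx = mu_lambda_es(sphere)
    print("best fitness found:", fx)
```

Because parents never survive into the next generation, the (μ, λ) scheme discards stale step sizes quickly, which is one reason it pairs naturally with a restart or local-refinement phase.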

This paper proposes a hybrid algorithm, (μ, λ)ES-NM, which combines the evolution strategy algorithm with the Nelder-Mead simplex search method (Nelder & Mead, 1965). The NM method is a simple direct search technique that has been widely used in various unconstrained optimization scenarios, as in (Xiao & Duan, 2012; Gao & Han, 2010; Ali & Tawhid, 2016; Pourtakdoust & Zandavi, 2016). One reason for its popularity is that the method is easy to implement in practice and does not need the derivatives of the function under exploration, which is important when gradient information is not available.
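As a concrete illustration of this global-exploration-plus-local-exploitation pattern, the sketch below pairs a crude random sampling phase (standing in for the ES stage) with SciPy's standard Nelder-Mead routine to polish the best candidate. This is a simplified assumption for illustration only; it is not the paper's (μ, λ)ES-NM, its Adaptive Nelder-Mead variant, or its new contraction criterion, and the Rastrigin objective and parameter values are likewise assumed.

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    """Illustrative multimodal objective (assumption, not taken from the paper)."""
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def global_then_local(f, dim=5, samples=2000, seed=0):
    """Crude global exploration followed by Nelder-Mead exploitation of the best candidate."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-5.12, 5.12, size=(samples, dim))
    values = np.array([f(c) for c in candidates])
    x0 = candidates[np.argmin(values)]           # best point from the exploration phase
    # Derivative-free local refinement with SciPy's standard Nelder-Mead simplex.
    result = minimize(f, x0, method="Nelder-Mead",
                      options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
    return result.x, result.fun

if __name__ == "__main__":
    x, fx = global_then_local(rastrigin)
    print("refined value:", fx)
```

The simplex refinement only needs function values, which is exactly the setting in which the hybrid scheme is intended to operate: the local phase improves accuracy without requiring gradient information.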
