Contour Gradient Optimization

Zhou Wu, Tommy W. S. Chow, Shi Cheng, Yuhui Shi
Copyright: © 2013 | Pages: 28
DOI: 10.4018/jsir.2013040101

Abstract

Inspired by local cooperation behavior in the real world, a new evolutionary algorithm, the Contour Gradient Optimization algorithm (CGO), is proposed for solving optimization problems. CGO is a global search algorithm that emulates cooperation among neighbors. Each individual in CGO evolves within its neighborhood to find a better region, moving with a velocity determined by the field of its nearest individuals. This field includes the attractive force from its better neighbor at the higher contour level and the repulsive force from its worse neighbor at the lower contour level. Intensive simulations were performed, and the results show that CGO is able to solve the tested multimodal optimization problems globally. In this paper, CGO is thoroughly compared with six widely used optimization algorithms on sixteen benchmark functions. The comparative analysis shows that CGO outperforms these algorithms in terms of accuracy and effectiveness.
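The sketch below gives one plausible reading of this neighborhood-field update in Python/NumPy. It illustrates the idea only and is not the paper's exact formulation: the function name cgo_step, the coefficients w, c_att, and c_rep, the nearest-better/nearest-worse neighbor selection, and the minimization convention are all assumptions introduced here.

import numpy as np

def cgo_step(positions, velocities, f, lower, upper,
             w=0.7, c_att=1.5, c_rep=1.5, rng=None):
    # One illustrative neighborhood-field update (assumes minimization, so a
    # "better" neighbor has a lower objective value). All coefficients are
    # hypothetical and not taken from the paper.
    rng = np.random.default_rng() if rng is None else rng
    n, d = positions.shape
    fitness = np.array([f(p) for p in positions])
    new_pos, new_vel = positions.copy(), velocities.copy()
    for i in range(n):
        dist = np.linalg.norm(positions - positions[i], axis=1)
        dist[i] = np.inf                      # exclude the individual itself
        order = np.argsort(dist)              # neighbors sorted by distance
        better = next((j for j in order if fitness[j] < fitness[i]), None)
        worse = next((j for j in order if fitness[j] > fitness[i]), None)
        attract = positions[better] - positions[i] if better is not None else 0.0
        repel = positions[i] - positions[worse] if worse is not None else 0.0
        new_vel[i] = (w * velocities[i]
                      + c_att * rng.random(d) * attract
                      + c_rep * rng.random(d) * repel)
        new_pos[i] = np.clip(positions[i] + new_vel[i], lower, upper)
    return new_pos, new_vel

A full run would initialize a population uniformly within the bounds and repeat this step until a stopping criterion is met.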

1. Introduction

Optimization is the process of finding the most promising solution to a given problem. It requires globally maximizing or minimizing a desired objective function in a multi-dimensional search space. Optimization has long been one of the most important topics in science and engineering research, and in areas such as machine learning, artificial intelligence, and complex systems there are plenty of problems that can be regarded as optimization problems. Generally, a continuous optimization problem can be defined as minimize f(x), x = (x_1, ..., x_D), where D is the number of decision variables and each variable x_i satisfies a boundary constraint l_i ≤ x_i ≤ u_i.
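As a concrete illustration of this formulation, the snippet below defines the D-dimensional Rastrigin function, a standard multimodal benchmark, with its usual box constraints. It is an example chosen here for clarity and is not necessarily one of the sixteen test functions used later in the paper.

import numpy as np

# A typical bound-constrained continuous test problem: minimize the
# D-dimensional Rastrigin function with every variable in [-5.12, 5.12].
D = 10
lower, upper = np.full(D, -5.12), np.full(D, 5.12)

def rastrigin(x):
    # Highly multimodal; global minimum f(0, ..., 0) = 0.
    return 10.0 * len(x) + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

x = np.random.uniform(lower, upper)  # a random candidate inside the bounds
print(rastrigin(x))                  # objective value to be minimized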

Conventional optimization techniques, such as gradient-based methods, can be successfully applied to relatively simple applications, but they easily become trapped in local optima on many real-world multimodal problems. The stochastic algorithms proposed for solving optimization problems globally can be divided into two major categories. The first category is based on local search strategies, such as the simulated annealing (SA) algorithm. SA (Kirkpatrick, Gelatt, & Vecchi, 1983) was originally derived from an analogy to the heating and cooling of materials, and many variants were subsequently proposed to improve its performance (Ingber, 1996; Wu & Chow, 2007). The second category is based on global search strategies, such as the Genetic Algorithm (GA) (Vose, 1999; Kumar & Rockett, 1998; Maruyama & Igarashi, 2008), Evolution Strategy (ES) (Beyer, 2001), Differential Evolution (DE) (Storn & Price, 1997; Kim et al., 2007; Lampinen & Storn, 2004), Particle Swarm Optimization (PSO) (Eberhart & Kennedy, 1995; Kennedy & Eberhart, 1995; Shi & Eberhart, 1998; Eberhart & Shi, 2001), and the Self-Organizing Migrating Algorithm (SOMA) (Zelinka, 2004). This second category, also called Evolutionary Algorithms (EAs), has become increasingly popular for solving highly complicated optimization problems.

More recently, optimization algorithms based on neural networks have been developed (Huhse et al., 2002; Milano, Koumoutsakos, & Schmidhuber, 2004). The self-organizing potential field network (SOPFN) was proposed and showed significant performance improvements on multimodal problems (Xu & Chow, 2010); it combines the self-organizing map (SOM) (Kohonen, 1997) and the vector potential field algorithm (VPF) (Masoud & Masoud, 2000; Khatib, 1986) to model the search space as a self-organizing potential field. Compared with local search methods, global search methods do not easily become trapped in local optima, but they need much more computation time to find the optimum because they explore the whole search space at considerable cost. For example, the original PSO (Eberhart & Kennedy, 1995; Kennedy & Eberhart, 1995) was proposed to evolve each individual cooperatively using successful search histories: each individual, called a particle, adapts its position toward its own historical best position and the best position found so far by its companions, as sketched below.
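The following sketch implements that canonical PSO velocity and position update. It is a minimal illustration, not code from this paper: the function name pso_step is hypothetical, the inertia weight w corresponds to the term introduced by Shi and Eberhart (1998), and the coefficient values are typical defaults rather than values reported here.

import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    # One canonical PSO update: inertia term plus attraction toward the
    # particle's own best position (pbest) and the swarm's best (gbest).
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

CGO, by contrast, derives its guiding forces from an individual's nearest better and worse neighbors rather than from personal and global best positions.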
