An Overview of Mutation Strategies in Particle Swarm Optimization

Waqas Haider Bangyal, Jamil Ahmad, Hafiz Tayyab Rauf
Copyright: © 2020 | Pages: 22
DOI: 10.4018/IJAMC.2020100102

Abstract

The particle swarm optimization (PSO) algorithm is a population-based, stochastic, intelligent search technique inspired by the intrinsic foraging behaviour of natural swarms seeking a food source. Because it lends itself readily to numerical experimentation, the PSO algorithm has been widely used to solve diverse kinds of optimization problems. However, the PSO algorithm is frequently trapped in local optima while handling complex real-world problems. Many authors have improved the standard PSO algorithm with different mutation strategies, but an exhaustive, comprehensive overview of these mutation strategies is still lacking. This article aims to furnish a concise and comprehensive study of the problems and challenges that hinder the performance of the PSO algorithm, and it tries to provide guidelines for researchers who are active in the area of the PSO algorithm and its mutation strategies. The objective of this study is twofold: first, to show how mutation strategies may enhance the performance of the standard PSO algorithm to a great extent, and second, to motivate researchers and developers to use the PSO algorithm to solve complex real-world problems. This study presents a comprehensive survey of the various PSO algorithms based on mutation strategies. It is anticipated that this survey will be helpful to researchers who wish to study the PSO algorithm in detail.
Article Preview

Introduction

In the past, managing complex real-life optimization challenges was the domain of engineers and mathematicians, who formulated various mathematical approaches to solve such problems. Early exact techniques solved these problems precisely by evaluating every candidate solution; however, because of the enormous time complexity of these exact approaches, faster heuristic techniques have emerged. Nowadays, one of the most effective families of techniques is swarm intelligence (SI), which is motivated by the cooperative behaviour of relatively simple species that act collectively to solve problems.

Swarm intelligence (SI), proposed by Beni and Wang (1989), is one of the most progressive fields of soft computing and deals with optimization techniques based on multi-agent systems. It draws on the collective behaviour of colonies of social insects such as wasps, ants, bees, and termites, as well as other animal groups such as flocks of birds, bee swarms, and animal herds (Bonabeau, Dorigo, & Theraulaz, 1999). SI models are evolutionary-style methods inspired by natural swarm systems and have been applied in many real-time applications (Bangyal, Ahmed, & Rauf, 2019). Most optimization problems require modern problem-solving approaches in combination with multi-agent systems, whereas the traditional approach uses specialized and sophisticated tools tailored to each variety of problem (Hart, Duda, & Stork, 2000).

Stochastic optimization techniques search for the optimum solution by incorporating randomness in a productive fashion (Hoos & Stützle, 2004). Techniques that deliver the same result whenever they are run on the same input are called deterministic (Feldman, 2012); if a deterministic process behaves inconsistently, it exhibits chaos. In SI techniques, on the other hand, applied randomness performs an essential job, because it influences exploration and exploitation in the given search space (Crepinsek, Liu, & Mernik, 2013). These two pillars of stochastic global search algorithms are fundamental to problem solving: exploitation concerns moves that concentrate the search on a promising neighbourhood, while exploration concerns moves that cover the whole search space.

Primarily, SI approaches use randomness to find novel points by moving the swarm particles through the given search space, and various uniformly distributed random numbers can be beneficial for this purpose. The work of Goldberg (1989) is closest to optimization problems in evolutionary computing. The performance of evolutionary algorithms is greatly affected by the genetic operators applied, especially mutation and crossover. The mutation operator has traditionally been considered a search operator that recovers lost genetic information (Hinterding, 2000); it introduces new building blocks into the genetic structure by randomly modifying the current solution vector (Bangyal et al., 2018).
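To make the idea of randomly modifying a solution vector concrete, the sketch below applies a simple Gaussian mutation to a particle's position vector, in the spirit of the mutation strategies surveyed in this article. The mutation rate, step size, and search-space bounds are illustrative assumptions, not values taken from any specific PSO variant.

```python
# A minimal sketch (not a specific published method) of a Gaussian mutation
# operator applied to a candidate solution vector, as commonly used to
# perturb PSO particle positions. The rate and sigma values are assumptions.
import random


def gaussian_mutation(position, lower, upper, rate=0.1, sigma=0.1):
    """Randomly perturb components of a solution vector.

    position     : list of floats (current particle position)
    lower, upper : search-space bounds, used to clamp mutated values
    rate         : probability of mutating each dimension (assumed value)
    sigma        : relative standard deviation of the Gaussian step (assumed)
    """
    mutated = position[:]
    for d in range(len(mutated)):
        if random.random() < rate:
            # Add a small Gaussian step scaled to the search range,
            # then keep the value inside the bounds.
            mutated[d] += random.gauss(0.0, sigma) * (upper - lower)
            mutated[d] = max(lower, min(upper, mutated[d]))
    return mutated


if __name__ == "__main__":
    random.seed(1)
    particle = [0.5, -1.2, 3.3]
    print(gaussian_mutation(particle, lower=-5.0, upper=5.0))
```

In many mutated PSO variants an operator of this kind is applied to a particle (or to the global best) with small probability after the usual velocity and position update, which helps the swarm escape local optima.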

Evolutionary algorithms share some common characteristics: for example, they are all population based and converge toward a population of solutions. Ant colony optimization (ACO) (Dorigo, Birattari, & Stützle, 2006) is the most popular ant-inspired algorithm and is used to solve well-known constrained optimization problems such as the travelling salesperson and vehicle routing problems; each ant in the population deposits pheromone to signal the best path to the other ants. Bee colony optimization (BCO) (Karaboga, 2005) takes inspiration from the foraging behaviour of bees exploring new nectar source locations. The BAT algorithm (Yang, 2010) navigates toward the best solution by echolocation, which involves pulse emission and frequency tuning. The cuckoo search algorithm (Yang & Deb, 2013), inspired by cuckoo species, treats the nesting area as the search space and a cuckoo's egg as a solution vector. The firefly algorithm (Sarangi, Panda, Priyadarshini, & Sarangi, 2016) takes inspiration from fireflies, which use light intensity to move into new search areas, and divides the swarm into sub-swarms based on the brightness intensity of the fireflies.
