An Enhanced Recursive Firefly Algorithm for Informative Gene Selection

Nassima Dif, Zakaria Elberrichi
Copyright © 2019 | Pages: 13
DOI: 10.4018/IJSIR.2019040102

Abstract

Feature selection is the process of identifying well-performing combinations of significant features among many possibilities. This preprocessing step improves the classification accuracy and facilitates the learning task. For this optimization problem, the authors have used a metaheuristic approach. Their main objective is to propose an enhanced version of the firefly algorithm as a wrapper approach, adding a recursive behavior to improve the search for the optimal solution. They applied an SVM classifier to evaluate the proposed method. For their experiments, they used benchmark microarray datasets. The results show that the new enhanced recursive firefly algorithm (RFA) outperforms the standard version while reducing the dimensionality for all the datasets. For example, on the leukemia microarray dataset, it achieves a perfect performance score of 100% with only 18 informative genes selected among the 7,129 of the original dataset. The RFA is competitive with other state-of-the-art approaches and achieves the best results on the CNS, ovarian cancer, MLL, prostate, Leukemia_4c, and lymphoma datasets.
Article Preview

1. Introduction

Performance improvement is a central challenge in supervised classification. The main objective is to build a good model for the classification task. This can be achieved either by improving the learning algorithm or by improving the quality of the data. Different types of preprocessing, such as the replacement of noisy and missing values, discretization, or feature selection, can improve the quality of the data.

Microarrays have been a source of data for a wide range of biomedical investigations. They are useful to distinguish or to diagnose different types of diseases (Saeys, Inza, & Larranaga, 2007). A simple classification task consists of separating healthy patients from cancer patients (Bolon-Canedo, Sanchez-Marono, Alonso-Betanzos, Benitez, & Herrera, 2014). These datasets are characterized by a large number of genes associated with a low number of samples. This imbalance can cause overfitting problems for the classifier and requires a high computational run time. In addition, this type of dataset is noisy and complex (Alshamlan, Badr, & Alohali, 2015), which disrupts the classification task and reduces the performance of the classifier. Selecting the most relevant genes and improving performance on microarray datasets is a challenging task because an important number of genes are irrelevant (Dashtban & Balafar, 2017).

Feature (or gene) selection aims to select a subset of m pertinent attributes, where m < N and N is the number of features in the original set. By excluding the irrelevant, noisy, and redundant features (Bolon-Canedo, Sanchez-Marono, & Alonso-Betanzos, 2015), it reduces the dimensionality of the learning dataset, which makes the classification model more appropriate and reduces the learning time complexity.
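To make the m < N reduction concrete, here is a minimal sketch, not taken from the paper, in which a candidate gene subset is represented as a boolean mask over the N original genes and only the m selected columns of the expression matrix are kept for learning. The sizes (72 samples, 7,129 genes, 18 selected) simply echo the leukemia example from the abstract; all variable names are illustrative.

```python
# Illustrative only: a gene subset as a boolean mask over the N original genes.
import numpy as np

rng = np.random.default_rng(0)
N = 7129                          # genes in the original dataset (leukemia example)
X = rng.normal(size=(72, N))      # toy expression matrix: 72 samples x N genes

mask = np.zeros(N, dtype=bool)
mask[rng.choice(N, size=18, replace=False)] = True   # keep m = 18 genes

X_reduced = X[:, mask]            # the m selected columns used for learning
print(X.shape, "->", X_reduced.shape)                # (72, 7129) -> (72, 18)
```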

Several approaches have been proposed in the literature to solve the feature selection problem. We distinguish three types of methods: wrapper, filter, and hybrid or embedded methods. The filter methods select the most important features using statistical measures to compute the pertinence of the selected gene subset. They are characterized by their low computational time (Apolloni, Leguizamón, & Alba, 2016) because they do not use the classifier. On the other hand, the wrapper approaches couple the learning algorithm with an optimization algorithm, such as a metaheuristic, to select the best subsets (Lv, Peng, Chen, & Sun, 2016). Because they use the performance of the classifier as the value of the fitness function, these methods generally perform better than the filter ones. The third type of methods, called embedded, combines the characteristics of the wrapper and the filter methods. The process of selecting the best subsets is done in parallel with the learning process, as in the case of tree algorithms; they are specific to a particular learning algorithm (Zhang & Deng, 2007). Table 1 summarizes some approaches used in gene selection.
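As a hedged sketch of the wrapper principle described above, the fitness function below scores a candidate gene subset (a boolean mask) by the cross-validated accuracy of an SVM trained only on the selected columns. It assumes scikit-learn and uses toy data; the function name, fold count, and kernel choice are illustrative assumptions, not the authors' exact fitness definition.

```python
# A generic wrapper fitness (assumption: scikit-learn SVM, 5-fold CV accuracy);
# the authors' exact fitness function may differ.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def wrapper_fitness(mask, X, y, folds=5):
    """Mean cross-validated accuracy of an SVM trained on the genes in `mask`."""
    if not mask.any():                          # an empty subset gets the worst score
        return 0.0
    clf = SVC(kernel="linear")
    return cross_val_score(clf, X[:, mask], y, cv=folds).mean()

# Toy usage with random data (a real run would load a microarray dataset):
rng = np.random.default_rng(0)
X = rng.normal(size=(72, 200))                  # 72 samples, 200 genes
y = rng.integers(0, 2, size=72)                 # binary class labels
candidate = rng.random(200) < 0.1               # a random candidate gene subset
print("fitness =", wrapper_fitness(candidate, X, y))
```

A filter method, by contrast, would rank genes with a statistic such as a t-test or mutual information computed independently of any classifier, which is cheaper but blind to interactions between genes.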

The task of finding the relevant subsets is an NP-hard problem due to the large number of subsets to examine: with N genes there are 2^N candidate subsets, so an exhaustive search is intractable for microarray data. Therefore, it requires the use of optimization methods such as metaheuristics.
