Feature Selection

Noelia Sánchez-Maroño, Amparo Alonso-Betanzos
Copyright: © 2009 | Pages: 7
DOI: 10.4018/978-1-59904-849-9.ch096

Introduction

Many scientific disciplines use modelling and simulation processes and techniques to implement a non-linear mapping between the input and output variables of a given system under study. Any variable that helps to solve the problem may be considered an input. Ideally, any classifier or regressor should be able to detect the important features and discard the irrelevant ones, so a pre-processing step to reduce dimensionality should not be necessary. Nonetheless, in many cases, reducing the dimensionality of a problem has certain advantages (Alpaydin, 2004; Guyon & Elisseeff, 2003), as follows:

  • Performance improvement. The complexity of most learning algorithms depends on the number of samples and features (the curse of dimensionality). Reducing the number of features therefore lowers this complexity, which may save computational resources such as memory and shorten training and testing times.

  • Data compression. There is no need to retrieve and store a feature that is not required.

  • Data comprehension. Dimensionality reduction facilitates the comprehension and visualisation of data.

  • Simplicity. Simpler models tend to be more robust when small datasets are used.

There are two main methods for reducing dimensionality: feature extraction and feature selection. In this chapter we review different feature selection (FS) algorithms, covering the main approaches: filter, wrapper and hybrid (a filter/wrapper combination).

Feature Selection

The advantages described in the Introduction underline the importance of dimensionality reduction. Feature selection is also useful when the following assumptions hold:

  • There are inputs that are not required to obtain the output.

  • There is a high correlation between some of the input features (illustrated in the sketch after this list).
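
As a minimal illustration of the second assumption, the sketch below builds a synthetic dataset in which one input is a near copy of another and then drops one feature from each highly correlated pair. The 0.95 correlation cut-off and the scikit-learn helpers are assumptions made only for this illustration, not part of the chapter:

    import numpy as np
    from sklearn.datasets import make_classification

    # Synthetic data; column 5 is overwritten with a near copy of column 0,
    # so at least one pair of input features is highly correlated.
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    X[:, 5] = X[:, 0] + 0.01 * np.random.RandomState(0).randn(200)

    corr = np.abs(np.corrcoef(X, rowvar=False))   # feature-feature correlations
    to_drop = set()
    for i in range(corr.shape[0]):
        for j in range(i + 1, corr.shape[1]):
            if corr[i, j] > 0.95 and j not in to_drop:   # arbitrary cut-off
                to_drop.add(j)

    X_reduced = np.delete(X, sorted(to_drop), axis=1)
    print("kept", X_reduced.shape[1], "of", X.shape[1], "features")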

A feature selection algorithm (FSA) searches for an optimal set of features, so it is naturally described within the paradigm of heuristic search, in which each state of the search space is a subset of features. An FSA can therefore be characterised in terms of four properties (Blum & Langley, 1997).
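
To make the search-space view concrete, the following sketch runs a greedy forward search in which each state is a feature subset and candidate states are scored by cross-validated accuracy. The dataset, the logistic-regression scorer and the stopping rule are illustrative assumptions rather than choices prescribed by the chapter:

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)

    def score(subset):
        # Evaluate one state of the search space (a feature subset).
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        return cross_val_score(clf, X[:, subset], y, cv=5).mean()

    selected, remaining, best = [], list(range(X.shape[1])), 0.0
    while remaining:
        # Expand the current state: try adding each remaining feature in turn.
        cand_score, cand_feat = max((score(selected + [f]), f) for f in remaining)
        if cand_score <= best:        # stop when no candidate improves the score
            break
        best = cand_score
        selected.append(cand_feat)
        remaining.remove(cand_feat)

    print("selected features:", selected, "cross-validated accuracy:", round(best, 3))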

Key Terms in this Chapter

Wrapper Method: A feature selection method that uses a learning machine as a “black box” to score subsets of features according to their predictive value.
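
A minimal sketch of the black-box idea, assuming a k-nearest-neighbour classifier and two arbitrary candidate subsets; each subset is scored purely by its cross-validated predictive value:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    learner = KNeighborsClassifier()     # any estimator can play the black-box role

    for subset in [[0, 1], [2, 3]]:      # two candidate feature subsets
        acc = cross_val_score(learner, X[:, subset], y, cv=5).mean()
        print("features", subset, "-> cross-validated accuracy", round(acc, 3))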

Sequential Backward (Forward) Selection (SBS/SFS): A search method that starts with all the features (an empty set of features) and removes (adds) a single feature at each step with a view to improving, or minimally degrading, the cost function.
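
For reference, scikit-learn provides a ready-made implementation of this search in SequentialFeatureSelector; the estimator and the choice of retaining five features below are illustrative assumptions:

    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    estimator = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

    # direction="forward" gives SFS; direction="backward" gives SBS.
    sfs = SequentialFeatureSelector(estimator, n_features_to_select=5,
                                    direction="forward", cv=5)
    sfs.fit(X, y)
    print("selected feature indices:", sfs.get_support(indices=True))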

Hybrid Method: A feature selection method that combines the advantages of wrapper and filter methods to deal with high-dimensional data.
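
One common way to realise such a combination, sketched here under the assumption of an ANOVA F-score filter followed by a sequential forward wrapper (both cut-offs are arbitrary), is to pre-screen features cheaply and then run the expensive wrapper search only over the survivors:

    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import (SelectKBest, SequentialFeatureSelector,
                                           f_classif)
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)

    # Filter stage: cheap univariate pre-screening keeps the 15 best-scoring features.
    filt = SelectKBest(f_classif, k=15).fit(X, y)
    X_filtered = filt.transform(X)

    # Wrapper stage: the costly search now runs only over the pre-screened features.
    estimator = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    wrapper = SequentialFeatureSelector(estimator, n_features_to_select=5, cv=5)
    wrapper.fit(X_filtered, y)
    print("features kept after both stages:", wrapper.get_support().sum())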

Feature Selection: A dimensionality reduction method that consists of selecting a subset of relevant features from a complete set while ignoring the remaining features.

Dimensionality Reduction: The process of reducing the number of features under consideration. The process can be classified in terms of feature selection and feature extraction.

Feature Extraction: A dimensionality reduction method that finds a reduced set of features that are a combination of the original ones.
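
For contrast with selection, a typical extraction method is principal component analysis, sketched below; keeping two components is an arbitrary choice:

    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA

    X, _ = load_iris(return_X_y=True)

    # Each extracted feature is a linear combination of all four original features.
    pca = PCA(n_components=2).fit(X)
    X_new = pca.transform(X)
    print(X.shape, "->", X_new.shape)
    print("explained variance ratio:", pca.explained_variance_ratio_.round(3))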

Filter Method: A feature selection method that relies on the general characteristics of the training data to select and discard features. Different measures can be employed: distance between classes, entropy, etc.
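
A minimal filter sketch, assuming mutual information as the relevance measure and an arbitrary choice of keeping the two best features:

    from sklearn.datasets import load_iris
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    X, y = load_iris(return_X_y=True)

    # Score each feature against the labels without training any learner.
    selector = SelectKBest(mutual_info_classif, k=2).fit(X, y)
    print("feature scores:", selector.scores_.round(3))
    print("kept feature indices:", selector.get_support(indices=True))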
