Extreme Learning With Metaheuristic Optimization for Exchange Rate Forecasting


Kishore Kumar Sahu, Sarat Chandra Nayak, Himansu Sekhar Behera
Copyright: © 2022 | Pages: 25
DOI: 10.4018/IJSIR.295099

Abstract

A model with strong learning ability and low structural complexity is desirable for accurate exchange rate forecasting. Faster convergence to optimal solutions has always been a goal for researchers building forecasting models, and extreme learning machines (ELMs) achieve it through their single hidden layer architecture and superior generalization ability. ELM is a simple training algorithm that computes the hidden-output layer weights analytically after a random selection of the input-hidden layer weights. Metaheuristic algorithms such as the fireworks algorithm (FWA), chemical reaction optimization (CRO), and teaching-learning-based optimization (TLBO) are employed to pre-train the ELM owing to their few optimizing parameters. This article aims to pre-train the ELM with each of these metaheuristics separately, moving the single hidden layer feedforward network (SLFN) toward an optimal solution with improved accuracy. The pre-trained ELMs provide accurate results, and the same was verified against other primitive optimization algorithms.
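As a rough illustration of the pre-training idea summarized above, the sketch below wraps an ELM inside a generic population-based search that picks the input-hidden weights and hidden biases minimizing validation error. It is a hypothetical stand-in for FWA, CRO, or TLBO, which the article implements in detail; the function names, the fitness definition, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def elm_validation_rmse(params, X_tr, y_tr, X_val, y_val, n_hidden):
    """Fitness: validation RMSE of an ELM whose input weights/biases come from params."""
    n_features = X_tr.shape[1]
    W = params[:n_features * n_hidden].reshape(n_features, n_hidden)
    b = params[n_features * n_hidden:]
    H_tr = 1.0 / (1.0 + np.exp(-(X_tr @ W + b)))   # hidden layer outputs (sigmoid)
    beta = np.linalg.pinv(H_tr) @ y_tr             # analytic output weights
    H_val = 1.0 / (1.0 + np.exp(-(X_val @ W + b)))
    return np.sqrt(np.mean((H_val @ beta - y_val) ** 2))

def pretrain_elm(X_tr, y_tr, X_val, y_val, n_hidden=20, pop_size=30, iters=100, seed=0):
    """Generic population-based search over ELM input weights (stand-in for FWA/CRO/TLBO)."""
    rng = np.random.default_rng(seed)
    dim = X_tr.shape[1] * n_hidden + n_hidden
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    fit = np.array([elm_validation_rmse(p, X_tr, y_tr, X_val, y_val, n_hidden) for p in pop])
    for _ in range(iters):
        best = pop[fit.argmin()]
        # Pull each candidate toward the current best and add a small random perturbation.
        trial = (pop + rng.uniform(0, 1, (pop_size, 1)) * (best - pop)
                 + rng.normal(0, 0.1, (pop_size, dim)))
        trial_fit = np.array(
            [elm_validation_rmse(t, X_tr, y_tr, X_val, y_val, n_hidden) for t in trial])
        better = trial_fit < fit
        pop[better], fit[better] = trial[better], trial_fit[better]
    return pop[fit.argmin()]  # best input-hidden weights and biases found
```

The returned parameter vector would then be reshaped into the input weight matrix and bias vector of the final ELM, whose output weights are again computed analytically on the full training set.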
Article Preview

1. Introduction

Prediction of exchange rates is interesting and crucial for financial managers, economic traders, international companies, and all sorts of financial stakeholders. However, the nonlinear structures that exist in exchange rate data make forecasting difficult. Several models endowed with cognitive intelligence, such as artificial neural networks (ANN), fuzzy inference systems (FIS), and evolutionary computation (EC), have shown their efficiency in capturing the inherent nonlinearities of stock closing indices (Guresen et al., 2011) (Thinyane & Millin, 2011), macroeconomic variables (Onder et al., 2013), and exchange rate time series (Y. Q. Zhang & Wan, 2007) (Alves Portela Santos et al., 2007) (Ozkan & others, 2012) (Galeshchuk, 2016) as alternatives to conventional models. ANN-based models are data-driven and can approximate nonlinear datasets such as financial time series with no prior information about their functional form. They can learn from data, do not necessitate assumptions about the data distribution, handle inconsistent and chaotic data, and process numeric data (Haykin, 2009). Economic data are usually in numeric form and are thus ideal for ANN processing without any loss of information. The study conducted in (Semaan et al., 2014) compared the forecast accuracy of regression methods and ANNs on exchange rate forecasting; the results showed an improvement in accuracy when adopting ANNs. Another case study on neural network methods (Yao & Tan, 2000) was conducted for exchange rate forecasting. Exchange rate time series usually contain irregular, highly fluctuating data and are prone to large errors (Rout et al., 2017). In past decades, many statistical and linear models were proposed for exchange rate forecasting based on the assumed linear and stationary properties of financial time series (Engle, 1982) (Bollerslev, 1986) (George E. P. Box et al., 2008) (AmirAskari & Menhaj, 2016). However, that is not the case in reality, and those models produced poor performance (Kadilar et al., 2009) (Ahmed et al., 2013).

The connectionist model's performance is largely determined by the structure of the network and the training algorithm employed, the most commonly used training algorithm being gradient descent. This technique, however, suffers from several drawbacks such as a slow convergence rate, oscillation near local minima, and sensitivity to the learning rate, which make the model take more time to compute and add computational overhead (Fernández-Navarro et al., 2012). The ELM model was suggested by Huang et al. (Huang et al., 2006) (Huang et al., 2011) to overcome these limitations. ELM assigns a random set of weights to the input-hidden neuron connections, and the output connection weights are then evaluated analytically instead of being fine-tuned iteratively. A great deal of research and experimentation has been carried out with ELM in the last few years. It has found use in several real-life problems such as time series forecasting (Grigorievskiy et al., 2014), sales forecasting (Sun et al., 2008), financial time series forecasting (R. Dash et al., 2014), electricity load forecasting (R. Zhang et al., 2013) (Yap & Yap, 2012), power system economic dispatch (Yang et al., 2013), and so on. However, the main issue associated with ELM is the insertion of randomly chosen, non-optimal weights and hidden unit biases. These non-optimal parameters may affect the output weights and hence the model performance.
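To make the ELM training step described above concrete, the following is a minimal sketch of the standard procedure: input-hidden weights and hidden biases are drawn at random, and the output weights are obtained analytically from the Moore-Penrose pseudo-inverse of the hidden layer output matrix. The function names, weight ranges, and sigmoid activation are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def train_elm(X, y, n_hidden=20, seed=None):
    """Minimal ELM: random input-hidden weights, output weights solved analytically."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))  # random input-hidden weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # hidden layer output matrix (sigmoid)
    beta = np.linalg.pinv(H) @ y                             # Moore-Penrose solution for output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

For exchange rate data, X would typically contain sliding windows of past rates and y the rate one step ahead; the window length and hidden layer size here are illustrative choices. Because W and b are never tuned, the quality of these random draws directly motivates the metaheuristic pre-training studied in this article.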
