Big Data Business Intelligence in Bank Risk Analysis

Nayem Rahman, Shane Iverson
Copyright: © 2015 | Pages: 23
DOI: 10.4018/IJBIR.2015070104

Abstract

This paper provides an overview of big data technologies and best practices from the standpoint of business intelligence (BI) applications in the banking industry. The authors discuss current challenges in the banking industry that could be addressed using big data technologies. Based on their research, they provide a list of big data tools and technologies, framed as an ecosystem, that are suitable for real-time data processing and capable of supporting bank fraud detection and prevention as well as other bank risk analysis. They highlight how business intelligence can be leveraged with the help of emerging big data technologies.
Article Preview

1. Introduction

Business organizations continuously make efforts to improve their decision-making capabilities using business intelligence (Schlesinger & Rahman, 2015). In the banking industry, business intelligence is used not only for making decisions concerning bank operations, but also as a way of complying with government-mandated banking regulations and of providing customers with fraud detection, prevention, and mitigation services. Government organizations responsible for protecting banking systems, cutting funding to criminal organizations, and preventing corporate bribery are adding pressure on corporations to fight fraud. Banks feel the brunt of this regulatory pressure because they are the money-holding and lending institutions for the corporations that use their services. And while US banks are not legally bound to reimburse their corporate customers for losses suffered as a result of fraud (United States Court of Appeals for the Eighth Circuit, 2014), if they choose not to absorb the losses, banks risk reputational damage and the potential loss of corporate clients. As a further complication, the automation of business payments and global internet connectivity are presenting opportunities for fraudsters both inside and outside corporations. Because of the growing volume of electronic transactions and a resulting reduction of staff in accounts payable, fewer people are involved in observing bank transactions, relative to volume, than ever before (NICE Systems Ltd., 2012). This means that bank fraud could go undetected for longer periods of time (Sadasivam et al., 2016), costing corporations, banks, and taxpayers huge sums of money, as evidenced by the 2008 financial crisis.

Given this environment, banks are investing considerable time and money in projects driven by regulatory or legal requirements, up to fifty percent of projects by some estimates. New rules are increasing capital requirements, costing banks 2.5 to 3.5 percent in pre-tax return on equity (Busch, 2013). These increased regulations put enormous pressure on banks to demonstrate risk analysis to auditors, while the inability to do so could result in regulatory fines.

Central to this issue is the ability of banks to calculate risk more precisely and from a variety of data sources. Much of this data is unstructured and resides in huge volumes, ranging from a few terabytes to hundreds of terabytes. It comes from many sources, including mobile devices, social media, video capture, sensors, and surveillance, and is known as "big data" as opposed to "normal data" (Rahman & Rutz, 2015). However, intense computational power and vast storage are required to perform risk analysis calculations on this type of data.

Banks face many challenges as these data volumes grow. Larger volumes of information mean that data storage and retrieval response times get longer and effective utilization of this data becomes more costly (Rahman & Rutz, 2015). Conventional databases, which banks currently use, simply cannot handle the volume, variety, and velocity of this type of data (Cloud Security Alliance, 2013; Chardonnens et al., 2013). For big data, which is mostly unstructured and huge in volume, a completely new set of computing technologies has emerged (Rahman et al., 2014). Conventional databases are traditionally used to store, process, and manage structured data, and their use tends to focus only on specific details drawn from an entire data set. Big data, on the other hand, tends to be used to make comparisons and contrasts at large scale, thereby establishing norms and trends across a whole set of data. These norms and trends can then be used to map patterns, and in the case of bank fraud, that information can be used to spot outliers (fraudsters). What really makes big data useful is that it can spot trends and outliers not only across data sets but also across a variety of source types, such as call center phone recordings, Twitter and Facebook messages, and emails sent to or from a bank. From this vast picture of activity, patterns begin to emerge that cannot be seen through traditional means.
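To make the outlier-spotting idea concrete, the following is a minimal illustrative sketch, not taken from the paper, of learning norms across an entire set of transactions and flagging deviations with an isolation forest. The column names, sample values, and contamination rate are assumptions for illustration; a production system would draw features from the full big data ecosystem (transaction feeds, call records, social media) rather than a small in-memory table.

```python
# Illustrative sketch only: flag outlier transactions by learning norms
# from the whole data set and scoring each record against them.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical transaction sample; real inputs would stream from a
# big data platform rather than being hard-coded.
transactions = pd.DataFrame({
    "account_id":  [1, 1, 1, 2, 2, 3, 3, 3],
    "amount":      [120.0, 95.0, 110.0, 40.0, 38.0, 55.0, 60.0, 9500.0],
    "hour_of_day": [10, 14, 11, 9, 16, 13, 12, 3],
})

# 'contamination' is the assumed share of anomalous activity.
model = IsolationForest(contamination=0.1, random_state=42)
scores = model.fit_predict(transactions[["amount", "hour_of_day"]])
transactions["outlier"] = scores == -1

# Transactions that deviate from the learned norms are candidates
# for fraud review.
print(transactions[transactions["outlier"]])
```

The design choice here mirrors the point in the text: norms are established over the whole data set rather than record by record, so unusual combinations (for example, a very large amount at an odd hour) stand out even when each attribute alone looks plausible.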
