Effective Statistical Methods for Big Data Analytics

Cheng Meng, Ye Wang, Xinlian Zhang, Abhyuday Mandal, Wenxuan Zhong, Ping Ma
Copyright: © 2017 | Pages: 20
DOI: 10.4018/978-1-5225-2498-4.ch014

Abstract

With advances in technology in the past decade, the amount of data generated and recorded has grown enormously in virtually all fields of industry and science. This extraordinary amount of data provides unprecedented opportunities for data-driven decision-making and knowledge discovery. However, the task of analyzing such large-scale datasets poses significant challenges and calls for innovative statistical methods specifically designed for faster speed and higher efficiency. In this chapter, we review currently available methods for big data, with a focus on subsampling methods based on statistical leveraging and on divide-and-conquer methods.
Chapter Preview

1. Introduction

The rapid development of technology in the past decade has enabled researchers to generate and collect data of unprecedented size and complexity in all fields of science and engineering, from academia to industry. These data pose significant challenges for knowledge discovery. We illustrate these challenges with examples from three different areas below.

  • Higgs Boson Data: The discovery of the long-awaited Higgs boson was announced in July 2012 and confirmed six months later, leading to a Nobel Prize awarded in 2013 (www.nobelprize.org). A Toroidal LHC Apparatus (ATLAS), a particle detector experiment constructed at the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN), is one of the two LHC experiments that confirmed the existence of the Higgs boson. ATLAS generates an astronomically large amount of raw data about particle collision events, roughly one petabyte per second (Scannicchio, 2010). To put this into more tangible terms, one petabyte is enough to store the DNA of the entire population of the USA, and one petabyte of MP3-encoded songs (at roughly one megabyte per minute on a mobile phone) would take about 2,000 years to play. However, analyzing data at the scale of even tens or hundreds of petabytes is almost unmanageable with conventional techniques, since the computational cost becomes prohibitive.

  • Biological Experiments: RNA-Seq experiments have been used extensively to study transcriptomes (Mortazavi et al., 2008; Nagalakshmi et al., 2008). They are among the best tools available for detecting novel transcripts and quantifying transcripts at ultra-high resolution, yielding tens of millions of short reads. When mapped to the genome and/or to contigs, RNA-Seq data are summarized as a very large number of short-read counts. These counts provide a digital measure of the presence and/or prevalence of the transcripts under consideration. In any genome-wide analysis, such as the bias correction model proposed by Li et al. (2010), the sample size easily reaches the millions, which renders standard statistical computation infeasible.

  • State Farm Distracted Driver Detection Data: Huge datasets are also routinely generated by commercial companies. One such dataset has been released by the insurance company State Farm, which is interested in testing whether dashboard cameras can automatically detect drivers engaging in distracted behaviors. The data consist of two-dimensional dashboard images, each taken in a car with the driver doing something (texting, eating, talking on the phone, applying makeup, reaching behind, etc.). The goal of the statistical analysis is to predict the likelihood of what the driver is doing in each picture, i.e., whether computer vision can spot distracted behavior such as not driving attentively, not wearing a seatbelt, or taking a selfie with friends in the backseat. In this case, the complexity of big data, namely that the raw data are images, poses a problem before any statistical analysis can begin: the images must first be converted into matrix form (a minimal sketch of this conversion follows this list). In this example, the test data alone consist of 22,424 images of 26 drivers in 10 scenarios, each with 60 to 90 images, totaling about 5 GB. One can easily imagine the explosion of data as the recording time and the number of drivers increase.
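
To make the image-to-matrix step above concrete, here is a minimal sketch, not code from the chapter, of flattening a directory of dashboard images into a numeric design matrix with one row per image. The directory path, file extension, resize dimensions, and grayscale conversion are all illustrative assumptions.

```python
# A minimal sketch (not from the chapter): each image becomes one row of
# grayscale pixel intensities, so a collection of images can be analyzed
# as an ordinary n-by-p data matrix.
from pathlib import Path

import numpy as np
from PIL import Image

def images_to_matrix(image_dir, size=(64, 48)):
    """Load every JPEG in image_dir as one row of an (n_images, width*height) matrix."""
    rows = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        img = Image.open(path).convert("L").resize(size)   # grayscale, fixed size
        rows.append(np.asarray(img, dtype=np.float64).ravel() / 255.0)
    return np.vstack(rows)

# Hypothetical usage: X = images_to_matrix("train/c0")
```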

The implications of big data go well beyond the examples above. Facebook and Twitter generate millions of posts every second; Walmart stores and Amazon record billions of transactions 24 hours a day, 7 days a week. Such super-large and complicated datasets provide us with unprecedented opportunities for data-driven decision-making and knowledge discovery. However, the task of analyzing such data calls for innovative statistical methods to address the new challenges that emerge every day due to the explosion of data.
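
As a concrete illustration of the two method families the chapter focuses on, subsampling via statistical leveraging and divide-and-conquer, the sketch below applies both to ordinary least squares on synthetic data. It is not code from the chapter: the function names, problem sizes, and the use of plain OLS with uniform row splitting are illustrative assumptions. The leveraging step samples rows with probability proportional to their leverage scores and reweights them before fitting; the divide-and-conquer step averages per-block estimates.

```python
import numpy as np

def leverage_subsample_ols(X, y, r, rng=None):
    """Fit OLS on r rows sampled with probability proportional to leverage scores."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    # Leverage scores h_ii are the squared row norms of Q from a thin QR of X.
    Q, _ = np.linalg.qr(X)                      # Q has shape (n, p)
    lev = np.sum(Q**2, axis=1)                  # sums to p
    prob = lev / lev.sum()
    idx = rng.choice(n, size=r, replace=True, p=prob)
    # Rescale sampled rows by 1/sqrt(r * pi_i) so that the subsampled objective
    # is an unbiased surrogate for the full least-squares objective.
    w = 1.0 / np.sqrt(r * prob[idx])
    beta, *_ = np.linalg.lstsq(X[idx] * w[:, None], y[idx] * w, rcond=None)
    return beta

def divide_and_conquer_ols(X, y, n_blocks):
    """Split rows into blocks, fit OLS on each block, and average the estimates."""
    betas = [np.linalg.lstsq(Xb, yb, rcond=None)[0]
             for Xb, yb in zip(np.array_split(X, n_blocks), np.array_split(y, n_blocks))]
    return np.mean(betas, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 100_000, 10                          # illustrative sizes
    X = rng.standard_normal((n, p))
    beta_true = rng.standard_normal(p)
    y = X @ beta_true + rng.standard_normal(n)
    print(leverage_subsample_ols(X, y, r=1_000, rng=1))
    print(divide_and_conquer_ols(X, y, n_blocks=20))
```

Both estimators trade some statistical accuracy for a large reduction in computation, which is the kind of speed-versus-efficiency trade-off behind the methods reviewed in the chapter.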
