A Novel Hybridization of Expectation-Maximization and K-Means Algorithms for Better Clustering Performance

Duggirala Raja Kishor, N.B. Venkateswarlu
Copyright: © 2016 | Pages: 28
DOI: 10.4018/IJACI.2016070103

Abstract

Expectation Maximization (EM) is a widely used mixture model-based data clustering algorithm and produces exceptionally good results. However, many researchers have reported that the EM algorithm requires far greater computational effort than other clustering algorithms. This paper presents an algorithm for the novel hybridization of the EM and K-Means techniques to achieve better clustering performance (NovHbEMKM). The algorithm first performs K-Means and then, using these results, performs EM and K-Means in alternating iterations. Along with NovHbEMKM, experiments are carried out with standard EM, EM using the results of K-Means, and the Cluster package of Purdue University. The experiments use datasets from the UCI ML repository as well as synthetic datasets. Execution time, Clustering Fitness and Sum of Squared Errors (SSE) are computed as performance criteria. In all the experiments the proposed NovHbEMKM algorithm takes less execution time while producing results with higher clustering fitness and lower SSE than the other algorithms, including the Cluster package.
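
The following is a minimal illustrative sketch (not the authors' exact NovHbEMKM procedure) of the alternation described above, assuming scikit-learn's KMeans and GaussianMixture as stand-ins for the K-Means and EM stages; the function name hybrid_em_kmeans, the number of alternation rounds, and the single-iteration pass per stage are assumptions made only for illustration.

# Illustrative sketch only: follows the abstract's description -- run K-Means once,
# then alternate an EM (Gaussian mixture) pass and a K-Means pass, each stage
# seeded with the previous stage's cluster centers. Not the paper's exact updates.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def hybrid_em_kmeans(X, k, n_rounds=5, random_state=0):
    # Initial K-Means pass to obtain starting centers.
    km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X)
    centers = km.cluster_centers_
    for _ in range(n_rounds):
        # EM step: fit a Gaussian mixture seeded with the current centers
        # (max_iter=1 approximates a single EM iteration per round).
        gm = GaussianMixture(n_components=k, means_init=centers,
                             max_iter=1, random_state=random_state).fit(X)
        # K-Means step: re-seed K-Means with the mixture component means.
        km = KMeans(n_clusters=k, init=gm.means_, n_init=1,
                    max_iter=1, random_state=random_state).fit(X)
        centers = km.cluster_centers_
    return km.labels_, centers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
    labels, centers = hybrid_em_kmeans(X, k=2)
    print(centers)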
Article Preview

Introduction

Cluster analysis is the identification of groups of observations that are cohesive within the same group and are separated from other groups (Chris Fraley & Adrian E. Raftery, 2002; Hannah Inbarani H. & Selva Kumar S., 2015). Clustering methods find wide application in fields like artificial intelligence, natural language processing, machine translation, medical imaging, and signal processing, as well as in more practical settings such as store-customer grouping, web-document categorization, and gene and protein classification.

Clustering methods can be partitional, like K-Means, PAM, CLARA, and CLARANS (Sami Ayramo & Tommi Karkkainen, 2006; Nizar Banu P.K. & Andrews S., 2015), hierarchical, like CURE, ROCK, and CHAMELEON (Sudipto Guha, Rajeev Rastogi & Kyuseok Shim, 2000; Sudipto Guha, Rajeev Rastogi & Kyuseok Shim, 2001; Marjan Kuchaki Rafsanjani, Zahra Asghari Varzaneh & Nasibeh Emami Chukanlo, 2012; Nizar Banu P.K. & Andrews S., 2015), or model-based, like Fuzzy c-means, SOM, and mixture model clustering (Jiawei Han & Micheline Kamber, 2007; Pang-Ning Tan, Michael Steinbach, & Vipin Kumar, 2007).

Partitional clustering methods attempt to break a population of data into k clusters such that the partition optimizes a given criterion. Hierarchical clustering algorithms produce a nested sequence of clusters, with a single all-inclusive cluster at the top and singleton clusters at the bottom. Model-based or prototype-based clustering methods attempt to optimize the fit between the given data and some mathematical model (Chris Fraley & Adrian E. Raftery, 2002). Finite mixture models have been proposed as part of model-based clustering. These mixture models assume that each group of the data is generated by an underlying probability distribution (Yeung K.Y., Fraley C., Murua A., Raftery A.E., & Ruzzo W.L., 2001). Cluster analysis based on probability models can provide insights into when the data conform to a model (Chris Fraley & Adrian E. Raftery, 2002). The probability distributions are often taken to be multivariate Gaussian distributions, since this type of distribution is well understood and has been shown to produce good results in many instances (Pang-Ning Tan, Michael Steinbach, & Vipin Kumar, 2007). In a mixture model, a data item belongs to every cluster with some weight.
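
As a small illustration of this soft membership (not taken from the paper), the following Python snippet computes the posterior responsibilities of a point under a hypothetical two-component Gaussian mixture and contrasts them with the hard, nearest-center assignment K-Means would make; the component parameters are assumed values chosen only for the example.

# Minimal illustration of "soft" membership in a Gaussian mixture: each point
# receives a posterior probability for every cluster, unlike K-Means' hard,
# single-cluster assignment. Parameters below are hypothetical.
import numpy as np
from scipy.stats import multivariate_normal

means = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
covs = [np.eye(2), np.eye(2)]
weights = np.array([0.5, 0.5])

x = np.array([2.0, 2.0])  # a point lying roughly between the two components

# Responsibilities: w_j * N(x | mu_j, Sigma_j), normalized over components.
dens = np.array([w * multivariate_normal.pdf(x, m, c)
                 for w, m, c in zip(weights, means, covs)])
responsibilities = dens / dens.sum()
print(responsibilities)   # soft membership weights summing to 1

# K-Means would instead assign x entirely to the nearest center.
hard = np.argmin([np.linalg.norm(x - m) for m in means])
print(hard)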
