Parallelizing information-theoretic clustering methods

Ron Bekkerman, Martin Scholz

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Facing the problem of clustering a multimillion-data-point collection, a machine learning practitioner may choose to apply the simplest clustering method possible, because it is hard to believe that fancier methods can be applicable to datasets of such scale. Whoever is about to adopt this approach should first weigh the following considerations: Simple clustering methods are rarely effective. Indeed, four decades of research would not have been spent on data clustering if a simple method could solve the problem. Moreover, even the simplest methods may run for long hours on a modern PC, given a large-scale dataset. For example, consider a simple online clustering algorithm (which, we believe, is machine learning folklore): first initialize k clusters with one data point per cluster, then iteratively assign the rest of the data points to their closest clusters (in Euclidean space). If k is small enough, we can run this algorithm on one machine, because it is unnecessary to keep the entire dataset in RAM. However, besides being slow, it will produce low-quality results, especially when the data is highly multi-dimensional. State-of-the-art clustering methods can scale well, which we aim to justify in this chapter. With the deployment of large computational facilities (such as Amazon.com's EC2, IBM's BlueGene, and HP's XC), the Parallel Computing paradigm is probably the only currently available option for tackling gigantic data processing tasks. Parallel methods are becoming an integral part of any data processing system, and thus are getting special attention (e.g., universities introduce parallel methods into their core curricula; see Johnson et al., 2008).
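The folklore online clustering algorithm mentioned in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' method: the abstract only specifies seeding k clusters with one point each and assigning the remaining points to their closest cluster in Euclidean space; updating each centroid as a running mean on assignment is an assumption added here for concreteness.

```python
import numpy as np

def online_cluster(points, k):
    """Single-pass online clustering sketch.

    Seeds k clusters with the first k points, then assigns each
    remaining point to the nearest centroid (Euclidean distance).
    The running-mean centroid update is an assumption; the abstract
    does not specify whether centroids move after seeding.
    """
    points = np.asarray(points, dtype=float)
    centroids = points[:k].copy()      # one data point per cluster
    counts = np.ones(k)                # points assigned to each cluster
    labels = list(range(k))            # the seeds label themselves
    for x in points[k:]:
        # nearest centroid in Euclidean space
        j = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
        labels.append(j)
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]  # running mean
    return labels, centroids
```

Note that a single pass over the data makes the result order-dependent and, as the abstract warns, of low quality in high dimensions; the sketch only illustrates why the method needs little RAM (just the k centroids), not why it is a good idea.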

Original language: English
Title of host publication: Scaling up Machine Learning
Subtitle of host publication: Parallel and Distributed Approaches
Publisher: Cambridge University Press
Pages: 262-280
Number of pages: 19
Volume: 9780521192248
ISBN (Electronic): 9781139042918
ISBN (Print): 9780521192248
DOIs
State: Published - 1 Jan 2011
Externally published: Yes

Bibliographical note

Publisher Copyright:
© Cambridge University Press 2012.

ASJC Scopus subject areas

  • Computer Science (all)
