2018 IEEE 34th International Conference on Data Engineering (ICDE)

Abstract

In the big data era, k-means clustering has been widely adopted as a basic processing tool in various contexts. However, its computational cost can be prohibitively high when the data size and the number of clusters are large. The processing bottleneck of k-means lies in the operation of seeking the closest centroid in each iteration. In this paper, a novel solution to the scalability issue of k-means is presented. In the proposal, k-means is supported by an approximate k-nearest-neighbor graph. In each k-means iteration, a data sample is compared only to the clusters in which its nearest neighbors reside. Since the number of nearest neighbors considered is much smaller than k, the cost of this step becomes minor and independent of k, and the processing bottleneck is therefore broken. Interestingly, the k-nearest-neighbor graph itself is constructed by calling the fast k-means recursively. Compared with existing fast k-means variants, the proposed algorithm achieves a speed-up of hundreds to thousands of times while maintaining high clustering quality. When tested on 10 million 512-dimensional samples, it takes only 5.2 hours to produce 1 million clusters. In contrast, traditional k-means would take about 3 years to complete clustering at the same scale.
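The restricted assignment step described in the abstract can be sketched as follows. This is a minimal illustrative Python sketch under our own assumptions (the function name assign_with_knn_graph and the array layout of knn_graph are hypothetical), not the authors' implementation: each sample is compared only against the centroids of clusters that its approximate nearest neighbors currently belong to, so the per-sample cost depends on the neighborhood size m rather than on k.

```python
import numpy as np

def assign_with_knn_graph(X, centroids, labels, knn_graph):
    """One k-means assignment step restricted by an approximate k-NN graph.

    X         : (n, d) data matrix
    centroids : (k, d) current cluster centroids
    labels    : (n,)   current cluster assignment of each sample
    knn_graph : (n, m) indices of each sample's m approximate nearest neighbors
    """
    new_labels = labels.copy()
    for i in range(X.shape[0]):
        # Candidate clusters: the sample's own cluster plus the clusters
        # its approximate nearest neighbors currently belong to (size << k).
        candidates = np.unique(np.append(labels[knn_graph[i]], labels[i]))
        # Compare only against this small candidate set of centroids.
        dists = np.linalg.norm(centroids[candidates] - X[i], axis=1)
        new_labels[i] = candidates[np.argmin(dists)]
    return new_labels
```

In contrast, a standard k-means assignment compares every sample against all k centroids, which is exactly the step this restriction avoids.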
