HKUST Institutional Repository

Title: Making large-scale Nyström approximation possible
Authors: Li, Mu
Kwok, James Tin-Yau
Keywords: Nyström method; Very large data sets; Low-rank approximation; Spectral embedding
Issue Date: 2010
Citation: Proceedings of the International Conference on Machine Learning, Haifa, Israel, 21-24 June 2010, pp. 631-638
Abstract: The Nyström method is an efficient technique for the eigenvalue decomposition of large kernel matrices. However, to ensure an accurate approximation, a sufficiently large number of columns has to be sampled. On very large data sets, the SVD step on the resultant data submatrix then dominates the computation and becomes prohibitive. In this paper, we propose an accurate and scalable Nyström scheme that first samples a large column subset from the input matrix, but then performs only an approximate SVD on the inner submatrix by using recent randomized low-rank matrix approximation algorithms. Theoretical analysis shows that the proposed algorithm is as accurate as the standard Nyström method that directly performs a large SVD on the inner submatrix, while its time complexity is only that of performing a small SVD. Experiments are performed on a number of large-scale data sets for low-rank approximation and spectral embedding. In particular, spectral embedding of an MNIST data set with 3.3 million examples takes less than an hour on a standard PC with 4GB memory.
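The scheme outlined in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names (`nystrom_approx`, `randomized_svd`), uniform column sampling, the oversampling parameter `p`, and the Halko-Martinsson-Tropp-style range finder are all assumptions for the sake of a minimal example.

```python
import numpy as np

def randomized_svd(W, k, p=10, rng=None):
    """Approximate top-k SVD of W via a randomized range finder
    (Halko et al. style sketch; p is an assumed oversampling amount)."""
    rng = np.random.default_rng(0) if rng is None else rng
    m = W.shape[0]
    Omega = rng.standard_normal((m, k + p))   # random Gaussian test matrix
    Q, _ = np.linalg.qr(W @ Omega)            # orthonormal basis for range(W @ Omega)
    B = Q.T @ W                               # small (k+p) x m projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)  # SVD of the small matrix only
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

def nystrom_approx(K, m, k, seed=0):
    """Rank-k Nystrom approximation of a PSD kernel matrix K:
    sample m columns C, form the m x m inner submatrix W, and
    replace the exact SVD of W with the randomized one above.
    Returns K_hat ~= C @ pinv_k(W) @ C.T (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)  # uniform column sampling (assumed)
    C = K[:, idx]                               # n x m sampled columns
    W = C[idx, :]                               # m x m inner submatrix
    U, s, _ = randomized_svd(W, k, rng=rng)
    W_pinv_k = U @ np.diag(1.0 / s) @ U.T       # rank-k pseudo-inverse of W
    return C @ W_pinv_k @ C.T
```

The point of the substitution is cost: an exact SVD of the inner submatrix is O(m^3), whereas the randomized step above costs roughly O(m^2 (k + p)), so a much larger column subset m can be sampled for the same budget.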
Appears in Collections:CSE Conference Papers

Files in This Item:

File: icml10.pdf (946 Kb, Adobe PDF)

All items in this Repository are protected by copyright, with all rights reserved.