
Please use this identifier to cite or link to this item: http://hdl.handle.net/1783.1/6823
Title: Accelerated gradient methods for stochastic optimization and online learning
Authors: Hu, Chonghai
Kwok, James Tin-Yau
Pan, Weike
Keywords: Accelerated gradient methods
Stochastic optimization
Online learning
Issue Date: Dec-2009
Citation: Proceedings of the 23rd Annual Conference on Neural Information Processing Systems, 7-9 December 2009, Vancouver, B.C., Canada.
Abstract: Regularized risk minimization often involves non-smooth optimization, either because of the loss function (e.g., hinge loss) or the regularizer (e.g., ℓ1-regularizer). Gradient methods, though highly scalable and easy to implement, are known to converge slowly. In this paper, we develop a novel accelerated gradient method for stochastic optimization that preserves the computational simplicity and scalability of gradient methods. The proposed algorithm, called SAGE (Stochastic Accelerated GradiEnt), exhibits fast convergence rates on stochastic composite optimization with convex or strongly convex objectives. Experimental results show that SAGE is faster than recent (sub)gradient methods, including FOLOS, SMIDAS and SCD. Moreover, SAGE can also be extended to online learning, yielding a simple algorithm that nevertheless achieves the best regret bounds currently known for these problems.
Rights: © 2009 NIPS Foundation.
URI: http://hdl.handle.net/1783.1/6823
Appears in Collections: CSE Conference Papers
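
Note: The abstract above concerns accelerated gradient methods for stochastic composite optimization, i.e., a smooth stochastic loss plus a non-smooth regularizer such as ℓ1. The Python sketch below is only a rough illustration of this class of methods, not the paper's actual SAGE update rules; the function names, the eta0/sqrt(t) step-size schedule, the (t-1)/(t+2) momentum weight, and the synthetic least-squares example are all illustrative assumptions.

import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (coordinate-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def accelerated_stochastic_prox_sketch(grad_sample, dim, lam=0.01,
                                        eta0=0.1, n_iters=1000):
    """Nesterov-style accelerated stochastic proximal-gradient sketch for
    min_x E[f(x)] + lam * ||x||_1. All schedules here are assumptions,
    not the schedules used by SAGE."""
    x_prev = np.zeros(dim)   # current iterate
    y = np.zeros(dim)        # extrapolated query point
    for t in range(1, n_iters + 1):
        eta = eta0 / np.sqrt(t)                     # assumed O(1/sqrt(t)) step size
        g = grad_sample(y, t)                       # stochastic gradient of smooth loss at y
        x = soft_threshold(y - eta * g, eta * lam)  # proximal (l1) gradient step
        y = x + ((t - 1.0) / (t + 2.0)) * (x - x_prev)  # Nesterov extrapolation
        x_prev = x
    return x_prev

# Hypothetical usage: l1-regularized least squares with single-example gradients.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((200, 50)), rng.standard_normal(200)

def grad_sample(y, t):
    i = rng.integers(len(b))          # sample one training example
    return (A[i] @ y - b[i]) * A[i]   # gradient of 0.5 * (a_i^T y - b_i)^2

x_hat = accelerated_stochastic_prox_sketch(grad_sample, dim=50, lam=0.1)

The soft-thresholding (proximal) step is what handles the non-smooth ℓ1 term and keeps iterates sparse, which a plain (sub)gradient step would not do.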

Files in This Item:

File: nips09.pdf
Size: 190 KB
Format: Adobe PDF
