title = Universal Gradient Methods for Stochastic Convex Optimization
date = July 11, 2024
abstract = We develop universal gradient methods for Stochastic Convex Optimization (SCO). Our algorithms adapt automatically not only to the oracle's noise but also to the Hölder smoothness of the objective function, without a priori knowledge of the particular setting. The key ingredient is a novel strategy for adjusting the step-size coefficients in the Stochastic Gradient Method (SGD). Unlike AdaGrad, which accumulates gradient norms, our Universal Gradient Method accumulates appropriate combinations of gradient and iterate differences. The resulting algorithm has state-of-the-art worst-case convergence-rate guarantees for the entire Hölder class, including, in particular, both nonsmooth functions and those with a Lipschitz continuous gradient. We also present the Universal Fast Gradient Method for SCO, which enjoys optimal efficiency estimates.
name = Anton Rodomanov, affiliation = CISPA Helmholtz Center for Information Security, email = anton.rodomanov@cispa.de
name = Ali Kavis, affiliation = Institute for Foundations of Machine Learning (IFML), UT Austin, email = kavis@austin.utexas.edu
name = Yongtao Wu, affiliation = Laboratory for Information and Inference Systems (LIONS), EPFL, email = yongtao.wu@epfl.ch
name = Kimon Antonakopoulos, affiliation = Laboratory for Information and Inference Systems (LIONS), EPFL, email = kimon.antonakopoulos@epfl.ch
name = Volkan Cevher, affiliation = Laboratory for Information and Inference Systems (LIONS), EPFL, email = volkan.cevher@epfl.ch
keywords = convex optimization, stochastic optimization, gradient methods, universal methods, adaptive algorithms, efficiency estimates, Hölder smoothness
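The abstract contrasts AdaGrad, which accumulates squared gradient norms, with a step-size rule built from combinations of gradient and iterate differences. The Python/NumPy sketch below only illustrates that contrast on a generic stochastic gradient oracle; the function names and the specific accumulated quantity ||g_k - g_{k-1}|| * ||x_k - x_{k-1}|| are assumptions for illustration, not the coefficients actually derived in the paper.

```python
import numpy as np

def adagrad_norm_sgd(grad, x0, n_steps, eta=1.0, eps=1e-12):
    """AdaGrad-norm SGD: the step size shrinks with the accumulated
    squared norms of the stochastic gradients (the baseline the
    abstract contrasts with)."""
    x = np.asarray(x0, dtype=float).copy()
    acc = 0.0
    for _ in range(n_steps):
        g = grad(x)
        acc += g @ g
        x = x - eta * g / (np.sqrt(acc) + eps)
    return x

def universal_style_sgd(grad, x0, n_steps, h0=1.0):
    """Illustrative variant in the spirit of the abstract: the step-size
    coefficient h grows with accumulated combinations of gradient
    differences and iterate differences rather than gradient norms.
    The concrete combination used here is a hypothetical stand-in,
    not the paper's exact rule."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    h = h0
    x_prev, g_prev = x.copy(), g.copy()
    for _ in range(n_steps):
        x = x - g / h                      # gradient step with coefficient h
        g = grad(x)
        # accumulate a gradient-difference / iterate-difference combination
        h += np.linalg.norm(g - g_prev) * np.linalg.norm(x - x_prev)
        x_prev, g_prev = x.copy(), g.copy()
    return x

if __name__ == "__main__":
    # toy quadratic with a noisy gradient oracle
    rng = np.random.default_rng(0)
    grad = lambda x: 2.0 * x + 0.01 * rng.standard_normal(x.shape)
    print(universal_style_sgd(grad, np.ones(5), n_steps=1000))
```

In this sketch the coefficient h stops growing once the iterates and gradients stop changing, so the effective step size does not keep shrinking the way AdaGrad's does on smooth problems; this is the kind of adaptivity to Hölder smoothness the abstract refers to, shown here only schematically.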