Clément Lezane, Alexandre d’Aspremont, Oblivious stochastic convex optimization
DOI: 10.23952/jnva.10.2026.2.9
Volume 10, Issue 2, 1 April 2026, Pages 403-434
Abstract. In stochastic convex optimization problems, most existing adaptive methods rely on prior knowledge of a bound D on the diameter of the feasible set when the smoothness or Lipschitz constant is unknown. This often significantly affects performance, since only a rough approximation of D is usually available in practice. Here, we bypass this limitation by combining mirror descent with dual averaging techniques, and we show that, in the oblivious step-size regime, our algorithms converge without any prior knowledge of the problem parameters. We introduce three oblivious stochastic algorithms for different settings: the first is designed for objectives in relative scale, the second is an accelerated version tailored to smooth objectives, and the third handles relatively smooth objectives. All three algorithms work without prior knowledge of the diameter of the feasible set or of the Lipschitz and smoothness constants of the objective function. We use these results to revisit the problem of solving large-scale semidefinite programs with randomized first-order methods and stochastic smoothing. We extend our framework to relative scale and demonstrate the efficiency and robustness of our methods on large-scale semidefinite programs.
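To illustrate the oblivious step-size regime the abstract refers to, here is a minimal sketch of Euclidean stochastic dual averaging with a fixed 1/sqrt(t) schedule chosen without knowledge of D, the Lipschitz constant, or the noise level. This is a generic illustration, not the paper's algorithms; the quadratic objective and single-row gradient oracle are assumptions made for the example.

```python
import numpy as np

# Hypothetical test problem: f(x) = (1/2m) * ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def stochastic_grad(x):
    # Unbiased gradient estimate built from one uniformly sampled row of A.
    i = rng.integers(A.shape[0])
    return A[i] * (A[i] @ x - b[i])

x = np.zeros(5)
x_avg = np.zeros(5)          # running average of the iterates
g_sum = np.zeros(5)          # running sum of stochastic gradients
for t in range(1, 5001):
    g_sum += stochastic_grad(x)
    gamma = 1.0 / np.sqrt(t)       # oblivious: independent of D, L, and noise
    x = -gamma * g_sum             # dual-averaging step with Euclidean prox
    x_avg += (x - x_avg) / t

print("averaged iterate:     ", x_avg)
print("least-squares solution:", np.linalg.lstsq(A, b, rcond=None)[0])
```

The schedule gamma_t = 1/sqrt(t) is fixed in advance, which is what makes it oblivious: no problem parameter enters the step-size choice, and the averaged iterate still converges at the usual O(1/sqrt(T)) rate for this class of problems.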
How to Cite this Article:
C. Lezane, A. d’Aspremont, Oblivious stochastic convex optimization, J. Nonlinear Var. Anal. 10 (2026), 403-434.
