Zhirayr Tovmasyan, Grigory Malinovsky, Laurent Condat, Peter Richtárik, Revisiting stochastic proximal point methods: Generalized smoothness and similarity

Full Text: PDF
DOI: 10.23952/jnva.10.2026.3.01

Volume 10, Issue 3, 1 June 2026, Pages 471-505

 

Abstract. The growing prevalence of nonsmooth optimization problems in machine learning has spurred significant interest in generalized smoothness assumptions. Among these, the (L_0, L_1)-smoothness assumption has emerged as one of the most prominent. While proximal methods are well-suited and effective for nonsmooth problems in deterministic settings, their stochastic counterparts remain underexplored. This work focuses on the stochastic proximal point method (SPPM), valued for its stability and minimal hyperparameter tuning, advantages often missing in stochastic gradient descent (SGD). We propose a novel φ-smoothness framework and provide a comprehensive analysis of SPPM without relying on traditional smoothness assumptions. Our results are highly general, encompassing existing findings as special cases. Furthermore, we examine SPPM under the widely adopted expected similarity assumption, thereby extending its applicability to a broader range of scenarios. Our theoretical contributions are illustrated and validated by practical experiments.
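To make the method under discussion concrete, here is a minimal, hedged sketch of a generic SPPM iteration, x_{k+1} = prox_{γ f_i}(x_k) for a randomly sampled component f_i. It is not the paper's algorithm or analysis: the scalar quadratics f_i(x) = (a_i/2)(x − c)², the shared minimizer c, and the step size γ are all hypothetical choices made so that the proximal operator has a simple closed form.

```python
# Illustrative SPPM sketch (not the paper's method or setting).
# Assumed problem: f_i(x) = (a_i / 2) * (x - c)^2 with a common minimizer c,
# for which the proximal step has the closed form
#   prox_{gamma f_i}(x) = (x + gamma * a_i * c) / (1 + gamma * a_i).
import random

def sppm(a, c, gamma=1.0, x0=0.0, iters=200, seed=0):
    """Run SPPM: at each step, sample a component f_i and apply its prox."""
    rng = random.Random(seed)
    x = x0
    for _ in range(iters):
        i = rng.randrange(len(a))               # sample a component uniformly
        x = (x + gamma * a[i] * c) / (1 + gamma * a[i])  # exact proximal step
    return x

# Since all f_i share the minimizer c here, each prox step contracts |x - c|
# by 1 / (1 + gamma * a_i), so the iterates converge to c.
x_final = sppm(a=[0.5, 1.0, 2.0], c=3.0)
```

Note the contrast with SGD that the abstract alludes to: the proximal step is a contraction toward the sampled component's minimizer for any γ > 0, which is one way to see SPPM's robustness to step-size choice.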

 

How to Cite this Article:
Z. Tovmasyan, G. Malinovsky, L. Condat, P. Richtárik, Revisiting stochastic proximal point methods: Generalized smoothness and similarity, J. Nonlinear Var. Anal. 10 (2026), 471-505.