
Mengxiang Zhang, Shengjie Li, Convergence analysis of a proximal stochastic gradient algorithm with adaptive sampling for non-convex and non-smooth composite optimization problems

Full Text: PDF
DOI: 10.23952/jnva.9.2025.4.07

Volume 9, Issue 4, 1 August 2025, Pages 569-598


Abstract. This paper examines the convergence and computational complexity of a proximal stochastic gradient algorithm that adaptively incorporates sampling techniques for solving large-scale, non-convex, and non-smooth composite optimization problems, with particular emphasis on problems involving the sum of two non-convex functions, an area that existing methods have scarcely explored. By adaptively adjusting the sampling size (or mini-batch size) across the algorithm’s iterations, the method balances the trade-off between stochastic gradient noise and convergence stability while maintaining a convergence rate comparable to that of the proximal gradient method. Moreover, when the objective function is a Kurdyka-Ɓojasiewicz (KL) function, we establish the convergence rate of the expected function values for each case of the KL exponent, achieving linear convergence in the most favorable case. Finally, preliminary numerical results validate the effectiveness and robustness of the proposed method.
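To make the adaptive-sampling idea concrete, the following is a minimal Python sketch (not taken from the paper) of a proximal stochastic gradient iteration whose mini-batch grows geometrically across iterations. The geometric growth rule, the constant step size, and the soft-thresholding prox (a convex stand-in for the paper's possibly non-convex non-smooth term) are all illustrative assumptions, not the authors' method.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1, used here as a convex stand-in;
    # the paper's non-smooth term may be non-convex with a different prox.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd_adaptive(grad_i, prox, x0, n, step=0.1,
                      b0=8, growth=1.2, max_iter=100, rng=None):
    # Proximal stochastic gradient with a geometrically growing mini-batch.
    # grad_i(x, i): gradient of the i-th component of the smooth term.
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    batch = float(b0)
    for k in range(max_iter):
        # Sample a mini-batch and average the component gradients.
        idx = rng.choice(n, size=min(int(batch), n), replace=False)
        g = np.mean([grad_i(x, i) for i in idx], axis=0)
        # Forward-backward (proximal gradient) step.
        x = prox(x - step * g, step)
        # Larger batches shrink the gradient noise in later iterations.
        batch *= growth
    return x

# Hypothetical usage: sparse least squares with f(x) = (1/n) * sum_i (a_i^T x - b_i)^2.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 20))
b = A[:, 0] + 0.01 * rng.normal(size=200)
x_hat = prox_sgd_adaptive(lambda x, i: 2.0 * (A[i] @ x - b[i]) * A[i],
                          soft_threshold, np.zeros(20), n=200)

The growing batch mirrors the trade-off described in the abstract: early iterations use cheap, noisy gradient estimates, while later iterations draw larger samples to stabilize progress near a stationary point.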


How to Cite this Article:
M. Zhang, S. Li, Convergence analysis of a proximal stochastic gradient algorithm with adaptive sampling for non-convex and non-smooth composite optimization problems, J. Nonlinear Var. Anal. 9 (2025), 569-598.