Volume 7, Issue 5, 1 October 2023, Pages 715-726
Abstract. Deriving convergence rates is a central line of investigation with significant implications in both theoretical and practical contexts. This study establishes new convergence rates for nonlinear inverse problems concerning the identification of variable parameters in an abstract variational problem. We employ the energy least squares and output least squares methods to study the inverse problem in an optimization framework. The convergence rates are given in terms of the well-known Bregman distance associated with a convex regularizer. A notable feature of the derived rates is that they do not require any smallness condition, making them applicable to a wide array of practical models.
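For context, the Bregman distance associated with a differentiable convex regularizer R is defined by D_R(x, y) = R(x) - R(y) - ⟨∇R(y), x - y⟩. The following minimal numerical sketch (not part of the paper; the function names and the quadratic regularizer are illustrative choices) shows the standard fact that for R(x) = ½‖x‖², the Bregman distance reduces to ½‖x - y‖²:

```python
import numpy as np

def bregman_distance(R, grad_R, x, y):
    """Bregman distance D_R(x, y) = R(x) - R(y) - <grad R(y), x - y>
    for a differentiable convex regularizer R."""
    return R(x) - R(y) - np.dot(grad_R(y), x - y)

# Illustrative quadratic regularizer R(x) = 0.5 * ||x||^2, grad R(x) = x.
R = lambda x: 0.5 * np.dot(x, x)
grad_R = lambda x: x

x = np.array([1.0, 2.0])
y = np.array([0.0, 1.0])
d = bregman_distance(R, grad_R, x, y)
# For this R, D_R(x, y) = 0.5 * ||x - y||^2 = 0.5 * (1 + 1) = 1.0
```

For non-quadratic regularizers (e.g., entropy-type functionals), the Bregman distance is generally asymmetric and is not a metric, which is why convergence rates stated in this distance are a genuinely different notion from norm rates.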
How to Cite this Article:
D.N. Hào, A.A. Khan, S. Reich, Convergence rates for nonlinear inverse problems of parameter identification using Bregman distances, J. Nonlinear Var. Anal. 7 (2023), 715-726.