
Ren Liu, Xiaoqun Zhang, Semi-implicit back propagation

DOI: 10.23952/jnva.7.2023.4.08

Volume 7, Issue 4, 1 August 2023, Pages 581-605


Abstract. Deep neural networks (DNNs) have been attracting great attention in various applications, and network training algorithms play an essential role in their effectiveness. Although stochastic gradient descent (SGD) and other explicit gradient-based methods are the most popular algorithms, challenges such as vanishing and exploding gradients still arise when training complex, deep neural networks. Motivated by the ideas of error back propagation (BP) and proximal point methods (PPM), we propose a semi-implicit back propagation method for neural network training. Similar to BP, the updates on the neurons are propagated in a backward fashion, and the parameters are optimized with proximal mappings. The implicit updates for both hidden neurons and parameters allow larger step sizes to be chosen in the training algorithm. Theoretically, we demonstrate the convergence of the proposed method under some standard assumptions. Experiments on illustrative examples and two real data sets, MNIST and CIFAR-10, demonstrate that the proposed semi-implicit BP algorithm leads to better performance in terms of both loss decrease and training/test accuracy, with a detailed comparison to SGD/Adam and a similar algorithm, proximal back propagation (ProxBP).
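
To give a concrete flavor of the kind of proximal-mapping update mentioned in the abstract, the sketch below performs one implicit update of a single linear layer's weights against targets propagated back from the layer above. This is only a minimal sketch under a squared-loss fit term, not the algorithm developed in the paper; the function name prox_linear_layer, the step size alpha, and the toy shapes are illustrative assumptions.

    import numpy as np

    def prox_linear_layer(W_old, X, Z, alpha):
        # Illustrative proximal (implicit) update of one linear layer's weights:
        #   minimize  0.5*||W X - Z||_F^2 + 1/(2*alpha)*||W - W_old||_F^2  over W.
        # X holds the layer inputs (one column per sample); Z holds target
        # pre-activations assumed to be propagated back from the layer above.
        d = X.shape[0]
        A = X @ X.T + np.eye(d) / alpha   # regularized Gram matrix, (d, d)
        B = Z @ X.T + W_old / alpha       # right-hand side, (m, d)
        return np.linalg.solve(A, B.T).T  # closed-form solution of W A = B

    # Toy usage with a deliberately large step size alpha.
    rng = np.random.default_rng(0)
    d, m, n = 5, 3, 32                    # input dim, output dim, batch size
    W = rng.standard_normal((m, d))
    X = rng.standard_normal((d, n))
    Z = rng.standard_normal((m, n))       # stand-in for backpropagated targets
    W_new = prox_linear_layer(W, X, Z, alpha=10.0)
    print(np.linalg.norm(W_new @ X - Z) <= np.linalg.norm(W @ X - Z))  # True

Because the update solves a small regularized least-squares problem exactly rather than taking an explicit gradient step, the fit to the targets never worsens, which is what makes relatively large step sizes tolerable in this kind of implicit scheme.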


How to Cite this Article:
R. Liu, X. Zhang, Semi-implicit back propagation, J. Nonlinear Var. Anal. 7 (2023), 581-605.