Online gradient methods are widely used for training the weights of neural networks and for other engineering computations. In certain cases, the resulting weights may become very large, causing difficulties when implementing the network in electronic circuits. In this paper we introduce a penalty term into the error function of the training procedure to prevent this situation. We prove the convergence of the iterative training procedure and the boundedness of the weight sequence. A supporting numerical example is also provided.
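To make the idea concrete, the following is a minimal sketch, not taken from the paper, of online gradient training for a single sigmoid unit with a penalty term added to the error function. The quadratic penalty form λ‖w‖², the function name, and all parameter values are illustrative assumptions; the paper's actual penalty and network architecture may differ.

```python
import numpy as np

def train_online_with_penalty(samples, targets, dim, lr=0.1, lam=0.01, epochs=50):
    """Online gradient descent on a single sigmoid unit, minimizing the
    squared error plus an assumed quadratic penalty lam * ||w||^2."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=dim)
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = 1.0 / (1.0 + np.exp(-w @ x))        # sigmoid output
            grad_err = (y - t) * y * (1.0 - y) * x  # gradient of 0.5*(y - t)^2
            grad_pen = 2.0 * lam * w                # gradient of lam * ||w||^2
            w -= lr * (grad_err + grad_pen)         # penalized online update
    return w

# Usage: fit AND-like data; the penalty keeps the weight norm bounded.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([0, 0, 0, 1], dtype=float)
w = train_online_with_penalty(X, T, dim=3)
print("weights:", w, "norm:", np.linalg.norm(w))
```

The penalty gradient acts as a restoring force toward zero at every update, which is the mechanism by which the weight sequence stays bounded during training.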