In this paper, we extend the deterministic learning theory to sampled-data nonlinear systems. Based on the Euler approximate model, an adaptive neural network identifier with a normalized learning algorithm is proposed. It is proven that, by properly setting the sampling period, the overall system can be guaranteed to be stable and partial neural network weights can exponentially converge to their optimal values under the satisfaction of the partial persistent excitation (PE) condition. Consequently, locally accurate learning of the nonlinear dynamics can be achieved, and the knowledge can be represented by using constant-weight neural networks. Furthermore, we present a performance analysis for the learning algorithm by developing explicit bounds on the learning rate and accuracy. Several factors that influence learning, including the PE level, the learning gain, and the sampling period, are investigated. Simulation studies are included to demonstrate the effectiveness of the approach.
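The paper's identifier and stability proofs are not reproduced here, but the overall idea can be illustrated with a minimal sketch: sample a nonlinear system under Euler discretization, then run a normalized-gradient RBF network update along the recurrent orbit so that constant weights capture the unknown dynamics. All numerical values (sampling period, learning gain, RBF centers, the dynamics `f`, the input `u`) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Unknown dynamics to be learned; the identifier only sees sampled states.
def f(x):
    return -x + 0.8 * np.sin(2.0 * x)

T = 0.01          # sampling period (hypothetical value)
gamma = 0.5       # normalized learning gain (hypothetical value)
centers = np.linspace(-1.5, 1.5, 21)   # RBF centers covering the orbit
width = 0.3

def S(x):
    """Gaussian RBF regressor vector evaluated at state x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def u(k):
    """Known periodic input, kept so the sampled orbit is recurrent (PE)."""
    return np.sin(2.0 * np.pi * 0.2 * k * T)

# Sampled trajectory of xdot = f(x) + u(t) under Euler discretization.
N = 60000
x = np.zeros(N + 1)
for k in range(N):
    x[k + 1] = x[k] + T * (f(x[k]) + u(k))

# Normalized gradient learning on the Euler model: the regression target
# y(k) = (x(k+1) - x(k)) / T - u(k) approximates f(x(k)).
W = np.zeros_like(centers)
for k in range(N):
    s = S(x[k])
    e = (x[k + 1] - x[k]) / T - u(k) - W @ s   # prediction error
    W += gamma * s * e / (1.0 + s @ s)         # normalization bounds the step

# The constant-weight network W @ S(.) should now match f along the
# persistently revisited part of the orbit (locally accurate learning).
visited = x[-500:]                 # roughly one forcing period of states
err = max(abs(W @ S(p) - f(p)) for p in visited)
print(f"max approximation error along the orbit: {err:.4f}")
```

Note the role of the normalization term `1 + s @ s`: it keeps the effective step size bounded regardless of the regressor magnitude, which is what lets a fixed gain work across the whole orbit. Learning is only claimed on the visited (excited) region; centers far from the orbit are never updated, mirroring the partial-PE convergence described in the abstract.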