We give a constructive proof that any Lebesgue integrable function defined on a compact subset of multidimensional Euclidean space can be approximated, simultaneously with its derivatives, by a neural network with a single hidden layer. The construction naturally yields a design for the hidden layer together with an estimate of the rate of convergence; the resulting bound describes the relationship between the network's convergence rate and the number of hidden neurons, and it also generalizes existing density results under the uniform metric.
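As a hedged illustration only (this is not the paper's construction, whose hidden-layer design is explicit), the idea of simultaneous approximation by a single hidden layer can be sketched numerically: build a network N(x) = Σₖ cₖ σ(wₖx + bₖ) with randomly chosen inner weights, then fit the outer coefficients by least squares against both the target function and its derivative on a grid over a compact set. All names, grid sizes, and weight ranges below are illustrative assumptions.

```python
import numpy as np

def sigma(t):
    """Sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-t))

def dsigma(t):
    """Derivative of the sigmoid."""
    s = sigma(t)
    return s * (1.0 - s)

rng = np.random.default_rng(0)
n_hidden = 50
w = rng.uniform(-8.0, 8.0, n_hidden)   # inner weights (fixed at random; illustrative)
b = rng.uniform(-8.0, 8.0, n_hidden)   # biases

x = np.linspace(0.0, 1.0, 200)           # compact set [0, 1]
f = np.sin(2 * np.pi * x)                # target function
df = 2 * np.pi * np.cos(2 * np.pi * x)   # its derivative

# Design matrices: N(x_i) = Phi @ c and N'(x_i) = dPhi @ c.
Phi = sigma(np.outer(x, w) + b)
dPhi = dsigma(np.outer(x, w) + b) * w

# Fit the outer coefficients c to match f and f' simultaneously:
# minimize ||Phi c - f||^2 + ||dPhi c - f'||^2 over c.
A = np.vstack([Phi, dPhi])
y = np.concatenate([f, df])
c, *_ = np.linalg.lstsq(A, y, rcond=None)

err_f = np.max(np.abs(Phi @ c - f))     # sup-norm error in the function
err_df = np.max(np.abs(dPhi @ c - df))  # sup-norm error in the derivative
print(err_f, err_df)
```

Increasing `n_hidden` shrinks both errors, which is the qualitative content of the paper's rate estimate: the convergence speed is controlled by the number of hidden neurons.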