In this paper we discuss the learning convergence of the cerebellar model articulation controller (CMAC) in cyclic learning. We prove the following results. First, if the training samples are noiseless, the training algorithm converges if and only if the learning rate is chosen from (0, 2). Second, when the training samples are corrupted by noise, the learning algorithm converges with probability one if the learning rate is dynamically decreased. Third, in the noisy case with a small but fixed learning rate ε, the mean square error of the weight sequences generated by the CMAC learning algorithm is bounded by O(ε). Simulation experiments are carried out to test these results.
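The claims above concern the standard CMAC cyclic update, in which each input activates C overlapping memory cells and the output error is spread across those cells at each presentation. Below is a minimal 1-D sketch of that update, assuming a simple consecutive-cell tiling; the names (`n_cells`, `c`, `lr`) and the tiling scheme are illustrative assumptions, not the paper's exact setup, but the update rule is the one whose convergence for a learning rate in (0, 2) is analyzed.

```python
import numpy as np

# Minimal 1-D CMAC sketch (illustrative; not the paper's exact construction).
# Each input activates c consecutive cells; prediction is the sum of their
# weights; training spreads the output error over the active cells.

class CMAC1D:
    def __init__(self, n_cells=40, c=8, x_min=0.0, x_max=1.0):
        self.c = c                      # generalization width (cells active per input)
        self.n_cells = n_cells
        self.w = np.zeros(n_cells)      # weight table
        self.x_min, self.x_max = x_min, x_max

    def active_cells(self, x):
        # Map x to c consecutive cells (one of several possible CMAC codings).
        pos = (x - self.x_min) / (self.x_max - self.x_min)
        start = int(pos * (self.n_cells - self.c))
        return np.arange(start, start + self.c)

    def predict(self, x):
        return self.w[self.active_cells(x)].sum()

    def train_cyclic(self, xs, ys, lr=1.0, epochs=50):
        # Cyclic learning: sweep the fixed training set in order each epoch.
        # Per the abstract, with noiseless samples this converges iff lr is in (0, 2).
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                idx = self.active_cells(x)
                err = y - self.w[idx].sum()
                self.w[idx] += (lr / self.c) * err  # spread correction over active cells

xs = np.linspace(0.0, 1.0, 20)
ys = np.sin(2 * np.pi * xs)             # noiseless target
net = CMAC1D()
net.train_cyclic(xs, ys, lr=1.0)
print(max(abs(net.predict(x) - y) for x, y in zip(xs, ys)))
```

For noisy samples, the abstract's second result corresponds to replacing the fixed `lr` with a schedule that decays over epochs (e.g. lr_k → 0 with divergent sum, in the stochastic-approximation style), while the third result describes what happens if `lr` is instead held at a small constant ε: the weights do not converge exactly, but their mean square error stays within O(ε).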