Training efficiency and test accuracy are important factors in judging the scalability of distributed deep learning. In this dissertation, we explore the impact of noise introduced into the Modified National Institute of Standards and Technology database (MNIST) and the CIFAR-10 dataset, which serve as benchmarks for distributed deep learning. The noise in the training set is manually divided into cross-noise and random noise, and each type of noise is injected at several ratios. To minimize the influence of parameter interactions in distributed deep learning, we adopt a compressed model (SqueezeNet) together with the proposed flexible communication method, which reduces the communication frequency, and we evaluate the influence of noise on distributed training under both synchronous and asynchronous stochastic gradient descent (SGD). On the TensorFlowOnSpark experimental platform, we measure the training accuracy at different noise ratios and the training time for different numbers of nodes. Cross-noise in the training set not only decreases the test accuracy but also increases the distributed training time; that is, noise degrades the scalability of distributed deep learning.
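As a minimal sketch of how the two noise types could be injected into the training labels, assuming "random noise" means a corrupted label is re-drawn uniformly from the other classes and "cross-noise" means a fixed class-to-class flip (the dissertation's precise definitions may differ), and with all function and variable names hypothetical:

```python
import numpy as np

def inject_label_noise(labels, ratio, num_classes, mode="random", seed=0):
    """Corrupt a fraction `ratio` of integer class labels.

    mode="random": each corrupted label is replaced by a uniformly random
    different class (random noise).
    mode="cross":  each corrupted label is flipped to the next class modulo
    num_classes, i.e. a fixed pairwise confusion (one possible reading of
    "cross-noise").
    """
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    n = len(noisy)
    idx = rng.choice(n, size=int(ratio * n), replace=False)
    if mode == "random":
        # Offsets in [1, num_classes - 1] guarantee the new label differs.
        offsets = rng.integers(1, num_classes, size=len(idx))
        noisy[idx] = (noisy[idx] + offsets) % num_classes
    elif mode == "cross":
        # Deterministic class-to-class flip: c -> (c + 1) mod num_classes.
        noisy[idx] = (noisy[idx] + 1) % num_classes
    else:
        raise ValueError(f"unknown mode: {mode}")
    return noisy

# Example: corrupt 20% of MNIST-style labels (10 classes) with random noise.
clean = np.random.default_rng(1).integers(0, 10, size=60000)
noisy = inject_label_noise(clean, ratio=0.2, num_classes=10, mode="random")
print((noisy != clean).mean())  # ~0.2
```

The same corrupted label arrays could then be fed to the synchronous or asynchronous SGD training jobs on TensorFlowOnSpark, varying `ratio` to obtain accuracy and training-time curves at each noise level.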