An adaptive stopping criterion for backpropagation learning in feedforward neural network
Abstract
In training artificial neural networks, backpropagation (BP) has been used frequently and is known to provide a powerful tool for classification. Owing to its capability to model both linear and non-linear systems, it is widely applied in various areas, offering solutions and assistance to human experts. However, BP still has shortcomings, and many studies have been conducted to overcome them. Yet one of the important elements of BP, the stopping criterion, has received little attention. In this study, Fisher's Iris data set was used as input to standard BP. Three experiments, using different training set sizes, were conducted to measure the effectiveness of the proposed stopping criterion. The accuracy of the networks trained on the different data set sizes was also evaluated using the corresponding testing sets. The experiments showed that the proposed stopping criterion enabled the network to recognize its minimum acceptable error rate, allowing it to learn to its maximum potential based on the presented patterns. The adaptive stopping criterion presented in this paper demonstrates that the number of training iterations should not be dictated by a human, since the accuracy of the network depends heavily on the number and quality of the training data.
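The abstract does not specify the form of the proposed criterion, so the following is only a generic sketch of how an adaptive stopping rule for BP training might look: training halts when the error stops improving beyond a small tolerance over a patience window, rather than after a human-chosen iteration count. The names `patience` and `tolerance` are illustrative assumptions, not the paper's parameters.

```python
def should_stop(error_history, patience=10, tolerance=1e-4):
    """Return True when the last `patience` epochs show no improvement
    greater than `tolerance` over the best error seen before them.

    A hypothetical adaptive stopping rule: instead of training for a
    fixed, human-chosen number of iterations, the network stops once
    its error has plateaued at its minimum achievable level for the
    given training data.
    """
    if len(error_history) <= patience:
        return False  # not enough history to judge a plateau
    best_before = min(error_history[:-patience])
    recent_best = min(error_history[-patience:])
    return best_before - recent_best < tolerance

# Error plateaus near 0.2, so training should stop:
plateaued = [0.9, 0.5, 0.3, 0.2] + [0.1999] * 12
print(should_stop(plateaued))        # True

# Error is still falling steadily, so training continues:
improving = [1.0 - 0.05 * i for i in range(16)]
print(should_stop(improving))        # False
```

In a training loop this check would run once per epoch on the accumulated error history, letting the data, not a preset iteration limit, determine when learning ends.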