Enhancing Back Propagation Neural Network Algorithm With Adaptive Gain on Classification Problems

The standard back propagation algorithm for training artificial neural networks uses two terms, a learning rate and a momentum factor. The major limitations of this standard algorithm are temporary local minima, caused by the saturation behaviour of the activation function, and slow rates of convergence. Previous research demonstrated that in the feedforward phase, the slope of the activation function is directly influenced by a parameter referred to as the 'gain'. This research proposes an algorithm that improves the performance of back propagation by adaptively adjusting the gain of the activation function. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by simulation on four classification problems.
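To illustrate the role of the gain parameter, the sketch below shows a logistic activation whose slope is scaled by a gain c, together with a one-neuron gradient step on c. The function names, the single-neuron setup, and the update rule are illustrative assumptions for a generic adaptive-gain scheme, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    # Logistic activation with gain c controlling the slope:
    # f(x) = 1 / (1 + exp(-c * x))
    return 1.0 / (1.0 + np.exp(-gain * x))

def sigmoid_deriv(x, gain=1.0):
    # df/dx = c * f(x) * (1 - f(x)); the gain scales the slope directly
    s = sigmoid(x, gain)
    return gain * s * (1.0 - s)

# One-neuron illustration (assumed setup): adapt the gain by gradient
# descent on the squared error E = 0.5 * (target - y)^2, y = sigmoid(c*net).
net, target, c, lr = 0.8, 1.0, 1.0, 0.5
y = sigmoid(net, gain=c)
grad_c = -(target - y) * y * (1.0 - y) * net  # dE/dc via the chain rule
c -= lr * grad_c  # gain is updated alongside the ordinary weight updates
```

A larger gain steepens the sigmoid around the origin (e.g. `sigmoid_deriv(0, gain=4)` is four times `sigmoid_deriv(0, gain=1)`), which is why adapting the gain can counteract the flat, saturated regions that slow standard back propagation.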

Provided by: Science and Development Network (SciDev.Net) Topic: Networking Date Added: Jun 2011 Format: PDF