The traditional Back-Propagation Neural Network (BPNN) algorithm is widely used to solve many real-world problems, but it suffers from slow convergence and a tendency to become trapped in local minima. Several modifications have previously been suggested to improve the convergence rate of the gradient-descent back-propagation algorithm, such as careful selection of the initial weights and biases, the learning rate, the momentum, the network topology, the activation function, and the 'gain' value in the activation function. This paper proposes an algorithm that improves the performance of back-propagation by adaptively changing the momentum value while keeping the 'gain' parameter fixed for all nodes in the neural network.
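The abstract does not give the paper's exact momentum-adaptation rule, so the following is only a minimal sketch of the general idea: standard back-propagation on a small 2-2-1 network (XOR), with a sigmoid whose gain `GAIN` is fixed for all nodes, and a hypothetical heuristic that raises the momentum while the epoch error is falling and cuts it when the error rises. The constants, the network size, and the adaptation rule (`*1.05` / `*0.7` with bounds) are all illustrative assumptions, not the paper's method.

```python
import math
import random

random.seed(0)

GAIN = 1.0  # 'gain' c of the sigmoid, kept fixed for all nodes (per the abstract)

def sigmoid(x, c=GAIN):
    return 1.0 / (1.0 + math.exp(-c * x))

# XOR training data: (inputs, target)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# 2-2-1 network; the last weight in each row is the bias weight
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]
dw_h = [[0.0] * 3 for _ in range(2)]   # previous hidden-weight updates
dw_o = [0.0] * 3                       # previous output-weight updates

eta = 0.5        # learning rate (illustrative value)
alpha = 0.5      # momentum, adapted each epoch (illustrative starting value)
prev_err = None
errors = []

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(wr, x + [1.0]))) for wr in w_h]
    o = sigmoid(sum(w * hi for w, hi in zip(w_o, h + [1.0])))
    return h, o

for epoch in range(2000):
    err = 0.0
    for x, t in data:
        h, o = forward(x)
        err += 0.5 * (t - o) ** 2
        # Back-propagated deltas; with gain c, sigmoid'(net) = c * y * (1 - y)
        delta_o = (t - o) * GAIN * o * (1 - o)
        delta_h = [delta_o * w_o[j] * GAIN * h[j] * (1 - h[j]) for j in range(2)]
        # Weight updates with momentum: dw(t) = eta * delta * input + alpha * dw(t-1)
        for j, hj in enumerate(h + [1.0]):
            dw_o[j] = eta * delta_o * hj + alpha * dw_o[j]
            w_o[j] += dw_o[j]
        for j in range(2):
            for i, xi in enumerate(x + [1.0]):
                dw_h[j][i] = eta * delta_h[j] * xi + alpha * dw_h[j][i]
                w_h[j][i] += dw_h[j][i]
    # Hypothetical adaptation rule (assumption): grow momentum while the
    # epoch error decreases, shrink it when the error increases.
    if prev_err is not None:
        alpha = min(0.95, alpha * 1.05) if err < prev_err else max(0.1, alpha * 0.7)
    prev_err = err
    errors.append(err)
```

The fixed gain enters only through the derivative factor `c * y * (1 - y)`; all per-epoch adaptation is confined to the single scalar `alpha`, which is the separation of concerns the abstract describes.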