Hybrid Optimized Back Propagation Learning Algorithm for Multi-Layer Perceptron

Executive Summary

Standard neural networks trained with general back-propagation learning, using the delta rule or gradient descent, suffer from several notable weaknesses: poor optimization of the error-weight objective function, a low learning rate, and instability. This paper introduces a hybrid supervised back-propagation learning algorithm that applies the trust-region method of unconstrained optimization to the error objective function via a quasi-Newton method. This optimization yields a more accurate weight-update scheme for minimizing the learning error during the learning phase of a multi-layer perceptron. An augmented line search is used to find points that satisfy the Wolfe conditions. The resulting hybrid back-propagation algorithm has strong global convergence properties and is robust and efficient in practice.
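
To make the combination of ideas concrete, here is a minimal sketch, not the authors' implementation: it trains a one-hidden-layer perceptron on a toy XOR task by handing the flattened weights to SciPy's BFGS optimizer, a quasi-Newton method whose internal line search enforces the Wolfe conditions. The network shape, data, and all identifiers are illustrative assumptions; the paper's trust-region step would correspond roughly to one of SciPy's trust-region solvers rather than BFGS.

```python
# A minimal sketch (assumed setup, not the paper's algorithm): minimize an
# MLP's squared-error objective with a quasi-Newton (BFGS) optimizer whose
# line search satisfies the Wolfe conditions.
import numpy as np
from scipy.optimize import minimize

# Toy XOR training set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# One hidden layer: shapes of W1, b1, W2, b2 inside the flat parameter vector.
sizes = [(4, 2), (4,), (1, 4), (1,)]

def unpack(theta):
    # Split the flat parameter vector back into weight matrices and biases.
    parts, i = [], 0
    for shape in sizes:
        n = int(np.prod(shape))
        parts.append(theta[i:i + n].reshape(shape))
        i += n
    return parts

def loss(theta):
    # Forward pass: tanh hidden layer, sigmoid output, squared-error objective.
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1.T + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2.T + b2)))
    return np.mean((out.ravel() - y) ** 2)

rng = np.random.default_rng(0)
theta0 = rng.normal(scale=0.5, size=sum(int(np.prod(s)) for s in sizes))

# BFGS maintains a quasi-Newton approximation of the inverse Hessian and, at
# each iteration, runs a line search that enforces the Wolfe conditions; the
# gradient is estimated by finite differences since no jac is supplied.
res = minimize(loss, theta0, method="BFGS")
print("final training error:", res.fun)
```

In this sketch the quasi-Newton curvature model plays the role the abstract assigns to the trust-region/quasi-Newton optimization of the error objective, replacing the fixed-step gradient descent of plain back-propagation.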

  • Format: PDF
  • Size: 819.69 KB