Analysis of Optimization Techniques for Feed Forward Neural Networks Based Image Compression

This paper reviews the optimization techniques available for training Multi-Layer Perceptron (MLP) artificial neural networks for image compression. These techniques fall into two categories: derivative-based and derivative-free optimization. The former relies on gradient computation and includes Gradient Descent, Conjugate Gradient, Quasi-Newton methods, and the Levenberg-Marquardt algorithm; the latter covers techniques based on evolutionary computation, such as Genetic Algorithms and Particle Swarm Optimization. The core of this paper is to identify the most efficient and effective training algorithm for use in image compression.
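To make the derivative-based category concrete, the sketch below trains a single-hidden-layer MLP autoencoder with plain gradient descent to compress a 4x4 image block (16 inputs down to 4 hidden units, a 4:1 ratio). This is an illustrative assumption about the setup, not the paper's own experiment; the block size, learning rate, and iteration count are hypothetical choices.

```python
import numpy as np

# Illustrative sketch (assumed setup): 16 -> 4 -> 16 MLP autoencoder
# trained by plain gradient descent on one normalised 4x4 image block.
rng = np.random.default_rng(0)

n_in, n_hidden = 16, 4                       # 4:1 compression of a 4x4 block
W1 = rng.normal(0, 0.1, (n_hidden, n_in))    # encoder weights
W2 = rng.normal(0, 0.1, (n_in, n_hidden))    # decoder weights
lr = 0.05                                    # learning rate (assumed value)

x = rng.random((n_in, 1))                    # one image block as a column vector

for step in range(2000):
    h = np.tanh(W1 @ x)                      # compressed (hidden) representation
    y = W2 @ h                               # reconstructed block
    err = y - x                              # reconstruction error
    # Backpropagation: gradients of 0.5 * ||err||^2 w.r.t. W2 and W1
    gW2 = err @ h.T
    gW1 = ((W2.T @ err) * (1.0 - h ** 2)) @ x.T
    W2 -= lr * gW2                           # gradient-descent weight updates
    W1 -= lr * gW1

mse = float(np.mean((W2 @ np.tanh(W1 @ x) - x) ** 2))
print(f"final reconstruction MSE: {mse:.6f}")
```

In practice, the hidden-layer activations are quantised and transmitted as the compressed representation, and the decoder half of the network reconstructs the block at the receiver; the derivative-free methods named above would replace the gradient updates with population-based search over the same weights.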

Provided by: International Journal of Computer Science and Information Technologies
Topic: Networking
Date Added: Mar 2012
Format: PDF
