This paper reviews optimization techniques for training Multi-Layer Perceptron (MLP) artificial neural networks for image compression. These techniques fall into two categories: derivative-based and derivative-free optimization. The former relies on gradient computation and includes Gradient Descent, Conjugate Gradient, Quasi-Newton methods, and the Levenberg-Marquardt algorithm; the latter covers techniques based on evolutionary computation, such as Genetic Algorithms and Particle Swarm Optimization. The core of this paper is to identify the most efficient and effective training algorithm for use in image compression.
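To make the derivative-based family concrete, here is a minimal sketch of plain gradient descent training a one-hidden-layer MLP on a toy reconstruction task (an autoencoder-style setup, as used in MLP image compression). The layer sizes, learning rate, and synthetic "image block" data are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: a bottleneck MLP trained by gradient descent
# to reconstruct its input, mimicking MLP-based image compression.
rng = np.random.default_rng(0)
X = rng.random((64, 16))          # 64 synthetic "image blocks", 16 pixels each

n_in, n_hid = 16, 4               # 4 hidden units compress 16 pixels -> 4 values
W1 = rng.normal(0, 0.1, (n_in, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_in))
lr = 0.5                          # assumed learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(500):
    H = sigmoid(X @ W1)           # encode: compressed hidden representation
    Y = H @ W2                    # decode: linear reconstruction
    E = Y - X                     # reconstruction error
    losses.append(float(np.mean(E ** 2)))
    # Gradient descent step: backpropagate the mean-squared error
    dW2 = H.T @ E / len(X)
    dH = (E @ W2.T) * H * (1 - H)
    dW1 = X.T @ dH / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2
```

The derivative-free alternatives surveyed in the paper (Genetic Algorithms, Particle Swarm Optimization) would instead search over the weight vectors `W1` and `W2` directly, using only the loss value rather than its gradient.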