An Extension of Multi-Layer Perceptron Based on Layer-Topology

Executive Summary

Many extensions have been made to the classic multi-layer perceptron (MLP) model. A notable number of them have been designed to speed up the learning process without considering the quality of generalization. This paper proposes a new MLP extension based on exploiting the topology of the network's input layer. Experimental results show that the extended model improves generalization capability in certain cases. The new model requires additional computational resources compared to the classic model; nevertheless, the loss in efficiency is not considered significant.
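The summary does not spell out how the input-layer topology is used. One plausible reading, sketched below purely for illustration, is a classic MLP whose first-layer weights are masked so that each hidden unit connects only to a local neighbourhood of inputs; the network size, window width, and 1-D neighbourhood structure are all assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 8, 8, 2
window = 3  # hypothetical neighbourhood size on an assumed 1-D input topology

# Mask: hidden unit j connects only to inputs within the window around position j.
mask = np.array([[abs(i - j) <= window // 2 for i in range(n_in)]
                 for j in range(n_hidden)], dtype=float)

# First-layer weights are zeroed outside each unit's local window;
# the second layer is fully connected, as in a classic MLP.
W1 = rng.standard_normal((n_hidden, n_in)) * mask
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_out, n_hidden))
b2 = np.zeros(n_out)

def forward(x):
    """Forward pass: locally connected first layer, dense output layer."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

x = rng.standard_normal(n_in)
y = forward(x)
print(y.shape)
```

A topology-aware first layer of this kind has fewer free parameters than a fully connected one, which is one way such an extension could improve generalization at some extra bookkeeping cost, consistent with the trade-off the summary describes.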

  • Format: PDF
  • Size: 984.9 KB