Date Added: Jun 2010
The feed-forward neural network, a model of the cerebral neural network, has built-in fault tolerance. The conventional back-propagation algorithm reduces the error between the training examples and the output of a Multilayer Neural Network (MNN). However, there is no guarantee that the MNN behaves in the same manner when faults occur. For this reason, the study of fault tolerance in Artificial Neural Networks (ANNs) is valuable. The method proposed here improves the fault tolerance of the feed-forward network to stuck-at faults of weights by manipulating the activation function.
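The fault model discussed above can be illustrated with a minimal sketch, not the paper's implementation: a small feed-forward network in which one weight is pinned to a fixed value (a stuck-at-0 fault), so the deviation of the faulty output from the fault-free output can be measured. The network size, the sigmoid activation, and the choice of faulted weight are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w1, w2):
    # One hidden layer with a sigmoid activation, linear output.
    h = sigmoid(x @ w1)
    return h @ w2

# Random weights for a 2-input, 3-hidden, 1-output network (illustrative sizes).
w1 = rng.normal(size=(2, 3))
w2 = rng.normal(size=(3, 1))

x = rng.normal(size=(10, 2))   # a small batch of inputs
y_ok = forward(x, w1, w2)      # fault-free output

# Inject a stuck-at-0 fault: one first-layer weight is forced to 0
# regardless of what training produced.
w1_faulty = w1.copy()
w1_faulty[0, 0] = 0.0
y_fault = forward(x, w1_faulty, w2)

# Output deviation under the fault: one simple measure of fault tolerance.
deviation = np.max(np.abs(y_ok - y_fault))
print(f"max output deviation under stuck-at-0 fault: {deviation:.4f}")
```

A fault-tolerance study of the kind the abstract describes would repeat such an injection over every weight (and over stuck-at values other than 0) and compare the resulting deviations before and after the proposed manipulation of the activation function.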