Artificial Neural Networks (ANNs) are highly interconnected, highly parallel systems. Backpropagation is a common method of training artificial neural networks so as to minimize an objective function. This paper describes an implementation of the backpropagation algorithm. The error generated at the output layer is propagated backward through the network, and the weights of the neurons are updated by supervised learning; the algorithm is a generalization of the delta rule. The sigmoid function is used as the activation function. The design is simulated using MATLAB R2008a. Maximum accuracy is achieved in simulation.
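The training procedure described above can be sketched in code. The following is a minimal NumPy illustration of backpropagation with sigmoid activations and a squared-error objective, not the paper's MATLAB implementation; the network size (2-4-1), learning rate, epoch count, and XOR training set are assumptions chosen for demonstration.

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation function; its derivative is s * (1 - s).
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# XOR training set: a standard toy problem for backpropagation (assumed here).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a hypothetical 2-4-1 feedforward network.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

init_loss = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))

lr = 2.0  # learning rate (assumed)
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # network output

    # Backward pass: the output error is propagated back through the
    # layers (generalized delta rule).
    err = out - y                            # dE/d(out) for E = 0.5 * sum(err^2)
    delta2 = err * out * (1 - out)           # output-layer delta
    delta1 = (delta2 @ W2.T) * h * (1 - h)   # hidden-layer delta

    # Gradient-descent weight updates (supervised learning step).
    W2 -= lr * h.T @ delta2
    b2 -= lr * delta2.sum(axis=0)
    W1 -= lr * X.T @ delta1
    b1 -= lr * delta1.sum(axis=0)

loss = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))
print(f"MSE before training: {init_loss:.4f}, after: {loss:.4f}")
```

Each epoch performs one forward pass, computes per-layer deltas from the output error, and applies the weight corrections; the mean squared error printed at the end should be well below its initial value.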