Branch Prediction With Neural Networks: Hidden Layers and Recurrent Connections
As Moore's law drives modern computer microarchitectures to rely ever more heavily on speculation, accurately predicting branches becomes increasingly important for keeping the pipeline full. Previous work has shown that the perceptron, a simple linear discriminator, can serve as a powerful branch predictor. This paper expands on that research by experimenting with three other branch predictors: a neural network with one hidden layer (a feed-forward network), a neural network with one hidden layer and recurrent (feedback) connections (an Elman network), and a combined predictor that uses a 2-bit saturating counter to vote between a perceptron and a feed-forward network. The authors show how these normally real-valued networks can be approximated with integer arithmetic, using a look-up table to approximate the activation function and its derivative.
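To make the baseline concrete, the following is a minimal sketch of the perceptron branch predictor the paper builds on, in the style of Jiménez and Lin: integer weights indexed by a hash of the branch address, a dot product with the global history to produce a prediction, and threshold-gated training. The table size, history length, and threshold formula here are illustrative assumptions, not parameters taken from this paper.

```python
class PerceptronPredictor:
    """Sketch of a perceptron branch predictor with integer weights.

    Each table entry holds a bias weight plus one weight per global
    history bit. History bits are encoded as +1 (taken) / -1 (not taken).
    """

    def __init__(self, num_perceptrons=256, history_len=12):
        self.n = num_perceptrons
        self.hlen = history_len
        # One weight vector per entry: [bias, w_1, ..., w_hlen]
        self.weights = [[0] * (history_len + 1) for _ in range(num_perceptrons)]
        self.history = [1] * history_len  # global history as +1/-1
        # Training threshold from the perceptron-predictor literature
        # (an assumption here): theta ~= 1.93 * h + 14
        self.theta = int(1.93 * history_len + 14)

    def _index(self, pc):
        # Simple hash of the branch address into the table
        return pc % self.n

    def predict(self, pc):
        """Return (raw output y, predicted-taken bool)."""
        w = self.weights[self._index(pc)]
        y = w[0] + sum(wi * xi for wi, xi in zip(w[1:], self.history))
        return y, y >= 0

    def update(self, pc, taken):
        """Train on the resolved outcome and shift the global history."""
        y, predicted_taken = self.predict(pc)
        t = 1 if taken else -1
        # Train only on a misprediction or when confidence is low
        if predicted_taken != taken or abs(y) <= self.theta:
            w = self.weights[self._index(pc)]
            w[0] += t
            for i, xi in enumerate(self.history):
                w[i + 1] += t * xi
        self.history = self.history[1:] + [t]
```

A hardware implementation would keep the weights in small saturating counters and compute the dot product with an adder tree; the point of the sketch is only the predict/train loop that the paper's hidden-layer and Elman networks generalize.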