Institute of Electrical and Electronics Engineers
Random Forest (RF) is one of the state-of-the-art supervised learning methods in Machine Learning and inherently consists of two steps: training and evaluation. In applications where the system must be updated periodically, the training step becomes the bottleneck, imposing hard constraints on the system's adaptability to a changing environment. In this paper, a novel FPGA architecture for accelerating the RF training step is presented, exploiting key features of the device. By combining fine-grained data-flow processing at the low level with the high-level parallelism inherent in the algorithm, significant acceleration factors are achieved.
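To make the two steps concrete, the following is a minimal, pure-Python sketch of RF training and evaluation — not the paper's FPGA design, and using simplified depth-1 trees (stumps) with a mean-value split for brevity. The per-tree loop in `train_forest` is independent across trees, which is exactly the kind of high-level parallelism an accelerator can exploit; all names here are illustrative.

```python
import random
from collections import Counter

def train_stump(X, y, rng):
    """Train one depth-1 tree on a bootstrap resample of (X, y)."""
    n = len(X)
    idx = [rng.randrange(n) for _ in range(n)]            # bootstrap sample
    Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
    feat = rng.randrange(len(X[0]))                        # random feature
    thresh = sum(x[feat] for x in Xb) / n                  # split at the mean
    left  = [yb[i] for i in range(n) if Xb[i][feat] <= thresh]
    right = [yb[i] for i in range(n) if Xb[i][feat] >  thresh]
    maj = lambda ys: Counter(ys).most_common(1)[0][0] if ys else yb[0]
    return feat, thresh, maj(left), maj(right)

def train_forest(X, y, n_trees=25, seed=0):
    """Training step: each tree is built independently (parallelizable)."""
    rng = random.Random(seed)
    return [train_stump(X, y, rng) for _ in range(n_trees)]

def predict(forest, x):
    """Evaluation step: majority vote over the trees' predictions."""
    votes = [(l if x[f] <= t else r) for f, t, l, r in forest]
    return Counter(votes).most_common(1)[0][0]

# Toy linearly separable data: class 1 when the first feature is large.
X = [[0.1, 0.9], [0.2, 0.8], [0.3, 0.7], [0.9, 0.1], [0.8, 0.2], [0.7, 0.3]]
y = [0, 0, 0, 1, 1, 1]
forest = train_forest(X, y)
print(predict(forest, [0.15, 0.85]))
print(predict(forest, [0.85, 0.15]))
```

Because every call to `train_stump` depends only on the shared training set and its own random draws, the trees can be trained fully in parallel; it is this property that a hardware architecture can map onto replicated processing units.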