Institute of Electrical and Electronics Engineers
The Support Vector Machine (SVM) is a powerful supervised learning tool. Its training phase, however, is time-consuming and depends heavily on the size and dimensionality of the training dataset. In this paper, the authors propose a scalable FPGA architecture for accelerating SVM training that exploits the heterogeneous nature of the device and the diverse precision requirements of the dataset attributes. Maximum parallelization is achieved by keeping the ratio of DSP to logic-resource usage matched to the available resource ratio of the FPGA device.
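To illustrate the training bottleneck the abstract refers to, the following is a hypothetical minimal sketch (pure Python, not the authors' FPGA design): a sub-gradient descent trainer for a linear SVM on the regularized hinge loss. The inner loops make explicit why cost scales with both the number of training samples and the number of attributes.

```python
# Minimal linear-SVM trainer via hinge-loss sub-gradient descent.
# Illustrative only; all names here are assumptions, not the paper's code.
import random

def train_linear_svm(data, labels, lam=0.01, epochs=50):
    """Train weights w minimizing lam/2*||w||^2 + mean hinge loss.
    data: list of feature vectors; labels: +1 or -1."""
    dim = len(data[0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in zip(data, labels):       # cost grows with dataset size
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            # margin computation grows with attribute dimensionality
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            for i in range(dim):
                grad = lam * w[i] - (y * x[i] if margin < 1 else 0.0)
                w[i] -= eta * grad
    return w

# Toy linearly separable data: class +1 iff x0 > x1
random.seed(0)
data = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
labels = [1 if x[0] > x[1] else -1 for x in data]
w = train_linear_svm(data, labels)
correct = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1) == y
    for x, y in zip(data, labels)
)
accuracy = correct / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The two nested loops over samples and attributes are exactly the work that a parallel FPGA datapath can pipeline; reducing the precision of individual attribute multiplications is what allows more of them to run concurrently within a fixed DSP and logic budget.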