Neural Network-Based Accelerators for Transcendental Function Approximation
Neural Network (NN) based accelerators are general-purpose approximate computing devices with the potential to sustain the historic energy and performance improvements of computing systems. The authors propose using NN-based accelerators to approximate the transcendental functions in the GNU C Library (glibc) that occur frequently in application benchmarks. When approximating cos, exp, log, pow, and sin, their NN-based approach achieves an average Energy-Delay Product (EDP) 68x lower than that of traditional glibc execution. In full applications, their approach yields an EDP that is 78% of traditional execution's, at the cost of an average Mean Squared Error (MSE) of 1.56.
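To make the core idea concrete, here is a hypothetical sketch (not the authors' implementation) of the technique: a small fixed-topology multilayer perceptron is trained to stand in for one glibc transcendental function. The network shape (1-8-1 with tanh hidden units), the restriction to sin on [0, pi/2], and all hyperparameters are illustrative assumptions, not details from the paper.

```python
# Assumption-laden sketch: approximate sin on [0, pi/2] with a tiny MLP,
# mirroring the idea of replacing a glibc math call with an NN evaluation.
import math
import random

random.seed(0)   # fixed seed so the sketch is reproducible
HIDDEN = 8       # hidden-layer width, chosen arbitrarily for illustration

# Parameters of a 1-8-1 MLP: tanh hidden layer, linear output.
w = [random.uniform(-1.0, 1.0) for _ in range(HIDDEN)]  # input -> hidden
b = [random.uniform(-1.0, 1.0) for _ in range(HIDDEN)]  # hidden biases
v = [random.uniform(-1.0, 1.0) for _ in range(HIDDEN)]  # hidden -> output
c = 0.0                                                 # output bias

# Training set: inputs normalized to [0, 1], targets sin over [0, pi/2].
xs = [i / 31 for i in range(32)]
ts = [math.sin(math.pi / 2 * u) for u in xs]

def forward(u):
    """Return network output and hidden activations for normalized input u."""
    hs = [math.tanh(w[j] * u + b[j]) for j in range(HIDDEN)]
    return sum(v[j] * hs[j] for j in range(HIDDEN)) + c, hs

# Full-batch gradient descent on mean squared error.
lr = 0.1
for _ in range(5000):
    gw = [0.0] * HIDDEN; gb = [0.0] * HIDDEN; gv = [0.0] * HIDDEN; gc = 0.0
    for u, t in zip(xs, ts):
        y, hs = forward(u)
        d = 2.0 * (y - t) / len(xs)             # dMSE/dy for this sample
        gc += d
        for j in range(HIDDEN):
            gv[j] += d * hs[j]
            dh = d * v[j] * (1.0 - hs[j] ** 2)  # backprop through tanh
            gw[j] += dh * u
            gb[j] += dh
    for j in range(HIDDEN):
        w[j] -= lr * gw[j]; b[j] -= lr * gb[j]; v[j] -= lr * gv[j]
    c -= lr * gc

def approx_sin(x):
    """Approximate sin(x) for x in [0, pi/2] via the trained network."""
    return forward(x / (math.pi / 2))[0]

mse = sum((forward(u)[0] - t) ** 2 for u, t in zip(xs, ts)) / len(xs)
print(f"training MSE: {mse:.6f}")
```

In the paper's setting, the trained weights would be baked into a hardware accelerator so that a call like sin(x) is serviced by a few multiply-accumulate and activation operations instead of glibc's precise polynomial routines; this is what trades accuracy (the reported MSE) for the reported EDP reduction.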