Binary Information Press
The output of a classifier should be a calibrated posterior probability to enable post-processing. Standard Support Vector Machines (SVMs) do not provide such probabilities. One method for fitting probabilities to the output of a multiclass SVM is to fit Gaussians to the class-conditional densities, with a single tied variance estimated for both Gaussians. However, the posterior derived from these Gaussians may not accurately model the true posterior probability, and the fit lacks both a parsimonious parametric form and a convergence guarantee. Instead, this paper first trains binary SVMs using the Geometric Distance-Based SVM (GDB-SVM), which uses the distance between a point and the separating hyperplane as its classification rule, and then trains the parameters of an additional sigmoid function to map the SVM outputs into probabilities.
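The sigmoid-fitting step described above (often called Platt scaling) can be sketched as follows. This is a minimal illustration, not the paper's GDB-SVM implementation: it assumes scikit-learn, a synthetic dataset, and an ordinary linear SVM, and fits the sigmoid by logistic regression on the one-dimensional SVM decision values.

```python
# Sketch of Platt scaling: map raw SVM decision values (signed distances to
# the hyperplane) to calibrated probabilities via a fitted sigmoid
# P(y=1 | f) = 1 / (1 + exp(A*f + B)).
# scikit-learn and make_classification are illustrative assumptions,
# not part of the paper's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
# Hold out a calibration set: the sigmoid should be fit on data the SVM
# was not trained on, to avoid biased (overconfident) probabilities.
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5,
                                            random_state=0)

svm = SVC(kernel="linear").fit(X_tr, y_tr)
f_cal = svm.decision_function(X_cal)  # distance-based SVM outputs

# Fitting a logistic regression on the 1-D decision values estimates the
# sigmoid parameters (A, B) by maximum likelihood.
platt = LogisticRegression().fit(f_cal.reshape(-1, 1), y_cal)

# Calibrated posterior probabilities for the positive class.
probs = platt.predict_proba(f_cal.reshape(-1, 1))[:, 1]
```

In practice, scikit-learn's `CalibratedClassifierCV` with `method="sigmoid"` wraps this same procedure with cross-validated calibration splits.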