Low-Rank Kernel Learning With Bregman Matrix Divergences

Executive Summary

In this paper, the authors study low-rank matrix nearness problems, with a focus on learning low-rank positive semi-definite (kernel) matrices for machine learning applications. Existing algorithms for learning kernel matrices often scale poorly, with running times that are cubic in the number of data points. In contrast, the proposed algorithms scale linearly in the number of data points and quadratically in the rank of the input matrix. The authors employ Bregman matrix divergences as the measures of nearness; these divergences are natural for learning low-rank kernels since they preserve both rank and positive semi-definiteness. Special cases of the framework yield faster algorithms for several existing learning problems, and experimental results demonstrate that the algorithms can effectively learn both low-rank and full-rank kernel matrices.
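As a point of reference for the divergences mentioned above, the sketch below evaluates the two standard Bregman matrix divergences used in this line of work, the LogDet (Burg) divergence and the von Neumann divergence, on small dense positive definite matrices with NumPy/SciPy. It is only an illustration of the nearness measures themselves, not the authors' low-rank learning algorithm; the matrix sizes and regularization constant are arbitrary choices for the example.

```python
# Illustrative sketch (not the paper's code): standard Bregman matrix
# divergences on small positive definite matrices.
import numpy as np
from scipy.linalg import logm

def logdet_divergence(X, Y):
    """LogDet (Burg) divergence: D_ld(X, Y) = tr(X Y^{-1}) - log det(X Y^{-1}) - n.
    Assumes X and Y are positive definite."""
    n = X.shape[0]
    XYinv = X @ np.linalg.inv(Y)
    _, logdet = np.linalg.slogdet(XYinv)
    return np.trace(XYinv) - logdet - n

def von_neumann_divergence(X, Y):
    """von Neumann divergence: D_vN(X, Y) = tr(X log X - X log Y - X + Y),
    the matrix analogue of relative entropy."""
    return np.trace(X @ logm(X) - X @ logm(Y) - X + Y).real

# Example: two positive definite Gram matrices (jitter added for stability).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))
X = A @ A.T + 1e-3 * np.eye(5)
Y = B @ B.T + 1e-3 * np.eye(5)
print(logdet_divergence(X, Y), von_neumann_divergence(X, Y))
```

Both divergences are zero only when the two matrices coincide, which is what makes them usable as nearness measures for kernel matrix learning.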
