The Minimum Transfer Cost Principle for Model-Order Selection
The goal of model-order selection is to select the model variant that generalizes best from training data to unseen test data. In unsupervised learning, where no labels are available, computing the generalization error of a solution poses a conceptual problem, which the authors address in this paper. They formulate the principle of "Minimum transfer costs" for model-order selection. This principle renders the concept of cross-validation applicable to unsupervised learning problems. As a substitute for labels, they introduce a mapping from objects of the training set to objects of the test set, which enables the transfer of training solutions to the test data.
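The idea can be sketched for the concrete case of choosing the number of clusters in k-means. The snippet below is a minimal illustration, not the paper's exact procedure: it assumes a nearest-neighbor mapping from test objects to training objects as the transfer mapping, adopts the mapped training object's cluster label, and scores the transferred solution by its clustering cost on the test data. All function names and the farthest-point initialization are illustrative choices.

```python
import numpy as np

def farthest_point_init(X, k):
    """Deterministic seeding: start at X[0], then repeatedly pick the
    point farthest from all centers chosen so far."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1).min(1)
        centers.append(X[d.argmax()])
    return np.array(centers)

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means; returns cluster centers and training labels."""
    centers = farthest_point_init(X, k)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, labels

def transfer_cost(X_train, labels_train, centers, X_test):
    """Transfer the training solution to the test set: map each test
    object to its nearest training object (the assumed transfer
    mapping), adopt that object's cluster label, and evaluate the
    k-means cost of the transferred labeling on the test data."""
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    transferred = labels_train[d.argmin(1)]
    return float(((X_test - centers[transferred]) ** 2).sum())

# Synthetic data: two well-separated blobs, split into train and test.
rng = np.random.default_rng(1)
blob = lambda c: c + 0.3 * rng.standard_normal((60, 2))
X = np.vstack([blob(np.array([0.0, 0.0])), blob(np.array([5.0, 5.0]))])
rng.shuffle(X)
X_train, X_test = X[:60], X[60:]

# Transfer cost per candidate model order k; the selected order is the
# one whose transferred solution is cheapest on the test data.
costs = {}
for k in (1, 2, 3):
    centers, labels = kmeans(X_train, k)
    costs[k] = transfer_cost(X_train, labels, centers, X_test)
```

On this two-blob example the two-cluster solution transfers at far lower cost than the single-cluster one, which is the signal the principle exploits for selecting the model order.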