Learning and Evaluating Classifiers Under Sample Selection Bias
Classifier learning methods commonly assume that the training data consist of examples drawn at random from the same distribution as the test examples about which the learned model is expected to make predictions. In many practical situations, however, this assumption is violated, a problem known in econometrics as sample selection bias. This paper formalizes the sample selection bias problem in machine learning terms and studies analytically and experimentally how a number of well-known classifier learning methods are affected by it. The paper also presents a bias correction method that is particularly useful for classifier evaluation under sample selection bias.
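The effect of sample selection bias, and the idea behind correcting it by reweighting examples with the inverse of their selection probability, can be illustrated with a small simulation. This sketch is not taken from the paper; the population, the selection probabilities, and the use of known (rather than estimated) selection probabilities are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Population: x ~ Uniform(0, 1), label y = 1 exactly when x > 0.5,
# so the true positive rate P(y = 1) is about 0.5.
x = rng.uniform(0, 1, size=10_000)
y = (x > 0.5).astype(int)

# Biased sampling: examples with small x are far more likely to be
# selected into the training set, i.e. P(s = 1 | x) depends on x.
p_select = 0.9 - 0.8 * x
s = rng.uniform(0, 1, size=x.size) < p_select

# A naive estimate of P(y = 1) from the selected sample is skewed low,
# because the sample over-represents the y = 0 region.
naive = y[s].mean()

# Weighting each selected example by 1 / P(s = 1 | x) undoes the bias
# (here the selection probabilities are known; in practice they would
# have to be estimated).
weights = 1.0 / p_select[s]
corrected = np.average(y[s], weights=weights)

true_rate = y.mean()
print(f"naive={naive:.3f} corrected={corrected:.3f} true={true_rate:.3f}")
```

The same inverse-selection-probability weights can be passed to any learner or evaluation metric that accepts per-example weights, which is the general shape of weighting-based bias correction.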