Learning Mixture Models With the Latent Maximum Entropy Principle
The paper presents a new approach to estimating mixture models based on an inference principle the author has recently proposed: the latent maximum entropy (LME) principle. LME differs both from Jaynes' maximum entropy principle and from standard maximum likelihood estimation. The paper demonstrates the LME principle by deriving new algorithms for mixture model estimation, and shows how robust new variants of the EM algorithm can be developed. Experiments show that estimation based on LME generally yields better results than maximum likelihood estimation, particularly when inferring latent variable models from small amounts of data.
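For context, the maximum likelihood baseline the paper compares against is standard EM for a mixture model. The sketch below is a minimal implementation of ordinary EM for a one-dimensional Gaussian mixture (it is background material, not the paper's LME variant; the function name `em_gmm_1d` and the quantile-based initialization are illustrative choices, not from the paper):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50):
    """Standard EM for a 1-D Gaussian mixture (ML baseline, not the LME variant)."""
    n = len(x)
    # Initialize mixing weights uniformly, means at spread-out quantiles,
    # and all variances at the overall sample variance.
    pi = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))
    var = np.full(k, np.var(x))
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood parameter updates given responsibilities.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```

With well-separated components and enough data this recovers the component parameters; the paper's point is that on small samples such maximum likelihood fits degrade, which is where the LME-based estimators are claimed to help.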