Date Added: Aug 2010
Deep-layer machine learning architectures continue to emerge as a promising, biologically inspired framework for achieving scalable perception in artificial agents. Robust perception enables state inference, allowing the agent to interpret the environment with which it interacts and to map that interpretation to desirable actions. In existing deep learning schemes, however, the perception process is guided purely by spatial regularities in the observations, with no feedback from the target application (e.g., classification, control). In this paper, the authors propose a simple yet powerful feedback mechanism, based on adjusting the sample presentation distribution, that guides the perception model in allocating representational resources to the patterns it observes.
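One way to picture the proposed feedback loop is as error-driven resampling: samples the current model handles poorly are presented more often. The sketch below is only an illustration of that general idea under assumed details (the weighting rule, the `temperature` parameter, and all function names are mine, not the paper's):

```python
import random

def update_presentation_weights(errors, temperature=1.0):
    """Map per-sample errors from the target task (e.g., classification loss)
    to a presentation probability distribution; higher error -> more exposure.
    The power-law weighting here is an assumed choice for illustration."""
    scores = [e ** temperature for e in errors]
    total = sum(scores) or 1.0
    return [s / total for s in scores]

def present_samples(samples, weights, k, rng=random):
    """Draw the next batch according to the feedback-adjusted distribution."""
    return rng.choices(samples, weights=weights, k=k)

# Toy example: three samples, the second currently poorly modeled.
samples = ["a", "b", "c"]
errors = [0.1, 0.8, 0.1]
weights = update_presentation_weights(errors)
batch = present_samples(samples, weights, k=10)
```

In a full training loop, the errors would be recomputed after each pass, closing the loop between the target application and the perception model's exposure to data.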