Optimizing Quality-of-Information in Cost-Sensitive Sensor Data Fusion
This paper investigates maximizing quality of information subject to cost constraints in data fusion systems. The authors consider data fusion applications that estimate or predict some current or future state of a complex physical world; examples include target tracking, path planning, and sensor node localization. Rather than optimizing generic network-level metrics such as latency or throughput, they achieve more resource-efficient sensor network operation by directly optimizing an application-level notion of quality, namely prediction error, while respecting cost constraints. Unlike prior cost-sensitive prediction and regression schemes, their solution handles the more complex prediction problems that arise in sensor networks, where phenomena behave differently under different conditions and where both ordered and unordered prediction attributes are used.
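The core idea of trading prediction error against sampling cost can be illustrated with a greedy attribute-acquisition loop: repeatedly add the sensor attribute that yields the largest error reduction per unit cost, stopping when the budget is exhausted. The sketch below is not the authors' algorithm; the attribute names, costs, synthetic data, and the group-mean predictor (which bins ordered attributes and uses unordered ones as-is) are all illustrative assumptions.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical per-attribute sampling costs (illustrative, not from the paper).
COSTS = {"temp": 3.0, "terrain": 1.0, "noise": 2.0}
ORDERED = {"temp", "noise"}  # ordered attributes get binned; unordered used as-is


def make_data(n=400):
    """Synthetic world state: target depends on temperature and terrain type."""
    rows = []
    for _ in range(n):
        temp = random.uniform(0, 40)       # ordered attribute
        terrain = random.choice("ABC")     # unordered (categorical) attribute
        noise = random.uniform(0, 1)       # irrelevant attribute
        y = 0.5 * temp + {"A": 0, "B": 5, "C": -5}[terrain]
        rows.append({"temp": temp, "terrain": terrain, "noise": noise, "y": y})
    return rows


def key(row, attrs):
    """Discretize a row to a lookup key over the selected attributes."""
    parts = []
    for a in sorted(attrs):
        v = row[a]
        parts.append(int(v // 5) if a in ORDERED else v)
    return tuple(parts)


def mse(train, test, attrs):
    """Prediction error of a group-mean predictor using only `attrs`."""
    groups = {}
    for r in train:
        groups.setdefault(key(r, attrs), []).append(r["y"])
    means = {k: mean(v) for k, v in groups.items()}
    overall = mean(r["y"] for r in train)  # fallback for unseen keys
    return mean((r["y"] - means.get(key(r, attrs), overall)) ** 2 for r in test)


def greedy_select(train, test, budget):
    """Greedily buy attributes with the best error reduction per unit cost."""
    chosen, spent = set(), 0.0
    best_err = mse(train, test, chosen)  # baseline: global-mean predictor
    while True:
        scored = []
        for a in COSTS:
            if a in chosen or spent + COSTS[a] > budget:
                continue
            err = mse(train, test, chosen | {a})
            scored.append(((best_err - err) / COSTS[a], a, err))
        if not scored:
            break
        gain, a, err = max(scored)
        if gain <= 0:  # no affordable attribute still improves quality
            break
        chosen.add(a)
        spent += COSTS[a]
        best_err = err
    return chosen, spent, best_err


data = make_data()
train, test = data[:300], data[300:]
attrs, spent, err = greedy_select(train, test, budget=5.0)
print(attrs, spent, round(err, 2))
```

The greedy gain-per-cost rule is a common heuristic for budgeted selection; the paper's setting is harder because the phenomenon's behavior (and hence the best attribute set) can change across operating conditions, which a single global predictor like this one does not capture.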