Dynamic Activation Policies for Event Capture with Rechargeable Sensors
In this paper, the authors first consider the single-sensor problem. Using dynamic control theory, they study a full-information model in which the sensor learns whether an event occurred in the previous time slot regardless of its activation schedule. In this case, the problem is formulated as a Markov Decision Process (MDP), and they derive a simple greedy policy that is provably optimal. They then consider a partial-information model in which the sensor observes an event only while it is active; this problem falls into the class of Partially Observable Markov Decision Processes (POMDPs).
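To make the full-information setting concrete, the following is a minimal simulation sketch of a greedy activation rule for a rechargeable sensor. All numbers and names here (battery capacity, event and recharge probabilities, the `greedy_policy` rule of activating whenever energy is available) are illustrative assumptions, not the paper's actual model or proof of optimality.

```python
import random

def greedy_policy(energy):
    # Hypothetical greedy rule: activate whenever at least one
    # unit of energy is available in the battery.
    return energy >= 1

def simulate(slots=1000, capacity=5, p_event=0.3, p_recharge=0.5, seed=0):
    """Simulate event capture over discrete time slots (toy model)."""
    rng = random.Random(seed)
    energy, captured, events = capacity, 0, 0
    for _ in range(slots):
        event = rng.random() < p_event   # does an event occur this slot?
        events += event
        if event and greedy_policy(energy):
            captured += 1                # active sensor captures the event
            energy -= 1                  # activation consumes one energy unit
        if rng.random() < p_recharge:
            energy = min(capacity, energy + 1)  # stochastic recharge
    return captured, events

captured, events = simulate()
print(f"captured {captured} of {events} events")
```

In this toy model the greedy rule never idles while energy remains, which mirrors the intuition behind the paper's result; the actual optimality argument depends on the MDP formulation in the paper itself.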