Sensor Management for Tracking in Sensor Networks

We study the problem of tracking an object moving through a network of wireless sensors. In order to conserve energy, the sensors may be put into a sleep mode with a timer that determines their sleep duration. It is assumed that a sleeping sensor cannot be communicated with or woken up, and hence the sleep duration must be determined at the time the sensor goes to sleep, based on all the information available to it. Having sleeping sensors in the network can degrade tracking performance; there is therefore a tradeoff between energy usage and tracking performance. We design sleeping policies that attempt to optimize this tradeoff and characterize their performance. As an extension to our previous work in this area [1], we consider generalized models for object movement, object sensing, and tracking cost. For discrete state spaces and continuous Gaussian observations, we derive a lower bound on the optimal energy-tracking tradeoff. It is shown that in the low tracking error regime, the generated policies approach the derived lower bound.


💡 Research Summary

The paper tackles the classic energy‑accuracy trade‑off in wireless sensor networks (WSNs) that are tasked with continuously tracking a moving object. Because each sensor is battery‑powered, it is desirable to put sensors into a low‑power “sleep” mode when their measurements are not expected to significantly improve the tracking estimate. The crucial constraint is that a sleeping sensor cannot be communicated with or awakened externally; therefore, the sleep duration must be decided at the moment the sensor transitions to sleep, using all information that is currently available (the posterior distribution of the object’s state, the motion model, and the sensing model).

Problem formulation
The authors model the situation as a partially observable Markov decision process (POMDP). The hidden state $X_t$ denotes the object's location (or a discrete state in a finite state space). Each sensor $i$ at time $t$ chooses an action $U_{i,t}\in\{0,1\}$ (0 = sleep, 1 = active). When active, the sensor produces a measurement $Y_{i,t}$ that follows a continuous Gaussian distribution $\mathcal{N}(h_i(X_t),R_i)$. When asleep, the sensor provides no observation and remains inaccessible until a pre-set timer expires. The object evolves according to a generalized Markov transition kernel $P(X_{t+1}\mid X_t)$.
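The model above can be sketched in a few lines of code. The transition matrix, the sensor response map $h_i$, and the noise variances $R_i$ below are illustrative assumptions (a hypothetical 4-cell topology with two sensors), not values from the paper; the point is only to show how the hidden state, the sleep/active actions, and the Gaussian observations fit together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-state topology: the object stays in its cell or moves to
# a neighbor, per a Markov transition kernel P(X_{t+1} | X_t).
P = np.array([
    [0.6, 0.2, 0.0, 0.2],
    [0.2, 0.6, 0.2, 0.0],
    [0.0, 0.2, 0.6, 0.2],
    [0.2, 0.0, 0.2, 0.6],
])

# Active sensor i observes h_i(X_t) + noise; h[i, x] is the (assumed)
# mean response of sensor i to an object in cell x, R[i] its variance.
h = np.array([
    [1.0, 0.5, 0.1, 0.5],   # sensor 0
    [0.5, 1.0, 0.5, 0.1],   # sensor 1
])
R = np.array([0.05, 0.05])

def step(x, awake):
    """Advance the object one step; only awake sensors return readings."""
    x_next = rng.choice(len(P), p=P[x])
    y = {i: rng.normal(h[i, x_next], np.sqrt(R[i]))
         for i in range(len(h)) if awake[i]}
    return x_next, y

# One step with sensor 0 active and sensor 1 asleep: y contains a single
# Gaussian reading from sensor 0 and nothing from the sleeping sensor.
x, ys = step(0, awake=[True, False])
```

A sleeping sensor contributes no entry to `y`, which is exactly the constraint the paper exploits: the belief over $X_t$ must be propagated through $P$ alone during the sensor's sleep interval.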

The overall cost is a convex combination of (a) an energy term $E_t=\sum_i c_i\,\mathbf{1}\{U_{i,t}=1\}$ (a cost $c_i$ incurred for each active sensor) and (b) a tracking term $L_t=\mathbb{E}\!\left[d(X_t,\hat{X}_t)\right]$, the expected error between the object's true state and the tracker's estimate $\hat{X}_t$. Sleeping policies are designed to minimize the expected total cost over time, and for discrete state spaces with Gaussian observations the paper derives a lower bound on the optimal energy-tracking tradeoff.
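As a concrete, illustrative instance of this cost, the sketch below evaluates the per-step cost for a belief vector over discrete states: the energy term counts active sensors at a common cost $c$, and the tracking term uses 0/1 loss on the MAP estimate (the probability the estimate is wrong) as a simple stand-in for $\mathbb{E}[d(X_t,\hat{X}_t)]$. The numbers and the choice of 0/1 loss are assumptions for illustration, not the paper's metric.

```python
import numpy as np

def step_cost(belief, active, c=0.1):
    """Per-step cost = energy for active sensors + expected tracking error.

    The tracking term is the probability that the MAP estimate
    argmax(belief) is wrong (0/1 loss), a simple surrogate for the
    paper's general tracking cost E[d(X_t, X_hat_t)].
    """
    energy = c * sum(active)          # E_t = sum_i c_i * 1{U_{i,t} = 1}
    tracking = 1.0 - belief.max()     # L_t under 0/1 loss
    return energy + tracking

# Concentrated belief, one active sensor: cost = 0.1*1 + (1 - 0.7) ≈ 0.4
belief = np.array([0.7, 0.2, 0.05, 0.05])
cost = step_cost(belief, active=[1, 0])
```

Putting all sensors to sleep drives the energy term to zero but lets the belief diffuse under $P$, raising the tracking term; the sleeping policy chooses timers to balance the two.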

