Stochastic Sensor Scheduling for Networked Control Systems
Optimal sensor scheduling with applications to networked estimation and control systems is considered. We model sensor measurement and transmission instances using jumps between states of a continuous-time Markov chain. We introduce a cost function for this Markov chain as the summation of terms depending on the average sampling frequencies of the subsystems and the effort needed for changing the parameters of the underlying Markov chain. By minimizing this cost function through extending Brockett’s recent approach to optimal control of Markov chains, we extract an optimal scheduling policy to fairly allocate the network resources among the control loops. We study the statistical properties of this scheduling policy in order to compute upper bounds for the closed-loop performance of the networked system, where several decoupled scalar subsystems are connected to their corresponding estimator or controller through a shared communication medium. We generalize the estimation results to observable subsystems of arbitrary order. Finally, we illustrate the developed results numerically on a networked system composed of several decoupled water tanks.
💡 Research Summary
The paper tackles the fundamental problem of allocating limited communication resources among multiple sensor‑actuator loops in a networked control system. Each sensor’s measurement and transmission events are modeled as transitions between “ON” (active) and “OFF” (idle) states of a continuous‑time Markov chain (CTMC). The transition rates λi(t) are treated as controllable inputs; higher rates correspond to more frequent sampling. A cost functional is introduced that combines two competing objectives: (1) a performance term penalizing the deviation between the average sampling frequency of each subsystem and a prescribed target frequency fi∗, weighted by importance factors wi; and (2) an effort term that penalizes rapid changes in the transition rates, reflecting the physical or communication cost of reconfiguring the scheduling policy.
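To make the ON/OFF model concrete, here is a minimal simulation sketch of a single sensor’s two-state CTMC and the performance term of the cost. The rate values, the target frequency `f_target`, and the weight `w_i` are illustrative assumptions, not values from the paper; a transmission is counted on each OFF→ON jump.

```python
import random

def simulate_onoff_ctmc(lam, mu, horizon, seed=0):
    """Simulate a two-state (OFF=0, ON=1) continuous-time Markov chain.

    lam: OFF -> ON transition rate (a transmission fires on this jump)
    mu:  ON -> OFF transition rate (the sensor returns to idle)
    Returns the empirical sampling frequency: transmissions per unit time.
    """
    rng = random.Random(seed)
    t, state, transmissions = 0.0, 0, 0
    while t < horizon:
        rate = lam if state == 0 else mu
        t += rng.expovariate(rate)   # exponential holding time in the state
        if t >= horizon:
            break
        if state == 0:               # OFF -> ON jump: one measurement is sent
            transmissions += 1
        state = 1 - state
    return transmissions / horizon

# Stationary transmission frequency of the two-state chain: lam*mu/(lam+mu)
lam, mu = 2.0, 8.0
f_emp = simulate_onoff_ctmc(lam, mu, horizon=10_000.0)
f_theory = lam * mu / (lam + mu)

# Performance term of the cost for one subsystem: w_i * (f_i - f_i*)^2
w_i, f_target = 1.0, 1.5          # hypothetical weight and target frequency
cost_perf = w_i * (f_emp - f_target) ** 2
```

Raising `lam` (or `mu`) pushes the stationary frequency λμ/(λ+μ) up, which is the sense in which the transition rates act as the scheduling inputs.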
By extending Brockett’s recent optimal‑control framework for Markov chains, the authors derive a linear‑quadratic (LQ) optimal control law for the CTMC rates. The optimal rates take a feedback form in which a nominal rate λi0 is corrected by a gain Ki acting on the expected deviation of the subsystem’s sampling behavior from its target, so that subsystems falling behind their prescribed frequency are sampled more aggressively and the shared network is allocated fairly across the loops.
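The paper derives the exact LQ law for the CTMC; as a rough illustration only, a discretized, clipped version of a nominal-rate-plus-gain feedback might look as follows, where `f_hat` is a running estimate of the subsystem’s sampling frequency (the function name, arguments, and saturation bounds are all hypothetical):

```python
def update_rate(lam0, K, f_hat, f_target, lam_min=1e-3, lam_max=50.0):
    """Hypothetical discrete-time sketch of a feedback rate law:
    a nominal rate lam0 corrected by a gain K times the frequency error.
    Illustrative only; the paper derives the exact LQ law for the CTMC."""
    lam = lam0 - K * (f_hat - f_target)
    # Transition rates of a CTMC must remain positive, so saturate.
    return min(max(lam, lam_min), lam_max)
```

For example, with `lam0=2.0`, `K=0.5`, and an estimated frequency of 1.8 against a target of 1.5, the rate is reduced toward 1.85, throttling a subsystem that is sampling faster than needed.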