Periodic Chandrasekhar recursions


This paper extends the Chandrasekhar-type recursions of Morf, Sidhu, and Kailath ("Some new algorithms for recursive estimation in constant, linear, discrete-time systems," IEEE Trans. Autom. Control 19 (1974) 315–323) to periodic time-varying state-space models. We show that the $S$-lagged increments of the one-step prediction error covariance satisfy certain recursions, from which we derive algorithms for linear least-squares estimation in periodic state-space models. The proposed recursions offer potential computational advantages over the Kalman filter and, in particular, over the periodic Riccati difference equation.


💡 Research Summary

The paper addresses a fundamental computational bottleneck in the recursive estimation of periodic time‑varying state‑space models. While the Kalman filter (KF) and its periodic counterpart (PKF) provide optimal linear minimum‑mean‑square‑error (LMMSE) estimates, they require solving a periodic Riccati difference equation (PRDE) at every time step. For an $n$-dimensional state vector this entails $O(n^3)$ operations and substantial memory, which quickly becomes prohibitive for large‑scale or real‑time applications such as aerospace guidance, power‑grid monitoring, or periodic signal processing.
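To make the cost concrete, a single PRDE covariance update can be sketched as below. The matrices `A`, `C`, `Q`, `R` and the toy period-2 system are illustrative assumptions, not values from the paper; each step involves $n \times n$ matrix products and a matrix inverse, which is where the $O(n^3)$ cost comes from.

```python
import numpy as np

def prde_step(P, A, C, Q, R):
    """One update of the periodic Riccati difference equation (PRDE):
    propagates the one-step prediction error covariance. The n-by-n
    matrix products and the inverse make each step O(n^3)."""
    Sinn = C @ P @ C.T + R                     # innovation covariance
    K = A @ P @ C.T @ np.linalg.inv(Sinn)      # Kalman gain
    return A @ P @ A.T - K @ Sinn @ K.T + Q

# Toy period-2 system (illustrative values, not from the paper).
rng = np.random.default_rng(0)
n, m, S = 4, 1, 2
A = [0.3 * rng.standard_normal((n, n)) for _ in range(S)]
C = [rng.standard_normal((m, n)) for _ in range(S)]
Q = [np.eye(n), 0.5 * np.eye(n)]
R = [np.eye(m) for _ in range(S)]

P = np.zeros((n, n))                           # P_0
for k in range(20):                            # coefficients repeat with period S
    P = prde_step(P, A[k % S], C[k % S], Q[k % S], R[k % S])
```

Because the full $n \times n$ covariance is recomputed at every step, nothing about the periodic structure is exploited here; this is the baseline the Chandrasekhar-type recursions aim to beat.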

The authors revisit the Chandrasekhar recursion originally introduced by Morf, Sidhu, and Kailath (1974) for time‑invariant systems. The key idea of the Chandrasekhar approach is to replace the direct propagation of the full error‑covariance matrix $P_k$ with a low‑rank representation based on the "lagged" increment $\Delta_k^{(S)} = P_{k+S} - P_k$. In the constant‑coefficient case this increment satisfies a linear recursion that involves only matrices of size $r \times r$, where $r$ is the rank of the process noise or the number of measurements—typically far smaller than $n$. Consequently, the computational load drops from cubic to quadratic (or even linear) in $n$.
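The rank property underlying this approach is easy to check numerically. The sketch below (toy matrices of my own choosing, not from the paper) runs the standard time-invariant Riccati recursion from $P_0 = 0$ with a rank-one process-noise covariance, and records the rank of each increment $P_{k+1} - P_k$: since $P_1 - P_0 = Q$ has rank one and increments propagate by congruence-like transformations, every later increment also has rank at most one.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 1
A = 0.3 * rng.standard_normal((n, n))          # constant system matrices
C = rng.standard_normal((m, n))                # (toy values, not from the paper)
g = rng.standard_normal((n, 1))
Q = g @ g.T                                    # rank-one process noise
R = np.eye(m)

def riccati_step(P):
    """Standard time-invariant Riccati update for the prediction covariance."""
    Sinn = C @ P @ C.T + R
    K = A @ P @ C.T @ np.linalg.inv(Sinn)
    return A @ P @ A.T - K @ Sinn @ K.T + Q

# With P_0 = 0 we get P_1 = Q, so the first increment has rank one; the
# Chandrasekhar recursions propagate exactly this low-rank object instead of P.
Ps = [np.zeros((n, n))]
for _ in range(11):
    Ps.append(riccati_step(Ps[-1]))
ranks = [np.linalg.matrix_rank(Ps[k + 1] - Ps[k], tol=1e-10) for k in range(11)]
print(ranks)
```

A fast algorithm then carries a factored form of this rank-one increment rather than the full $6 \times 6$ covariance.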

The central contribution of the present work is to extend this Chandrasekhar framework to periodic systems. The authors consider a discrete‑time model

$$x_{k+1} = A_k x_k + w_k, \qquad y_k = C_k x_k + v_k,$$

in which the system matrices and noise covariances are periodic with period $S$, e.g. $A_{k+S} = A_k$ and $C_{k+S} = C_k$.
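The periodic analogue of the rank argument can be illustrated numerically. For a toy period-2 system (all matrices below are assumptions for illustration, not from the paper), the $S$-lagged increments $\Delta_k^{(S)} = P_{k+S} - P_k$ produced by the PRDE keep a rank far below the state dimension $n$: since $P_{k+S}$ solves the same (period-$S$) Riccati equation as $P_k$, their difference propagates by a congruence-like recursion and its rank cannot grow. This low rank is what the proposed fast recursions exploit.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, S = 6, 1, 2                              # state dim, output dim, period

# Toy period-2 coefficients (illustrative assumptions, not from the paper).
A = [0.3 * rng.standard_normal((n, n)) for _ in range(S)]
C = [rng.standard_normal((m, n)) for _ in range(S)]
G = [rng.standard_normal((n, 1)) for _ in range(S)]
Q = [g @ g.T for g in G]                       # rank-one process noise per phase
R = [np.eye(m) for _ in range(S)]

def prde_step(P, k):
    """One PRDE update using the phase-(k mod S) coefficients."""
    Ak, Ck, Qk, Rk = A[k % S], C[k % S], Q[k % S], R[k % S]
    Sinn = Ck @ P @ Ck.T + Rk
    K = Ak @ P @ Ck.T @ np.linalg.inv(Sinn)
    return Ak @ P @ Ak.T - K @ Sinn @ K.T + Qk

# Run the PRDE from P_0 = 0 and record the rank of the S-lagged increments.
Ps = [np.zeros((n, n))]
for k in range(12 + S):
    Ps.append(prde_step(Ps[-1], k))
ranks = [np.linalg.matrix_rank(Ps[k + S] - Ps[k], tol=1e-10) for k in range(12)]
print(ranks)
```

In this example the first lagged increment $P_S - P_0 = P_2$ is a sum of three rank-one terms, so every $\Delta_k^{(S)}$ has rank at most 3, well below $n = 6$; a Chandrasekhar-type algorithm would propagate only a factored form of that small object.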

