Distributed Recursive Least-Squares: Stability and Performance Analysis
The recursive least-squares (RLS) algorithm has well-documented merits for reducing complexity and storage requirements when it comes to online estimation of stationary signals, as well as for tracking slowly-varying nonstationary processes. In this paper, a distributed recursive least-squares (D-RLS) algorithm is developed for cooperative estimation using ad hoc wireless sensor networks. Distributed iterations are obtained by minimizing a separable reformulation of the exponentially-weighted least-squares cost, using the alternating-minimization algorithm. Sensors carry out reduced-complexity tasks locally, and exchange messages with one-hop neighbors to consent on the network-wide estimates adaptively. A steady-state mean-square error (MSE) performance analysis of D-RLS is conducted, by studying a stochastically-driven "averaged" system that approximates the D-RLS dynamics asymptotically in time. For sensor observations that are linearly related to the time-invariant parameter vector sought, the simplifying independence setting assumptions facilitate deriving accurate closed-form expressions for the MSE steady-state values. The problems of mean- and MSE-sense stability of D-RLS are also investigated, and easily-checkable sufficient conditions are derived under which a steady state is attained. Without resorting to diminishing step-sizes, which compromise the tracking ability of D-RLS, stability ensures that per-sensor estimates hover inside a ball of finite radius centered at the true parameter vector, with high probability, even when inter-sensor communication links are noisy. Interestingly, computer simulations demonstrate that the theoretical findings are accurate also in the pragmatic settings whereby sensors acquire temporally-correlated data.
💡 Research Summary
This paper introduces a Distributed Recursive Least‑Squares (D‑RLS) algorithm tailored for cooperative estimation in ad‑hoc wireless sensor networks (WSNs). Starting from the exponentially‑weighted least‑squares (EWLS) cost, the authors reformulate the problem as a separable constrained optimization by assigning each node a local copy of the global parameter vector and enforcing consensus among neighboring nodes. The key methodological contribution is the use of the Alternating‑Minimization Algorithm (AMA) rather than the more common Alternating‑Direction Method of Multipliers (AD‑MoM). AMA updates the ordinary Lagrangian multipliers and the local estimates while avoiding the augmented‑Lagrangian term, which dramatically reduces per‑iteration computational load: matrix inversions are required only once per node (as in standard RLS) instead of twice per iteration as in AD‑MoM‑based distributed RLS. Moreover, the algorithm eliminates the need for “bridge sensors,” allowing every node to perform identical operations regardless of network topology.
The algorithm proceeds in three steps at each time instant: (1) multiplier updates using the difference between a node’s current estimate and its neighbors’ auxiliary variables, scaled by a penalty constant c; (2) local RLS‑style update of the node’s estimate by minimizing the ordinary Lagrangian with respect to its own variable; (3) auxiliary variable updates that enforce consensus, which lead to the simple relation v = −u for the multipliers. Communication noise on each link is explicitly modeled as zero‑mean, temporally and spatially white, with known covariance matrices R_η. Sensor observations are assumed linear in the unknown parameter vector s₀, with additive measurement noise.
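The three-step iteration above can be sketched in code. The snippet below is an illustrative simplification, not the paper's exact recursions: it assumes a hypothetical 4-node ring topology, noiseless inter-sensor links, and arbitrary tuning constants, and it folds each node's multipliers into a single aggregated vector.

```python
import numpy as np

# Illustrative D-RLS-style sketch: multiplier update, local RLS update,
# and consensus via neighbors' auxiliary variables. All constants and the
# 4-node ring topology are assumptions for this demo, not from the paper.
rng = np.random.default_rng(0)

p = 2
s0 = np.array([1.0, -0.5])                 # true parameter (synthetic)
J = 4
nbrs = {j: [(j - 1) % J, (j + 1) % J] for j in range(J)}

lam, c = 0.97, 0.1                         # forgetting factor, penalty constant
Phi = [0.1 * np.eye(p) for _ in range(J)]  # per-node EWLS matrix (Phi_0 init)
psi = [np.zeros(p) for _ in range(J)]      # per-node cross-correlation vector
x = [np.zeros(p) for _ in range(J)]        # local estimates
u = [np.zeros(p) for _ in range(J)]        # aggregated Lagrange multipliers

for t in range(500):
    z = [xj.copy() for xj in x]            # neighbors' auxiliary variables
    for j in range(J):
        # Step 1: multiplier update from local disagreement, scaled by c.
        u[j] = u[j] + c * sum(x[j] - z[k] for k in nbrs[j])
        # Step 2: RLS-style local update minimizing the ordinary Lagrangian.
        h = rng.standard_normal(p)         # regression vector
        y = h @ s0 + 0.1 * rng.standard_normal()
        Phi[j] = lam * Phi[j] + np.outer(h, h)
        psi[j] = lam * psi[j] + y * h
        x[j] = np.linalg.solve(Phi[j], psi[j] - u[j])
    # Step 3: auxiliary variables are refreshed from the new estimates at
    # the top of the next pass, which drives the network toward consensus.

worst_err = max(float(np.linalg.norm(xj - s0)) for xj in x)
```

In this toy run all local estimates settle near `s0`; adding zero-mean link noise to the exchanged `z[k]` vectors would emulate the noisy-communication setting analyzed in the paper.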
To analyze performance, the authors construct an “averaged” stochastic system that approximates the exact D‑RLS dynamics as time grows. Under the standard independence assumptions (measurement noise, regression vectors, and communication noise are mutually independent across time and nodes), they derive closed‑form expressions for the steady‑state mean‑square error (MSE) at both the network level and individual nodes. These expressions involve the forgetting factor λ, the regularization matrix Φ₀, the graph Laplacian L, the penalty constant c, and the communication‑noise covariances. The analysis shows that the state covariance matrix fully captures the evolution of the error dynamics, enabling precise prediction of performance.
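The claim that the state covariance fully captures the error dynamics can be illustrated with a generic discrete Lyapunov fixed point. The matrices `F` and `Q` below are placeholders: in the paper they would be built from λ, c, the Laplacian L, and the noise covariances, whereas here they are arbitrary stable/PSD choices for the sketch.

```python
import numpy as np

# The steady-state error covariance of a stable linear averaged system
# solves the discrete Lyapunov equation Sigma = F Sigma F^T + Q.
# F and Q are illustrative placeholders, not the paper's matrices.
rng = np.random.default_rng(2)
n = 4
F = 0.9 * np.eye(n)              # stable transition matrix (spectral radius 0.9)
G = rng.standard_normal((n, n))
Q = G @ G.T                      # positive semidefinite driving-noise covariance

Sigma = np.zeros((n, n))
for _ in range(500):             # iterate the covariance recursion to its fixed point
    Sigma = F @ Sigma @ F.T + Q
mse = float(np.trace(Sigma))     # steady-state MSE = trace of the covariance
```

For this diagonal `F`, the fixed point is available in closed form, `Sigma = Q / (1 - 0.81)`, which the iteration reproduces; the paper's closed-form MSE expressions play the analogous role for the full D-RLS averaged system.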
Stability is examined in two senses. Mean-stability is guaranteed whenever the spectral radius of I − cL satisfies λ·ρ(I − cL) < 1, which can be met by appropriate choices of c and λ. MSE-stability is proved by constructing a Lyapunov function for the averaged system; the resulting sufficient condition ensures that the error covariance converges to a finite fixed point, implying that each node's estimate remains within a bounded ball around the true parameter with high probability, even when communication links are noisy. Notably, these guarantees hold without resorting to diminishing step-sizes, preserving the algorithm's ability to track slowly time-varying parameters.
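The mean-stability condition is easy to verify numerically for a given topology. The sketch below checks λ·ρ(I − cL) < 1 on a hypothetical 4-node ring graph for a few candidate penalty constants; the topology and the tested values of c are assumptions for illustration.

```python
import numpy as np

# Check the sufficient mean-stability condition lam * rho(I - c*L) < 1
# on an assumed 4-node ring network.
J = 4
A = np.zeros((J, J))
for j in range(J):
    A[j, (j - 1) % J] = A[j, (j + 1) % J] = 1.0
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian of the ring

lam = 0.97                                # forgetting factor (assumed)

def mean_stable(c: float) -> bool:
    """True if lam * spectral_radius(I - c*L) < 1 for penalty constant c."""
    rho = max(abs(np.linalg.eigvals(np.eye(J) - c * L)))
    return lam * rho < 1.0

stable = [c for c in (0.05, 0.2, 0.5, 1.0) if mean_stable(c)]
```

For this ring the Laplacian eigenvalues are {0, 2, 2, 4}, so small and moderate c satisfy the condition while c = 1.0 does not (ρ(I − L) = 3), matching the intuition that too aggressive a penalty destabilizes the consensus dynamics.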
Extensive simulations validate the theory. Experiments with both white and temporally correlated data, and with varying signal‑to‑noise ratios on the communication links, demonstrate that the analytically predicted steady‑state MSE matches the empirical results. When λ < 1, the algorithm successfully tracks non‑stationary parameter changes while maintaining stability, confirming that the forgetting factor provides the desired tracking capability without sacrificing convergence.
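The tracking role of the forgetting factor is a standard RLS property and can be demonstrated in a minimal single-node experiment: with λ < 1, old data are discounted geometrically, so the estimate follows a slowly drifting parameter. All signal parameters below are synthetic assumptions.

```python
import numpy as np

# Single-node exponentially weighted RLS tracking a random-walk parameter,
# illustrating why lam < 1 provides tracking capability. Synthetic setup.
rng = np.random.default_rng(1)
p, T, lam = 2, 2000, 0.95
Phi, psi = 0.1 * np.eye(p), np.zeros(p)
s = np.array([1.0, -0.5])                # true parameter, drifts over time
errs = []
for t in range(T):
    s = s + 1e-3 * rng.standard_normal(p)    # slow random-walk drift
    h = rng.standard_normal(p)               # regression vector
    y = h @ s + 0.1 * rng.standard_normal()  # noisy linear observation
    Phi = lam * Phi + np.outer(h, h)         # geometrically forget old data
    psi = lam * psi + y * h
    x = np.linalg.solve(Phi, psi)
    errs.append(float(np.linalg.norm(x - s)))
steady_err = float(np.mean(errs[T // 2:]))
```

The estimation error stays small and bounded despite the drift; with λ = 1 the effective data window would grow without bound and the estimate would lag ever further behind the moving parameter.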
In summary, the paper delivers (i) a low‑complexity, fully distributed RLS scheme based on AMA, (ii) rigorous mean‑ and MSE‑stability conditions that are easy to verify, (iii) explicit steady‑state MSE formulas that incorporate all major system parameters, and (iv) empirical evidence of robustness to realistic noise and correlation effects. The work broadens the applicability of LS‑based distributed estimation to a wide range of IoT, smart‑grid, and environmental‑monitoring scenarios where centralized processing is infeasible and communication imperfections are unavoidable. Future directions suggested include extensions to nonlinear models, asynchronous updates, and mobile sensor networks.