This work introduces a novel two-stage distributed framework for globally estimating constant parameters in a networked system, separating information sharing from local estimation. The first stage uses dynamic average consensus to aggregate the agents' measurements into surrogates of centralized data. Using these surrogates, the second stage implements a local estimator to determine the parameters. By designing an appropriate consensus gain, persistence of excitation of the regressor matrix is achieved, and thus exponential convergence of a local Gradient Estimator (GE) is guaranteed. The framework readily extends to switched network topologies, quantized communication, and the heterogeneous replacement of the GE with a Dynamic Regressor Extension and Mixing (DREM) estimator, which operates under relaxed excitation requirements.
The focus of this work is on estimating unknown constant parameters using linear regression data. This formulation naturally emerges in numerous applications, including adaptive control, cooperative robotics, power grids, and sensor networks, where reliable parameter identification is crucial for the stability and performance of control strategies (Olfati-Saber et al., 2007; Brouillon et al., 2024). In particular, linear regression models appear in robot manipulator control, relating torques and joint coordinates to the robot's parameters (Gaz et al., 2019).
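To make the formulation concrete, each measurement takes the linear-in-the-parameters form y(t) = φ(t)ᵀθ, where θ is the unknown constant parameter vector and φ(t) is a known regressor. The following minimal sketch (the specific parameter values and regressor are illustrative assumptions, not from this work) generates such data and confirms that, with a sufficiently rich regressor, the stacked regression identifies θ:

```python
import numpy as np

theta = np.array([2.0, -1.0, 0.5])      # unknown constant parameters (illustrative)

def regressor(t):
    # hypothetical regressor, rich enough to excite all three parameters
    return np.array([np.sin(t), np.cos(2.0 * t), 1.0])

# noiseless linear regression data: y(t) = phi(t)^T theta
ts = np.linspace(0.0, 10.0, 101)
ys = np.array([regressor(t) @ theta for t in ts])

# stacking the samples and solving least squares recovers theta exactly,
# since the stacked regressor matrix has full column rank
Phi = np.vstack([regressor(t) for t in ts])
theta_hat, *_ = np.linalg.lstsq(Phi, ys, rcond=None)
```

In the noiseless batch setting this is just least squares; the estimators discussed below address the recursive and distributed versions of the same problem.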
Two architectural categories can be distinguished: the centralized one, where agents transmit their information to a central unit that estimates the parameters, and the distributed one, where agents estimate the global parameters locally through collective interaction. In the latter setting, each agent has access only to partial information.
In the centralized case, different algorithms have been proposed as estimators. For example, algorithms based on optimal filtering methods, such as the Kalman filter in discrete time (Kalman, 1960; Maybeck, 1982) and the Kalman-Bucy filter in continuous time (Golovan and Matasov, 2002), have been used in the form of recursive least-squares iterations. They model the unknown parameters as constant states and thus provide unbiased estimates with minimum variance under additive Gaussian measurement noise. However, their convergence properties remain limited, since the error covariance evolves according to a Riccati equation that guarantees asymptotic, but not exponential, convergence (Zhang and Tian, 2016). To overcome this limitation, algorithms relying on gradient-based methods were introduced. For example, the Gradient Estimator (GE) (Ioannou and Sun, 1996; Sastry and Bodson, 2011) is widely used in both continuous- and discrete-time designs, and is usually formulated for noiseless measurements. For noisy measurements, the resulting estimation error can be explicitly analyzed, and a closed-form expression for the covariance can be derived (Moser et al., 2015). Exponential convergence to the true parameters is guaranteed only when the persistence of excitation (PE) condition holds (Anderson, 2003). However, PE is a sufficient but not necessary condition for correct parameter estimation. To relax this requirement, the Dynamic Regressor Extension and Mixing (DREM) algorithm has been proposed (Aranovskiy et al., 2016; Ortega et al., 2021). DREM reformulates the original regression into decoupled scalar equations, enabling parameter convergence under excitation conditions weaker than PE (Ortega et al., 2020).
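As an illustration of the GE, the standard continuous-time gradient law θ̂̇ = γφ(t)(y(t) − φ(t)ᵀθ̂) can be sketched with a forward-Euler discretization. All gains, the regressor, and the parameter values below are illustrative assumptions; with the persistently exciting regressor chosen here, the estimate converges exponentially to the true parameters:

```python
import numpy as np

theta = np.array([1.5, -0.7])      # true parameters (illustrative)
theta_hat = np.zeros(2)            # initial estimate
gamma, dt = 5.0, 1e-3              # adaptation gain and Euler step size (assumed)

def phi(t):
    # persistently exciting regressor: two incommensurate sinusoids
    return np.array([np.sin(t), np.cos(0.7 * t)])

for k in range(200_000):           # simulate 200 s
    t = k * dt
    p = phi(t)
    y = p @ theta                  # noiseless measurement y = phi^T theta
    # Euler step of the gradient law  theta_hat' = gamma * phi * (y - phi^T theta_hat)
    theta_hat += dt * gamma * p * (y - p @ theta_hat)
```

A constant regressor would not be PE for both components, and the estimate would then converge only in the excited direction; the sinusoidal choice above is what guarantees the exponential rate.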
Unfortunately, centralized architectures have significant limitations, such as high bandwidth requirements, the computational power required by the central unit, and limited resilience to communication failures, among others (Kia et al., 2019). These limitations are exacerbated as the number of agents increases. To face these challenges, distributed estimators have been introduced. Examples include distributed extensions of Kalman-filter-based parameter estimators (Lendek et al., 2007; Ryu and Back, 2023), which only guarantee asymptotic convergence under the same conditions as their centralized counterparts. A widely studied alternative is the so-called "consensus+innovations" framework (Kar and Moura, 2013; Lorenz-Meyer et al., 2025; Papusha et al., 2014). In this approach, each agent runs a tightly coupled gradient-based estimator with consensus enforcement (see Figure 1, top). Convergence relies on the notion of cooperative persistence of excitation (cPE) (Chen et al., 2014; Yan and Ishii, 2025; Zheng and Wang, 2016; Matveev et al., 2021), which ensures that the excitation condition is satisfied collectively at the network level. Variants based on least-mean-squares consensus adaptive filters have also been considered for different network topologies (Xie and Guo, 2018). A drawback of this framework is its tightly coupled structure, where consensus and parameter estimation are closely intertwined. In such approaches, convergence analyses depend on joint Lyapunov arguments that simultaneously treat both disagreement and estimation errors. Modifying or replacing the underlying estimator requires substantial changes to the entire algorithm and a new convergence analysis. Likewise, examining effects such as quantization in communication is not straightforward.
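The coupled structure of consensus+innovations can be sketched as follows. Each agent i updates its estimate with a local gradient innovation plus a consensus term pulling it toward its neighbors, θ̂̇ᵢ = γφᵢ(yᵢ − φᵢᵀθ̂ᵢ) − β Σⱼ aᵢⱼ(θ̂ᵢ − θ̂ⱼ). In this hypothetical example (all regressors, gains, and the graph are illustrative assumptions), no single agent is persistently excited — agent 3 has no measurement at all — yet the network satisfies cPE and every agent converges to the global parameters:

```python
import numpy as np

theta = np.array([2.0, -1.0])           # global unknown parameters (illustrative)
# each agent observes only one direction: locally not PE, collectively cPE
phis = [np.array([1.0, 0.0]),
        np.array([0.0, 1.0]),
        np.array([0.0, 0.0])]           # agent 3 relies purely on consensus
A = np.ones((3, 3)) - np.eye(3)         # complete communication graph (assumed)
est = [np.zeros(2) for _ in range(3)]   # initial local estimates
gamma, beta, dt = 1.0, 1.0, 1e-2        # gains and Euler step size (assumed)

for _ in range(20_000):                 # simulate 200 s
    new = []
    for i in range(3):
        p = phis[i]
        innovation = gamma * p * (p @ theta - p @ est[i])
        consensus = -beta * sum(A[i, j] * (est[i] - est[j]) for j in range(3))
        new.append(est[i] + dt * (innovation + consensus))
    est = new                           # synchronous update of all agents
```

Note how innovation and consensus appear in the same update: this is precisely the tight coupling criticized above, since changing the estimator or the communication model forces a new joint convergence analysis.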
Lately, hierarchical architectures in multi-agent settings have been investigated for applications such as fault estimation (Liu et al., 2018), controller design (Cheng et al., 2023), and consensus algorithms (Chen et al., 2020), as they provide enhanced flexibility and heterogeneous local strategies. Such architectures also facilitate the integration of consensus and estimation layers, since they can be analyzed and implemented separately (Chen et al., 2020). In this context, hierarchical distributed parameter estimation schemes based on the DREM framework have been