Mean-Field Learning for Storage Aggregation
Distributed energy storage devices can be pooled and coordinated by aggregators to participate in power system operations and market clearings. This requires representing a massive device population as a single, tractable surrogate that is computationally efficient, accurate, and compatible with market participation requirements. However, surrogate identification is challenging due to the heterogeneity, nonconvexity, and high dimensionality of storage devices. To address these challenges, this paper develops a mean-field learning framework for storage aggregation. We interpret aggregation as the average behavior of a large storage population and show that, as the population grows, aggregate performance converges to a unique, convex mean-field limit, enabling tractable population-level modeling. This convexity further yields a price-responsive characterization of aggregate storage behavior and allows us to bound the mean-field approximation error. Leveraging these results, we construct a convex surrogate model that approximates the aggregate behavior of large storage populations and can be embedded directly into power system operations and market clearing. Surrogate parameter identification is formulated as an optimization problem using historical market price-response data, and we adopt a gradient-based algorithm for efficient learning. Case studies validate the theoretical findings and demonstrate the effectiveness of the proposed framework in terms of approximation accuracy, data efficiency, and profit outcomes.
💡 Research Summary
The paper tackles the pressing challenge of representing a massive fleet of heterogeneous, non‑convex, high‑dimensional distributed energy storage devices with a single tractable surrogate suitable for power system operation and market participation. Traditional aggregation based on exact Minkowski sums is computationally intractable (NP‑hard) when devices differ in capacity, efficiency, power limits, and binary charge/discharge decisions. To overcome this, the authors develop a mean‑field learning framework that treats each storage unit as a random set whose parameters are i.i.d. draws from a common distribution. By invoking the Aumann expectation of random sets and the Shapley‑Folkman lemma, they prove a strong law of large numbers for random sets: as the number of devices I → ∞, the average Minkowski sum converges almost surely (in Hausdorff distance) to the convex hull of the expected set. Consequently, both the aggregate flexibility set and the aggregate cost function converge to unique convex limits (denoted PL and CL(p)), regardless of the underlying non‑convexities of individual devices.
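The convexification effect of Minkowski averaging can be illustrated with a minimal sketch (a toy example, not the paper's construction): take the non-convex set {0, 1} as each device's feasible set. The averaged Minkowski sum of I copies is the grid {0, 1/I, …, 1}, whose Hausdorff distance to its convex hull [0, 1] is 1/(2I), shrinking to zero exactly as the Shapley-Folkman-type result predicts.

```python
import numpy as np

def avg_minkowski_set(I):
    """Averaged Minkowski sum of I copies of the non-convex set {0, 1}.
    (1/I)(S + ... + S) = {0, 1/I, 2/I, ..., 1}; its convex hull is [0, 1]."""
    return np.arange(I + 1) / I

def hausdorff_to_hull(I):
    """Hausdorff distance between the averaged sum and its convex hull [0, 1],
    estimated on a fine grid. The worst hull point sits midway between two
    grid points of the averaged set, giving distance 1/(2I)."""
    pts = avg_minkowski_set(I)
    grid = np.linspace(0.0, 1.0, 10001)
    return np.max(np.min(np.abs(grid[:, None] - pts[None, :]), axis=1))

for I in (2, 10, 100):
    print(I, hausdorff_to_hull(I))  # shrinks like 1/(2I): ~0.25, ~0.05, ~0.005
```

The same mechanism drives the paper's result: individual non-convexities (here, the gap between 0 and 1) vanish under population averaging, leaving a convex limit set.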
The convexity of these limits yields two crucial operational benefits. First, the aggregate set becomes a price-responsive region: the linear term λᵀ(p_C − p_D) from market prices (charging minus discharging power) appears directly in the limit cost, allowing a clear interpretation of how the fleet reacts to price signals. Second, the convex surrogate can be embedded into existing market clearing and dispatch models without introducing non-convex constraints, preserving polynomial-time solvability.
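A minimal sketch of the price-response this enables, assuming the ball-shaped flexibility region and quadratic cost described below (the function and parameter names are illustrative, not the paper's code): the aggregator minimizes α‖p‖² + βᵀp − λᵀp over the ball ‖p − c‖ ≤ R. Because this quadratic is isotropic, its level sets are spheres, so the constrained optimum is simply the Euclidean projection of the unconstrained minimizer onto the ball — a closed-form price response.

```python
import numpy as np

def surrogate_response(lam, c, R, alpha, beta):
    """Aggregate price response under a ball-shaped convex surrogate:
        minimize  alpha*||p||^2 + beta@p - lam@p   s.t.  ||p - c|| <= R.
    The quadratic is isotropic, so the minimizer is the Euclidean projection
    of the unconstrained optimum onto the feasible ball."""
    p_star = (lam - beta) / (2.0 * alpha)   # unconstrained minimizer
    d = p_star - c
    dist = np.linalg.norm(d)
    if dist <= R:
        return p_star
    return c + R * d / dist                 # project onto the ball boundary

# Illustrative two-period example (all numbers are assumptions)
lam = np.array([30.0, 45.0])                # market prices
c, R = np.zeros(2), 1.0                     # flexibility ball
p = surrogate_response(lam, c, R, alpha=5.0, beta=np.array([10.0, 10.0]))
```

Here higher prices pull the dispatch toward the boundary of the flexibility region, making the fleet's price responsiveness explicit and cheap to evaluate inside a market-clearing loop.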
The authors then propose a practical surrogate model that approximates PL and CL(p). The flexibility region is parameterized by a convex shape (e.g., a centrally-located ball or polytope defined by a center c and radius R), while the cost is modeled as a quadratic function α‖p‖² + βᵀp + γ. Parameters θ = {c, R, α, β, γ} are identified from historical market price-response data {(λ_k, p_k)} using a loss that combines the Hausdorff distance between observed power trajectories and the surrogate set with a squared error between observed and surrogate costs. A gradient-based optimizer (e.g., Adam) updates θ; convexity is guaranteed by construction, so no additional constraint handling is needed during training.
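The cost-fitting part of this identification step can be sketched as follows. This toy example covers only the squared-error term on costs (not the Hausdorff set-matching term) and uses plain gradient descent rather than Adam; the synthetic "true" parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic price-response records: observed dispatch p_k and observed cost C_k
# generated from an assumed ground-truth quadratic cost (illustrative only).
alpha_true, beta_true, gamma_true = 2.0, np.array([1.0, -0.5]), 3.0
P = rng.normal(size=(200, 2))
C = alpha_true * np.sum(P**2, axis=1) + P @ beta_true + gamma_true

# Identify theta = (alpha, beta, gamma) by gradient descent on squared error:
#   loss(theta) = mean_k ( alpha*||p_k||^2 + beta@p_k + gamma - C_k )^2
alpha, beta, gamma = 1.0, np.zeros(2), 0.0
lr = 0.05
for _ in range(2000):
    s = np.sum(P**2, axis=1)                   # ||p_k||^2 feature
    r = alpha * s + P @ beta + gamma - C       # residuals
    alpha -= lr * 2.0 * np.mean(r * s)
    beta  -= lr * 2.0 * (P.T @ r) / len(P)
    gamma -= lr * 2.0 * np.mean(r)
    alpha = max(alpha, 1e-6)                   # keep the surrogate cost convex

print(alpha, beta, gamma)  # recovers approximately (2.0, [1.0, -0.5], 3.0)
```

Because the quadratic cost is linear in θ, this subproblem is a convex least-squares fit; the full method adds the Hausdorff loss on the flexibility set, which is why the authors use a general gradient-based optimizer rather than a closed-form solve.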
Extensive simulations involve fleets of devices rated from 10 kW to 1 MW each, over 24-hour horizons, with thousands of devices exhibiting diverse efficiencies, capacities, and degradation cost matrices. The mean-field surrogate is benchmarked against box, ellipse, zonotope, and deep-learning-based nonlinear aggregators. Results show that the proposed method achieves less than 5 % cost error and under 0.1 MWh power deviation, while requiring only 100–200 historical samples—an order of magnitude fewer data points than deep-learning approaches. When integrated into market clearing, the learned surrogate enables the aggregator to submit bids that improve profit by 8 %–12 % compared with naïve aggregation. Notably, even with binary charge/discharge constraints (non-convex device behavior), the mean-field model accurately captures the aggregate feasible region, delivering stable and interpretable price-responsive behavior.
In summary, the paper makes a novel contribution by bridging random‑set theory with power system aggregation, providing rigorous convergence guarantees, explicit error bounds, and a data‑efficient learning scheme. It opens avenues for future work on continuous‑time dynamic mean‑field models, multi‑energy coupling, and online adaptive learning for real‑time aggregator operation.