A Dantzig-Wolfe Decomposition Method for Quasi-Variational Inequalities

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

We propose an algorithm for solving quasi-variational inequality (QVI) problems, based on the Dantzig-Wolfe decomposition paradigm. In our approach the subproblems are variational inequalities, a simpler problem class, while the quasi-variational structure is confined to restricted master problems that are generally (much) smaller when the original problem is large-scale. We prove global convergence of the algorithm under the assumption that the mapping of the quasi-variational inequality is either single-valued and continuous or set-valued and maximally monotone. Quasi-variational inequalities provide a framework for several equilibrium problems, and we apply our algorithm to an important example from economics, namely the Walrasian equilibrium problem formulated as a generalized Nash equilibrium problem. Our numerical assessment demonstrates the good performance and usefulness of the approach on large-scale cases.


💡 Research Summary

The paper introduces a novel algorithmic framework for solving large‑scale quasi‑variational inequality (QVI) problems by extending the classic Dantzig‑Wolfe (DW) decomposition technique. A QVI seeks a pair (x*, z*) such that x* belongs to a set K(x*) that itself depends on the decision variable, and the vector z* lies in a (possibly set‑valued) mapping F(x*) while satisfying the variational inequality ⟨z*, y−x*⟩ ≥ 0 for all y ∈ K(x*). Because the feasible set K(x) is state‑dependent, QVIs are considerably more challenging than ordinary variational inequalities (VIs).
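To make the definition concrete, here is a minimal Python sketch of a hypothetical one-dimensional QVI with moving set K(x) = [0, 1 − x/2] and single-valued F(x) = x − 1 (both invented for illustration, not taken from the paper), solved by a projection-type fixed-point iteration:

```python
import numpy as np

# Toy 1-D QVI (hypothetical illustration, not the paper's example):
#   moving set K(x) = [0, 1 - x/2], single-valued F(x) = x - 1.
# A solution x* must satisfy x* in K(x*) and F(x*) * (y - x*) >= 0
# for all y in K(x*); since F < 0 left of 1, this forces x* onto the
# upper bound of its own set: x* = 1 - x*/2, i.e. x* = 2/3.
F = lambda x: x - 1.0

x, t = 0.0, 0.5
for _ in range(100):
    # Projection-type fixed point: x <- Proj_{K(x)}(x - t * F(x))
    x = np.clip(x - t * F(x), 0.0, 1.0 - x / 2.0)

print(x)  # close to 2/3
```

The state-dependence of K is exactly what makes this a QVI rather than a VI: the upper clipping bound moves with the iterate itself.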

The authors first split the constraints defining K(x) into two groups: “hard” constraints g(y, x) ≤ 0 that involve both variables, and “easy” constraints h(y) ≤ 0 that involve only y. The feasible set is written as K(x)=K_g(x)∩K_h, where K_g(x)={y | g(y,x)≤0} and K_h={y | h(y)≤0}. The easy set K_h is assumed to be convex and compact, which guarantees solvability of the sub‑problems. An initial feasible point y₁∈K_g(y₁)∩K_h is required to start the iteration.
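The constraint split can be mirrored directly in code. The sketch below uses invented constraints g(y, x) = y + x − 1 (hard, coupling y and x) and the box K_h = [0, 1] (easy), and checks the starting-point condition y₁ ∈ K_g(y₁) ∩ K_h:

```python
import numpy as np

# Invented constraints for illustration (not the paper's example):
#   hard:  g(y, x) = y + x - 1 <= 0     (couples y and the parameter x)
#   easy:  h(y) <= 0 encoding the convex, compact box K_h = [0, 1]
def g(y, x):
    return y + x - 1.0

def h(y):
    return np.array([-y, y - 1.0])      # h(y) <= 0  <=>  0 <= y <= 1

def in_K(y, x, tol=1e-9):
    """Check y in K(x) = K_g(x) ∩ K_h."""
    return bool(g(y, x) <= tol and np.all(h(y) <= tol))

# The starting-point requirement y1 in K_g(y1) ∩ K_h reads in_K(y1, y1):
y1 = 0.25
assert in_K(y1, y1)
```

Compactness of K_h is what keeps the sub-problems solvable regardless of how the hard constraints move with x.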

At each iteration k the method alternates between a master QVI and a sub‑problem VI. The master QVI (equation (3) in the paper) uses the convex hull of all previously generated sub‑problem solutions Y_k={y₁,…,y_k} as an approximation K_k^h=Co Y_k of K_h. It solves for a primal‑dual pair (x_k, z_k^m) together with a Lagrange multiplier μ_k associated with the hard constraints g. The KKT conditions for the master problem guarantee that μ_k satisfies the usual complementarity and feasibility relations.
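A sketch of the inner-approximation idea: any point of K_k^h = Co Y_k can be written as Yλ with λ in the unit simplex, so the master problem can be posed in the λ variables. The toy below (a single-valued, strongly monotone F(x) = x − b and three hand-picked columns, all assumptions of this sketch, with the multipliers μ_k omitted) solves the resulting VI over the simplex by a projected fixed-point iteration with the standard sort-based simplex projection:

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection onto the unit simplex (sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Columns Y_k = {y1, y2, y3} gathered so far (hand-picked for this sketch):
Y = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])         # columns live in R^2
b = np.array([0.9, 0.3])
F = lambda x: x - b                     # toy strongly monotone mapping

# VI over Co Y_k: find lam* in the simplex with
#   <Y^T F(Y lam*), lam - lam*> >= 0  for every simplex point lam.
lam, t = np.full(3, 1.0 / 3.0), 0.3
for _ in range(500):
    lam = proj_simplex(lam - t * Y.T @ F(Y @ lam))

x = Y @ lam
print(x)   # here x is the projection of b onto the hull, (0.8, 0.2)
```

Note that λ need not be unique (y₃ lies on the segment between y₁ and y₂), but the master point x = Yλ is.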

The sub‑problem receives (x_k, μ_k) and constructs an approximated operator F_k(y) that combines several components, including: (i) a surrogate of the original mapping F (either a constant, a first‑order Taylor approximation, or the exact mapping, see equation (6)); and (ii) a linear combination, weighted by the multipliers μ_k, of the gradients ∇_y g_j evaluated either at the current master point (constant option) or at the current sub‑problem variable y (free option), possibly blended through a weight ω_k.
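A hedged sketch of the constant option (i): both F and the hard-constraint gradients are frozen at the master point x_k, so the sub-problem operator F_k is constant and the VI over K_h reduces to a projection fixed point. All data below (the mapping, the constraint, and the values of x_k and μ_k) are invented for illustration:

```python
import numpy as np

# Constant-option sketch: F_k(y) = F(x_k) + sum_j mu_j * grad_y g_j(x_k).
F = lambda x: x - 1.0                   # toy single-valued mapping
grad_g = lambda y, x: 1.0               # for the toy g(y, x) = y + x - 1

x_k, mu_k = 0.5, 0.2                    # current master point and multiplier
Fk = F(x_k) + mu_k * grad_g(x_k, x_k)   # constant operator, here -0.3

# Solve the sub-problem VI over the easy set K_h = [0, 1] by projection:
y, t = 0.5, 0.5
for _ in range(100):
    y = np.clip(y - t * Fk, 0.0, 1.0)

print(y)  # a constant negative operator pushes y to the upper bound, 1.0
```

The solution y becomes a new column for Y_{k+1}, enriching the master's inner approximation of K_h at the next iteration.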

