Tight Inapproximability for Welfare-Maximizing Autobidding Equilibria
We examine the complexity of computing welfare- and revenue-maximizing equilibria in autobidding second-price auctions subject to return-on-spend (RoS) constraints. We show that computing an autobidding equilibrium that approximates the welfare-optimal one within a factor of $2 - ε$ is NP-hard for any constant $ε > 0$. Moreover, deciding whether there exists an autobidding equilibrium that attains a $1/2 + ε$ fraction of the optimal welfare – unfettered by equilibrium constraints – is NP-hard for any constant $ε > 0$. This hardness result is tight in view of the fact that the price of anarchy (PoA) is at most $2$, and shows that deciding whether a non-trivial autobidding equilibrium exists – one that is even marginally better than the worst-case guarantee – is intractable. For revenue, we establish a stronger logarithmic inapproximability, while under the projection games conjecture, our reduction rules out even a polynomial approximation factor. These results significantly strengthen the APX-hardness of Li and Tang (AAAI '24). Furthermore, we refine our reduction in the presence of ML advice concerning the buyers' valuations, revealing again a close connection between the inapproximability threshold and PoA bounds. Finally, we examine relaxed notions of equilibrium attained by simple learning algorithms, establishing constant inapproximability for both revenue and welfare.
💡 Research Summary
The paper investigates the computational difficulty of finding welfare‑maximizing and revenue‑maximizing equilibria in second‑price auctions where bidders use uniform‑scale autobidding agents subject to a Return‑on‑Spend (RoS) constraint. The authors establish a series of tight inapproximability results that close the gap between known algorithmic guarantees (the price of anarchy, PoA ≤ 2) and previously known hardness (APX‑hardness).
Welfare Maximization. By reducing from the label‑cover problem, the authors encode each label as a binary choice of a high or low bidding multiplier. They construct a suite of gadgets—label‑assignment, NAND, NOT, and edge‑autobidders—that simulate logical constraints within the autobidding market while keeping the monetary stakes negligible. In the completeness case (all label‑cover edges satisfied) there exists an autobidding equilibrium whose liquid welfare equals |E| plus the number of satisfied edges; in the soundness case (few edges satisfied) any equilibrium’s welfare is at most |E| + ε·|E|. Since the PoA of second‑price autobidding with RoS is exactly 2, this reduction shows that achieving any factor better than 2 − ε for any constant ε > 0 is NP‑hard, and deciding whether an equilibrium can beat the trivial ½‑approximation by any constant margin is also NP‑hard. Consequently, computing a “non‑trivial” equilibrium (one that improves over the worst‑case guarantee) is NP‑hard.
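To make the underlying market model concrete, here is a minimal sketch (not the paper's gadget construction; the toy instance and function names are hypothetical) of second-price auctions with uniform-bidding autobidders under an RoS target of 1: each bidder bids a single multiplier times its value on every item, pays the second-highest bid when it wins, and its liquid welfare contribution is the value it is allocated.

```python
# Illustrative sketch (hypothetical instance, not the paper's reduction):
# second-price auctions with uniform bid multipliers and RoS constraints.

def run_auctions(values, multipliers):
    """values[i][j]: bidder i's value for item j; bidder i bids multipliers[i] * values[i][j]."""
    n, m = len(values), len(values[0])
    alloc_value = [0.0] * n   # total value won by each bidder
    payment = [0.0] * n       # total spend of each bidder
    for j in range(m):
        bids = [multipliers[i] * values[i][j] for i in range(n)]
        winner = max(range(n), key=lambda i: bids[i])
        second = max(b for i, b in enumerate(bids) if i != winner)
        alloc_value[winner] += values[winner][j]
        payment[winner] += second  # second-price payment
    return alloc_value, payment

def ros_satisfied(alloc_value, payment):
    # RoS constraint (target 1): value obtained must cover total spend.
    return all(v >= p - 1e-9 for v, p in zip(alloc_value, payment))

def liquid_welfare(alloc_value):
    # For RoS-constrained bidders, willingness-to-pay equals value won,
    # so liquid welfare is the total value of the allocation.
    return sum(alloc_value)

values = [[2.0, 1.0], [1.0, 2.0]]  # two bidders, two items
vals, pays = run_auctions(values, multipliers=[1.0, 1.0])
print(liquid_welfare(vals), ros_satisfied(vals, pays))  # -> 4.0 True
```

The reduction's gadgets can be thought of as small sub-markets of this form whose equilibrium multipliers are forced to encode label choices.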
Revenue Maximization. Using the same label‑cover framework, the authors design edge‑autobidders whose surplus is deliberately spent on an extra item, turning the total revenue into a direct proxy for the number of satisfied edges. They prove that even a logarithmic‑factor approximation (Ω(log nk)) for revenue is NP‑hard. Assuming the Projection Games Conjecture, the reduction yields polynomial‑factor inapproximability, i.e., no poly‑approximation exists unless the conjecture fails. This starkly separates revenue from welfare: while welfare is limited by a constant PoA, revenue resists any sub‑logarithmic approximation.
Robustness under Machine‑Learning Advice. The paper further studies a model where the auctioneer receives a γ‑approximate signal about bidders’ valuations (Balseiro et al. 2021). The signal can be used as a reserve price, improving the PoA to 2/(1 + γ). Nevertheless, the authors show that even with such advice, welfare maximization remains NP‑hard to approximate within 2/(1 + γ) − ε, and revenue maximization remains NP‑hard within 1/(γ + ε). Thus the hardness thresholds track the best possible PoA bounds.
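A quick numeric sanity check of the threshold discussed above (the function name is ours): with a γ-approximate valuation signal the welfare PoA improves to 2/(1 + γ), so γ = 0 (uninformative advice) recovers the no-advice bound of 2, while γ = 1 (perfect advice) drives the bound to 1.

```python
# Welfare PoA with a gamma-approximate valuation signal (Balseiro et al. 2021):
# the paper's hardness threshold of 2/(1 + gamma) - eps matches this bound.
def welfare_poa_with_advice(gamma):
    assert 0.0 <= gamma <= 1.0, "signal quality gamma lies in [0, 1]"
    return 2.0 / (1.0 + gamma)

for g in (0.0, 0.5, 1.0):
    print(g, welfare_poa_with_advice(g))  # 2.0, then 4/3, then 1.0
```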
Learning Dynamics. Recognizing that practical systems rely on simple learning algorithms rather than exact equilibria, the authors define two relaxed notions: (1) a time‑average RoS‑satisfying sequence, and (2) a responsive learning sequence that forces a bidder’s multiplier to increase after sustained surplus. They prove that under (1) achieving a revenue approximation better than e/(e − 1) − ε is NP‑hard, and under (2) achieving a welfare approximation better than 2e/(2e − 1) − ε is NP‑hard. These results demonstrate that even permissive learning dynamics cannot guarantee constant‑factor approximations.
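The "responsive" condition can be illustrated with a toy update rule (a hypothetical sketch, not the paper's formal definition): whenever a bidder observes sustained surplus over a window, i.e., value won strictly exceeding spend, its multiplier must increase; when the RoS constraint is violated, the multiplier comes down.

```python
# Hypothetical responsive-learning update (illustrative only):
# sustained surplus forces the multiplier up; RoS violation pushes it down.
def responsive_update(mu, window_value, window_spend, step=0.1):
    if window_value > window_spend:      # sustained surplus -> bid more aggressively
        return mu * (1 + step)
    if window_value < window_spend:      # RoS violated over the window -> bid less
        return mu / (1 + step)
    return mu                            # exactly binding -> no change

mu = 1.0
for value, spend in [(3.0, 1.0), (3.0, 1.0), (1.0, 2.0)]:
    mu = responsive_update(mu, value, spend)
print(round(mu, 4))  # two surplus windows raise mu, one violation lowers it
```

The hardness results say that even sequences obeying such permissive conditions can get stuck at outcomes a constant factor below the optimum.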
Technical Contributions. The core technical novelty lies in the construction of low‑stakes logical gadgets within the autobidding market, enabling a clean reduction from label‑cover while preserving the welfare/revenue structure. The authors also carefully handle the interaction between surplus extraction and item allocation to translate satisfied constraints into measurable economic outcomes. Their analysis bridges game‑theoretic equilibrium concepts (FIXP, PPAD) with classic hardness of approximation, and extends the framework to incorporate ML‑based valuation signals.
Overall, the paper delivers a comprehensive hardness landscape for autobidding equilibria: welfare cannot be approximated better than the PoA bound of 2, revenue cannot be approximated within any sub‑logarithmic factor, and these barriers persist even with valuation advice or under realistic learning dynamics. The results substantially strengthen prior APX‑hardness findings and provide a rigorous theoretical warning that high‑quality equilibria may be computationally unattainable in large‑scale online advertising platforms.