Learning Valuation Functions
In this paper we study the approximate learnability of valuations commonly used throughout economics and game theory for the quantitative encoding of agent preferences. We provide upper and lower bounds regarding the learnability of important subclasses of valuation functions that exhibit no complementarities. Our main results concern their approximate learnability in the distributional learning (PAC-style) setting. We provide nearly tight lower and upper bounds of $\tilde{\Theta}(n^{1/2})$ on the approximation factor for learning XOS and subadditive valuations, both widely studied superclasses of submodular valuations. Interestingly, we show that the $\tilde{\Omega}(n^{1/2})$ lower bound can be circumvented for XOS functions of polynomial complexity; we provide an algorithm for learning the class of XOS valuations with a representation of polynomial size achieving an $O(n^{\varepsilon})$ approximation factor in time $O(n^{1/\varepsilon})$ for any $\varepsilon > 0$. This highlights the importance of considering the complexity of the target function for polynomial-time learning. We also provide new learning results for interesting subclasses of submodular functions. Our upper bounds for distributional learning leverage novel structural results for all these valuation classes. We show that many of these results yield new learnability guarantees in the Goemans et al. model (SODA 2009) of approximate learning everywhere via value queries. We also introduce a new model that is more realistic in economic settings, in which the learner can set prices and observe purchase decisions at these prices rather than observing the valuation function directly. In this model, most of our upper bounds continue to hold despite the fact that the learner receives less information (both for learning in the distributional setting and with value queries), while our lower bounds naturally extend.
💡 Research Summary
The paper investigates the approximate learnability of valuation functions that are central to economics and game theory. Valuations map subsets of items to non‑negative real numbers and capture agents’ preferences. The authors consider a hierarchy of valuation classes—subadditive, XOS (fractionally subadditive), submodular, OXS, and gross‑substitutes—and study their learnability in two main frameworks: (i) a distribution‑based PAC‑style model called PMAC (Probably Mostly Approximately Correct) where the learner receives polynomially many i.i.d. labeled examples drawn from an unknown distribution, and (ii) the “learning everywhere with value queries” model of Goemans et al., where the learner can adaptively query the exact value of any set.
The main contributions are:
- Tight bounds for XOS and subadditive valuations.
- Upper bound: Any XOS function can be approximated within a factor O(√n) by the square‑root of a linear function. This structural insight reduces PMAC‑learning of XOS to learning linear separators, yielding an O(√n) multiplicative approximation using standard PAC algorithms. An analogous O(√n log n) bound is shown for subadditive valuations.
- Lower bound: An information‑theoretic argument proves that any algorithm using only polynomially many samples must incur an Ω(√n / log n) approximation factor. Hence the √n factor is essentially optimal for both classes.
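The reduction idea behind the upper bound can be sketched in code. The following is an illustrative stand-in, not the paper's algorithm: where the paper reduces PMAC-learning to learning linear separators, the sketch below simply fits a nonnegative linear function to the squared observed values by least squares, so that the hypothesis is the square root of a linear function over the 0‑1 indicator representation of sets. All function names here are hypothetical.

```python
import numpy as np

def fit_sqrt_linear(sets, values, n):
    """Fit nonnegative weights w so that sqrt(w . x_S) tracks f(S).

    sets   : subsets of range(n); values : observed f(S) for those subsets.
    Least squares on the squared targets f(S)^2 stands in for the paper's
    reduction to learning linear separators (an illustrative shortcut).
    """
    X = np.zeros((len(sets), n))
    for i, S in enumerate(sets):
        X[i, list(S)] = 1.0           # indicator vector x_S of the set S
    w, *_ = np.linalg.lstsq(X, np.asarray(values, dtype=float) ** 2, rcond=None)
    return np.clip(w, 0.0, None)      # keep the additive part nonnegative

def predict(w, S):
    """The learned hypothesis: the square root of a linear function."""
    return float(np.sqrt(w[list(S)].sum())) if S else 0.0

# Toy check on f(S) = sqrt(|S|), a submodular (hence XOS) valuation for
# which a sqrt-of-linear hypothesis happens to be exact.
rng = np.random.default_rng(0)
n = 8
train = [set(np.flatnonzero(rng.integers(0, 2, n))) for _ in range(200)]
w = fit_sqrt_linear(train, [len(S) ** 0.5 for S in train], n)
```

On this toy target the fit recovers the valuation exactly; for a general XOS or subadditive target, the structural results above only guarantee agreement up to the stated O(√n) (resp. O(√n log n)) factor.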
- Complexity‑dependent learning for XOS.
- When an XOS valuation can be represented as a MAX over at most R additive (SUM) functions, the authors give an algorithm that learns it to an O(R^ε) factor in time n^{O(1/ε)} for any ε>0. If R is polynomial in n, this yields an O(n^ε) approximation in n^{O(1/ε)} time. The key technical tool is a new structural result showing that such an XOS function can be well‑approximated by the L‑th root of a degree‑L polynomial over the natural 0‑1 feature representation of sets.
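The root-of-polynomial hypothesis class can be made concrete with a small sketch. Under the assumption that the degree-L polynomial is multilinear over the 0‑1 indicator coordinates (so its monomials are indexed by subsets of size at most L), the code below fits such a polynomial to f(S)^L by least squares and predicts with its L-th root. This is an illustrative shortcut, not the paper's algorithm, and the helper names are hypothetical.

```python
import itertools
import numpy as np

def monomial_features(S, n, L):
    """One 0/1 feature per monomial prod_{i in T} x_i with 1 <= |T| <= L
    (multilinear, since x_i^2 = x_i over {0,1})."""
    return np.array([1.0 if set(T) <= S else 0.0
                     for k in range(1, L + 1)
                     for T in itertools.combinations(range(n), k)])

def fit_root_poly(sets, values, n, L):
    """Least-squares fit of a degree-L polynomial to f(S)^L; the learned
    hypothesis is the L-th root of that polynomial (illustrative only)."""
    X = np.stack([monomial_features(S, n, L) for S in sets])
    c, *_ = np.linalg.lstsq(X, np.asarray(values, dtype=float) ** L, rcond=None)
    return c

def predict(c, S, n, L):
    p = monomial_features(S, n, L) @ c
    return float(np.clip(p, 0.0, None)) ** (1.0 / L)

# Toy check with L = 2 and the additive valuation f(S) = |S|:
# f(S)^2 = sum_i x_i + 2 sum_{i<j} x_i x_j is exactly a degree-2 polynomial,
# so training on all subsets of a 6-item ground set recovers it exactly.
n, L = 6, 2
train = [set(T) for k in range(n + 1) for T in itertools.combinations(range(n), k)]
c = fit_root_poly(train, [len(S) for S in train], n, L)
```

Note the feature dimension grows as n^{O(L)}, which mirrors the n^{O(1/ε)} running time in the statement above.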
- Improved results for subclasses of OXS and gross substitutes.
- For OXS and XOS functions with a bounded number of leaves per tree, or a bounded number of trees, the paper obtains constant‑ or O(R^ε)‑approximation algorithms, dramatically improving over the generic √n bound.
- The previously known Ω(n^{1/3}) lower bound for submodular learning also applies to the much simpler class of gross‑substitutes valuations.
- Implications for the value‑query model.
- The structural approximations developed for the distributional setting translate directly into the “learning everywhere with value queries” model, giving new upper bounds for XOS, OXS, and related classes, and matching Ω(√n) lower bounds (up to logarithmic factors) for several of them.
- A realistic price‑query model.
- The authors introduce a model where the learner can only set a price p for a bundle S and observe whether the agent purchases it (i.e., whether p ≤ f*(S)). This reflects many economic interactions where direct valuation queries are unavailable. They show that almost all of their upper bounds (both PMAC and value‑query) survive unchanged under this weaker feedback, while the lower bounds automatically extend.
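The basic reason price queries can stand in for value queries is that binary search over the posted price pins down f*(S) to any desired precision. The sketch below illustrates this with a hypothetical `purchases(S, p)` interface that returns whether the agent buys bundle S at price p; the paper's model and query accounting may be richer than this.

```python
def estimate_value(purchases, S, hi, tol=1e-3):
    """Recover f*(S) from purchase feedback alone.

    purchases(S, p) is a hypothetical interface returning True iff the
    agent buys bundle S at price p, i.e. iff p <= f*(S).  Binary search
    over the posted price recovers the value to within tol, using
    O(log(hi / tol)) price queries.
    """
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if purchases(S, mid):
            lo = mid   # bought: the price is at most the value
        else:
            hi = mid   # declined: the price exceeds the value
    return lo

# A toy agent with f*(S) = sqrt(|S|), an upper bound hi = 10 assumed known.
f_star = lambda S: len(S) ** 0.5
buys = lambda S, p: p <= f_star(S)
```

Each query reveals only one bit, which is why the lower bounds for the value-query model extend automatically to this weaker feedback.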
Methodologically, the paper blends combinatorial optimization insights (e.g., representation of XOS as MAX of linear functions, OXS as SUM of unit‑demand valuations) with learning‑theoretic techniques (PAC analysis, sample‑complexity lower bounds, reductions to linear classification). The novel structural lemmas—particularly the √n approximation by a linear square‑root and the polynomial‑root approximation for bounded‑complexity XOS—are the technical heart of the work.
Overall, the study clarifies the trade‑off between the expressive power of valuation classes and their learnability. While broad classes such as XOS and subadditive inherently require a √n multiplicative error, restricting the representation size (e.g., polynomial‑size XOS) enables substantially better approximations in polynomial time. By extending the analysis to a price‑only feedback model, the paper also bridges theory and practice, showing that many of the algorithmic guarantees remain robust even when the learner’s information is severely limited. This makes the results highly relevant for applications in auction design, pricing, and market analysis where valuations must be inferred from observed purchase behavior.