Envy-Free Allocation of Indivisible Goods via Noisy Queries

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

We introduce a problem of fairly allocating indivisible goods (items) in which the agents’ valuations cannot be observed directly, but instead can only be accessed via noisy queries. In the two-agent setting with Gaussian noise and bounded valuations, we derive upper and lower bounds on the required number of queries for finding an envy-free allocation in terms of the number of items, $m$, and the negative-envy of the optimal allocation, $Δ$. In particular, when $Δ$ is not too small (namely, $Δ\gg m^{1/4}$), we establish that the optimal number of queries scales as $\frac{\sqrt m }{(Δ/ m)^2} = \frac{m^{2.5}}{Δ^2}$ up to logarithmic factors. Our upper bound is based on non-adaptive queries and a simple thresholding-based allocation algorithm that runs in polynomial time, while our lower bound holds even under adaptive queries and arbitrary computation time.


💡 Research Summary

The paper introduces a novel model for fair division of indivisible items when agents’ valuations cannot be observed directly but only through noisy queries. Focusing on the two‑agent case with additive utilities, the authors assume each query returns a Gaussian‑perturbed observation of an agent’s value for a single item, i.e., y_{ν,i,t} ∼ N(u_{ν,i},σ²) for ν∈{a,b}. The goal is to allocate the m items between the two agents so that the resulting allocation is envy‑free (EF), meaning that neither agent prefers the other’s bundle. Because noise makes exact valuation recovery impossible, the authors introduce a “gap” parameter Δ>0, defined as the negative envy of the optimal allocation: OptEnvy = min_A Envy(A) ≤ –Δ. This assumption guarantees that a truly EF allocation exists and quantifies how far the optimal allocation is from violating EF.
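The query model and the envy objective can be made concrete with a small simulation. This is a toy sketch: the valuations, noise level, and helper names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma = 8, 1.0
# Hypothetical true valuations u[nu, i]: agent nu's value for item i.
u = rng.uniform(0.0, 1.0, size=(2, m))

def noisy_query(agent, item):
    """One query: a Gaussian-perturbed observation y ~ N(u[agent, item], sigma^2)."""
    return u[agent, item] + sigma * rng.normal()

def envy(alloc):
    """Envy of an allocation: the larger of the two agents' (value of the
    other's bundle) minus (value of own bundle); alloc[i] = 0 sends item i
    to agent a, alloc[i] = 1 to agent b.  EF means envy(alloc) <= 0."""
    alloc = np.asarray(alloc)
    a_items, b_items = alloc == 0, alloc == 1
    envy_a = u[0, b_items].sum() - u[0, a_items].sum()
    envy_b = u[1, a_items].sum() - u[1, b_items].sum()
    return max(envy_a, envy_b)
```

In this notation the gap parameter is Δ = −min over allocations of envy(alloc), assumed strictly positive.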

The main contributions are threefold:

  1. Baseline analysis – A naïve algorithm that builds confidence intervals for each item and allocates based on a simple threshold requires O(m³/Δ²) queries. While this shows the Δ^{-2} dependence, the m‑cubic factor is far from optimal.
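The baseline can be sketched as "estimate everything accurately, then optimize over the estimates". This is our reading of the idea, not the paper's pseudocode; the function names are illustrative, and the brute-force search is exponential in m, for illustration only.

```python
import math
from itertools import product

import numpy as np

def estimate_values(query, m, eps, sigma, fail_prob):
    """Estimate every u_{nu,i} to accuracy eps simultaneously w.h.p.,
    via the Gaussian tail bound and a union bound over the 2m estimates."""
    t = math.ceil((2 * sigma**2 / eps**2) * math.log(4 * m / fail_prob))
    uhat = np.array([[np.mean([query(nu, i) for _ in range(t)])
                      for i in range(m)] for nu in (0, 1)])
    return uhat, 2 * m * t  # estimates and total query count

def naive_ef_allocation(query, m, delta, sigma=1.0, fail_prob=0.01):
    """Naive baseline (sketch): estimate each value to accuracy delta/(2m),
    then return the minimum-envy allocation for the *estimates*.  When all
    2m estimates are accurate, estimated and true envy differ by at most
    delta/2, so the gap assumption makes the output truly envy-free.
    Query count: each of the 2m values needs ~sigma^2 m^2 / delta^2 samples,
    i.e. O(m^3 / delta^2) queries overall."""
    uhat, n_queries = estimate_values(query, m, delta / (2 * m), sigma, fail_prob)

    def est_envy(alloc):
        alloc = np.asarray(alloc)
        a, b = alloc == 0, alloc == 1
        return max(uhat[0, b].sum() - uhat[0, a].sum(),
                   uhat[1, a].sum() - uhat[1, b].sum())

    return min(product((0, 1), repeat=m), key=est_envy), n_queries
```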

  2. Upper bound (algorithmic result) – In Section 4 the authors present a non‑adaptive, polynomial‑time algorithm that achieves an EF allocation with high probability using only
    q = O( m^{2.5} / Δ² · polylog m )
    queries, provided Δ ≫ m^{1/4}·log²m. The algorithm proceeds as follows:

    • Query design: Queries are spread uniformly over the items; if the query budget q is smaller than m, only a random subset of the items is sampled and the remaining items go unqueried. This strategy is non‑adaptive (the set of queries is fixed in advance).
    • Estimation: For each sampled item i, the algorithm computes the empirical means $\hat{u}_{a,i}$ and $\hat{u}_{b,i}$. Gaussian noise guarantees that these means concentrate around the true values with variance σ² divided by the number of samples.
    • Threshold selection: A global threshold τ is chosen so that the probability that $\hat{u}_{a,i} - \hat{u}_{b,i}$ exceeds τ is balanced with the opposite event. This balances the two possible directions of envy.
    • Allocation rule: If $\hat{u}_{a,i} - \hat{u}_{b,i}$ ≥ τ, the item is given to agent a; otherwise to agent b.

    The analysis hinges on three technical components: (i) concentration of the empirical means, (ii) precise calculation of allocation probabilities induced by τ, and (iii) bounding the additional error introduced by sub‑sampling when q<m. By carefully tuning τ and using the gap Δ, the authors show that the expected envy contributed by each item is O(Δ/m), and the total variance across items scales as O(√m). Consequently, the number of queries needed to suppress the stochastic fluctuations below the deterministic gap is Θ(m^{2.5}/Δ²) up to logarithmic factors. This matches the expression given in the abstract, which can be rewritten as √m·(Δ/m)^{-2}.
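The steps above can be sketched compactly. This is a stylized rendering, not the paper's algorithm: we use a fixed threshold argument in place of the paper's tuned, balanced choice of τ.

```python
import numpy as np

def threshold_allocate(samples_a, samples_b, tau=0.0):
    """Stylized sketch of the non-adaptive thresholding rule.  samples_a and
    samples_b have shape (m, t): t noisy observations of each of the m items
    for agents a and b.  Item i goes to agent a iff the empirical difference
    hat_u_{a,i} - hat_u_{b,i} is at least tau; the paper instead tunes tau so
    the two directions of envy are balanced."""
    diff = samples_a.mean(axis=1) - samples_b.mean(axis=1)
    return np.where(diff >= tau, 0, 1)  # 0 -> agent a, 1 -> agent b
```

With well-separated valuations this recovers the sign of every true per-item difference; the substance of the paper's analysis is in choosing τ so that, even when differences are small relative to the noise, the residual expected envy per item stays O(Δ/m).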

  3. Lower bound (information‑theoretic result) – Section 5 proves that any algorithm, even if it may adaptively choose queries and perform unlimited computation, must use at least Ω(m^{2.5}/Δ²) queries to guarantee an EF allocation with high probability. The proof constructs a hard instance consisting of two families of items:

    • Fine‑difference items: A large set of items where the two agents’ true values differ by a tiny ε ≪ Δ. To avoid violating EF, the algorithm must correctly identify which side of the ε‑gap each item belongs to, which requires many samples.
    • Strong‑difference items: A smaller set of items where the values are strongly biased toward one agent or the other. These create large random fluctuations in the total envy unless the algorithm allocates enough of the fine‑difference items to compensate.

    By reducing the problem to a multiple‑hypothesis testing scenario, the authors show that distinguishing the correct allocation among the exponentially many possibilities forces a query complexity of Ω(m^{2.5}/Δ²). Importantly, this lower bound holds irrespective of whether queries are adaptive or non‑adaptive, and regardless of computational power.
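The per-item cost in this reduction rests on a standard Gaussian two-point testing bound; schematically (our rendering, not the paper's exact statement):

```latex
% KL divergence between equal-variance Gaussians whose means differ by eps:
\[
D_{\mathrm{KL}}\!\left(\mathcal{N}(\mu,\sigma^2)\,\big\|\,\mathcal{N}(\mu+\varepsilon,\sigma^2)\right)
  = \frac{\varepsilon^2}{2\sigma^2},
\]
% so t queries to a single item accumulate divergence t\varepsilon^2/(2\sigma^2),
% and identifying the correct side of an eps-gap with constant confidence forces
\[
t \;=\; \Omega\!\left(\frac{\sigma^2}{\varepsilon^2}\right)
\quad\text{samples per fine-difference item.}
\]
```

Summing over the roughly m fine-difference items, with ε tuned in the hard instance, yields the stated total of Ω(m^{2.5}/Δ²) queries.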

Additional sections discuss extensions. Section 6 generalizes the results to arbitrary noise variance σ², showing that the query complexity scales linearly with σ². Section 7 sketches how the analysis could be extended to more than two agents, where one would need to consider the worst‑case pairwise gap Δ_{ij}. The authors also note that envy‑freeness can be translated into proportionality, so the same bounds apply to achieving proportional allocations under noisy queries.

In summary, the paper establishes the precise scaling law for the query complexity of achieving envy‑free allocations under Gaussian‑noisy valuation queries: Θ(m^{2.5}/Δ²) (up to polylogarithmic factors) when the gap Δ is not too small (Δ ≫ m^{1/4}·log²m). The upper bound is realized by a simple, non‑adaptive, polynomial‑time algorithm based on uniform sampling and a global threshold, while the lower bound shows that no algorithm can fundamentally beat this rate, even with adaptive queries and unlimited computation. This work opens a new line of research on fair division under imperfect information, bridging concepts from multi‑armed bandits, pure‑exploration, and algorithmic fairness.

