Online submodular welfare maximization: Greedy is optimal

We prove that no online algorithm (even randomized, against an oblivious adversary) is better than 1/2-competitive for welfare maximization with coverage valuations, unless $NP = RP$. Since the Greedy algorithm is known to be 1/2-competitive for monotone submodular valuations, of which coverage is a special case, this proves that Greedy provides the optimal competitive ratio. On the other hand, we prove that Greedy in a stochastic setting with i.i.d. items and valuations satisfying diminishing returns is $(1-1/e)$-competitive, which is optimal even for coverage valuations, unless $NP = RP$. For online budget-additive allocation, we prove that no algorithm can be 0.612-competitive with respect to a natural LP that has been used previously for this problem.


💡 Research Summary

The paper investigates the online welfare maximization problem where agents have submodular valuation functions, focusing on two distinct models: an adversarial online setting and a stochastic i.i.d. setting. In the adversarial model, items arrive one by one and must be irrevocably assigned upon arrival. The authors prove that for coverage valuations—a canonical subclass of monotone submodular functions—no (randomized) online algorithm can achieve a competitive ratio better than 1/2 against an oblivious adversary unless NP = RP. The proof builds a reduction from the hard maximum coverage problem, constructing two distributions over input instances such that any algorithm’s expected welfare is at most half of the optimum. Since the classic Greedy algorithm is already known to be 1/2‑competitive for monotone submodular valuations, this result establishes Greedy as optimal in the adversarial online regime.
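To make the allocation rule concrete, here is a minimal sketch of the online Greedy algorithm on a toy coverage instance: each arriving item is irrevocably given to the agent whose marginal value for it is largest. The instance below (the `cover` sets, the arrival order, and the tie-breaking toward the lower-indexed agent) is illustrative and not taken from the paper.

```python
# Online Greedy for submodular welfare with coverage valuations:
# agent i's value for a bundle S is the number of ground elements
# covered by the union of the cover-sets of items in S.

def coverage_value(cover, bundle):
    """Coverage valuation: size of the union of covered elements."""
    covered = set()
    for item in bundle:
        covered |= cover[item]
    return len(covered)

def online_greedy(num_agents, cover, arrival_order):
    """Assign each arriving item to the agent with max marginal gain."""
    bundles = [set() for _ in range(num_agents)]
    for item in arrival_order:
        gains = [
            coverage_value(cover, bundles[i] | {item})
            - coverage_value(cover, bundles[i])
            for i in range(num_agents)
        ]
        # ties break toward the lower-indexed agent (Python's max keeps
        # the first maximizer)
        best = max(range(num_agents), key=lambda i: gains[i])
        bundles[best].add(item)
    welfare = sum(coverage_value(cover, b) for b in bundles)
    return bundles, welfare

# Toy instance: 3 items, 2 agents sharing the same coverage structure.
cover = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
bundles, welfare = online_greedy(2, cover, ["a", "b", "c"])
print(bundles, welfare)  # → [{'a', 'c'}, {'b'}] 5
```

Note that Greedy here is deterministic once tie-breaking is fixed; the paper's hardness result says that even allowing randomization cannot beat its 1/2 guarantee in the adversarial model.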

The second part of the work studies a stochastic environment where items are drawn independently from a known distribution. Agents' valuations satisfy the diminishing-returns property, which extends submodularity from sets to multisets of items. The authors show that the Greedy algorithm attains a (1 − 1/e) competitive ratio in this setting. The analysis uses a continuous-time primal-dual (Lagrangian) framework: at each infinitesimal step Greedy captures a constant fraction of the remaining optimal marginal gain, leading to a differential inequality whose solution yields the (1 − 1/e) factor. Moreover, they prove that improving this bound would imply a better-than-(1 − 1/e) approximation for maximum coverage, contradicting known hardness results unless NP = RP. Hence Greedy is also optimal for stochastic online welfare maximization with diminishing returns.
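The differential-inequality step can be sketched as follows. This is a reconstruction, not the paper's exact derivation: assume $W(t)$ denotes Greedy's expected welfare after a $t$-fraction of the items has arrived and $\mathrm{OPT}$ the optimal expected welfare.

```latex
% If Greedy's instantaneous gain is at least the remaining gap to OPT,
%   W'(t) \ge \mathrm{OPT} - W(t), \qquad W(0) = 0,
% then multiplying by e^t and integrating from 0 to 1 gives
%   \bigl(e^t W(t)\bigr)' \ge e^t \,\mathrm{OPT}
%   \;\Longrightarrow\;
%   W(1) \ge \Bigl(1 - \frac{1}{e}\Bigr)\mathrm{OPT}.
\frac{dW}{dt} \;\ge\; \mathrm{OPT} - W(t), \quad W(0) = 0
\;\Longrightarrow\;
W(1) \;\ge\; \Bigl(1 - \tfrac{1}{e}\Bigr)\mathrm{OPT}.
```

The same $1 - 1/e$ factor appears throughout the maximum-coverage literature for essentially this reason: a process that always closes a constant fraction of the remaining gap leaves an $e^{-1}$ fraction uncollected after one unit of time.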

Finally, the paper turns to the online budget‑additive allocation problem, where each advertiser has a budget and the value of an allocated item is additive up to that budget. A natural linear programming (LP) relaxation has been the benchmark for designing algorithms. The authors construct a family of hard instances and demonstrate that no online algorithm can be more than 0.612‑competitive with respect to this LP bound. This result tightens previous analyses and indicates that the LP relaxation overestimates the achievable welfare in the online setting.
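The summary does not spell out the LP, so the following is the standard formulation from the budgeted-allocation literature, with $v_{ij}$ the value of item $j$ to agent $i$, $B_i$ agent $i$'s budget, and $x_{ij}$ the (fractional) assignment variables; it may differ in details from the paper's benchmark.

```latex
\max \;\; \sum_{i,j} v_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{i} x_{ij} \le 1 \;\; \forall j,
\qquad
\sum_{j} v_{ij}\, x_{ij} \le B_i \;\; \forall i,
\qquad
x_{ij} \ge 0.
```

The budget constraint caps each agent's LP contribution at $B_i$, mirroring the truncation $\min(B_i, \sum_j v_{ij})$ in the integral objective; the paper's 0.612 bound says no online algorithm can recover more than that fraction of this LP's value.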

Overall, the contributions are threefold: (1) a tight 1/2 hardness for adversarial online submodular welfare, confirming Greedy’s optimality; (2) a tight (1 − 1/e) hardness for stochastic i.i.d. arrivals with diminishing returns, again showing Greedy’s optimality; and (3) a new 0.612 hardness for online budget‑additive allocation relative to the standard LP relaxation. These findings sharpen our understanding of the limits of online algorithms in combinatorial allocation problems and provide clear guidance for both theoreticians and practitioners designing allocation mechanisms under uncertainty.