Near-Optimal Coalition Structures in Polynomial Time

Reading time: 11 minutes

📝 Original Info

  • Title: Near-Optimal Coalition Structures in Polynomial Time
  • ArXiv ID: 2512.21657
  • Date: 2025-12-25
  • Authors: Angshul Majumdar

📝 Abstract

We study the classical coalition structure generation problem and compare the anytime behaviour of three algorithmic paradigms: dynamic programming, MILP branch-and-bound, and sparse relaxations based on greedy or ℓ1-type methods. Under a simple random "sparse synergy" model for coalition values, we prove that sparse relaxations recover coalition structures whose welfare is arbitrarily close to optimal in polynomial time with high probability. In contrast, broad classes of dynamic-programming and MILP algorithms require exponential time before attaining comparable solution quality. This establishes a rigorous probabilistic anytime separation in favour of sparse relaxations, even though exact methods remain ultimately optimal.

📄 Full Content

The coalition structure generation (CSG) problem asks for a partition of a finite set of agents into disjoint coalitions that maximises a given social welfare functional. It is a central topic in cooperative game theory and multiagent systems, with applications in distributed resource allocation, task allocation and team formation [1,2]. The number of partitions grows super-exponentially in the number of agents, so exhaustive search is hopeless; CSG is NP-hard and exact algorithms are exponential in the worst case [1].
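The growth claim is easy to check numerically: the number of partitions of an n-element set is the Bell number B_n. A short computation (our own illustration, using the Bell-triangle recurrence) shows how quickly it explodes:

```python
def bell_numbers(n_max):
    """Bell numbers B_0..B_{n_max} via the Bell-triangle recurrence."""
    row = [1]      # current row of the Bell triangle
    bells = [1]    # B_0 = 1
    for _ in range(n_max):
        # the next row starts with the last entry of the previous row
        new_row = [row[-1]]
        for x in row:
            new_row.append(new_row[-1] + x)
        row = new_row
        bells.append(row[0])
    return bells

print(bell_numbers(10))  # B_10 = 115975: ~1.2e5 partitions for just 10 agents
```

Already at ten agents there are over a hundred thousand coalition structures, and the count roughly squares with every few additional agents.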

Two main exact paradigms have been investigated. Dynamic-programming algorithms exploit structural decompositions of the coalition space and provide worst-case guarantees on optimality and runtime [3]. MILP formulations encode CSG as a set-partitioning problem solved via generic branch-and-bound techniques, which can be effective in practice but still explore exponentially large trees, and their anytime behaviour is only partially understood.

Meanwhile, high-dimensional statistics has developed approximate methods that recover sparse combinatorial structure via convex relaxations or greedy pursuit. Classic examples include ℓ1-penalised estimators [4], linear-programming decoders [5], and orthogonal matching pursuit (OMP) [6].

Under mild structural and stochastic assumptions, these methods identify near-optimal sparse solutions in polynomial time with high probability.

The present paper brings these lines of work together in the context of standard CSG. We retain the deterministic formulation but analyse anytime behaviour when coalition values follow a simple random model. Within this framework we show that ℓ1- and OMP-type relaxations can, with high probability, reach coalition structures whose value is close to optimal in polynomial time, whereas broad classes of dynamic-programming and MILP algorithms typically require exponential time to attain comparable solution quality.

We consider a finite set of agents N = {1, . . . , n}. A coalition is any nonempty subset S ⊆ N, and a coalition structure is a partition

P = {S_1, . . . , S_m}, with pairwise disjoint nonempty S_i ⊆ N and S_1 ∪ · · · ∪ S_m = N.

The value of a coalition structure is

V(P) = Σ_{S ∈ P} v(S),

and the coalition structure generation problem (CSG) is

max_P V(P) over all coalition structures P of N.

The number of possible coalition structures (Bell numbers) grows super-exponentially in n [1]. Exact dynamic-programming algorithms compute optimal values for all subsets S ⊆ N and then reconstruct an optimal partition in O(3^n) time [1,3]. MILP formulations encode CSG as a set-partitioning problem with binary variables indicating coalition membership; branch-and-bound solvers then explore the search tree.
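The O(3^n) dynamic programme can be sketched as follows. This is the textbook subset DP, f(S) = max over coalitions C ⊆ S containing a fixed agent of v(C) + f(S \ C), not the paper's specific variant; coalition values absent from the dictionary default to 0:

```python
def csg_dp(n, v):
    """Optimal coalition-structure welfare by subset DP in O(3^n) time.
    v: dict mapping frozenset coalitions to values (missing -> 0)."""
    full = (1 << n) - 1
    f = [0.0] * (1 << n)   # f[S] = best partition value of the agent set S
    for S in range(1, full + 1):
        low = S & -S       # fix S's lowest agent: the coalition containing it
        best = float("-inf")  # is unique, so each split is counted once
        C = S
        while C:           # enumerate nonempty submasks C of S
            if C & low:
                members = frozenset(i for i in range(n) if C >> i & 1)
                best = max(best, v.get(members, 0.0) + f[S ^ C])
            C = (C - 1) & S
        f[S] = best
    return f[full]
```

The total work over all (S, C) pairs is Σ_S 2^{|S|} = 3^n, matching the bound cited above.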

We view an algorithm as producing a sequence of feasible coalition structures with nondecreasing lower bounds on the optimal value. Dynamic-programming methods are essentially “all-or-nothing”: they must process a large fraction of subsets before producing meaningful solutions. MILP solvers, while anytime in principle, may still require exponential exploration before improving naive solutions.
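The anytime notion can be made concrete with a small tracker (the class and names are ours, purely illustrative) that records the nondecreasing incumbent value V(P_t) as improvements arrive:

```python
import time

class AnytimeTracker:
    """Records (elapsed_seconds, best_value) whenever a new incumbent
    improves the nondecreasing lower bound V(P_t)."""
    def __init__(self):
        self.t0 = time.perf_counter()
        self.best = float("-inf")
        self.curve = []  # improvement events: (seconds, value)

    def report(self, value):
        if value > self.best:
            self.best = value
            self.curve.append((time.perf_counter() - self.t0, value))

tracker = AnytimeTracker()
for v_t in [1.0, 3.0, 2.0, 5.0]:   # values emitted by some CSG algorithm
    tracker.report(v_t)
print(tracker.best, [v for _, v in tracker.curve])  # 5.0 [1.0, 3.0, 5.0]
```

Comparing algorithms then amounts to comparing how fast these curves approach OPT.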

By contrast, we study relaxations that operate in a lower-dimensional decision space by representing coalition structures through sparse vectors and applying greedy or convex optimisation methods, such as OMP and ℓ1-penalised programmes. These algorithms naturally produce improving feasible partitions as iterations proceed.

In this section we formalise and analyse the anytime behaviour of three broad classes of algorithms for the classical CSG problem: (i) dynamic-programming (DP) algorithms, (ii) mixed-integer linear programming (MILP) approaches based on branch-and-bound with polynomial-time relaxations, and (iii) low-dimensional relaxations obtained by greedy or convex sparsity-promoting procedures. The focus is on rigorous comparison under a simple random model for coalition values that preserves the standard deterministic problem formulation. Throughout, N = {1, . . . , n} and v : 2^N → R is the characteristic function.

We assume that coalition values are generated according to the following “sparse synergy” model. Let T = {T_1, . . . , T_k} be a fixed family of pairwise disjoint template coalitions, with k ≤ n and |T_j| ≥ 1. For each T_j, assign a positive weight w_j > 0. For any coalition S ⊆ N, define

v(S) = Σ_{j : T_j ⊆ S} w_j + ξ(S),   (3.1)

where the ξ(S) are independent, mean-zero noise terms satisfying the sub-Gaussian tail bound

P(|ξ(S)| > t) ≤ 2 exp(−t² / (2σ²)) for all t ≥ 0,

for some σ > 0. Hence the T_j act as “true” synergy patterns, each contributing w_j whenever it is fully contained in a chosen coalition. We write OPT = max_P V(P) for the optimal welfare.

The optimal solution under (3.1) is the coalition structure P⋆ that consists exactly of the k templates (and singletons for the remaining agents), with value Σ_{j=1}^k w_j up to the noise. Crucially, this is a standard CSG instance; the randomness enters only through the value oracle. All probabilities are with respect to the draw of (ξ(S))_{S⊆N}.
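A value oracle for the model (3.1) is straightforward to simulate; the templates, weights, and σ below are illustrative choices of ours, and Gaussian noise is used as one convenient sub-Gaussian distribution:

```python
import random

def make_sparse_synergy_oracle(templates, weights, sigma, seed=0):
    """Return an oracle v(S) = sum of w_j over templates T_j contained in S,
    plus mean-zero Gaussian noise (sub-Gaussian with parameter sigma).
    Noise is drawn lazily but memoised so repeated queries of S agree."""
    rng = random.Random(seed)
    noise = {}
    def v(S):
        S = frozenset(S)
        if S not in noise:
            noise[S] = rng.gauss(0.0, sigma)
        synergy = sum(w for T, w in zip(templates, weights) if T <= S)
        return synergy + noise[S]
    return v

# illustrative instance: two disjoint templates among n = 6 agents
v = make_sparse_synergy_oracle([frozenset({0, 1}), frozenset({2, 3})],
                               [4.0, 3.0], sigma=0.1)
```

Memoisation matters: the model fixes one noise draw per coalition, so the oracle must return the same value when the same S is queried twice.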

An anytime CSG algorithm produces, as time t increases, a sequence of feasible coalition structures P t with nondecreasing values V (P t ). We compare algorithms by the rate at which V (P t ) approaches OPT as a function of computational time.

We distinguish three algorithmic classes.

Class A_DP. Dynamic-programming algorithms that compute values v(S) and optimal sub-partitions for all subsets S in an order that is subset-closed and size-monotone: if S is processed at time t, then all S′ ⊂ S are processed at some time t′ ≤ t. This captures standard DP schemes [1,3].

Class A_MILP. MILP algorithms based on branch-and-bound where each node bound is obtained by solving a polynomial-time convex relaxation of the set-partitioning formulation. We allow generation of cutting planes provided separation is polynomial-time. These are generic algorithmic frameworks into which practical solvers fall.

Class A_sparse. Algorithms that operate on the k candidate coalitions {T_1, . . . , T_k} (or a superset) and maintain a sparse incidence vector x ∈ {0, 1}^M over a collection of candidate coalitions {C_1, . . . , C_M}. In each iteration, a greedy or ℓ1-regularised step is taken to increase the total value

Σ_{i=1}^M v(C_i) x_i,

while ensuring feasibility by merging overlapping coalitions when necessary.

Orthogonal matching pursuit (OMP) and ℓ1-penalised linear programs are canonical examples [4,6].

Let γ = min_{j ≠ j′} (w_j − w_{j′})_+ denote the minimal positive gap between (distinct) weights, and set W = max_j w_j.

We assume the margin condition

min_j w_j ≥ γ > 4σ log(2n).   (3.2)

This ensures that, with high probability, the noise cannot reverse the ordering of true versus spurious coalitions.

By standard sub-Gaussian concentration, for every fixed S,

P(|ξ(S)| > t) ≤ 2 exp(−t² / (2σ²)).

Applying a union bound over all S that contain at most one template (at most n·2^n such coalitions), we have with probability at least 1 − 1/n that

|ξ(S)| ≤ 2σ log(2n) for all such S.   (3.4)

Theorem 3.1 (Sparse recovery). Any algorithm in A_sparse that in each iteration selects a coalition S of maximal residual value among its current candidates identifies all T_j in at most k iterations, and the resulting coalition structure P̂ satisfies

V(P̂) ≥ OPT − 2kσ log(2n).

In particular, if γ ≥ 4σ log(2n), then V(P̂) ≥ (1 − ε)·OPT with ε = 2σ log(2n)/γ in time polynomial in n.

Proof. Fix a realisation satisfying (3.4). For any S containing T_j and no other template,

v(S) = w_j + ξ(S) ≥ w_j − 2σ log(2n).

For any S′ containing no template,

v(S′) = ξ(S′) ≤ 2σ log(2n).

By (3.2), w_j − 4σ log(2n) > 0. Hence for each j and any S′ without a template, v(S) − v(S′) ≥ w_j − 4σ log(2n) > 0.

Thus in each iteration the maximal-value coalition must contain some unselected T_j. Since template coalitions are disjoint, selecting one does not reduce the value of another. After at most k iterations, all templates are selected.

The value difference from the optimal Σ_j w_j is a sum of noise terms, each bounded in absolute value by 2σ log(2n), hence at most 2kσ log(2n), proving the claim.
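A minimal noiseless demonstration of the greedy step in Theorem 3.1 (our own sketch; candidates are restricted to coalitions containing at most one template, as in the analysis, and leftover agents become singletons):

```python
def greedy_csg(candidates, v, n):
    """Greedily select disjoint positive-value coalitions in decreasing value
    order, then complete the partition with singletons."""
    chosen, covered = [], set()
    for S in sorted(candidates, key=v, reverse=True):
        if S.isdisjoint(covered) and v(S) > 0:
            chosen.append(S)
            covered |= S
    chosen += [frozenset({a}) for a in range(n) if a not in covered]
    return chosen

# noiseless sparse-synergy instance: templates {0,1} and {2,3}, weights 4 and 3
def v(S):
    return sum(w for T, w in [(frozenset({0, 1}), 4.0),
                              (frozenset({2, 3}), 3.0)] if T <= S)

cands = [frozenset(c) for c in [{0, 1}, {2, 3}, {1, 2}, {4}]]
print(greedy_csg(cands, v, n=5))
```

Each iteration touches at most M candidates, so with polynomially many candidates the whole run is polynomial in n, in contrast to the exact methods analysed next.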

We next show that DP algorithms in A_DP cannot produce near-optimal coalition structures without processing an exponential number of subsets.

Let F be the collection of all subsets S that contain exactly one template T_j. Construct the instance so that the T_j are placed among the agents such that every such S has size at least n/2 (for example, distribute templates across disjoint halves).

Theorem 3.2 (DP lower bound). Under (3.1), with probability at least 1 − 1/n, any algorithm in A_DP must process at least 2^{αn} subsets, for some α > 0, before producing a coalition structure P_t with V(P_t) ≥ OPT − σ log(2n).

Proof. Consider a DP that processes subsets in increasing size. Since each T_j is embedded in a subset of size at least n/2, no coalition S containing any T_j is encountered before all subsets of size < n/2 are processed. The number of subsets of size less than n/2 is at least 2^{n−2}, so this step alone requires processing 2^{αn} subsets for some α > 0.

Before encountering any such S, all feasible coalition structures P that the algorithm can construct from recorded sub-partitions must exclude all T_j, hence V(P) is at most the noise level, bounded in absolute value by 2σ log(2n). Comparing with OPT = Σ_j w_j and using w_j ≥ γ, we obtain V(P) ≤ OPT − σ log(2n). The same argument applies to any subset-closed ordering: if subsets of size < n/2 are not fully processed, then some subset containing a T_j has an unprocessed proper subset, violating subset-closedness.

The result follows.
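The counting step in the proof can be verified exactly: the number of subsets of size below n/2 (computed here with binomial sums, our own check) dominates 2^{n−2}, hence grows as 2^{αn}:

```python
from math import comb

def count_small_subsets(n):
    """Number of subsets of an n-set with size strictly less than n/2."""
    return sum(comb(n, k) for k in range(0, (n + 1) // 2))

for n in [8, 16, 24]:
    c = count_small_subsets(n)
    print(n, c, c >= 2 ** (n - 2))  # exponential lower bound holds
```

For n = 8, for example, there are 93 such subsets against the bound 2^6 = 64, and the gap widens with n.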

We finally consider algorithms in A_MILP. The MILP formulation for CSG is

max_x Σ_{i=1}^M v(C_i) x_i
s.t. Σ_{i : a ∈ C_i} x_i = 1 for all a ∈ N,
x ∈ {0, 1}^M,

where {C_1, . . . , C_M} are all feasible coalitions. The LP relaxation allows x ∈ [0, 1]^M. For our model, all C not containing a template have small values, while each C containing a template T_j has value w_j + ξ(C).
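For a tiny instance the set-partitioning programme can be written out and solved by brute force over x ∈ {0, 1}^M (stdlib only, our own illustration; a real MILP solver would branch and bound rather than enumerate, and this is only feasible for n ≤ 3 or so since M = 2^n − 1):

```python
from itertools import combinations

def set_partition_bruteforce(n, v):
    """Solve max sum_i v(C_i) x_i subject to the partition constraints
    sum_{i: a in C_i} x_i = 1, by enumerating all x in {0,1}^M."""
    agents = range(n)
    cols = [frozenset(c) for r in range(1, n + 1)
            for c in combinations(agents, r)]   # columns C_1..C_M
    best_val, best = float("-inf"), None
    for mask in range(1 << len(cols)):          # all x in {0,1}^M
        picked = [cols[i] for i in range(len(cols)) if mask >> i & 1]
        # partition constraint: every agent covered exactly once
        if all(sum(a in C for C in picked) == 1 for a in agents):
            val = sum(v(C) for C in picked)
            if val > best_val:
                best_val, best = val, picked
    return best_val, best
```

Writing the columns out this way makes the exponential width of the formulation, and hence the work a branch-and-bound tree must organise, explicit.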

Let S be the set of coalitions containing at most one template, and suppose all T j belong to S. The LP relaxation at the root can assign fractional weight on many spurious coalitions in S that overlap templates, achieving a relaxation value far above any integral assignment excluding templates. This creates a substantial integrality gap.

Theorem 3.3 (MILP lower bound). Under (3.1), with probability at least 1 − 1/n, any algorithm in A_MILP must explore at least 2^{βn} nodes of the branch-and-bound tree, for some β > 0, before producing a feasible coalition structure P_t with V(P_t) ≥ OPT − σ log(2n).

Proof. Consider the branch-and-bound tree defined by branching on the variables x_i. At the root, the LP relaxation chooses fractional x_i supported on many overlapping coalitions, including those that partially cover different T_j simultaneously. The relaxation value can thus exceed any integral feasible assignment by at least γ − 4σ log(2n) > 0 due to (3.2). Hence the root cannot be fathomed (pruned), and similar reasoning applies near the root.

To reduce the integrality gap, the algorithm must assign integral values to a collection of variables that “pin down” every template, effectively enumerating exponentially many combinations of overlapping coalitions.

Formally, let U be the set of feasible x in the LP relaxation that use at most one template at fractional level. The number of distinct supports for such fractional solutions is exponential in n due to overlaps among coalitions containing substructures of the T_j. Each node that fixes a subset of variables still leaves exponentially many fractional solutions in U unless a prohibitive number of variables are fixed; until then, the LP upper bound exceeds OPT − σ log(2n), preventing pruning. Therefore, at least 2^{βn} nodes must be explored before the algorithm encounters a node where all templates are forced integral, at which point a near-optimal feasible solution can be constructed.

Combining Theorems 3.1, 3.2 and 3.3, we obtain a probabilistic anytime separation between A_sparse and the exact algorithm classes: with high probability, algorithms in A_sparse reach near-optimal welfare in polynomial time, whereas algorithms in A_DP and A_MILP require exponential time to do the same.

The results above show that, even for the classical deterministic CSG formulation, simple sparse relaxations can achieve near-optimal coalition structures in polynomial time with high probability under a basic random value model, while broad classes of exact methods exhibit exponentially poor anytime behaviour. These findings complement existing worst-case complexity results for CSG [1] and highlight the potential of sparse convex and greedy techniques in large-scale coalition formation.

We analysed the classical coalition structure generation problem under a simple random value model while keeping the underlying optimisation task entirely standard. Our main contribution is a rigorous anytime separation between sparse greedy or ℓ1-based relaxations and two broad classes of exact algorithms: dynamic-programming approaches and MILP methods relying on polynomial-time relaxations. With high probability, the sparse relaxations recover coalition structures whose welfare is arbitrarily close to optimal in polynomial time, whereas the exact methods require exponential time to reach comparable solution quality. These findings provide a theoretical explanation for the empirical observation that approximate methods can become competitive long before exact algorithms converge, even when full optimality is ultimately attainable.


This content is AI-processed based on open access ArXiv data.
