The Complexity of Homomorphism Reconstruction Revisited
We revisit the algorithmic problem of reconstructing a graph from homomorphism counts, first studied by Böker et al. (STACS 2024): given graphs $F_1,\ldots,F_k$ and counts $m_1,\ldots,m_k$, decide whether there is a graph $G$ such that the number of homomorphisms from $F_i$ to $G$ is $m_i$ for all $i$. We prove that the problem is NEXP-hard if the counts $m_i$ are specified in binary and $\Sigma_2^p$-complete if they are specified in unary. Furthermore, as a positive result, we show that the unary version can be solved in polynomial time if the constraint graphs are stars of bounded size.
💡 Research Summary
This paper investigates the algorithmic problem of reconstructing a graph from prescribed homomorphism counts, a question first systematically studied by Böker et al. (STACS 2024). Formally, an instance consists of pairs $(F_i, m_i)$ where each $F_i$ is a graph and each $m_i$ is a non-negative integer. The task (HomRec) asks whether there exists a graph $G$ such that, for every $i$, the number of homomorphisms from $F_i$ to $G$ equals $m_i$. Two encoding regimes are considered: binary (the original setting) and unary (the "natural" setting, where the magnitude of the counts contributes to the input size).
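To make the problem definition concrete, here is a small illustration (not from the paper): verifying a candidate graph $G$ against a HomRec instance by brute-force homomorphism counting. The helper names `hom_count` and `satisfies_instance` are hypothetical, and graphs are given as edge lists over vertices $0,\ldots,n-1$.

```python
from itertools import product

def hom_count(F_edges, F_n, G_edges, G_n):
    """Count maps phi: V(F) -> V(G) sending every edge of F to an edge of G."""
    adj = [[False] * G_n for _ in range(G_n)]
    for u, v in G_edges:
        adj[u][v] = adj[v][u] = True
    return sum(
        all(adj[phi[u]][phi[v]] for u, v in F_edges)
        for phi in product(range(G_n), repeat=F_n)
    )

def satisfies_instance(G_edges, G_n, constraints):
    """constraints: list of (F_edges, F_n, m_i); G is a solution iff all counts match."""
    return all(hom_count(Fe, Fn, G_edges, G_n) == m for Fe, Fn, m in constraints)
```

For example, the triangle $K_3$ admits 6 homomorphisms from the single edge $K_2$ (two per edge) and 6 from $K_3$ itself, so $G = K_3$ solves the instance $\{(K_2, 6), (K_3, 6)\}$. Note that this check is exponential in $|F|$; it only illustrates the decision problem, not an efficient procedure.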
The first major contribution is a strong lower bound for the binary version. By a polynomial-time many-one reduction from the NEXP-complete SuccinctClique problem, the authors show that HomRec is NEXP-hard. The reduction encodes a Boolean circuit into a coloured graph and uses specially crafted regularity constraints $F\equiv(A,B,n,m)$ to force precise relationships between colour classes. A Cauchy–Schwarz argument guarantees that these constraints enforce uniform degree patterns, ensuring that every homomorphism count reflects the correct evaluation of the circuit.
The second contribution addresses the unary encoding, denoted UnHomRec. When the counts are given in unary, any satisfiable instance has a solution with at most $\sum_i m_i \cdot |F_i|$ vertices, because vertices not appearing in the image of any required homomorphism can be deleted without affecting the counts. Consequently, UnHomRec lies in $\mathrm{NP}^{\#\mathrm{P}}$: a nondeterministic machine guesses a candidate graph of polynomial size and verifies each constraint using a #P oracle; the paper sharpens this to membership in $\Sigma_2^p$. The authors prove $\Sigma_2^p$-hardness by a reduction from quantified Boolean formulas of the form $\exists x\,\forall y\;\varphi(x,y)$. The existential variables are encoded by the choice of the guessed graph, while the universal quantifier is simulated by requiring that all possible assignments to the universally quantified variables satisfy the homomorphism constraints. Together, these bounds establish $\Sigma_2^p$-completeness of the unary version.
The paper then turns to a positive tractability result for a restricted class of constraints. When every constraint graph $F_i$ is a star (a tree of height one) and the counts are given in unary, the reconstruction problem becomes polynomial-time solvable. The key observation is that the number of homomorphisms from a star into a target graph depends only on the degree sequence of the target. Using this, the authors design a dynamic-programming algorithm that computes a feasible degree sequence satisfying all star-homomorphism counts. Once a suitable degree sequence is found, a concrete graph can be constructed via the classic Havel–Hakimi algorithm (or its directed analogue). The running time is $O(m\ell^2)$, where $\ell$ is the maximum size of any star constraint and $m$ is the largest count among the constraints. Because homomorphism counts for stars of size up to $\ell$ also determine subgraph counts for those stars, the same algorithm yields a polynomial-time solution for the analogous subgraph-count reconstruction problem (StarSubRec).
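The degree-sequence observation is easy to verify directly: a homomorphism from the star $S_\ell$ with $\ell$ leaves picks an image $v$ for the centre and then maps each leaf independently to a neighbour of $v$, so $\hom(S_\ell, G) = \sum_v \deg(v)^{\ell}$. The sketch below (with assumed function names; it does not reproduce the paper's dynamic program for finding the degree sequence) computes star counts from a degree sequence and realizes a graphical sequence via Havel–Hakimi.

```python
def star_hom_counts(degrees, max_leaves):
    """hom(S_ell, G) for ell = 1..max_leaves: each centre image v
    contributes deg(v)**ell, since leaves map independently to neighbours of v."""
    return [sum(d ** ell for d in degrees) for ell in range(1, max_leaves + 1)]

def havel_hakimi(degrees):
    """Realize a degree sequence as a simple graph (edge list), or None if
    the sequence is not graphical."""
    deg = list(degrees)
    nodes = list(range(len(deg)))
    edges = []
    while True:
        nodes.sort(key=lambda i: -deg[i])   # highest remaining degree first
        v = nodes[0]
        d = deg[v]
        if d == 0:
            return edges                    # all demands satisfied
        if d > len(nodes) - 1:
            return None
        deg[v] = 0
        for u in nodes[1 : d + 1]:          # connect v to the d next-largest
            if deg[u] == 0:
                return None                 # not graphical
            deg[u] -= 1
            edges.append((v, u))
```

For instance, the degree sequence $(2,2,2)$ yields star counts $[6, 12]$ for $\ell = 1, 2$, and `havel_hakimi([2, 2, 2])` returns the three edges of a triangle.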
The paper situates its contributions within a broader context. Homomorphism counts are central to graph isomorphism theory, fractional isomorphism, quantum isomorphism, and graph embeddings used in machine learning (e.g., graph kernels, graph neural networks). The hardness results clarify the limits of inverting such embeddings, while the tractable star case offers a concrete algorithmic tool for applications where only simple pattern counts are needed. The authors also discuss connections to reconstruction conjectures, extremal graph theory (homomorphism densities), and database theory (containment of conjunctive queries under bag semantics), highlighting the interdisciplinary relevance of the problem.
In summary, the paper delivers a detailed complexity landscape for homomorphism-based graph reconstruction: NEXP-hardness for binary counts, $\Sigma_2^p$-completeness for unary counts, and a polynomial-time algorithm for unary instances with star constraints. The techniques blend reductions from succinct combinatorial problems, careful use of colour-class regularity constraints, and classic degree-sequence algorithms, advancing both theoretical understanding and practical algorithm design for graph reconstruction from homomorphism data.