(Non-)existence of Polynomial Kernels for the Test Cover Problem


The input of the Test Cover problem consists of a set $V$ of vertices and a collection ${\cal E}=\{E_1,\dots,E_m\}$ of distinct subsets of $V$, called tests. A test $E_q$ separates a pair $v_i,v_j$ of vertices if $|\{v_i,v_j\}\cap E_q|=1$. A subcollection ${\cal T}\subseteq {\cal E}$ is a test cover if each pair $v_i,v_j$ of distinct vertices is separated by a test in ${\cal T}$. The objective is to find a test cover of minimum cardinality, if one exists. This problem is NP-hard. We consider two parameterizations of the Test Cover problem with parameter $k$: (a) decide whether there is a test cover with at most $k$ tests, and (b) decide whether there is a test cover with at most $|V|-k$ tests. Both parameterizations are known to be fixed-parameter tractable. We prove that neither admits a polynomial-size kernel unless $NP\subseteq coNP/poly$. Our proofs use the cross-composition method recently introduced by Bodlaender et al. (2011) and parametric duality introduced by Chen et al. (2005). The result for parameterization (a) was an open problem (private communications with Henning Fernau and Jiong Guo, Jan.-Feb. 2012). We also show that parameterization (a) admits a polynomial-size kernel if the size of each test is upper-bounded by a constant.


💡 Research Summary

The paper investigates the kernelization landscape of the classic Test Cover problem under two natural parameterizations. In the Test Cover problem we are given a vertex set V and a family ℰ={E₁,…,E_m} of subsets of V (called tests). A test separates a pair of distinct vertices v_i, v_j if exactly one of them belongs to the test. A subfamily T⊆ℰ is a test cover if every unordered pair of vertices is separated by at least one test in T. The objective is to find a test cover of minimum cardinality; the decision version is NP‑hard.
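The separation condition and the test-cover property can be stated directly in code. The following is a minimal sketch (the function names and the small example instance are illustrative, not taken from the paper):

```python
from itertools import combinations

def separates(test, u, v):
    """A test separates the pair u, v if exactly one of them lies in the test."""
    return (u in test) != (v in test)

def is_test_cover(vertices, tests):
    """Check that every unordered pair of distinct vertices is separated
    by at least one test in the given subfamily."""
    return all(any(separates(t, u, v) for t in tests)
               for u, v in combinations(sorted(vertices), 2))

# Example: three small tests suffice to separate all six pairs of four vertices.
V = {1, 2, 3, 4}
E = [{1, 2}, {1, 3}, {1}]
print(is_test_cover(V, E))  # True
```

Note that a test cover of $V$ must contain at least $\lceil \log_2 |V| \rceil$ tests, since each vertex must receive a distinct membership pattern over the chosen tests.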

The authors focus on two parameterizations with respect to an integer k: (a) “Does there exist a test cover of size at most k?” and (b) “Does there exist a test cover of size at most |V|−k?”. Both versions have been shown to be fixed‑parameter tractable (FPT): for (a) a simple branching algorithm runs in O(2^k·poly(n)) time, and for (b) a complementary branching algorithm runs in O(2^{|V|−k}·poly(n)) time. However, whether either version admits a polynomial‑size kernel remained open.
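Parameterization (a) asks a plain yes/no question that a brute-force search makes concrete. The sketch below tries every subfamily of at most k tests; it is exponential in the number of tests and serves only to pin down the decision problem, not to reflect the FPT algorithms mentioned above (the helper names are illustrative):

```python
from itertools import combinations

def separates(test, u, v):
    """Exactly one of u, v lies in the test."""
    return (u in test) != (v in test)

def has_test_cover_of_size(vertices, tests, k):
    """Parameterization (a): does some subfamily of at most k tests
    separate every pair of distinct vertices? Naive enumeration."""
    pairs = list(combinations(sorted(vertices), 2))
    for size in range(k + 1):
        for subset in combinations(tests, size):
            if all(any(separates(t, u, v) for t in subset) for u, v in pairs):
                return True
    return False

# Two singleton tests already separate all pairs of three vertices.
V = {1, 2, 3}
E = [frozenset({1}), frozenset({2}), frozenset({1, 2})]
print(has_test_cover_of_size(V, E, 2))  # True
```

Parameterization (b) asks the same question with the budget |V| - k, so the same checker applies with `k` replaced by `len(vertices) - k`.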

To answer this, the authors employ the cross-composition technique introduced by Bodlaender et al. (2011). They take t instances of an NP-complete problem (such as Clique or Set Cover) and combine them into a single instance of Test Cover. The construction is carefully designed so that the resulting parameter k is only O(log t), i.e., logarithmic in the number of original instances. If a polynomial kernel existed for parameter (a), this composition would yield a polynomial-size instance equivalent to the disjunction of the original t instances, implying NP⊆coNP/poly. Since this inclusion is widely believed to be false, the authors conclude that Test Cover parameterized by k does not admit a polynomial kernel unless the unlikely collapse NP⊆coNP/poly occurs.

The second parameterization is handled via parametric duality, a concept introduced by Chen et al. (2005). The authors show that the “dual” of the k‑parameterized problem is precisely the “|V|−k” version. By applying the duality transformation to the cross‑composition lower bound for (a), they obtain an analogous lower bound for (b). Consequently, both parameterizations lack polynomial kernels under the same complexity‑theoretic assumption.

In addition to these negative results, the paper identifies a tractable special case. If the size of each test is bounded by a constant c (i.e., |E_i|≤c for all i), then the k‑parameterized problem does admit a polynomial kernel. The authors present a simple reduction rule set that repeatedly removes redundant tests and merges indistinguishable vertices. Because each test can involve at most c vertices, the number of distinct “signatures” a vertex can have with respect to the remaining tests is bounded by O(k^c). This yields a kernel whose size is polynomial in k (specifically O(k^c)). This positive result matches intuition: when tests are small, the combinatorial explosion is limited, making efficient preprocessing feasible.
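The "signature" idea behind the merging rule can be sketched as follows: a vertex's signature is the set of tests containing it, vertices with equal signatures can never be separated, and a test cover exists iff all signatures are distinct. This is a simplified illustration of the rule described above, not the paper's full kernelization (the function name is illustrative):

```python
def merge_indistinguishable(vertices, tests):
    """Group vertices by signature (the indices of the tests containing them).
    Vertices sharing a signature cannot be separated by any test, so the
    instance collapses to one representative per signature class; a test
    cover exists iff every class is a singleton."""
    classes = {}
    for v in vertices:
        signature = frozenset(i for i, t in enumerate(tests) if v in t)
        classes.setdefault(signature, []).append(v)
    representatives = [group[0] for group in classes.values()]
    solvable = all(len(group) == 1 for group in classes.values())
    return representatives, solvable

# Vertices 1..4 get signatures {0,1}, {0}, {1}, {} respectively: all distinct.
reps, ok = merge_indistinguishable([1, 2, 3, 4], [{1, 2}, {1, 3}])
print(len(reps), ok)  # 4 True
```

With tests of size at most c, a vertex can appear in few tests relative to the pairs a small solution must separate, which is what bounds the number of distinct signatures and hence the kernel size by O(k^c).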

Overall, the paper makes three major contributions:

  1. Kernel lower bounds – It settles an open question by proving that neither the “≤k” nor the “≤|V|−k” parameterization of Test Cover admits a polynomial kernel unless NP⊆coNP/poly. The proof combines cross‑composition with parametric duality, showcasing the power of these modern techniques.

  2. Methodological synthesis – By integrating cross‑composition (a tool for constructing lower bounds) with parametric duality (a tool for transferring results between complementary parameters), the authors provide a template that can be reused for other problems with dual parameterizations.

  3. Positive kernel for bounded test size – The paper identifies a natural restriction (constant‑size tests) under which a polynomial kernel does exist, and supplies an explicit kernelization algorithm with size O(k^c). This result is practically relevant for applications where each test corresponds to a small, fixed‑size measurement or experiment.

The work therefore clarifies the fine line between tractable and intractable preprocessing for Test Cover. It confirms that, in the general case, efficient data reduction to a polynomially bounded instance is unlikely, while also highlighting that realistic constraints on test size can restore kernelizability. The findings have implications for related covering and separation problems, and they enrich the toolbox of parameterized complexity with a clean example of how cross‑composition and duality can be jointly leveraged.