Polynomial-Time Approximation Schemes for Knapsack and Related Counting Problems using Branching Programs
We give a deterministic, polynomial-time algorithm for approximately counting the number of {0,1}-solutions to any instance of the knapsack problem. On an instance of length n with total weight W and accuracy parameter eps, our algorithm produces a (1 + eps)-multiplicative approximation in time poly(n, log W, 1/eps). We also give algorithms with identical guarantees for general integer knapsack, for the multidimensional knapsack problem (with a constant number of constraints), and for contingency tables (with a constant number of rows). Previously, only randomized approximation schemes were known for these problems, due to Morris and Sinclair and to Dyer. Our algorithms work by constructing small-width, read-once branching programs that approximate the underlying solution space under a carefully chosen distribution. As a byproduct of this approach, we obtain new query algorithms for learning functions of k halfspaces with respect to the uniform distribution on {0,1}^n. The running time of our algorithm is polynomial in the accuracy parameter eps. Previously, even for the case k = 2, only algorithms with an exponential dependence on eps were known.
💡 Research Summary
The paper presents the first deterministic fully polynomial‑time approximation schemes (FPTAS) for several classic #P‑hard counting problems, most notably the knapsack problem and its extensions. Previously, only fully polynomial‑time randomized approximation schemes (FPRAS) were known for these tasks, relying on rapidly mixing Markov chains or on randomized dynamic programming. The authors’ key contribution is a novel use of read‑once branching programs (ROBPs) of small width to approximate the solution space under carefully chosen distributions, thereby enabling deterministic counting with multiplicative (1 + ε) error guarantees.
For a single 0‑1 knapsack instance (weights a ∈ ℤⁿ₊, capacity b), the exact DP can be expressed as a ROBP whose width equals the total weight W = Σa_i + b, which may be exponential in n. The authors show how to compress this program by partitioning the state space (partial sums) into intervals such that each interval contains roughly the same number of feasible suffixes. By keeping only a representative state per interval, they obtain a new ROBP of width O((log W)/ε). This compressed program approximates the original feasible set within a (1 ± ε) factor, and its accepting strings can be counted exactly by a simple DP in O(n · polylog(W)/ε) time. The resulting deterministic algorithm runs in O(n³ · log W · log(n/ε)/ε) time.
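To make the starting point concrete, here is a minimal, illustrative sketch of the exact pseudopolynomial DP described above. It is not the paper's algorithm (the function name `count_knapsack` is our own): it keeps one count per achievable partial sum, so the corresponding ROBP has width up to b + 1, which is exactly the blow-up that the interval sparsification compresses.

```python
def count_knapsack(weights, b):
    """Count {0,1}-assignments x with sum(w_i * x_i) <= b via the exact DP.

    counts[s] = number of prefixes of the assignment realizing partial sum s.
    There is one state per achievable partial sum in [0, b], so the
    equivalent read-once branching program has width up to b + 1 -- this is
    the pseudopolynomial blow-up the paper's sparsification avoids.
    """
    counts = {0: 1}
    for w in weights:
        nxt = dict(counts)          # x_i = 0 branch: partial sum unchanged
        for s, c in counts.items():
            if s + w <= b:          # x_i = 1 branch, if still feasible
                nxt[s + w] = nxt.get(s + w, 0) + c
        counts = nxt
    return sum(counts.values())
```

For example, `count_knapsack([1, 2, 3], 3)` counts the five subsets of {1, 2, 3} whose elements sum to at most 3.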
The technique extends to multidimensional knapsack with a constant number k of constraints. Directly combining k single‑constraint ROBPs would blow up the width to W^k, so the authors first apply Dyer’s rounding technique to transform the multidimensional instance into one whose solution set is dense under a small‑space source. They then apply the same interval sparsification to this source, yielding a deterministic algorithm with running time O((n/ε)^{O(k²)} · log W). For integer knapsack (with each variable bounded by u_i) and for contingency tables with a constant number of rows, analogous constructions give deterministic algorithms whose running time is polynomial in the logarithms of the variable bounds and table margins and in 1/ε.
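The W^k width blow-up can be seen directly in the exact DP with vector-valued states. The brute-force sketch below (our own illustration, not the paper's rounding-based algorithm) tracks a k-tuple of partial sums, one coordinate per constraint, so its state space can grow like W^k:

```python
def count_multidim(columns, caps):
    """Count {0,1}-solutions x with A x <= b coordinatewise, k constraints.

    columns[i] is the column of A for variable i (a k-tuple of weights);
    caps is the k-tuple b. Each DP state is a k-tuple of partial sums, so
    the state space -- and hence the exact ROBP width -- can reach roughly
    W^k. The paper sidesteps this via Dyer-style rounding plus the same
    interval sparsification, rather than this brute-force DP.
    """
    k = len(caps)
    counts = {(0,) * k: 1}
    for col in columns:
        nxt = dict(counts)          # x_i = 0 branch
        for state, c in counts.items():
            new = tuple(state[j] + col[j] for j in range(k))
            if all(new[j] <= caps[j] for j in range(k)):
                nxt[new] = nxt.get(new, 0) + c
        counts = nxt
    return sum(counts.values())
```

With two variables whose columns are (1, 2) and (2, 1) and capacities (2, 2), exactly the empty set and each singleton are feasible, so the count is 3.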
A further contribution is the handling of non‑uniform distributions via “small‑space sources”. These are distributions generated by width‑S branching programs; many natural distributions (symmetric, product) fall into this class. The authors show that their sparsified ROBP can be combined with any such source to approximate the probability that a random draw satisfies the knapsack constraint, in time O(n³ · S · (S + log W) · log(n/ε)/ε). This yields a deterministic FPTAS for counting solutions of a given Hamming weight, among other variants.
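For a product distribution, the simplest small-space source (width S = 1), the same exact DP can propagate probability mass instead of counts. The following hedged sketch (names our own; this is the unsparsified computation, not the paper's small-width version) computes the acceptance probability for independent Bernoulli inputs:

```python
def accept_probability(weights, b, probs):
    """Pr[sum w_i X_i <= b] for independent X_i ~ Bernoulli(p_i).

    The counting DP generalizes verbatim: each state carries probability
    mass rather than a count, and infeasible transitions drop their mass.
    A product distribution is a width-1 small-space source; the paper's
    point is that the sparsified (small-width) program supports arbitrary
    width-S sources, whereas this exact version still has width up to b + 1.
    """
    mass = {0: 1.0}
    for w, p in zip(weights, probs):
        nxt = {}
        for s, m in mass.items():
            nxt[s] = nxt.get(s, 0.0) + m * (1 - p)        # X_i = 0
            if s + w <= b:
                nxt[s + w] = nxt.get(s + w, 0.0) + m * p  # X_i = 1
        mass = nxt
    return sum(mass.values())
```

With all p_i = 1/2 this recovers (number of solutions)/2^n, matching the uniform counting problem.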
Finally, the paper leverages the same ROBP approximation framework for learning theory. Functions that are Boolean combinations of k halfspaces can be viewed as functions of k knapsack‑style constraints. By reconstructing the underlying small‑width ROBP through membership queries, the authors obtain a deterministic algorithm that learns any such function to error ε with respect to the uniform distribution on {0,1}ⁿ in time (n/ε)^{O(k)}. This dramatically improves over prior results, which required an exponential dependence on 1/ε even for k = 2.
In summary, the authors develop a unified, deterministic approach based on small‑width read‑once branching programs that yields polynomial‑time (1 + ε) multiplicative approximations for a suite of counting problems—single and multidimensional knapsack, integer knapsack, low‑row contingency tables—and also provides efficient learning algorithms for functions of a few halfspaces. The work bridges gaps between counting, pseudorandomness, and learning, offering deterministic alternatives to previously randomized methods.