On the Complexity of Trial and Error
Motivated by certain applications from physics, biochemistry, economics, and computer science, in which the objects under investigation are not accessible because of various limitations, we propose a trial-and-error model to examine algorithmic issues in such situations. Given a search problem with a hidden input, we are asked to find a valid solution; to do so, we can propose candidate solutions (trials) and use observed violations (errors) to prepare future proposals. In accordance with our motivating applications, we consider the fairly broad class of constraint satisfaction problems, and assume that errors are signaled by a verification oracle in the format of the index of a violated constraint (with the content of the constraint still hidden). Our discoveries are summarized as follows. On one hand, despite the seemingly very little information provided by the verification oracle, efficient algorithms do exist for a number of important problems. For the Nash, Core, Stable Matching, and SAT problems, the unknown-input versions are as hard as the corresponding known-input versions, up to a polynomial factor. We further give almost tight bounds on the trial complexities of the latter two problems. On the other hand, there are problems whose complexities are substantially increased in the unknown-input model. In particular, no time-efficient algorithms exist (under standard hardness assumptions) for the Graph Isomorphism and Group Isomorphism problems. The tools used to achieve these results include order theory, the strong ellipsoid method, and some non-standard reductions. Our model investigates the value of information, and our results demonstrate that the lack of input information can introduce various levels of extra difficulty. The model exhibits intimate connections with (and we hope can also serve as a useful supplement to) certain existing learning and complexity theories.
💡 Research Summary
The paper introduces a “trial‑and‑error” computational model motivated by scenarios in physics, biochemistry, economics, and computer science where the full input to a problem is inaccessible. Instead of receiving the hidden instance directly, an algorithm may propose candidate solutions (trials) and a verification oracle returns only the index of a violated constraint, keeping the constraint’s content hidden. This minimal feedback captures many real‑world situations where experiments or observations can only point out what is wrong without revealing the underlying structure.
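The interaction can be pictured as a simple propose/observe loop. The sketch below is an illustrative toy of our own, not a construction from the paper: the hidden instance is a single unknown integer `t` guarded by two constraints (`x >= t` at index 0, `x <= t` at index 1), and the oracle reports only the index of a violated constraint, never `t` itself.

```python
def trial_and_error(propose, oracle, max_trials=1000):
    """Generic loop: keep proposing trials until the verification
    oracle reports no violated constraint (returns None)."""
    history = []  # past (trial, violated-constraint-index) pairs
    for _ in range(max_trials):
        x = propose(history)
        j = oracle(x)  # only an index is revealed, never the constraint itself
        if j is None:
            return x  # valid solution found
        history.append((x, j))
    return None

# Toy hidden instance: an unknown integer t in [0, 100], with
# constraint 0 meaning "x >= t" and constraint 1 meaning "x <= t".
def make_oracle(t):
    def oracle(x):
        if x < t:
            return 0  # constraint 0 violated (the value of t stays hidden)
        if x > t:
            return 1  # constraint 1 violated
        return None
    return oracle

def propose(history):
    # Binary search driven purely by past violation indices.
    lo, hi = 0, 100
    for x, j in history:
        if j == 0:
            lo = x + 1  # trial was too small
        else:
            hi = x - 1  # trial was too large
    return (lo + hi) // 2

print(trial_and_error(propose, make_oracle(37)))  # prints 37
```

Even this trivial instance shows the model's character: each error message carries only one bit of usable information here, yet a well-chosen trial strategy still converges quickly.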
The authors focus on a broad class of constraint satisfaction problems (CSPs) and investigate how the lack of input information affects algorithmic complexity. Their contributions fall into two complementary parts.
Positive results – problems that remain tractable.
For several central problems—computing a Nash equilibrium, finding an allocation in the core of a cooperative game, solving stable matching instances, and finding a satisfying assignment for SAT—the unknown‑input version is shown to be no harder than the standard version up to a polynomial factor. In particular, for SAT and stable matching the authors give almost tight bounds on the trial complexity (the number of oracle calls required). The key technical tools are order‑theoretic representations of the constraint set and a strong ellipsoid method that can turn the sparse “violation index” feedback into useful linear inequalities. By iteratively shrinking the feasible region, the algorithms locate a valid solution in polynomial time and with a number of trials that nearly matches the known lower bounds.
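To make the flavor of learning from violation indices concrete, here is a small self-contained sketch for the hidden-SAT setting (our own toy illustration, not the paper's algorithm): when clause `j` is reported violated by assignment `x`, the learner knows clause `j` consists only of literals falsified by `x`, so it intersects these candidate sets and, by brute force over a few variables, proposes assignments consistent with everything learned so far.

```python
from itertools import product

def make_oracle(clauses):
    """Hidden CNF formula; literal +i means variable i true, -i means false.
    Returns the index of some violated clause, or None if all are satisfied."""
    def oracle(x):
        sat = {i if x[i - 1] else -i for i in range(1, len(x) + 1)}
        for j, clause in enumerate(clauses):
            if not clause & sat:
                return j  # clause j is violated; its literals stay hidden
        return None
    return oracle

def solve_hidden_sat(n, oracle):
    """Find a satisfying assignment using violation indices alone
    (brute-force proposal step: suitable for small n only)."""
    learned = {}  # clause index -> set of literals the clause could contain
    while True:
        # Propose an assignment satisfying >= 1 candidate literal
        # of every clause observed so far.
        for bits in product([False, True], repeat=n):
            x = list(bits)
            sat = {i if x[i - 1] else -i for i in range(1, n + 1)}
            if all(cand & sat for cand in learned.values()):
                break
        else:
            return None  # learned constraints are jointly unsatisfiable
        j = oracle(x)
        if j is None:
            return x  # satisfying assignment found
        # Clause j can only contain literals falsified by x: intersect.
        falsified = {-lit for lit in sat}
        learned[j] = learned.get(j, falsified) & falsified
```

Each violation either reveals a new clause index or strictly shrinks one candidate set (the trial satisfied a candidate literal that the true clause evidently lacks), so the loop terminates; the paper's actual SAT result obtains its nearly tight trial complexity with considerably more care.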
Negative results – problems that become substantially harder.
In contrast, the paper proves that for Graph Isomorphism and Group Isomorphism no time‑efficient algorithm exists in the trial‑and‑error model under standard hardness assumptions. The authors construct non‑standard reductions that embed the difficulty of these isomorphism problems into the limited‑feedback setting, showing that the index of a violated constraint does not convey enough structural information to break the inherent symmetry. Consequently, any algorithm for these problems requires super‑polynomial time or a super‑polynomial number of trials.
Methodological contributions.
The work blends concepts from order theory (modeling constraints as a partially ordered set), convex optimization (using a robust ellipsoid algorithm to handle the sparse oracle information), and novel reduction techniques tailored to the verification‑oracle model. It also draws connections to learning theory, especially active learning and mistake‑bound models, by interpreting each violation as a “mistake” that guides future hypotheses.
Implications and future directions.
The trial‑and‑error framework provides a quantitative lens for assessing the value of information: some problems are resilient to severe information loss, while others critically depend on full input visibility. This dichotomy suggests a research agenda for classifying problems according to their “information‑sensitivity” and for designing algorithms that optimally exploit whatever limited feedback is available. Moreover, the model is directly applicable to experimental sciences where only error signals can be observed, offering a principled approach to algorithm design under realistic observational constraints.
In summary, the paper establishes a rich theory of computation with minimal feedback, delivering both algorithmic upper bounds for several classic problems and hardness results for others, thereby deepening our understanding of how information scarcity reshapes computational complexity.