Implicit Abstraction Heuristics

State-space search with explicit abstraction heuristics is at the state of the art of cost-optimal planning. These heuristics are inherently limited, however, because the size of the abstract space must be bounded by some, even if very large, constant. Targeting this shortcoming, we introduce the notion of (additive) implicit abstractions, in which the planning task is abstracted by instances of tractable fragments of optimal planning. We then introduce a concrete setting of this framework, called fork-decomposition, that is based on two novel fragments of tractable cost-optimal planning. The induced admissible heuristics are then studied formally and empirically. This study testifies for the accuracy of the fork-decomposition heuristics, yet our empirical evaluation also stresses the tradeoff between their accuracy and the runtime complexity of computing them. Indeed, some of the power of the explicit abstraction heuristics comes from precomputing the heuristic function offline and then determining h(s) for each evaluated state s by a very fast lookup in a database. By contrast, while fork-decomposition heuristics can be calculated in polynomial time, computing them is far from being fast. To address this problem, we show that the time-per-node complexity bottleneck of the fork-decomposition heuristics can be successfully overcome. We demonstrate that an equivalent of the explicit abstraction notion of a database exists for the fork-decomposition abstractions as well, despite their exponential-size abstract spaces. We then verify empirically that heuristic search with the "databased" fork-decomposition heuristics favorably competes with the state of the art of cost-optimal planning.


💡 Research Summary

The paper addresses a fundamental limitation of current cost-optimal planning heuristics that rely on explicit abstractions. Explicit abstraction heuristics such as pattern databases and merge-and-shrink achieve fast heuristic evaluation by pre-computing a bounded abstract space, but the size of that space must be fixed in advance. Consequently, for complex domains the abstraction cannot be made sufficiently detailed, leading to weak lower bounds and poor search performance.

To overcome this bottleneck, the authors introduce the concept of implicit abstraction. Instead of constructing a single, explicitly enumerated abstract state space, an implicit abstraction decomposes the original planning task into a collection of tractable fragments—sub‑problems for which optimal costs can be computed in polynomial time. Because each fragment is solved optimally, the cost obtained from a fragment is a guaranteed admissible lower bound for the original problem. By summing the bounds from all fragments (additivity), the overall heuristic remains admissible while potentially capturing much richer structural information than any single explicit abstraction could provide.
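The additivity idea can be illustrated with a minimal sketch. All names here (`Fragment`, `additive_heuristic`, the toy cost tables) are hypothetical illustrations, not the paper's actual data structures; the point is simply that summing per-fragment optimal costs stays a lower bound as long as each action's cost is partitioned among the fragments:

```python
class Fragment:
    """A toy tractable fragment: projects the state onto one variable and
    looks up the precomputed optimal cost from that value to the goal."""
    def __init__(self, var, cost_table):
        self.var = var
        self.cost_table = cost_table  # abstract value -> optimal cost

    def abstract(self, state):
        return state[self.var]  # projection onto a single variable

    def optimal_cost(self, abstract_state):
        return self.cost_table[abstract_state]

def additive_heuristic(state, fragments):
    # Summing per-fragment optimal costs remains admissible when each
    # action's cost is split among the fragments that use the action.
    return sum(f.optimal_cost(f.abstract(state)) for f in fragments)

# Toy example: two single-variable fragments with precomputed cost tables.
frags = [Fragment("x", {0: 2, 1: 0}), Fragment("y", {0: 3, 1: 0})]
print(additive_heuristic({"x": 0, "y": 0}, frags))  # → 5
```

Each fragment alone is a valid admissible estimate; cost partitioning is what licenses adding them without over-counting.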

The paper then instantiates this framework with a concrete method called fork‑decomposition. Fork‑decomposition is built on two novel tractable fragments:

  1. Fork structures – a central root variable on which several leaf variables depend, forming a star-shaped causal graph with edges directed from the root to the leaves.
  2. Inverted-fork structures – multiple parent variables that all affect a single common sink variable, i.e., a causal graph whose edges are all directed into the sink.
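The two shapes can be made concrete with a small structural check. This is an illustrative sketch, not the paper's algorithm: causal graphs are represented as a dict mapping each variable to the set of variables it affects, and the helper names are hypothetical:

```python
def is_fork(edges, root):
    """True iff every edge goes from `root` to a leaf (star shape, edges out)."""
    for src, dsts in edges.items():
        if src != root and dsts:          # leaves must have no outgoing edges
            return False
    return all(dst != root for dst in edges.get(root, set()))

def is_inverted_fork(edges, sink):
    """True iff every edge points into `sink` (star shape, edges in)."""
    if edges.get(sink):                   # the sink has no outgoing edges
        return False
    return all(dsts <= {sink} for v, dsts in edges.items() if v != sink)

# Star-shaped causal graphs in both orientations:
fork  = {"r": {"a", "b"}, "a": set(), "b": set()}   # r -> a, r -> b
ifork = {"a": {"s"}, "b": {"s"}, "s": set()}        # a -> s, b -> s
print(is_fork(fork, "r"), is_inverted_fork(ifork, "s"))  # → True True
```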

Both fragments admit polynomial-time optimal planning algorithms, under mild restrictions on the domain sizes of the root and sink variables. The authors prove that any planning task can be decomposed into such forks and inverted-forks, and that the resulting heuristic, obtained by summing the optimal costs of the fragments under a cost partition, is admissible and often much tighter than traditional heuristics.

A major practical challenge is that, unlike explicit abstractions, the abstract space of a fork‑decomposition is exponential, making naïve per‑state computation prohibitively expensive. The authors resolve this by database‑ification: they pre‑compute the optimal cost functions for each fragment, compress them using hash‑based or trie‑based structures, and store them in a lookup table. Although the abstract space is large, the regularity of fork and inverted‑fork patterns allows the tables to be built once and queried in constant or very low polynomial time. This mirrors the fast lookup property of explicit pattern databases while preserving the expressive power of implicit abstractions.
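The key to the compactness claim is that the fragment cost functions decompose, so an exponential abstract space never has to be tabulated explicitly. The following is an illustrative sketch of that idea (not the paper's exact construction; the tables and names are made up): a fork with n leaves has an abstract space exponential in n, yet when its cost decomposes over the leaves it can be stored in n small per-leaf tables that are summed at lookup time.

```python
# Per-leaf cost tables: leaf value -> optimal cost contribution (toy numbers).
leaf_tables = [{0: 1, 1: 0}, {0: 2, 1: 0}, {0: 1, 1: 0}]

def fork_cost(leaf_values):
    # 2**3 = 8 abstract states are covered by 3 tables of 2 entries each;
    # per-node evaluation is one hash lookup per leaf, not a search.
    return sum(t[v] for t, v in zip(leaf_tables, leaf_values))

print(fork_cost((0, 0, 0)))  # → 4
```

The same trick generalizes: as long as the offline phase can tabulate each additive component separately, lookup cost grows with the number of components, not with the size of the product abstract space.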

The empirical evaluation uses a broad set of benchmarks from recent International Planning Competitions. The authors compare three variants: (a) the raw polynomial-time fork-decomposition heuristic, (b) the database-enhanced version, and (c) leading admissible heuristics from the literature (LM-cut, merge-and-shrink, pattern databases). Metrics include the number of expanded nodes, total runtime, and success rate in finding optimal plans. Results show that the database-enhanced fork-decomposition dramatically reduces node expansion, often by an order of magnitude, in domains with high variable inter-dependence (e.g., Satellite, Transport, Open-Shop). Runtime performance is competitive; in several domains the new heuristic outperforms the best existing methods by 10-20%.

In conclusion, the paper makes three key contributions:

  1. Theoretical framework – implicit abstraction provides a principled way to bypass the fixed‑size limitation of explicit abstractions while guaranteeing admissibility and additivity.
  2. Algorithmic instantiation – fork‑decomposition demonstrates that two carefully chosen tractable fragments can capture a wide range of planning structure, yielding strong lower bounds.
  3. Practical engineering – the database‑ification technique shows that even exponential‑size abstract spaces can be queried efficiently, making implicit abstractions viable for real‑world planners.

The authors also outline future directions, such as extending the approach to more general graph decompositions (tree‑decompositions, cycle‑decompositions), integrating learning‑based cost estimators, and exploring dynamic updates to the fragment databases during search. Overall, the work opens a new avenue for heuristic design in cost‑optimal planning, blending rigorous theoretical guarantees with empirically validated performance gains.