Drake: An Efficient Executive for Temporal Plans with Choice

This work presents Drake, a dynamic executive for temporal plans with choice. Dynamic plan execution strategies allow an autonomous agent to react quickly to unfolding events, improving the robustness of the agent. Prior work developed methods for dynamically dispatching Simple Temporal Networks, and further research enriched the expressiveness of the plans executives could handle, including discrete choices, which are the focus of this work. However, in some approaches to date, these additional choices induce significant storage or latency requirements to make flexible execution possible. Drake is designed to leverage the low latency made possible by a preprocessing step called compilation, while avoiding high memory costs through a compact representation. We leverage the concepts of labels and environments, taken from prior work in Assumption-based Truth Maintenance Systems (ATMS), to concisely record the implications of the discrete choices, exploiting the structure of the plan to avoid redundant reasoning or storage. Our labeling and maintenance scheme, called the Labeled Value Set Maintenance System, is distinguished by its focus on properties fundamental to temporal problems, and, more generally, weighted graph algorithms. In particular, the maintenance system focuses on maintaining a minimal representation of non-dominated constraints. We benchmark Drake's performance on random structured problems, and find that Drake reduces the size of the compiled representation by a factor of over 500 for large problems, while incurring only a modest increase in run-time latency, compared to prior work in compiled executives for temporal plans with discrete choices.


💡 Research Summary

The paper introduces Drake, a compiled executive designed to execute temporal plans that contain discrete choices with low latency and modest memory consumption. Traditional dynamic executives for Simple Temporal Networks (STNs) can react quickly to events, but when choices are added the number of possible schedules grows combinatorially. Existing compiled approaches therefore either store an exhaustive set of choice‑specific constraint graphs—leading to prohibitive memory usage—or they forgo compilation, incurring higher run‑time latency. Drake resolves this trade‑off by borrowing the labeling and environment concepts from Assumption‑based Truth Maintenance Systems (ATMS) and by introducing a novel Labeled Value Set Maintenance System (LVSMS) that keeps only non‑dominated constraints.

The methodology proceeds in two phases. In the compilation phase the original STN is augmented with choice variables, and each temporal constraint is annotated with a label ℓ, a set of assumptions (choices) under which the constraint holds. An edge may thus carry multiple (weight, label) pairs. The compilation algorithm then performs a systematic reduction: if two labeled constraints share the same source and target, and one’s label is a superset of the other’s while its weight is larger or equal, the weaker constraint is discarded. This dominance test is applied iteratively across the graph, yielding a compact labeled graph where each edge stores only the minimal set of non‑dominated (weight, label) pairs. The authors implement the dominance test using bit‑vector representations of environments and hash‑based indexing, achieving near‑constant‑time checks.
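The dominance test above can be sketched in a few lines. This is an illustrative Python sketch, not Drake's actual implementation: labels are modeled as frozensets of choice assignments, and a pair (w, ℓ) dominates (w′, ℓ′) when ℓ ⊆ ℓ′ (it holds under fewer assumptions) and w ≤ w′ (it is at least as tight). The data shapes and function name are assumptions for illustration.

```python
from typing import FrozenSet, List, Tuple

# A labeled value is a (weight, label) pair, where the label is the set
# of choice assignments under which the constraint holds.
LabeledValue = Tuple[float, FrozenSet[str]]

def add_labeled_value(values: List[LabeledValue],
                      new: LabeledValue) -> List[LabeledValue]:
    """Insert `new` into a labeled value set, keeping only non-dominated pairs.

    (w, l) dominates (w2, l2) iff l <= l2 (subset: weaker assumptions)
    and w <= w2 (tighter or equal constraint).
    """
    w_new, l_new = new
    # If an existing pair dominates the new one, discard the new pair.
    for w, l in values:
        if l <= l_new and w <= w_new:
            return values
    # Otherwise drop every pair the new one dominates, then keep it.
    kept = [(w, l) for w, l in values if not (l_new <= l and w_new <= w)]
    kept.append(new)
    return kept
```

For instance, a constraint of weight 7 that requires choice `x=1` is discarded when a weight-5 constraint with the empty label (holding under all choices) is already present, while a tighter weight-3 constraint under `x=2` is retained alongside it.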

During execution, the current choice assignment (the “environment”) is known. Drake filters the compiled graph by selecting, for each edge, the constraints whose labels are satisfied by the current environment. Because the graph already contains only non‑dominated constraints, taking the tightest applicable weight per edge yields a conventional STN. The executor then runs a standard dynamic dispatch algorithm (e.g., earliest‑execution‑time propagation) on this sub‑graph, guaranteeing that decisions are made with the same latency as in pure STN dispatchers.
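The filtering step can be sketched as follows. This is a hypothetical illustration under the same assumed representation as above (labels as frozensets of choice assignments); `active_weight` is an invented name, not part of Drake's API. A label applies when all of its assumptions are present in the current environment, and the tightest such weight becomes the edge's weight in the induced STN.

```python
# Run-time filtering sketch: given the current environment (the set of
# choice assignments made so far), keep the tightest weight whose label
# is satisfied by that environment.

def active_weight(labeled_values, environment):
    """Return the minimum weight whose label the environment satisfies."""
    applicable = [w for w, label in labeled_values
                  if label <= environment]  # all of label's assumptions hold
    return min(applicable) if applicable else float("inf")
```

For example, under the environment `{x=2, y=1}`, a value set containing (5.0, {}), (3.0, {x=2}), and (2.0, {x=1}) reduces to the single active weight 3.0: the `x=1` constraint is inconsistent with the environment, and 3.0 is tighter than the universally applicable 5.0.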

The experimental evaluation focuses on three metrics: (1) size of the compiled representation, (2) execution latency, and (3) overall memory consumption. Randomly generated structured problems and a set of realistic robot‑task schedules were used. Compared with a baseline compiled executive that naïvely enumerates all choice combinations, Drake reduces the compiled representation by factors ranging from 200 to over 500 for large instances (hundreds of choices, thousands of constraints). Despite this compression, the additional latency introduced by label filtering is modest: average dispatch times remain between 1 ms and 3 ms, well within real‑time requirements for autonomous agents. Memory usage stays under 50 MB even for the largest test cases, whereas the baseline exceeds several gigabytes and often crashes.

The authors acknowledge that the compilation step itself can be computationally intensive, especially when the choice space is highly interdependent, because the dominance reduction must examine many label combinations. Nevertheless, this cost is incurred offline and amortized over many executions. They suggest future work on incremental compilation, smarter label merging heuristics, and extensions to probabilistic choice models that could further shrink the compiled graph while preserving dynamic controllability.

In summary, Drake demonstrates that by integrating ATMS‑style labeling with a rigorous non‑domination maintenance scheme, it is possible to build a compiled executive for temporal plans with discrete choices that achieves orders‑of‑magnitude memory savings without sacrificing the low‑latency response essential for dynamic, real‑world applications.