Case-Based Subgoaling in Real-Time Heuristic Search for Video Game Pathfinding

Real-time heuristic search algorithms satisfy a constant bound on the amount of planning per action, independent of problem size. As a result, they scale up well as problems become larger. This property would make them well suited for video games, where Artificial Intelligence controlled agents must react quickly to user commands and to other agents' actions. On the downside, real-time search algorithms employ learning methods that frequently lead to poor solution quality and cause the agent to appear irrational by re-visiting the same problem states repeatedly. The situation changed recently with a new algorithm, D LRTA*, which attempted to eliminate learning by automatically selecting subgoals. D LRTA* is well poised for video games, except it has a complex and memory-demanding pre-computation phase during which it builds a database of subgoals. In this paper, we propose a simpler and more memory-efficient way of pre-computing subgoals, thereby eliminating the main obstacle to applying state-of-the-art real-time search methods in video games. The new algorithm solves a number of randomly chosen problems off-line, compresses the solutions into a series of subgoals and stores them in a database. When presented with a novel problem on-line, it queries the database for the most similar previously solved case and uses its subgoals to solve the problem. In the domain of pathfinding on four large video game maps, the new algorithm delivers solutions eight times better while using 57 times less memory and requiring 14% less pre-computation time.


💡 Research Summary

Real‑time heuristic search (RTHS) algorithms are attractive for video‑game AI because they guarantee a fixed amount of planning per move, regardless of the size of the underlying problem. This property enables agents to react instantly to player commands or other agents' actions. However, classic RTHS methods such as LRTA* and its variants suffer from a learning phase that repeatedly revisits the same states, leading to poor solution quality and irrational‑looking behavior.
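To make the learning loop concrete, here is a minimal sketch of a single LRTA*-style move with lookahead depth one. The function names and the dictionary-based heuristic table are illustrative assumptions, not the paper's implementation; the core idea is the update rule that raises the heuristic of the current state to the best one-step lookahead value, which is what causes repeated state re-visits while the heuristic converges.

```python
def lrta_star_step(state, neighbors, h, cost):
    """One LRTA* move with depth-1 lookahead (illustrative sketch).

    state:     current state (hashable).
    neighbors: function state -> iterable of successor states.
    h:         dict state -> heuristic estimate, mutated in place (learning).
    cost:      function (s, s2) -> edge cost.
    Returns the successor the agent moves to.
    """
    # Greedy choice: successor minimizing estimated cost-to-go.
    best = min(neighbors(state), key=lambda s2: cost(state, s2) + h[s2])
    # Learning rule: raise h(state) to the best lookahead value so the
    # agent does not underestimate this state on later visits.
    h[state] = max(h[state], cost(state, best) + h[best])
    return best
```

For example, on a four-state corridor 0-1-2-3 with goal 3 and a zero-initialized heuristic, each step both moves the agent and increases the heuristic of the state it leaves.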

The most recent attempt to overcome this drawback is D‑LRTA*, which automatically selects subgoals during a pre‑computation phase and thus reduces the need for online learning. While D‑LRTA* produces better paths than plain LRTA*, its pre‑computation is both algorithmically complex and memory‑intensive: it builds a global subgoal database that covers the entire map, a step that is difficult to integrate into commercial game pipelines.

The present paper introduces a case‑based subgoaling approach that replaces the global database with a collection of compact, locally optimal subgoal sequences derived from a set of offline solved instances. The method consists of two distinct phases.

Offline phase. A large number of start‑goal pairs are sampled uniformly across the map. For each pair an optimal (or near‑optimal) path is computed using a conventional planner such as A*. The resulting path is then compressed into a short list of “key” states – the subgoals – by extracting turning points or by fitting straight‑line segments. Each compressed subgoal list, together with its original start and goal coordinates, is stored as a case in a hash‑based database. Because only the essential way‑points are kept, the memory footprint of each case is tiny, and the overall database size grows linearly with the number of sampled instances rather than with the size of the map.
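One of the compression schemes mentioned above, extracting turning points, can be sketched in a few lines. This is an illustrative version for 4/8-connected grid paths, not the paper's exact procedure: a waypoint is kept only where the step direction changes, so long straight segments collapse to their endpoints.

```python
def compress_path(path):
    """Compress a grid path into subgoals by keeping turning points.

    path: list of (x, y) cells from start to goal.
    Returns the start, every cell where the movement direction changes,
    and the goal. Illustrative only; other schemes (e.g. fitting
    straight-line segments) are possible.
    """
    if len(path) <= 2:
        return list(path)
    subgoals = [path[0]]
    for prev, cur, nxt in zip(path, path[1:], path[2:]):
        d_in = (cur[0] - prev[0], cur[1] - prev[1])
        d_out = (nxt[0] - cur[0], nxt[1] - cur[1])
        if d_in != d_out:        # direction changed: keep this waypoint
            subgoals.append(cur)
    subgoals.append(path[-1])
    return subgoals
```

An L-shaped path of five cells, for instance, compresses to three subgoals: the start, the corner, and the goal.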

Online phase. When a new navigation request arrives, the algorithm queries the case base for the most similar previously solved problem. Similarity is measured by a weighted combination of Euclidean distance between start/goal positions, local obstacle density, and geometric alignment of the candidate path with the current query. The retrieved case supplies a sequence of subgoals that the agent follows using the same bounded‑lookahead search employed by LRTA* (e.g., expanding only a fixed number of nodes per move). After reaching a subgoal, the agent proceeds to the next one until the final goal is attained. Because the subgoals are already “good” way‑points, the agent rarely needs to update its heuristic values, effectively eliminating the costly learning loop that plagues traditional RTHS.
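The retrieval step can be sketched as a nearest-neighbor lookup over stored cases. The sketch below simplifies the similarity measure to the sum of Euclidean distances between the query's and the case's start and goal positions; the paper's metric additionally weighs obstacle density and geometric alignment. The dictionary schema (`'start'`, `'goal'`, `'subgoals'`) is a hypothetical representation chosen for illustration.

```python
import math

def retrieve_case(cases, start, goal):
    """Return the stored case most similar to the query (sketch).

    cases: list of dicts with 'start', 'goal', 'subgoals' keys
           (hypothetical schema). Similarity here is just the summed
           Euclidean distance between endpoints; the full metric in the
           paper also considers obstacle density and path alignment.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(cases,
               key=lambda c: dist(c['start'], start) + dist(c['goal'], goal))
```

The agent would then feed the retrieved case's subgoal list, one subgoal at a time, into the bounded-lookahead search described above.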

The authors evaluated the approach on four large video‑game maps ranging from 100 k to 500 k traversable nodes. They compared three algorithms: plain LRTA*, D‑LRTA*, and the proposed case‑based subgoaling method. Three performance metrics were recorded: (1) solution quality measured as total path cost (distance and time), (2) memory consumption of the pre‑computed data structure, and (3) total offline pre‑computation time.

Results show that the case‑based method achieves an average eight‑fold improvement in solution quality over D‑LRTA* while using 57 times less memory. The offline phase is also 14 % faster because it avoids constructing a global subgoal graph; instead it merely solves many independent instances and compresses them. In practical terms, the algorithm delivers near‑optimal paths with a memory budget that easily fits within the constraints of modern game consoles and mobile devices.

Despite these advantages, the approach has limitations. The quality of the case base depends heavily on the diversity of the sampled start‑goal pairs; insufficient coverage of certain map regions can lead to poor matches and degraded performance. Moreover, the subgoal sequences are static; in highly dynamic environments with moving obstacles, additional online adjustments or incremental learning may be required to maintain optimality.

Future work suggested by the authors includes (a) automated sampling strategies that guarantee uniform coverage of the map, (b) learning‑augmented similarity metrics powered by neural networks to improve case retrieval, and (c) mechanisms for on‑the‑fly subgoal refinement when the environment changes.

In summary, the paper presents a simple yet powerful alternative to D‑LRTA*: by leveraging case‑based reasoning and aggressive subgoal compression, it eliminates the heavyweight pre‑computation and memory demands of previous real‑time search methods while delivering substantially better navigation quality. This makes the technique a compelling candidate for integration into commercial video‑game AI systems where both speed and memory efficiency are paramount.