A Bayesian Model for Plan Recognition in RTS Games applied to StarCraft

Reading time: 5 minutes

📝 Original Info

  • Title: A Bayesian Model for Plan Recognition in RTS Games applied to StarCraft
  • ArXiv ID: 1111.3735
  • Date: 2011-11-17
  • Authors: Author information was not provided in the source.

📝 Abstract

The task of keyhole (unobtrusive) plan recognition is central to adaptive game AI. "Tech trees" or "build trees" are the core of real-time strategy (RTS) game strategic (long-term) planning. This paper presents a generic and simple Bayesian model for RTS build-tree prediction from noisy observations, whose parameters are learned from replays (game logs). This unsupervised machine-learning approach involves minimal work for game developers, as it leverages players' data (common in RTS games). We applied it to StarCraft¹ and showed that it yields high-quality, robust predictions that can feed an adaptive AI.

💡 Deep Analysis

Figure 1

📄 Full Content

In an RTS, players need to gather resources to build structures and military units and defeat their opponents. To that end, they often have worker units that can gather the resources needed to build workers, buildings, military units and research upgrades. Resources may have different uses; for instance in StarCraft, minerals are used for everything, whereas gas is only required for advanced buildings or military units, and technology upgrades. The military units can be of different types, any combination of ranged, casters, contact attack, zone attacks, big, small, slow, fast, invisible, flying... Units can have attacks and defenses that counter each other, as in rock-paper-scissors. Buildings and research upgrades define technology trees (precisely: directed acyclic graphs). Tech trees are tied to strategic planning, because they put constraints on which unit types can be produced, when and in which numbers, which spells are available, and how the player spends her resources.
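The tech-tree-as-DAG idea above can be sketched concretely. The fragment below is a hypothetical, heavily simplified building graph (the names are loosely Terran-flavored but the prerequisite edges are illustrative, not StarCraft's actual rules): each building maps to its direct prerequisites, and a building becomes available only once all of its prerequisites exist.

```python
# Hypothetical, simplified building tech tree encoded as a DAG:
# building -> list of direct prerequisite buildings.
# (Illustrative structure only; not StarCraft's real requirements.)
TECH_TREE = {
    "supply_depot": [],
    "refinery": [],
    "barracks": ["supply_depot"],
    "factory": ["barracks", "refinery"],
    "starport": ["factory"],
}

def can_build(building, built):
    """A building is available once all of its prerequisites are built."""
    return all(req in built for req in TECH_TREE[building])

def unlocked(built):
    """All buildings currently constructible given the set of built structures."""
    return {b for b in TECH_TREE if b not in built and can_build(b, built)}
```

For example, with only a supply depot built, `unlocked({"supply_depot"})` yields the barracks and refinery but not yet the factory, which is exactly the kind of constraint on production timing the paragraph describes.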

Most real-time strategy (RTS) game AIs are either not challenging or not fun to play against. They are not challenging because they do not adapt well dynamically to the different strategies (long-term goals and army composition) and tactics (army moves) that a human can perform. They are not fun to play against because they cheat economically, gathering resources faster, and/or in the intelligence war, bypassing the fog of war. We believe that creating an AI that adapts to the strategies of the human player would make RTS game AIs much more interesting to play against.

Copyright © 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

¹ StarCraft and its expansion StarCraft: Brood War are trademarks of Blizzard Entertainment™.

We worked on StarCraft: Brood War, which is a canonical RTS game, much as chess is to board games. It has been around since 1998, has sold 10 million copies, and was the premier competitive RTS for more than a decade. There are three factions (Protoss, Terran and Zerg) that are totally different in terms of units, tech trees and thus gameplay styles. StarCraft and most RTS games provide a tool to record game logs into replays that can be re-simulated by the game engine and watched to improve strategies and tactics. All high-level players use this feature heavily, either to improve their play or to study opponents' styles. Observing replays allows players to see what happened under the fog of war, so that they can understand the timing of technologies and attacks and find clues leading them to infer the opponent's strategy as well as weak points (either strategic or tactical). We used this replay feature to extract players' actions and learn the probabilities of tech trees occurring at a given time.

In our model, we used the buildings part of tech trees, because buildings can be observed more easily than units when fog of war is enforced, and because our main focus was our StarCraft bot implementation (see Figure 1). However, nothing hinders us from using units and upgrades as well in a setting without fog of war (e.g., a commentary assistant or a game AI that cheats).
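The core idea of predicting a build tree from partial building observations can be illustrated with a toy Bayes-rule update. This is a minimal sketch, not the authors' actual model: the candidate build trees, their priors (standing in for parameters learned from replays), and the observation probabilities below are all invented for illustration.

```python
# Toy candidate build trees with priors standing in for frequencies
# learned from replays (illustrative numbers, not the paper's).
PRIORS = {"fast_expand": 0.5, "tech_rush": 0.3, "proxy": 0.2}
BUILDINGS = {
    "fast_expand": {"nexus", "gateway"},
    "tech_rush": {"gateway", "cybernetics_core", "stargate"},
    "proxy": {"gateway", "pylon"},
}
P_SEE = 0.7     # chance a scouted building really belongs to the tree
P_NOISE = 0.05  # chance of a spurious observation (scouting error)

def posterior(observed):
    """P(tree | observations) ∝ P(observations | tree) * P(tree),
    with a simple per-building noisy-observation likelihood."""
    all_buildings = set().union(*BUILDINGS.values())
    scores = {}
    for tree, prior in PRIORS.items():
        likelihood = 1.0
        for b in all_buildings:
            if b in observed:
                likelihood *= P_SEE if b in BUILDINGS[tree] else P_NOISE
            # Unseen buildings are uninformative under fog of war:
            # absence of evidence is not treated as evidence of absence.
        scores[tree] = prior * likelihood
    total = sum(scores.values())
    return {t: s / total for t, s in scores.items()}
```

Scouting a gateway and a cybernetics core, for instance, shifts the posterior sharply toward the "tech_rush" candidate despite its lower prior, which mirrors how noisy fog-of-war observations can still feed a robust prediction.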

This work was encouraged by reading Weber and Mateas' (2009) Data Mining Approach to Strategy Prediction, and by the fact that they provided their dataset. They tried and evaluated several machine learning algorithms on replays that were labeled with strategies (supervised learning).

There is related work in the domains of opponent modeling (Hsieh and Sun 2008; Schadd, Bakkes, and Spronck 2007; Kabanza et al. 2010). The main methods used to these ends are case-based reasoning (CBR) and planning or plan recognition (Aha, Molineaux, and Ponsen 2005; Ontañón et al. 2008; Ontañón et al. 2007; Hoang, Lee-Urban, and Muñoz-Avila 2005; Ramírez and Geffner 2009). There is prior work on Bayesian plan recognition (Charniak and Goldman 1993), including in games, with Albrecht, Zukerman, and Nicholson (1998) using dynamic Bayesian networks to recognize a user's plan in a multi-player dungeon adventure. Also, Chung, Buro, and Schaeffer (2005) describe a Monte-Carlo plan selection algorithm applied to Open RTS. Aha, Molineaux, and Ponsen (2005) used CBR to perform dynamic plan retrieval extracted from domain knowledge in Wargus (a Warcraft II clone). Ontañón et al. (2008) base their real-time case-based planning (CBP) system on a plan dependency graph which is learned from human demonstration. In (Ontañón et al. 2007; Mishra, Ontañón, and Ram 2008), they use CBR and expert demonstrations on Wargus, and improve the speed of CBP by using a decision tree to select relevant features. Hsieh and Sun (2008) based their work on CBR and on Aha, Molineaux, and Ponsen (2005), and used StarCraft replays to construct states and building sequences; strategies are choices of building construction order in their model. Schadd, Bakkes, and Spronck (2007) describe opponent modeling through hierarchically structured models of the opponent's behaviour, applied to the Spring RTS (Total


Reference

This content is AI-processed based on open access ArXiv data.
