Toward a Research Agenda in Adversarial Reasoning: Computational Approaches to Anticipating the Opponent's Intent and Actions
This paper defines adversarial reasoning as computational approaches to inferring and anticipating an enemy's perceptions, intents, and actions. It argues that adversarial reasoning transcends the boundaries of game theory and must also leverage such disciplines as cognitive modeling, control theory, and AI planning. To illustrate the challenges of applying adversarial reasoning to real-world problems, the paper explores lessons learned from CADET, a battle planning system that focuses on brigade-level ground operations and involves adversarial reasoning. From this example of current capabilities, the paper proceeds to describe RAID, a DARPA program that aims to build capabilities in adversarial reasoning, and how such capabilities would address practical requirements in defense and other application areas.
💡 Research Summary
The paper opens by defining “adversarial reasoning” as the computational effort to infer and anticipate an opponent’s perceptions, intentions, and actions. It argues that traditional game‑theoretic models, which typically assume perfect information and fully rational actors, are insufficient for real‑world conflict, where uncertainty, cognitive biases, cultural factors, and limited information dominate. Consequently, adversarial reasoning must draw on a broader toolbox that includes cognitive modeling, control theory, AI planning, probabilistic graphical models, and reinforcement learning.
To illustrate the state of the art, the authors examine two systems. The first, CADET (Course of Action Development and Evaluation Tool), is a brigade‑level battle planning system that inserts a “virtual enemy” into its simulation to generate possible opponent reactions. While CADET automates many planning steps, its enemy model is largely static: it cannot dynamically update the opponent’s intent as the situation evolves, which limits its usefulness in fast‑changing battles. The second, DARPA’s RAID (Real‑time Adversarial Intelligence and Decision‑making) program, was launched specifically to overcome such shortcomings. RAID integrates multi‑sensor data fusion, Bayesian networks for intent inference, and reinforcement‑learning‑derived policies to produce real‑time estimates of enemy actions and strategic goals. A key innovation is the explicit separation of enemy “intent, capability, and will,” which constrains the set of feasible actions and quantifies uncertainty.
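To make the intent‑inference idea concrete, the following is a minimal sketch (not taken from RAID itself; all intents, actions, and probabilities are hypothetical) of a discrete Bayesian update over candidate enemy intents, with a simple capability/will filter that prunes infeasible actions:

```python
# Hypothetical example: Bayesian intent inference plus a
# capability/will filter. Numbers and labels are illustrative only.

intents = ["attack", "defend", "withdraw"]
prior = {"attack": 1 / 3, "defend": 1 / 3, "withdraw": 1 / 3}

# P(observed enemy action | intent): a hand-built likelihood model.
likelihood = {
    "advance":   {"attack": 0.7, "defend": 0.1, "withdraw": 0.1},
    "dig_in":    {"attack": 0.2, "defend": 0.8, "withdraw": 0.2},
    "fall_back": {"attack": 0.1, "defend": 0.1, "withdraw": 0.7},
}

def update(belief, observation):
    """One Bayesian update of the belief over intents given an observation."""
    unnorm = {i: belief[i] * likelihood[observation][i] for i in intents}
    z = sum(unnorm.values())
    return {i: p / z for i, p in unnorm.items()}

def feasible_actions(actions, capability, will):
    """Keep only actions the enemy both can (capability) and would (will) take."""
    return [a for a in actions
            if capability.get(a, 0) > 0.5 and will.get(a, 0) > 0.5]

belief = update(prior, "advance")
# Observing "advance" shifts most of the probability mass onto "attack".
```

The separation the paper highlights shows up here as two distinct mechanisms: the likelihood update estimates intent, while `feasible_actions` uses capability and will to shrink the action space before any prediction is made.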
From these case studies the paper derives a research agenda comprising four major thrusts: (1) formalizing and embedding human cognitive, emotional, and cultural models into computational frameworks; (2) developing algorithms that can continuously update adversary predictions under incomplete and noisy information; (3) ensuring scalability so that high‑fidelity adversary models remain tractable in large‑scale tactical and strategic simulations; and (4) generalizing the methods beyond military domains to cyber defense, market competition, and legal adjudication. The authors stress that adversarial reasoning should support, not replace, human decision‑makers. Therefore, explainable‑AI techniques, intuitive visualizations of intent and uncertainty, and feedback loops that respect human cognitive load are essential.
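Thrust (2), continuously updating adversary predictions under incomplete and noisy information, can be sketched as a recursive predict/correct filter. The goals, transition probabilities, and sensor model below are hypothetical illustrations, not content from the paper; the point is that a transition step diffuses the belief when reports are missing, so confidence decays rather than freezing:

```python
# Illustrative sketch (hypothetical numbers): recursive belief update
# over adversary goals from noisy, possibly missing observations.

goals = ["seize_bridge", "hold_town"]

# P(goal_t | goal_{t-1}): the adversary may switch goals over time.
transition = {
    "seize_bridge": {"seize_bridge": 0.9, "hold_town": 0.1},
    "hold_town":    {"seize_bridge": 0.1, "hold_town": 0.9},
}

# P(observation | goal): sensor reports are noisy.
emission = {
    "movement_toward_bridge": {"seize_bridge": 0.8, "hold_town": 0.2},
    "entrenchment":           {"seize_bridge": 0.2, "hold_town": 0.8},
}

def step(belief, observation):
    """Predict with the transition model, then correct with the
    observation if one arrived (observation may be None)."""
    predicted = {g: sum(belief[h] * transition[h][g] for h in goals)
                 for g in goals}
    if observation is None:  # missing report: prediction only
        return predicted
    unnorm = {g: predicted[g] * emission[observation][g] for g in goals}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

belief = {"seize_bridge": 0.5, "hold_town": 0.5}
for obs in ["movement_toward_bridge", None, "movement_toward_bridge"]:
    belief = step(belief, obs)
# Two consistent reports with one gap still leave a strong belief
# that the adversary intends to seize the bridge.
```

Scaling this idea to realistic state spaces is exactly where thrust (3) bites: exact enumeration over goals is tractable only for small discrete models, which motivates the approximate and learned methods the agenda calls for.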
In conclusion, the paper positions adversarial reasoning as a multidisciplinary research field that transcends pure game theory. By learning from the practical lessons of CADET and the ambitious goals of RAID, it outlines the technical and human‑centered challenges that must be addressed to build systems capable of anticipating and countering intelligent opponents across a wide range of high‑stakes applications.