Networks of Influence Diagrams: A Formalism for Representing Agents' Beliefs and Decision-Making Processes
This paper presents Networks of Influence Diagrams (NIDs), a compact, natural, and highly expressive language for reasoning about agents' beliefs and decision-making processes. NIDs are graphical structures in which agents' mental models are represented as nodes in a network; a mental model for an agent may itself use descriptions of the mental models of other agents. NIDs are demonstrated by examples, showing how they can be used to describe conflicting and cyclic belief structures, and certain forms of bounded rationality. In an opponent-modeling domain, NID-based agents were able to outperform other computational agents whose strategies were not known in advance. NIDs are equivalent in representational power to Bayesian games, but they are more compact and structured. In particular, the equilibrium definition for NIDs makes an explicit distinction between agents' optimal strategies and how they actually behave in reality.
💡 Research Summary
The paper introduces Networks of Influence Diagrams (NIDs), a graphical formalism designed to capture both the beliefs that agents hold about one another and the decision‑making processes those agents use. A NID consists of a set of “mental‑model” nodes, each of which is itself an influence diagram (containing decision, chance, and utility components). Edges between nodes encode the fact that one agent’s mental model references another agent’s model, allowing arbitrarily deep recursive belief structures such as “Agent A believes that Agent B believes that Agent C will act …”. By nesting influence diagrams in this way, NIDs can represent the same strategic situations as Bayesian games while offering a far more compact and structured representation.
The authors formalize a NID as a finite set M of mental‑model nodes together with a directed edge set E that captures reference relationships between them. Each mental model m ∈ M is an ordinary influence diagram (D, U, P), where D is a set of decision variables, U a utility function, and P a probability distribution over chance variables. The paper establishes representational equivalence: any Bayesian game can be translated into a NID of comparable size, and any NID can be unfolded into a Bayesian game with the same equilibrium outcomes.
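To make the (M, E) structure concrete, here is a minimal Python sketch of a NID as a set of mental-model nodes plus reference edges. All class and field names are hypothetical illustrations of the formal definition above, not the paper's own implementation.

```python
from dataclasses import dataclass, field

@dataclass
class MentalModel:
    """One node m in M: an ordinary influence diagram (D, U, P)."""
    name: str
    decisions: set   # D: decision variables
    utility: dict    # U: utility values, keyed by outcome
    chance: dict     # P: distributions over chance variables

@dataclass
class NID:
    models: dict = field(default_factory=dict)  # M: name -> MentalModel
    edges: set = field(default_factory=set)     # E: (referrer, referee) pairs

    def add_model(self, m: MentalModel) -> None:
        self.models[m.name] = m

    def add_edge(self, src: str, dst: str) -> None:
        # An edge records that model `src` references model `dst`,
        # e.g. "A's model of the game includes a model of B".
        assert src in self.models and dst in self.models
        self.edges.add((src, dst))
```

Nesting arises because a referenced model can itself reference further models, which is how arbitrarily deep belief hierarchies stay compact.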
A key conceptual contribution is the two‑level equilibrium notion. The first level, “strategic equilibrium,” assumes each mental model computes an optimal policy given its beliefs—essentially the Nash or Bayes‑Nash equilibrium within that model. The second level, “behavioral equilibrium,” acknowledges that actual agents may deviate from the computed optimal policy because of bounded rationality, computational constraints, or heuristic reasoning. This separation makes explicit the gap between normative optimality and observed behavior, a distinction that is often blurred in traditional game‑theoretic treatments.
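The gap between the two levels can be illustrated with a toy sketch (not taken from the paper): the strategic level prescribes the expected-utility-maximizing action, while the behavioral level allows deviations that stand in for bounded rationality or heuristic reasoning. The `deviation` parameter and fallback heuristic are illustrative assumptions.

```python
import random

def optimal_policy(utilities: dict) -> str:
    """Strategic level: choose the action with maximal utility."""
    return max(utilities, key=utilities.get)

def behavioral_policy(utilities: dict, deviation: float = 0.1) -> str:
    """Behavioral level: usually plays optimally, but with probability
    `deviation` falls back to a crude heuristic (here, the first listed
    action), modeling an agent that deviates from normative optimality."""
    if random.random() < deviation:
        return next(iter(utilities))
    return optimal_policy(utilities)
```

With `deviation = 0` the two policies coincide; any positive value opens exactly the gap between normative optimality and observed behavior that the two-level equilibrium makes explicit.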
To illustrate expressive power, the paper presents three families of examples. The first demonstrates conflicting belief structures where two agents hold mutually inconsistent expectations, leading to endless belief‑revision loops. The second shows cyclic belief networks (A → B → C → A), which are difficult to encode compactly in Bayesian games but are naturally captured by a simple directed cycle of mental‑model nodes. The third example incorporates bounded rationality by attaching a cost or capacity limit to a node, forcing the associated agent to adopt a heuristic rather than a fully optimal strategy. In all cases, the NID representation remains concise and visually intuitive.
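The cyclic case (A → B → C → A) can be represented as nothing more than an adjacency map over mental-model names, which is exactly why it stays compact. A minimal sketch, with a standard DFS cycle check; the graph and function names are assumptions for illustration:

```python
# The cyclic belief structure A -> B -> C -> A as a plain adjacency map:
# each agent's mental model references the next agent's model.
belief_graph = {"A": ["B"], "B": ["C"], "C": ["A"]}

def has_cycle(graph: dict) -> bool:
    """Detect a cycle in the mental-model graph via depth-first search."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on stack / done
    color = {n: WHITE for n in graph}

    def visit(n: str) -> bool:
        color[n] = GRAY
        for m in graph.get(n, []):
            if color[m] == GRAY or (color[m] == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False

    return any(visit(n) for n in graph if color[n] == WHITE)
```

In a flat Bayesian-game encoding the same cycle would have to be unrolled into nested type spaces; as a directed graph it is three nodes and three edges.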
Empirically, the authors evaluate NIDs in an opponent‑modeling domain: a repeated competitive game where the opponent's strategy is unknown a priori. Several baseline agents employ fixed or simple adaptive strategies, while the NID‑based agent continuously updates its mental‑model graph from observed actions, recomputes optimal policies, and selects actions accordingly. Across a range of opponent types (stochastic, deterministic, and non‑stationary strategies), the NID agent achieves a significantly higher win rate than all baselines. Notably, when the opponent employs unconventional or deceptive tactics, the NID agent's recursive belief updates allow it to quickly infer the hidden mental model and adjust its behavior, demonstrating robustness to strategic uncertainty.
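The belief update driving this agent can be sketched as a Bayesian posterior over candidate opponent models, refined after each observed action. This is a minimal illustration of the idea; the paper's actual inference procedure over the mental-model graph may differ, and all names here are hypothetical.

```python
def update_model_posterior(prior: dict, likelihoods: dict,
                           observed_action: str) -> dict:
    """Bayes update over candidate opponent mental models.

    prior:        {model_name: P(model)}
    likelihoods:  {model_name: {action: P(action | model)}}
    Returns the normalized posterior after seeing `observed_action`.
    """
    unnormalized = {m: prior[m] * likelihoods[m].get(observed_action, 0.0)
                    for m in prior}
    z = sum(unnormalized.values())
    if z == 0.0:               # no candidate model explains the action
        return dict(prior)     # keep the prior unchanged as a fallback
    return {m: p / z for m, p in unnormalized.items()}
```

Repeating this update over a game trace concentrates probability mass on the model that best explains the opponent's behavior, after which the agent can best-respond to that model.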
The discussion highlights both strengths and limitations. Strengths include (1) a compact representation that scales better than flat Bayesian games, (2) the ability to model deep recursive beliefs without exponential blow‑up, and (3) an explicit mechanism for incorporating bounded rationality and behavioral deviations. Limitations arise when the mental‑model graph becomes densely connected or contains many cycles; inference over such structures can become computationally intensive, suggesting a need for approximate algorithms, message‑passing schemes, or hierarchical decomposition.
In conclusion, Networks of Influence Diagrams provide a powerful, expressive, and structurally rich language for representing multi‑agent belief hierarchies and decision processes. By bridging normative game‑theoretic analysis with realistic behavioral modeling, NIDs open new avenues for research in artificial intelligence, economics, and social sciences, especially in domains where agents must reason about the reasoning of others under uncertainty and limited computational resources.