How AI Agents Follow the Herd of AI? Network Effects, History, and Machine Optimism

Reading time: 5 minutes

📝 Original Info

  • Title: How AI Agents Follow the Herd of AI? Network Effects, History, and Machine Optimism
  • ArXiv ID: 2512.11943
  • Date: 2025-12-12
  • Authors: Yu Liu (Fudan University), Wenwen Li (Fudan University), Yifan Dou (Fudan University), Guangnan Ye (Fudan University)

📝 Abstract

Understanding decision-making in multi-AI-agent frameworks is crucial for analyzing strategic interactions in network-effect-driven contexts. This study investigates how AI agents navigate network-effect games, where individual payoffs depend on peer participation, a context underexplored in multi-agent systems despite its real-world prevalence. We introduce a novel workflow design using large language model (LLM)-based agents in repeated decision-making scenarios, systematically manipulating price trajectories (fixed, ascending, descending, random) and network-effect strength. Our key findings include: First, without historical data, agents fail to infer equilibrium. Second, ordered historical sequences (e.g., escalating prices) enable partial convergence under weak network effects, but strong effects trigger persistent "AI optimism": agents overestimate participation despite contradictory evidence. Third, randomized history disrupts convergence entirely, demonstrating that temporal coherence in data shapes LLMs' reasoning, unlike humans. These results highlight a paradigm shift: in AI-mediated systems, equilibrium outcomes depend not just on incentives, but on how history is curated, which is impossible for humans.

💡 Deep Analysis

Figure 1

📄 Full Content

How AI Agents Follow the Herd of AI? Network Effects, History, and Machine Optimism

Yu Liu Fudan University yuliu23@m.fudan.edu.cn Wenwen Li Fudan University liwwen@fudan.edu.cn Yifan Dou Fudan University yfdou@fudan.edu.cn Guangnan Ye Fudan University yegn@fudan.edu.cn

Keywords: Network effects, Multi-Agent System, Agentic Learning, AI Optimism, History

1. Introduction

The study of strategic decision-making through game-theoretic frameworks has long been a cornerstone of understanding agent behavior in interactive environments. While classic games such as the Prisoner's Dilemma and negotiation games have been replicated and analyzed in multi-agent systems (Fan et al., 2024), far less attention has been paid to scenarios where individual payoffs are intrinsically tied to network effects: the phenomenon where an agent's utility depends on the number of peers adopting the same strategy. Such scenarios mirror real-world coordination challenges, from technology adoption to social participation, where value is dynamically shaped by collective behavior. Unlike traditional games with static equilibria, these settings require agents to engage in recursive reasoning about others' beliefs and actions, creating layers of strategic complexity. This raises a critical question: how do LLM agents navigate such interdependencies when they cannot practically compute others' levels of recursion, and how do their assumptions about others' computational capabilities shape collective outcomes?

Our research first examines a repeated network-effect game in which AI agents decide whether to "participate," with payoffs affected by peer participation. We then propose a novel workflow design, inspired by theoretical findings from the network-effect literature, that incorporates historical data (encoded as price-participation trajectories) into agents to assist their decision making. We show that the organization of history critically shapes network dynamics. Crucially, as network effects intensify, agents increasingly diverge from the theoretical equilibrium documented in classical economic models. Unlike humans, who experience history as an immutable linear sequence, LLM agents learn from history as malleable data that can be filtered, reordered, or artificially curated, which fundamentally reshapes their strategic expectations.
Through experiments that alter how historical trajectories are formatted and injected (e.g., emphasizing selective interactions), we demonstrate that LLM agents' expectations about peers depend not just on what happened, but on how the past is computationally framed. This plasticity opens a new frontier in the game theory of AI agents: the design of history itself emerges as a strategic variable, with profound implications for AI systems in socially embedded, history-sensitive environments.
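The workflow above manipulates price trajectories across four conditions (fixed, ascending, descending, random). A minimal sketch of how such trajectories might be generated follows; the round count, price range, and seed are illustrative assumptions, not values specified in this excerpt:

```python
import random

def price_trajectory(kind, rounds=10, low=1.0, high=6.0, seed=0):
    """Return a list of per-round prices for one experimental condition.

    `kind` is one of "fixed", "ascending", "descending", "random".
    The four kinds share the same price set so that only the *ordering*
    of history differs between conditions.
    """
    # Evenly spaced prices from low to high; endpoints land exactly on
    # low and high because the division happens after the multiplication.
    ascending = [low + i * (high - low) / (rounds - 1) for i in range(rounds)]
    if kind == "fixed":
        return [(low + high) / 2] * rounds   # constant midpoint price
    if kind == "ascending":
        return ascending
    if kind == "descending":
        return list(reversed(ascending))
    if kind == "random":
        rng = random.Random(seed)            # seeded for reproducibility
        shuffled = ascending[:]
        rng.shuffle(shuffled)                # same prices, scrambled order
        return shuffled
    raise ValueError(f"unknown trajectory kind: {kind}")
```

Because all four conditions draw from the same price set, any behavioral difference between agents can be attributed to the temporal ordering of the history rather than to the prices themselves.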
2. A canonical example of network-effect games

A canonical example of network-effect games involves six scholars deciding whether to attend a conference. Each scholar has full knowledge of the game structure, including the set of players, the action set, and the generic payoff function. Specifically, each scholar j knows:

  • The total number of scholars (n = 6)
  • The action set for all scholars: {Attend, Not Attend}
  • Individual parameters: standalone value θ_j (ranging from 1 to 6 across agents), a coefficient β that measures the strength of network effects, and a fixed cost p_j that measures traveling expenses (e.g., airfare, registration)
  • U_j: the payoff function for each scholar j, where each scholar chooses to attend the conference if her/his utility is non-negative. U_j is defined as:

U_j(θ_j) = θ_j + βN − p_j ≥ 0,    (1)

in w
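Given the payoff rule in (1), the attendance set that is consistent with everyone's best response can be found by iterating: each scholar attends iff θ_j + βN − p_j ≥ 0 given the current attendance count N, until no one changes their decision. The sketch below assumes N counts all attendees and uses a uniform price and an illustrative β; the excerpt does not fix these specifics:

```python
def equilibrium(thetas, beta, price, max_iters=50):
    """Iterate best responses until attendance stabilizes.

    A scholar attends iff U_j = theta_j + beta * N - price >= 0, where N
    is the attendance count from the previous round (assumed to include
    the scholar herself; the excerpt leaves this detail open).
    """
    n = len(thetas)
    attending = [True] * n  # optimistic start: everyone plans to attend
    for _ in range(max_iters):
        N = sum(attending)
        new = [theta + beta * N - price >= 0 for theta in thetas]
        if new == attending:  # fixed point: no one wants to switch
            return attending
        attending = new
    return attending

thetas = [1, 2, 3, 4, 5, 6]   # standalone values from the example
result = equilibrium(thetas, beta=0.5, price=5.0)
print(sum(result), "scholars attend at the fixed point")  # prints: 4 ...
```

Starting from the optimistic all-attend profile, attendance shrinks round by round as low-θ scholars drop out, lowering N and the value of attending for everyone else; with the illustrative parameters the process settles at the four scholars with θ_j ≥ 3. This unraveling is exactly the dynamic the paper's "AI optimism" finding concerns: agents under strong network effects fail to follow it down to the equilibrium.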


This content is AI-processed based on open access ArXiv data.
