DualMind: Towards Understanding Cognitive-Affective Cascades in Public Opinion Dissemination via Multi-Agent Simulation
Forecasting public opinion during PR crises is challenging, as existing frameworks often overlook the interaction between transient affective responses and persistent cognitive beliefs. To address this, we propose DualMind, an LLM-driven multi-agent platform designed to model this dual-component interplay. We evaluate the system on 15 real-world crises occurring post-August 2024 using social media data as ground truth. Empirical results demonstrate that DualMind faithfully reconstructs opinion trajectories, significantly outperforming state-of-the-art baselines. This work offers a high-fidelity tool for proactive crisis management. Code is available at https://github.com/EonHao/DualMind.
💡 Research Summary
DualMind is a novel multi‑agent simulation platform that leverages large language model (LLM) driven agents to jointly model the slow‑evolving cognitive beliefs and the fast‑fluctuating affective responses that shape public opinion during PR crises. The authors begin by highlighting the shortcomings of traditional survey‑based methods and existing agent‑based models, which either lack real‑time granularity or reduce complex human attitudes to single numeric scores. To address this gap, each DualMind agent maintains a dual latent state: a semantic persona vector zᵗᵢ (slowly changing) and an affective vector rᵗᵢ (rapidly changing). Agents store past interactions as episodic memories annotated with content embeddings xᵢ,τ and emotion vectors qᵢ,τ. When a new message M arrives, the agent retrieves a context cᵗᵢ using a recency‑weighted attention mechanism (Eq. 1) that balances semantic relevance (β) and temporal decay (δ).
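The retrieval step can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the paper's exact Eq. 1: the softmax form, cosine similarity, and the parameter names `beta`/`delta` are assumptions based on the description above (β scales semantic relevance, δ penalizes older memories).

```python
import numpy as np

def retrieve_context(x_M, memories, t, beta=1.0, delta=0.1):
    """Recency-weighted attention over episodic memories (sketch of Eq. 1).

    x_M      : embedding of the incoming message M
    memories : list of (x_tau, tau) pairs -- content embedding and timestamp
    beta scales semantic relevance; delta controls temporal decay.
    The softmax form and parameter names are illustrative assumptions.
    """
    scores = []
    for x_tau, tau in memories:
        sim = x_M @ x_tau / (np.linalg.norm(x_M) * np.linalg.norm(x_tau))
        scores.append(beta * sim - delta * (t - tau))  # relevance minus recency penalty
    w = np.exp(np.array(scores) - np.max(scores))
    w /= w.sum()                                       # softmax attention weights
    # context c_t_i = attention-weighted sum of memory embeddings
    return sum(wi * x for wi, (x, _) in zip(w, memories))
```

Under this form, a memory that is both semantically close to M and recent dominates the context vector.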
The core state‑update (Eq. 2) is a gated, coupled rule: the affective gate (α) determines how strongly the incoming emotion qᴍ aligns with the agent’s current affect, and only when alignment is high does the persona zᵗᵢ take a small step toward the message content (controlled by learning rate γ). This mirrors psychological consolidation where emotions mediate belief revision. Decision‑making follows the Polarized Affective Cascade Model (PAACM) (Eq. 3), a logistic probability that combines four factors: (1) semantic similarity between persona and message, (2) affective similarity, (3) episodic context similarity, (4) sender influence I(u) and platform‑specific bias bₚ(i). The weights w₁…w₄ are learned per platform, allowing the model to capture platform‑specific norms and algorithmic curation.
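The gated update and the PAACM activation can be sketched together. The functional forms below (sigmoid gate on affective cosine alignment, linear interpolation steps, and the `eta` affect-learning rate) are assumptions for illustration; the paper's Eq. 2 and Eq. 3 may differ in detail, but the coupling logic matches the description above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def update_state(z, r, x_M, q_M, gamma=0.05, eta=0.3):
    """Gated, coupled state update in the spirit of Eq. 2 (illustrative form).

    The affective gate alpha measures alignment between the incoming emotion
    q_M and the agent's current affect r; the persona z steps toward the
    message content x_M only in proportion to alpha. eta is an assumed
    learning rate for the fast affect update.
    """
    alpha = sigmoid(cos(q_M, r))            # affective gate in (0, 1)
    z_new = z + gamma * alpha * (x_M - z)   # slow, emotion-gated persona drift
    r_new = r + eta * (q_M - r)             # fast affective update
    return z_new, r_new, alpha

def paacm_prob(z, r, c, x_M, q_M, I_u, b_p, w=(1.0, 1.0, 1.0, 1.0)):
    """PAACM activation probability (sketch of Eq. 3): logistic over four factors."""
    s = (w[0] * cos(z, x_M)      # (1) semantic similarity persona vs. message
         + w[1] * cos(r, q_M)    # (2) affective similarity
         + w[2] * cos(c, x_M)    # (3) episodic context similarity
         + w[3] * (I_u + b_p))   # (4) sender influence and platform bias
    return sigmoid(s)
```

Note how the gate implements the consolidation story: an emotionally opposed message yields a small alpha and hence almost no belief revision, even if its content is relevant.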
Propagation dynamics are expressed through a spectral reproduction coefficient Rₜₚ(M) (Eq. 4), the spectral radius of the element‑wise product of the platform adjacency matrix Aₚ and the activation probability matrix Pₜ(M). When R > 1, a supercritical cascade emerges; when R < 1, the diffusion attenuates. This provides a principled knob for PR planners: interventions are accepted if they reduce R below unity without violating narrative constraints.
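The reproduction coefficient reduces to a one-line spectral-radius computation; a minimal sketch, assuming `A` and `P` are dense NumPy arrays of matching shape:

```python
import numpy as np

def reproduction_coefficient(A, P):
    """Spectral reproduction coefficient R (sketch of Eq. 4).

    A : platform adjacency matrix A_p
    P : per-edge activation probability matrix P_t(M)
    R is the spectral radius of their element-wise (Hadamard) product.
    R > 1 signals a supercritical cascade; R < 1 means the diffusion dies out.
    """
    return float(np.max(np.abs(np.linalg.eigvals(A * P))))
```

This makes the intervention criterion directly computable: for a candidate messaging strategy, re-estimate `P`, recompute `R`, and accept only if `R` drops below 1.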
System architecture separates concerns: a React/TypeScript front‑end (Ant Design) visualizes dynamic networks via the Canvas API; a FastAPI back‑end serves RESTful endpoints; the agent layer uses LangChain and LangGraph to orchestrate LLM calls. A lightweight model (gpt‑4o‑mini) handles high‑frequency tasks (post generation, stance evaluation), while a more powerful model (gemini‑1.5‑pro) performs post‑hoc analysis and report generation. Agents execute sequentially in random order each round to respect API rate limits.
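The scheduling policy in the agent layer amounts to a shuffled sequential loop. The sketch below is a simplification: `agent.step` stands in for the platform's actual LangGraph-orchestrated LLM call, and the `Random` seeding is an assumption for reproducibility.

```python
import random

def run_round(agents, message, rng=None):
    """One simulation round: agents act sequentially in a fresh random order.

    Sequential execution (rather than firing all LLM calls in parallel)
    respects API rate limits, as described above. `agent.step(message)` is a
    hypothetical stand-in for the real agent invocation.
    """
    rng = rng or random.Random()
    order = list(agents)
    rng.shuffle(order)                 # new random order every round
    return [agent.step(message) for agent in order]
```

Randomizing the order each round avoids systematically privileging the same agents' posts as context for later agents.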
Evaluation uses 15 real PR crises that occurred after August 2024 across the United States, China, and Europe (five cases per region). For each crisis, 100 agents simulate heterogeneous social‑media users interacting on a crisis‑specific network. Baselines include LAID (LLM‑enhanced propagation), LPOD (non‑LLM equation‑driven agent model), and LLM‑GA (LLM‑driven opinion dynamics). All models use LLMs with knowledge cut‑off before August 2024 to avoid data leakage, and each experiment is repeated over five random seeds. Two metrics assess fidelity: Pearson’s r between simulated and empirical opinion trajectories (process fidelity) and Jensen‑Shannon Divergence (JSD) between simulated and real final stance distributions (outcome fidelity).
DualMind achieves an average trajectory correlation of r ≈ 0.78 and an average JSD of ≈ 0.27, outperforming all baselines by a substantial margin. The results demonstrate cross‑cultural robustness (consistent performance across US, Chinese, and European media ecosystems) and the ability to capture fact‑resistant, emotion‑driven opinion dynamics that prior models (e.g., LLM‑GA’s “truth‑bias”) failed to reproduce.
The authors acknowledge limitations: reliance on LLMs introduces potential bias and hallucination; the simulated population size is far smaller than real platforms; and the predefined emotion taxonomy may not cover all nuanced affective states. Future work will explore multimodal emotion detection, dynamic network rewiring (e.g., follower churn, community splitting), policy‑level simulation for real‑time crisis mitigation, and techniques to mitigate LLM bias.
Overall, DualMind offers a high‑fidelity, extensible tool for both researchers and practitioners seeking to anticipate and manage public opinion cascades during high‑stakes PR crises.