Improving Behavioral Alignment in LLM Social Simulations via Context Formation and Navigation

Reading time: 1 minute

📝 Original Info

  • Title: Improving Behavioral Alignment in LLM Social Simulations via Context Formation and Navigation
  • ArXiv ID: 2601.01546
  • Date: 2026-01-04
  • Authors: Letian Kong, Qianran Jin, Renyu Zhang

📝 Abstract

Large language models (LLMs) are increasingly used to simulate human behavior in experimental settings, but they systematically diverge from human decisions in complex decision-making environments where participants must anticipate others' actions and form beliefs based on observed behavior. We propose a two-stage framework for improving behavioral alignment. The first stage, context formation, explicitly specifies the experimental design to establish an accurate representation of the decision task and its context. The second stage, context navigation, guides the reasoning process within that representation to make decisions. We validate this framework through a focal replication of a sequential purchasing game with quality signaling (Kremer and Debo 2016), extending to a crowdfunding game with costly signaling (Cason et al. 2025) and a demand-estimation task (Gui and Toubia 2025) to test generalizability across decision environments. Across four state-of-the-art (SOTA) models (GPT-4o, GPT-5, Claude-4.0-Sonnet-Thinking, DeepSeek-R1), we find that complex decision-making environments require both stages to achieve behavioral alignment with human benchmarks, whereas the simpler demand-estimation task requires only context formation...
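
To make the two-stage framing concrete, below is a minimal, hypothetical sketch of how such a pipeline might be wired up in Python. The prompt wording, function names, and overall structure are illustrative assumptions based only on the abstract, not the authors' actual prompts or implementation.

```python
# Hypothetical two-stage prompting pipeline: context formation, then context
# navigation. All prompt text and function names are illustrative assumptions.
from typing import Callable


def context_formation(experiment_spec: str) -> str:
    """Stage 1: turn an explicit experimental design into a task representation."""
    return (
        "You are a participant in the following experiment.\n"
        f"Design, roles, payoffs, and information structure:\n{experiment_spec}\n"
        "Restate the decision task, the other participants' possible actions, "
        "and what can be inferred from their observed behavior."
    )


def context_navigation(task_representation: str, observation: str) -> str:
    """Stage 2: guide reasoning within that representation toward a decision."""
    return (
        f"{task_representation}\n\n"
        f"Observed history so far: {observation}\n"
        "Reason step by step: (1) form beliefs about others from the history, "
        "(2) anticipate how later participants will respond, "
        "(3) state your chosen action on the final line."
    )


def simulate_decision(
    llm: Callable[[str], str], experiment_spec: str, observation: str
) -> str:
    """Run both stages with any text-in/text-out LLM callable."""
    representation = llm(context_formation(experiment_spec))  # model's own task restatement
    return llm(context_navigation(representation, observation))  # final decision text
```

Under this reading, the simpler demand-estimation task would only need the first call (context formation), while the sequential purchasing and crowdfunding games would need both.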

📄 Full Content

...(The full text is omitted here due to its length. Please see the original site for the complete article.)
