LLM Collusion

Reading time: 1 minute

📝 Original Info

  • Title: LLM Collusion
  • ArXiv ID: 2601.01279
  • Date: 2026-01-03
  • Authors: Shengyu Cao, Ming Hu

📝 Abstract

We investigate how the widespread adoption of large language models (LLMs) for pricing decisions can facilitate collusion among competing sellers. We develop a theoretical framework to analyze a duopoly in which both sellers delegate pricing to the same pre-trained LLM, characterized by two parameters: a propensity parameter that captures the model's internal preference for high-price recommendations, and an output-fidelity parameter that measures the alignment between this preference and the generated outputs. The LLM's propensity is updated through retraining. Somewhat surprisingly, we find that the seemingly prudent practice of configuring LLMs for robustness and reproducibility in high-stakes pricing tasks gives rise to collusion through a phase transition. Specifically, we establish a critical output-fidelity threshold that governs long-run market behavior. Below this threshold, competitive pricing is the unique long-run outcome regardless of initial conditions. Above this threshold, the system exhibits bistability, with both competitive and collusive pricing being locally stable, and the realized outcome is determined by the model's initial preference. The collusive pricing outcome resembles tacit collusion: prices are elevated on average, but occasional low-price recommendations create plausible deniability...
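
The mechanism in the abstract lends itself to a toy simulation. The sketch below is a minimal, hypothetical reconstruction, not the paper's model: the stage-game profits (`PI_COLLUDE`, `PI_DEVIATE`, `PI_COMPETE`), the shared per-period preference draw, and the profit-difference retraining rule are all assumptions introduced for illustration. Both sellers query one LLM whose propensity p is the chance it internally prefers the high price and whose fidelity f is the chance a generated output matches that preference; retraining nudges p toward whichever recommendation earned more profit in the last period.

```python
import random

# Hypothetical stage-game profits (illustrative assumptions, not from the paper)
PI_COLLUDE = 1.0   # each seller's profit when both outputs recommend the high price
PI_DEVIATE = 1.5   # profit to the lone low-price seller when the rival prices high
PI_COMPETE = 0.2   # each seller's profit when both outputs recommend the low price

def simulate(p0, fidelity, eta=0.01, periods=20_000, seed=0):
    """Stylized dynamic: one shared LLM recommends prices to both sellers.

    p0       -- initial propensity: probability the model prefers 'high'
    fidelity -- probability each generated output matches that preference
    Retraining nudges p toward whichever recommendation earned more profit.
    """
    rng = random.Random(seed)
    p = p0
    for _ in range(periods):
        pref_high = rng.random() < p                   # shared internal preference
        outs = [pref_high if rng.random() < fidelity else not pref_high
                for _ in range(2)]                     # one noisy output per seller
        if outs[0] and outs[1]:
            gain = 2 * PI_COLLUDE                      # both high: 'high' reinforced
        elif not outs[0] and not outs[1]:
            gain = -2 * PI_COMPETE                     # both low: 'low' reinforced
        else:
            gain = -PI_DEVIATE                         # undercutting pays: 'low' reinforced
        p = min(1.0, max(0.0, p + eta * gain))         # retraining step, clipped to [0, 1]
    return p

for f in (0.55, 0.95):                                 # below vs. above the toy threshold
    for p0 in (0.2, 0.8):
        print(f"fidelity={f:.2f}  p0={p0:.1f}  ->  long-run propensity {simulate(p0, f):.2f}")
```

Under these assumptions the toy dynamic reproduces the abstract's qualitative picture: at low fidelity the propensity drifts to 0 (competitive pricing) from either starting point, while at high fidelity the long-run outcome depends on the initial propensity (bistability), and even the collusive regime occasionally emits low-price recommendations through the residual infidelity. With the profit values above, the analogous threshold in this sketch sits near f ≈ 0.62; the paper's actual threshold depends on its own primitives.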

📄 Full Content

...(Full text omitted for length. Please see the site for the complete article.)
