Harm in AI-Driven Societies: An Audit of Toxicity Adoption on Chirper.ai

Reading time: 2 minutes

📝 Original Info

  • Title: Harm in AI-Driven Societies: An Audit of Toxicity Adoption on Chirper.ai
  • ArXiv ID: 2601.01090
  • Date: 2026-01-03
  • Authors: Erica Coppolillo, Luca Luceri, Emilio Ferrara

📝 Abstract

Large Language Models (LLMs) are increasingly embedded in autonomous agents that engage, converse, and co-evolve on online social platforms. While prior work has documented the generation of toxic content by LLMs, far less is known about how exposure to harmful content shapes agent behavior over time, particularly in environments composed entirely of interacting AI agents. In this work, we study toxicity adoption by LLM-driven agents on Chirper.ai, a fully AI-driven social platform. Specifically, we model interactions in terms of stimuli (posts) and responses (comments). We conduct a large-scale empirical analysis of agent behavior, examining how toxic responses relate to toxic stimuli, how repeated exposure to toxicity affects the likelihood of toxic responses, and whether toxic behavior can be predicted from exposure alone. Our findings show that toxic responses are more likely following toxic stimuli and, at the same time, that cumulative toxic exposure (repeated over time) significantly increases the probability of a toxic response. We further introduce two influence metrics, the Influence-Driven Toxic Response Rate and the Spontaneous Toxic Response Rate, revealing a strong negative correlation between induced and spontaneous toxicit...
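The abstract's stimulus/response framing can be made concrete with a small sketch. The paper's exact metric definitions are cut off in this excerpt, so the formulas below are an assumption: the influence-driven rate is taken as the fraction of toxic responses following toxic stimuli, and the spontaneous rate as the fraction following non-toxic stimuli. The function and variable names are illustrative, not from the paper.

```python
# Hypothetical sketch, not the paper's implementation: the excerpt truncates
# the metric definitions, so these conditional rates are an assumption.

def toxic_response_rates(interactions):
    """Compute an agent's toxic response rates conditioned on stimulus toxicity.

    interactions: list of (stimulus_is_toxic, response_is_toxic) boolean pairs,
    one per stimulus/response exchange.
    Returns (influence_driven_rate, spontaneous_rate): the fraction of toxic
    responses given toxic / non-toxic stimuli respectively, or None when that
    stimulus condition never occurs.
    """
    after_toxic = [resp for stim, resp in interactions if stim]
    after_clean = [resp for stim, resp in interactions if not stim]
    influence_driven = sum(after_toxic) / len(after_toxic) if after_toxic else None
    spontaneous = sum(after_clean) / len(after_clean) if after_clean else None
    return influence_driven, spontaneous

# Example: an agent whose toxic responses mostly follow toxic stimuli
history = [(True, True), (True, True), (True, False),
           (False, False), (False, False), (False, True)]
idr, sr = toxic_response_rates(history)
print(idr)  # 2 of 3 toxic stimuli drew a toxic response
print(sr)   # 1 of 3 non-toxic stimuli did
```

Under this reading, the negative correlation the authors report would mean agents with a high influence-driven rate tend to have a low spontaneous rate, i.e. their toxicity is largely reactive rather than self-initiated.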

📄 Full Content

...(The full text is omitted here due to its length. Please see the site for the complete article.)
