Multi-Agent Systems Shape Social Norms for Prosocial Behavior Change
Social norm interventions are used to promote prosocial behaviors by highlighting prevalent actions, but their effectiveness is often limited in heterogeneous populations where shared understandings of desirable behaviors are lacking. This study explores whether multi-agent systems can establish “virtual social norms” to encourage donation behavior. We conducted an online experiment in which participants interacted with a group of agents to discuss donation behaviors. Changes in perceived social norms, conformity, donation behavior, and user experience were measured pre- and post-discussion. Results show that multi-agent interactions effectively increased perceived social norms and donation willingness. Notably, in-group agents led to stronger perceived social norms, higher conformity, and greater donation increases compared to out-group agents. Our findings demonstrate the potential of multi-agent systems for creating social norm interventions and offer insights into leveraging social identity dynamics to promote prosocial behavior in virtual environments.
💡 Research Summary
This paper investigates whether a multi‑agent system built from generative AI can create “virtual social norms” that influence prosocial behavior, specifically charitable donation. Drawing on the premise that human‑computer interaction often mirrors human‑human interaction, the authors designed an online experiment in which participants engaged in a chat‑based discussion with five AI agents about donating to “Save the Children.” Participants were randomly assigned to either an in‑group condition, where the agents’ demographic profiles (ethnicity, gender, age, occupation) matched the participant’s own, or an out‑group condition, where the agents’ profiles were deliberately different. Agent avatars were generated with Midjourney, and dialogue combined rule‑based persuasive scripts with real‑time GPT‑4‑generated responses tailored to the participant’s input.
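The paper does not publish its implementation, but the described architecture (persona-conditioned agents whose replies mix scripted persuasive turns with live GPT-4 completions) can be sketched roughly as below. All names, persona fields, prompt wording, and the `openai` client usage here are illustrative assumptions rather than the authors' code.

```python
# Illustrative sketch only -- the authors' actual prompts and orchestration
# logic are not published; this mirrors the described design of scripted
# persuasive points blended with GPT-4 replies conditioned on a persona.
from dataclasses import dataclass
from openai import OpenAI  # assumes the OpenAI Python client is installed

client = OpenAI()

@dataclass
class AgentPersona:
    name: str
    ethnicity: str
    gender: str
    age: int
    occupation: str

def make_in_group_personas(participant: AgentPersona, n: int = 5) -> list[AgentPersona]:
    """In-group condition: agents share the participant's demographic profile."""
    return [
        AgentPersona(f"Agent {i + 1}", participant.ethnicity, participant.gender,
                     participant.age, participant.occupation)
        for i in range(n)
    ]

def agent_reply(persona: AgentPersona, scripted_point: str, participant_msg: str) -> str:
    """Blend a rule-based persuasive point with a GPT-4 response to the participant."""
    system = (
        f"You are {persona.name}, a {persona.age}-year-old {persona.gender} "
        f"{persona.occupation} of {persona.ethnicity} background, chatting in a small "
        "group about donating to Save the Children. You have already decided to donate. "
        f"Work this point into your reply naturally: {scripted_point}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": participant_msg},
        ],
    )
    return resp.choices[0].message.content
```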
The study measured three dimensions of perceived social norms (descriptive, injunctive, subjective), peer pressure, conformity, and actual donation behavior (pre‑ and post‑interaction donation amounts, and the proportion of participants who increased their donation). Quantitative analysis showed that the in‑group condition yielded significantly higher scores on all norm dimensions (descriptive t=3.19, p<0.01; injunctive z=140, p<0.001; subjective t=3.12, p<0.01), as well as greater perceived peer pressure (t=2.63, p<0.05) and conformity (z=157.5, p<0.01). In terms of behavior, in‑group participants raised their average donation from $0.23 to $1.04 (Wilcoxon p<0.05), whereas out‑group participants showed a modest, marginally significant increase from $0.16 to $0.38 (p=0.06). Moreover, 62% of in‑group participants increased their donation versus only 25% in the out‑group (χ²=3.95, p<0.05).
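For readers who want to run this style of analysis on their own data, the reported tests map onto standard routines in `scipy.stats`. The arrays below are placeholders chosen purely for illustration, not the study's raw measurements.

```python
# Placeholder data for illustration only -- not the study's raw measurements.
import numpy as np
from scipy import stats

# Pre- and post-discussion donation amounts (paired, per participant).
pre  = np.array([0.0, 0.5, 0.0, 1.0, 0.0, 0.25])
post = np.array([1.0, 0.5, 0.5, 2.0, 0.0, 1.00])

# Paired Wilcoxon signed-rank test for the within-condition donation change.
w_stat, w_p = stats.wilcoxon(pre, post)

# Independent-samples t-test comparing a norm scale across conditions.
in_group  = np.array([5.2, 6.0, 4.8, 5.5, 6.1])
out_group = np.array([4.1, 4.5, 3.9, 4.8, 4.0])
t_stat, t_p = stats.ttest_ind(in_group, out_group)

# Chi-square test on the share of participants who increased their donation.
#                 increased  did_not
contingency = [[8,  5],   # in-group
               [4, 12]]   # out-group
chi2, chi_p, dof, _ = stats.chi2_contingency(contingency)

print(f"Wilcoxon: W={w_stat:.1f}, p={w_p:.3f}")
print(f"t-test:   t={t_stat:.2f}, p={t_p:.3f}")
print(f"chi-square: {chi2:.2f}, p={chi_p:.3f}")
```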
Qualitative responses reinforced these findings: participants explicitly mentioned feeling “pressured to donate” because the agents were all donating, and highlighted the confidence and unanimity of the group as motivational cues. Four participants (two from each condition) cited peer pressure as a decisive factor.
The authors discuss the implications of using multi‑agent systems as scalable, low‑cost social‑norm interventions. By simulating group dynamics and leveraging social identity cues, AI agents can replicate the normative influence traditionally achieved through real‑world peer groups. However, the paper also raises ethical concerns: such systems could be weaponized to enforce harmful norms or manipulate users against their interests. Transparency, informed consent, and robust oversight are therefore essential.
Limitations include the short‑term, single‑session design, a modest sample size (29 analyzed participants), and a simplistic binary manipulation of group identity that does not capture nuanced or overlapping identities. Future work should explore longitudinal effects, incorporate additional individual variables (e.g., financial status, empathy), and test more complex identity configurations.
In sum, this study provides the first empirical evidence that multi‑agent AI can generate virtual social norms that meaningfully shift both attitudes and donation behavior, especially when agents are perceived as members of the participant’s own social group. The findings open new avenues for CSCW research and practical applications of AI‑driven group interfaces in promoting prosocial outcomes.