The Effect of Belief Boxes and Open-mindedness on Persuasion
Reading time: 5 minutes
...
📝 Original Info
Title: The Effect of Belief Boxes and Open-mindedness on Persuasion
ArXiv ID: 2512.06573
Date: 2025-12-06
Authors: Onur Bilgin, Abdullah As Sami, Sriram Sai Vujjini, John Licato
📝 Abstract
As multi-agent systems are increasingly utilized for reasoning and decision-making applications, there is a greater need for LLM-based agents to have something resembling propositional beliefs. One simple method for doing so is to include statements describing beliefs maintained in the prompt space (in what we'll call their belief boxes). But when agents have such statements in belief boxes, how does it actually affect their behaviors and dispositions towards those beliefs? And does it significantly affect agents' ability to be persuasive in multi-agent scenarios? Likewise, if the agents are given instructions to be open-minded, how does that affect their behaviors? We explore these and related questions in a series of experiments. Our findings confirm that instructing agents to be open-minded affects how amenable they are to belief change. We show that incorporating belief statements and their strengths influences an agent's resistance to (and persuasiveness against) opposing viewpoints. Furthermore, it affects the likelihood of belief change, particularly when the agent is outnumbered in a debate by opposing viewpoints, i.e., peer pressure scenarios. The results demonstrate the feasibility and validity of the belief box technique in reasoning and decision-making tasks.
📄 Full Content
The Effect of Belief Boxes and Open-mindedness on Persuasion
Onur Bilgin, Abdullah As Sami, Sriram Sai Vujjini, and John Licato
Advancing Machine and Human Reasoning (AMHR) Lab, University of South Florida
Keywords:
Belief-box, Open-mindedness, Multi-agent debate, Persuasiveness, Peer Pressure.
Abstract:
As multi-agent systems are increasingly utilized for reasoning and decision-making applications, there is a greater need for LLM-based agents to have something resembling propositional beliefs. One simple method for doing so is to include statements describing beliefs maintained in the prompt space (in what we'll call their "belief boxes"). But when agents have such statements in belief boxes, how does it actually affect their behaviors and dispositions towards those beliefs? And does it significantly affect agents' ability to be persuasive in multi-agent scenarios? Likewise, if the agents are given instructions to be open-minded, how does that affect their behaviors? We explore these and related questions in a series of experiments. Our findings confirm that instructing agents to be open-minded affects how amenable they are to belief change. We show that incorporating belief statements and their strengths influences an agent's resistance to (and persuasiveness against) opposing viewpoints. Furthermore, it affects the likelihood of belief change, particularly when the agent is outnumbered in a debate by opposing viewpoints, i.e., peer pressure scenarios. The results demonstrate the feasibility and validity of the belief box technique in reasoning and decision-making tasks.
1 INTRODUCTION
Argumentation is an important component of the reasoning and decision-making process. It allows individuals to communicate viewpoints, justifications, and evidence. But there is a complex relationship between argumentation and belief. For example, individuals who hold beliefs strongly may be less receptive to arguments that go against those beliefs. Multi-agent large language model-based (LLM) systems have shown remarkable capabilities in various tasks such as problem-solving, decision-making, and reasoning (Qian et al., 2024; Li et al., 2023; Xu et al., 2024; Xiong et al., 2023; Xu et al., 2023). But understanding the beliefs of these agents (or the LLM-based analogue) is difficult, as the dispositions of agents are encoded in the distribution of weights in their neural architectures. It can therefore be convenient (in terms of control, configuration, and explainability) to include an agent's beliefs as explicit text that is provided in its input prompt. But how does the inclusion of beliefs in this way actually affect the individual and social behaviors of these agents? For example, in environments where multiple agents with differing beliefs interact, do belief boxes affect agents' abilities to persuade each other, or to influence each other via peer pressure?
We set out to explore answers to these questions. In this paper, we define a construct that modifies an agent's convictions about a natural-language proposition, leading the agent to adopt and defend it in a debate. We represent it with a belief box,[1] a set of beliefs as propositions, each with a Likert scale strength value indicating the agent's confidence in that belief. Hereafter, references to agents' beliefs (e.g., aligned/misaligned or correct/incorrect) refer to the contents of the belief box, and we aim to evaluate whether the belief box aligns with the expected behavior of agents. Correct/incorrect refer to objective labels, while aligned/misaligned reflect beliefs created from human-annotated arguments in corresponding datasets. We explored beliefs in multi-agent systems, how the agents with beliefs interact with each other, how they evaluate and change their beliefs, and how persuasive the group dynamics are.

[1] We borrow this term from philosophy of mind and cognitive science, where it often describes a common assumption of computationalism (Rescorla, 2019; Schiffer, 1981).
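The belief box just described amounts to rendering (proposition, strength) pairs into an agent's prompt. The Python sketch below illustrates one way this could look; the function name, prompt wording, and 1-to-5 Likert range are assumptions for illustration, not the authors' implementation:

```python
# Hypothetical belief-box rendering: propositions paired with
# Likert-scale conviction strengths, formatted into prompt text.

def render_belief_box(beliefs):
    """beliefs: list of (proposition, strength) pairs, strength in 1..5."""
    lines = ["You hold the following beliefs:"]
    for proposition, strength in beliefs:
        lines.append(f'- "{proposition}" (conviction: {strength}/5)')
    lines.append("Defend each belief in proportion to its conviction.")
    return "\n".join(lines)

beliefs = [
    ("Remote work increases productivity", 4),
    ("Open offices aid collaboration", 2),
]
print(render_belief_box(beliefs))
```

In a debate setup, this rendered text would be prepended to the agent's system or instruction prompt, making the beliefs and their strengths explicit, controllable, and inspectable.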
Contributions. Our research questions (RQs), hypotheses, and summaries of our results are listed in Table 1. We contribute to the existing literature by:
• Providing the first (to our knowledge) detailed analysis of how equipping LLM-based agents with belief boxes affects their argumentation, belief change, and persuasion dynamics in groups.
• Developing a multi-agent system framework for topic-driven debate simulation between agents with a belief box and a belief evaluation mechanism, drawing from the Aporia debate structure (Marji and Licato, 2021).
• Providing evidence that:
  – assigning agents different levels of open-mindedness results in varying rates of belief change.
  – beliefs in agents' belief boxes affect their ability to be persuasive about those beliefs.
  – the peer pressure effect exists; however, the extent of the effect varies with the group size.
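The peer pressure scenario mentioned above (one agent outnumbered by opponents in a debate) can be sketched as a simple turn-taking loop over agents carrying belief-box prompts. This is a minimal illustrative sketch, not the paper's framework: `fake_llm` is a stub standing in for a real model call, and all names are hypothetical.

```python
# Hypothetical debate round: each agent sees its own belief box plus
# the shared transcript, then contributes one turn. A real system
# would replace fake_llm with an actual LLM call.

def fake_llm(prompt: str) -> str:
    # Stub: counts the stated convictions visible in the prompt.
    return f"(argument grounded in {prompt.count('conviction')} stated beliefs)"

def debate_round(agents, transcript):
    """agents: list of (name, belief_box_text); appends one turn per agent."""
    for name, belief_box in agents:
        prompt = belief_box + "\nTranscript:\n" + "\n".join(transcript)
        transcript.append(f"{name}: {fake_llm(prompt)}")
    return transcript

# Outnumbered (peer pressure) scenario: one proponent vs. two opponents.
agents = [
    ("A1", 'Belief: "X is true" (conviction: 5/5)'),
    ("A2", 'Belief: "X is false" (conviction: 3/5)'),
    ("A3", 'Belief: "X is false" (conviction: 3/5)'),
]
transcript = debate_round(agents, [])
```

After each round, a belief evaluation step (elicited separately from each agent) could update the belief box strengths, which is where effects like open-mindedness instructions and peer pressure would be measured.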
2 RELATED WORK
2.1 Belief, Open-mindedness