Exploring persuasive interactions with generative social robots: An experimental framework
Integrating generative AI such as Large Language Models into social robots has improved their ability to engage in natural, human-like communication. This study presents a method to examine their persuasive capabilities. We designed an experimental framework focused on decision making and tested it in a pilot that varied robot appearance and self-knowledge. Using qualitative analysis, we evaluated interaction quality, persuasion effectiveness, and the robot’s communicative strategies. Participants generally experienced the interaction positively, describing the robot as competent, friendly, and supportive, while noting practical limits such as delayed responses and occasional speech-recognition errors. Persuasiveness was highly context dependent and shaped by robot behavior: Participants responded well to polite, reasoned suggestions and expressive gestures, but emphasized the need for more personalized, context-aware arguments and clearer social roles. These findings suggest that generative social robots can influence user decisions, but their effectiveness depends on communicative nuance and contextual relevance. We propose refinements to the framework to further study persuasive dynamics between robots and human users.
💡 Research Summary
This paper introduces a methodological framework for investigating the persuasive capabilities of generative social robots (GSRs) that are powered by large language models (LLMs). Building on prior work that used scripted dialogue, the authors design an open‑ended interaction paradigm in which a Pepper robot equipped with ChatGPT‑4‑turbo and Google Speech‑to‑Text engages participants in three decision‑making scenarios: charitable donation, classroom lesson allocation, and meal composition. After an initial allocation, the robot attempts to persuade users to revise their choices by presenting logical arguments, polite language, and expressive gestures.
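The paper does not publish its implementation, but the pipeline it describes (speech-to-text, LLM-generated dialogue, robot speech and gesture) can be sketched in a few lines. The sketch below is a minimal, framework-agnostic illustration: `transcribe` and `generate_reply` are hypothetical stubs standing in for Google Speech-to-Text and the ChatGPT-4-turbo API, not their real client libraries.

```python
from dataclasses import dataclass, field

def transcribe(audio: bytes) -> str:
    """Hypothetical stand-in for a speech-to-text call."""
    return audio.decode("utf-8")  # pretend the audio is already text

def generate_reply(system_prompt: str, history: list) -> str:
    """Hypothetical stand-in for an LLM chat-completion call."""
    last_user = next(m["content"] for m in reversed(history) if m["role"] == "user")
    return f"I understand you chose '{last_user}'. May I suggest reconsidering?"

@dataclass
class PersuasionSession:
    """One open-ended persuasion dialogue, as in the study's scenarios."""
    system_prompt: str
    history: list = field(default_factory=list)

    def turn(self, audio: bytes) -> str:
        # STT -> append user turn -> LLM reply -> append assistant turn
        user_text = transcribe(audio)
        self.history.append({"role": "user", "content": user_text})
        reply = generate_reply(self.system_prompt, self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

session = PersuasionSession("You are a polite, professional advisor.")
reply = session.turn(b"donate 70% to charity A")
```

In the real system the reply would also be passed to Pepper's speech and gesture actuators; that layer is omitted here.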
Two experimental manipulations are explored: (1) the robot’s external appearance (unclothed vs. formal attire) and (2) the robot’s self‑knowledge, i.e., a system prompt that makes the robot explicitly aware of its clothing and role as a professional advisor. This yields three conditions: Unclothed, Clothed, and Clothed & Self‑Knowledge. A convenience sample of twelve participants (average age 29, mixed gender and education) interacts with the robot in a one‑hour session that includes pre‑questionnaires, the interaction itself, and post‑interaction interviews.
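The self-knowledge manipulation amounts to a change in the system prompt, while the clothing manipulation is purely physical. The exact prompt wording is not given in the paper, so the following three-condition configuration is an assumption-laden illustration of how the design could be encoded:

```python
# Illustrative system prompts for the three pilot conditions.
# The study's actual wording is not published; this text is hypothetical.
BASE_PROMPT = "You are a social robot helping a user with allocation decisions."

CONDITIONS = {
    # Appearance-only conditions: the prompt is identical, since only the
    # robot's physical attire differs between them.
    "unclothed": BASE_PROMPT,
    "clothed": BASE_PROMPT,
    # Self-knowledge condition: the prompt makes the robot explicitly aware
    # of its formal attire and advisory role.
    "clothed_self_knowledge": (
        BASE_PROMPT
        + " You are wearing formal attire and act as a professional advisor;"
        + " you may reference your clothing and advisory role when persuading."
    ),
}
```

Encoding it this way makes the key design point explicit: the Unclothed and Clothed conditions differ only in the physical world, whereas Clothed & Self-Knowledge also differs in what the LLM is told about itself.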
Qualitative data—transcripts, questionnaire responses, and interview excerpts—are analyzed to assess interaction quality, persuasion effectiveness, and the robot’s communicative strategies. Participants generally describe the robot as competent, friendly, and supportive, but they note technical shortcomings such as three‑second response pauses, occasional speech‑recognition errors, and the inability of the robot to listen while speaking. Persuasion success varies across scenarios; participants respond most positively to polite, reasoned suggestions combined with expressive gestures. The Self‑Knowledge condition boosts perceived credibility because the robot references its formal attire and advisory role, yet participants still call for more personalized arguments and clearer articulation of the robot’s social role.
The study highlights several key insights. First, the open‑ended LLM‑generated dialogue enables richer, more natural exchanges than scripted approaches, but real‑time turn‑taking limitations and latency remain significant barriers to immersion. Second, classic persuasion theories (Cialdini’s principles and the Elaboration Likelihood Model) appear applicable: central‑route cues (logical arguments, expertise) and peripheral cues (politeness, gestures) both influence user compliance. Third, robot appearance and self‑knowledge can affect perceived authority, but they do not fully compensate for the need for context‑aware, user‑specific messaging.
Methodologically, the pilot demonstrates the feasibility of the proposed framework while also exposing areas for refinement: more robust prompt engineering to control persuasive content, faster speech‑to‑text pipelines to reduce latency, and multimodal sensing (visual cues, speaker identification) to support smoother turn‑taking. The small sample size precludes statistical generalization, but the rich qualitative findings provide a foundation for larger, controlled experiments that could quantitatively measure persuasion outcomes (e.g., changes in allocation percentages) and isolate the effects of appearance versus self‑knowledge.
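The paper suggests that future experiments could quantify persuasion as changes in allocation percentages. One simple way to operationalize this (our own assumption, not a metric from the paper) is the total absolute change between the initial and post-persuasion allocations:

```python
def persuasion_delta(before: dict, after: dict) -> float:
    """Total absolute change in allocation percentages between the initial
    and post-persuasion distributions. 0 means the robot changed nothing;
    larger values mean a bigger shift. Assumes both dicts sum to 100."""
    options = set(before) | set(after)
    return sum(abs(after.get(o, 0.0) - before.get(o, 0.0)) for o in options)

# Example: a participant moves 20 points from charity A to charity B
# after the robot's arguments.
delta = persuasion_delta({"A": 70, "B": 30}, {"A": 50, "B": 50})  # -> 40.0
```

A per-scenario delta like this would let a larger, controlled study compare conditions statistically (e.g., appearance vs. self-knowledge) rather than relying on qualitative reports alone.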
In conclusion, generative social robots can indeed influence human decisions, but their persuasive impact hinges on nuanced communicative behavior, contextual relevance, and technical reliability. The authors’ experimental framework offers a scalable template for future research aimed at quantifying and optimizing persuasive dynamics in human‑robot interaction.