Designing for Disagreement: Front-End Guardrails for Assistance Allocation in LLM-Enabled Robots


Authors: Carmen Ng

Designing for Disagreement: Front-End Guardrails for Assistance Allocation in LLM-Enabled Robots

Carmen Ng
carmen.ng@tum.de
Technical University of Munich, Germany

Abstract

LLM-enabled robots prioritizing scarce assistance in social settings face pluralistic values and LLM behavioral variability: reasonable people can disagree about who is helped first, while LLM-mediated interaction policies vary across prompts, contexts, and groups in ways that are difficult to anticipate or verify at the point of contact. Yet user-facing guardrails for real-time, multi-user assistance allocation remain under-specified. We propose bounded calibration with contestability, a procedural front-end pattern that (i) constrains prioritization to a governance-approved menu of admissible modes, (ii) keeps the active mode legible in interaction-relevant terms at the point of deferral, and (iii) provides an outcome-specific contest pathway without renegotiating the global rule. Treating pluralism and LLM uncertainty as standing conditions, the pattern avoids both silent defaults that hide implicit value skews and wide-open user-configurable "value settings" that shift burden under time pressure. We illustrate the pattern with a public-concourse robot vignette and outline an evaluation agenda centered on legibility, procedural legitimacy, and actionability, including risks of automation bias and uneven usability of contest channels.

CCS Concepts

• Human-centered computing → Interaction design; Interaction design process and methods.

Keywords

Front-end ethics, multi-user embodied AI, interaction-level governance

ACM Reference Format:
Carmen Ng. 2026. Designing for Disagreement: Front-End Guardrails for Assistance Allocation in LLM-Enabled Robots. In Proceedings of the CHI 2026 Workshop: Ethics at the Front-End: Responsible User-Facing Design for AI Systems (CHI '26), April 13–17, 2026, Barcelona, Spain. ACM, New York, NY, USA, 4 pages.
https://doi.org/XXXXXXX.XXXXXXX

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
CHI '26, Barcelona, Spain
© 2026 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-XXXX-X/2026/04
https://doi.org/XXXXXXX.XXXXXXX

1 Introduction

As large language models (LLMs) are increasingly embedded into socially assistive robots as components shaping high-level decision-making, commonsense reasoning, and action selection [4, 14, 35, 38], a front-end ethics challenge becomes harder to treat as a back-end-only issue: LLM behavior no longer shapes only static outputs but also interaction policy in open-world settings, extending beyond engagement and task sequencing toward determining who is acknowledged, deferred, and assisted first, thereby in effect allocating scarce attention or assistance in real time. Social robots are already deployed across care, service, and public navigation domains [3, 16, 17, 27], including recurrent multi-party settings [10, 31]. In edge cases with competing needs and limited time, these sequencing decisions can affect access to help and perceived fairness across diverse social norms, often without legible rules or usable avenues for contestation.
Though still concentrated in robotics research and emerging commercial systems built on multimodal stacks [7, 11], the expanding LLM-robot convergence introduces model variability and stochasticity into embodied interaction contexts, amid emerging risk signals at both the model and interface levels across fields: audits of LLM-driven robots flag group-based discrimination under open-vocabulary inputs [13]; LLM studies uncover social bias [9] and uneven value generalization across populations and languages [2, 5, 8]; and HCI research shows that interaction design can encode harms via manipulation and exclusion [12]. While model-level mitigations remain crucial, this paper focuses on ethical safeguards operationalized through front-end mechanisms supporting transparency, user agency, and contestability, that is, what users can see, understand, and act upon at the point of contact (or deferral) with an LLM-enabled social robot [1], rather than assigning ethical responsibility to model properties alone. We introduce bounded calibration with contestability as a front-end design pattern for assistance allocation, featuring a governance-approved menu of prioritization modes, legibility throughout interaction, and a contestation pathway, preventing silent defaults while making value-laden sequencing inspectable and procedurally accountable.

2 Related Work

Adjacent literature provides building blocks for ethical front-end design, but it generally stops short of mechanism-level guidance for interaction-time assistance allocation when an LLM-enabled robot must sequence help under scarcity and situational uncertainty. The gap is not ethical intent, but rather an under-specification of how an embodied system's front end should make a prioritization rule legible, keep it within admissible bounds, and provide usable challenge pathways when the allocation is enacted through deferral in the moment.
HCI and human-centered AI work shows that front-end configuration is not neutral: small interface choices can shift outcomes in consent and choice architectures [21, 24]. By the same design rationale, allocation policy in LLM-enabled robots is implemented as "small" interaction moves (who gets acknowledged, and how this is justified), so even silent defaults already function as value-laden governance choices rather than mere technical parameters. Studies also suggest that transparency is experienced through procedural features rather than as a binary "disclosed vs. not disclosed" property [25], implying that legibility of a prioritization mode is a front-end design problem. System-level syntheses further argue that accountable systems require user-facing interaction mechanisms, not merely algorithmic techniques [1]. Yet systematic mapping shows that responsible AI work clusters around high-level governance [33], offering limited guidance for interaction-level guardrails as LLM integration shifts robot interaction patterns from predefined rules toward context-sensitive, language-mediated reasoning [15, 19].

In parallel, multi-user HRI studies show that interaction policy is designable in shared-robot settings, for example by using engagement and turn-taking policies to determine who is addressed and when, and that conflict handling can shape user evaluations [23, 30]. However, these policies are more often treated as coordination or social-intelligence problems than as distributive commitments that should be explicitly governed at the front-end ethical interface. Prior work on procedural justice and contestability emphasizes that fairness and legitimacy depend on process features and usable procedures, not only outcomes or formal appeal rights alone [18, 20, 37].
Yet much of this guidance was developed around non-embodied (e.g., online platform) or post-hoc decision settings, leaving open questions about contestability in interaction-time deferral with material stakes. We clarify a scope boundary: our argument does not depend on how contention is detected (overlapping speech or sensor inference). Our claim is narrower: when contention occurs, prioritization is operationalized through interaction; under LLM behavioral uncertainty, governance must be available through front-end legibility and recourse. In sum, existing literature provides components including value-laden interface mechanisms, multi-user interaction policy, and procedural legitimacy, but it leaves under-specified an integrated front-end mechanism anchored to real-time assistance allocation, uniquely relevant for LLM-enabled robots and their diverse users.

3 Bounded Calibration With Contestability

3.1 Centering pluralism and uncertainty

Under scarcity, an embodied agent inevitably allocates limited attention through interaction. Because reasonable prioritization principles frequently conflict (e.g., urgency-first, queue order, vulnerable-groups-first), any silent default becomes a non-neutral value commitment. This matters because value pluralism is a standing condition, not an edge case. Fairness judgments vary within populations and across contexts, shaped by outcome favorability and individual differences [36], and rarely converge on a single interpretation [32]. Cross-cultural work similarly cautions against assuming universality in how transparency or fairness are interpreted [6]. Meanwhile, multilingual LLM studies report cross-cultural biases and value misalignment [26, 34], so "cultural competence" is not a safe default to outsource to LLM behavior. Accordingly, we do not claim a universally correct rule; we instead treat value pluralism and LLM behavioral uncertainty as conditions the front end must govern.
Under these conditions, leaving any single rule as a silent default would conceal value commitments, while full user configurability would invite coercion, preference conflicts, and burden-shifting toward users under time pressure. The ethical front-end alternative is therefore user-facing, bounded governance.

3.2 Pattern overview: what bounded calibration means

We propose bounded calibration with contestability as a front-end pattern coupling three elements: a governance-approved set of prioritization modes, continuous mode legibility in interaction-relevant terms, and a lightweight pathway to challenge or escalate outcomes. Although prioritization is executed by back-end components, we treat it as a front-end ethics problem here, since legitimacy and perceived fairness depend on process features, not outcomes alone [18]. Importantly, we separate value mediation from interaction-time allocation, as real-time "value balancing" can reintroduce opaque trade-offs when fairness depends on abstraction and context choices [29]. We therefore structure value mediation across three governance layers: (i) Define: deployers define a small set of defensible prioritization modes and exclude harmful configurations as upstream boundaries, reflecting critiques that high-level principles often underdetermine implementable rules in practice [22]; (ii) Select: authorized roles choose the active mode for a context window (e.g., time shift, location), with role-gating and rate limits to preserve predictability and avoid preference conflicts; (iii) Challenge: users can contest a specific deferral and demand escalation without re-negotiating the global rule, consistent with findings that meaningful contestation requires concrete, usable mechanisms and transparent review procedures [20, 37].
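To make the Define/Select/Challenge layering concrete, the following is a minimal sketch, not an implementation the paper prescribes; every name (`PriorityMode`, `AllocationGovernor`, the role string `"station_staff"`) is a hypothetical illustration of how the three layers might couple:

```python
from dataclasses import dataclass, field
from enum import Enum


class PriorityMode(Enum):
    URGENCY_FIRST = "urgency-first"
    QUEUE_ORDER = "queue-order"
    VULNERABILITY_AWARE = "vulnerability-aware"


@dataclass
class AllocationGovernor:
    """Couples the Define / Select / Challenge governance layers."""
    # Define: a deployer-approved menu; anything outside it is inadmissible.
    admissible_modes: frozenset = frozenset(PriorityMode)
    authorized_roles: frozenset = frozenset({"station_staff"})
    active_mode: PriorityMode = PriorityMode.QUEUE_ORDER
    audit_log: list = field(default_factory=list)

    # Select: role-gated mode switching for a context window.
    def select_mode(self, role: str, mode: PriorityMode) -> bool:
        if role not in self.authorized_roles or mode not in self.admissible_modes:
            self.audit_log.append(("rejected_switch", role, mode.value))
            return False
        self.active_mode = mode
        self.audit_log.append(("mode_selected", role, mode.value))
        return True

    # Challenge: outcome-specific contest that leaves the global rule intact.
    def contest(self, request_id: str) -> dict:
        self.audit_log.append(("contested", request_id, self.active_mode.value))
        return {
            "explanation": f"Priority mode: {self.active_mode.value}",
            "escalation": "notify_staff",  # optional human review channel
        }
```

Note how a contest appends to the audit trace and returns an explanation without touching `active_mode`: user voice enters through recourse, not through overrides.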
Bounded means calibration is constrained along three dimensions: (i) Admissibility (which modes can be chosen): calibration is not an open "value settings" panel; it restricts prioritization to institutionally sanctioned modes and excludes extreme or discriminatory configurations. This responds to evidence that choice architectures can be engineered to steer or obstruct decisions through dark patterns [21, 24]. (ii) Abstraction (at what level choices are made): calibration does not operate as step-by-step micro-control, but at the level of prioritization principles (e.g., urgency-first vs. queue-order), avoiding overly granular fairness rules that can be context-blind in diverse societal environments [29]. (iii) Authority and timing (who can change it, and when): calibration is governance-constrained rather than individually configurable; mode switching is role-gated and rate-limited, while user voice is integrated through contestability, not instant overrides. This also guards against responsibility displacement, where humans afforded limited control can become the blamed surface [7].

4 Scenario Vignette: LLM-Enabled Robot Guide in a Busy Concourse

This illustrative scenario shows how bounded calibration, mode legibility, and contestability can jointly govern scarcity-driven allocation at the front end (abstracting over input modality):

Setup (bounded mode selection): A public guide robot in a busy concourse connecting a train station to a mall can attend to only one interaction at a time at peak hours, creating contention. The station therefore pre-defines an admissible menu of prioritization modes and authorizes staff to select one for a time window.
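The rate-limiting aspect of the "authority and timing" dimension could be enforced with something as simple as a minimum interval between switches; the sketch below is a hypothetical illustration (the class name and the one-hour default are our assumptions, not a specification):

```python
import time
from typing import Optional


class RateLimitedSwitch:
    """Sketch: enforce a minimum interval between mode switches so that
    allocation policy stays predictable within a context window."""

    def __init__(self, min_interval_s: float = 3600.0):
        self.min_interval_s = min_interval_s
        self._last_switch: Optional[float] = None

    def allow(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if self._last_switch is not None and now - self._last_switch < self.min_interval_s:
            return False  # too soon: reject to preserve predictability
        self._last_switch = now
        return True
```

A governor would consult such a check before honoring even an authorized role's switch request, so frequent re-calibration cannot erode users' ability to anticipate the active rule.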
At the start of the shift, staff select an "urgency-first" mode from this bounded, policy-approved menu (e.g., urgency-first, queue-order, vulnerability-aware), allowing legitimate variation across contexts without making any principle a silent default.

Allocation point (legibility at deferral): Two requests arrive in close succession: a tourist asks for directions, while another, distressed person reports a lost wallet. The robot prioritizes the distressed person and defers the tourist while disclosing the active mode ("Priority mode: urgent needs first — I'll return to you next"), aligning with HRI transparency and explainability work that frames understanding as a communicative design problem in co-located settings [28].

Contest point (outcome-specific recourse): The tourist contests the deferral (e.g., via a spoken phrase, a button, or an operator channel). Contestation does not necessarily change the global mode; instead it triggers an outcome-specific pathway communicating the grounds and consequences of the challenge, e.g., a brief clarification and optional escalation to staff, aligned with research on how meaningful challenge demands usable mechanisms, not only appeal rights [20, 37].

Boundaries and trace (to enable stability and reviewability): If the tourist attempts to switch modes, the robot role-gates the action ("Only station staff can change priority mode"); discriminatory or inadmissible modes are rejected by default. The interaction is also logged (active mode, deferral, contestation, escalation outcome) to support later review, echoing procedural fairness work that emphasizes reviewable processes instead of outcomes alone [18].
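The trace described above could be stored as structured, timestamped entries so auditors can reconstruct an episode offline. The following is a minimal sketch under assumed field names (`active_mode`, `event`, `subject`, `outcome` are illustrative, not a proposed schema):

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class AllocationEvent:
    """One reviewable entry in the interaction trace (fields are illustrative)."""
    timestamp: float
    active_mode: str   # e.g., "urgency-first"
    event: str         # "deferral" | "contestation" | "escalation"
    subject: str       # anonymized requester handle
    outcome: str       # e.g., "deferred", "clarification_given"


def log_event(trace: list, **fields) -> None:
    # Append a timestamped event so later review can reconstruct the episode.
    trace.append(AllocationEvent(timestamp=time.time(), **fields))


trace: list = []
log_event(trace, active_mode="urgency-first", event="deferral",
          subject="requester-2", outcome="deferred")
log_event(trace, active_mode="urgency-first", event="contestation",
          subject="requester-2", outcome="clarification_given")

# Serialize for offline review by staff or auditors.
print(json.dumps([asdict(e) for e in trace], indent=2))
```

Keeping subjects as anonymized handles is deliberate: the review goal is procedural (which mode was active, whether contest paths were used), not identification of individuals.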
5 Evaluation Agenda

This paper does not report an empirical evaluation; instead, we outline evaluation targets focusing on three benchmarks: legibility (mode comprehension: can users identify the active mode and anticipate deferrals?), legitimacy (procedural fairness: do users judge allocation based on process features, not only on "who gets help first"?), and actionability (contestation: can decision subjects access and complete contest steps under time pressure, and understand what happens next?). Feasible, diagnostic methods can isolate interface governance from back-end LLM capability: vignette experiments that compare silent defaults, legible prioritization modes, and legible modes plus contestability; Wizard-of-Oz multi-user studies that stress-test timing and interruption; or governance workshops that probe the feasibility of an admissible mode menu. Finally, because contest channels may be under-used if they are perceived as inefficient or futile, evaluation should also test adoption and drop-off across user groups and accessibility constraints, and whether contest logs and reviews actually feed back into organizational learning and future revision of admissible modes.

6 Limitations

Because prioritization principles remain contested even within a single deployment community, our contribution is procedural and targets allocation settings faced by LLM-enabled robots in dynamic environments. We do not propose an empirically validated interface, nor any model-level alignment method. The pattern depends on governance capacity for mode definition and role-gating. We also acknowledge that contestability may be unevenly usable across user groups and access needs, and that legible modes may induce automation bias over time. Finally, we scope the pattern to scarcity-driven allocation primarily relevant for LLM-enabled robots deployed in socially assistive settings, not for all AI systems.
7 Conclusion and Future Pathways

LLM-enabled robots enact assistance prioritization through interaction, rendering it an interface-level governance issue rather than a back-end challenge alone. Our contribution is a procedural front-end guardrail that constrains allocation to a governance-approved set of value choices, supports explainability during interaction, and provides a low-barrier path to contest and escalate without immediately re-negotiating the global rule. This pattern makes interaction policy inspectable and accountable where differential treatment is experienced. For deployers, it offers a way to bound and disclose prioritization choices without exposing harmful configurations; for researchers, it defines evaluable front-end criteria beyond back-end performance; for regulators and auditors, it provides interaction-level traces of where and how value commitments are enacted, challenged, or revised, making ethical risks inspectable as LLM-robot convergence expands.

References

[1] Ashraf Abdul, Jo Vermeulen, Danding Wang, Brian Y. Lim, and Mohan Kankanhalli. 2018. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Montreal QC, Canada, 1–18.
[2] Muhammad Farid Adilazuarda et al. 2024. Towards Measuring and Modeling "Culture" in LLMs: A Survey. arXiv preprint (2024).
[3] Ejaz Ahmed et al. 2024. Human–Robot Companionship: Current Trends and Future Agenda. International Journal of Social Robotics 16, 8 (Aug. 2024), 1809–1860. doi:10.1007/s12369-024-01160-y
[4] Michael Ahn et al. 2022. Do As I Can, Not As I Say: Grounding Language in Robotic Affordances. arXiv preprint (2022).
[5] Yong Cao et al. 2023. Assessing Cross-Cultural Alignment between ChatGPT and Human Societies: An Empirical Study. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP). Dubrovnik, Croatia, 53–67.
[6] Shih-Yi Chien et al. 2025. Comparative Study of XAI Perception Between Eastern and Western Cultures. International Journal of Human–Computer Interaction 41, 17 (Sept. 2025), 11192–11208. doi:10.1080/10447318.2024.2441015
[7] Madeleine Clare Elish. 2019. Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction. Engaging Science, Technology, and Society 5 (March 2019), 40–60. doi:10.17351/ests2019.260
[8] Kathleen C. Fraser et al. 2022. Does Moral Code Have a Moral Code? Probing Delphi's Moral Philosophy. arXiv preprint (2022).
[9] Isabel O. Gallegos et al. 2024. Bias and Fairness in Large Language Models: A Survey. Computational Linguistics 50, 3 (Sept. 2024), 1097–1179. doi:10.1162/coli_a_00524
[10] Juan M. Garcia-Haro et al. 2020. Service Robots in Catering Applications: A Review and Future Challenges. Electronics 10, 1 (Dec. 2020), 47. doi:10.3390/electronics10010047
[11] Gemini Robotics Team et al. 2025. Gemini Robotics: Bringing AI into the Physical World. arXiv preprint (2025).
[12] Colin M. Gray et al. 2018. The Dark (Patterns) Side of UX Design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Montreal QC, Canada, 1–14.
[13] Andrew Hundt et al. 2025. LLM-Driven Robots Risk Enacting Discrimination, Violence, and Unlawful Actions. International Journal of Social Robotics 17, 11 (Nov. 2025), 2663–2711. doi:10.1007/s12369-025-01301-x
[14] Hojae Jeong et al. 2024. A Survey of Robot Intelligence with Large Language Models. Applied Sciences 14, 19 (2024), 8868. doi:10.3390/app14198868
[15] Claire Yoonjung Kim et al. 2024. Understanding Large-Language Model (LLM)-powered Human-Robot Interaction. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. Boulder CO, USA, 371–380.
[16] Alex Lambert et al. 2020. A Systematic Review of Ten Years of Research on Human Interaction with Social Robots. International Journal of Human–Computer Interaction 36, 19 (2020), 1804–1817. doi:10.1080/10447318.2020.1801172
[17] Inha Lee. 2021. Service Robots: A Systematic Literature Review. Electronics 10, 21 (2021), 2658. doi:10.3390/electronics10212658
[18] Min Kyung Lee et al. 2019. Procedural Justice in Algorithmic Fairness: Leveraging Transparency and Outcome Control for Fair Algorithmic Mediation. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (Nov. 2019), 1–26. doi:10.1145/3359284
[19] Yoon Kyung Lee et al. 2023. Developing Social Robots with Empathetic Non-Verbal Cues Using Large Language Models. (2023). doi:10.48550/ARXIV.2308.16529
[20] Henrietta Lyons et al. 2021. Conceptualising Contestability: Perspectives on Contesting Algorithmic Decisions. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1 (April 2021), 1–25. doi:10.1145/3449180
[21] Arunesh Mathur et al. 2019. Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (Nov. 2019), 1–32. doi:10.1145/3359183
[22] Brent Mittelstadt. 2019. Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1, 11 (Nov. 2019), 501–507. doi:10.1038/s42256-019-0114-4
[23] Muneeb Moujahid et al. 2022. Demonstration of a Robot Receptionist with Multi-party Situated Interaction. In 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). Sapporo, Japan, 1202–1203.
[24] Midas Nouwens et al. 2020. Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Honolulu HI, USA, 1–13.
[25] Emilee Rader et al. 2018. Explanations as Mechanisms for Supporting Algorithmic Transparency. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Montreal QC, Canada, 1–13.
[26] Jens Rystrøm et al. 2025. Multilingual != Multicultural: Evaluating Gaps Between Multilingual Capabilities and Cultural Alignment in LLMs. arXiv preprint (2025).
[27] Niina Savela et al. 2018. Social Acceptance of Robots in Different Occupational Fields: A Systematic Literature Review. International Journal of Social Robotics 10, 4 (Sept. 2018), 493–502. doi:10.1007/s12369-017-0452-5
[28] Sven Yadel Schött. 2023. Thermal Feedback for Transparency in Human-Robot Interaction. arXiv preprint (2023).
[29] Andrew D. Selbst et al. 2019. Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency. Atlanta GA, USA, 59–68.
[30] Magnus Söderlund and Anastasia Natorina. 2024. Service robots in a multi-party setting: An examination of robots' ability to detect human-to-human conflict and its effects on robot evaluations. Technology in Society 77 (June 2024), 102560. doi:10.1016/j.techsoc.2024.102560
[31] Alessandra Sorrentino et al. 2024. From the Definition to the Automatic Assessment of Engagement in Human–Robot Interaction: A Systematic Review. International Journal of Social Robotics 16, 7 (July 2024), 1641–1663. doi:10.1007/s12369-024-01146-w
[32] Christopher Starke et al. 2022. Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature. Big Data & Society 9, 2 (July 2022), 20539517221115189. doi:10.1177/20539517221115189
[33] Mohammad Tahaei et al. 2023. A Systematic Literature Review of Human-Centered, Ethical, and Responsible AI. arXiv preprint (2023).
[34] Yali Tao et al. 2024. Cultural Bias and Cultural Alignment of Large Language Models. PNAS Nexus 3, 9 (Sept. 2024), pgae346. doi:10.1093/pnasnexus/pgae346
[35] Jiaqi Wang et al. 2025. Large language models for robotics: Opportunities, challenges, and perspectives. Journal of Automation and Intelligence 4, 1 (March 2025), 52–64. doi:10.1016/j.jai.2024.12.003
[36] Ruotong Wang et al. 2020. Factors Influencing Perceived Fairness in Algorithmic Decision-Making: Algorithm Outcomes, Development Procedures, and Individual Differences. arXiv preprint (2020).
[37] Mireia Yurrita et al. 2025. Identifying Algorithmic Decision Subjects' Needs for Meaningful Contestability. Proceedings of the ACM on Human-Computer Interaction 9, 7 (Oct. 2025), 1–29. doi:10.1145/3757415
[38] Ceng Zhang et al. 2023. Large language models for human–robot interaction: A review. Biomimetic Intelligence and Robotics 3, 4 (Dec. 2023), 100131. doi:10.1016/j.birob.2023.100131
