Designing AI for Real Users -- Accessibility Gaps in Retail AI Front-End
Designing AI for Real Users: Accessibility Gaps in Retail AI Front-End

Proceedings of the CHI 2026 Workshop: Ethics at the Front-End: Responsible User-Facing Design for AI Systems (CHI '26), April 13–17, 2026, Barcelona, Spain

Neha Puri and Tim Dixon, Intertek Group Plc

ABSTRACT

As AI becomes embedded in customer-facing systems, ethical scrutiny has largely focused on models, data, and governance. Far less attention has been paid to how AI is experienced through user-facing design [11]. This commentary argues that many AI front-ends implicitly assume an "ideal user body and mind," and that this becomes visible—and ethically consequential—when examined through the experiences of differently abled users. We explore this through retail AI front-ends for customer engagement: virtual assistants, virtual try-on systems, and hyper-personalised recommendations. Despite intuitive and inclusive framing, these systems embed interaction assumptions that marginalise users with vision, hearing, motor, cognitive, speech, and sensory differences, as well as age-related variation in digital literacy and interaction norms. Drawing on practice-led insights, we argue that these failures persist not primarily due to technical limits, but due to the commercial, organisational, and procurement contexts in which AI front-ends are designed and deployed, where accessibility is rarely contractual [15, 14]. We propose front-end assurance as a practical complement to AI governance, aligning claims of intelligence and multimodality with the diversity of real users.

CCS CONCEPTS

• Human-centered computing → Accessibility; • Human-centered computing → Accessibility design and evaluation methods; • Human-centered computing → User interface design.
KEYWORDS

Accessibility, AI interfaces, Retail AI, Inclusive design, Front-end ethics, Human–AI interaction

1 Introduction

AI front-end design increasingly mediates everyday human–AI interaction, yet ethics discussions focus primarily on models, data, and governance rather than how people experience AI systems at the interface [11]. This commentary argues that many contemporary AI front-ends implicitly assume an "ideal user body and mind," and that this assumption becomes visible—and ethically consequential—when examined through the experiences of differently abled users. Differently abled users do not represent edge cases; they reveal where systems have been designed around a narrow and idealised model of human ability. The World Health Organization estimates that over one billion people globally live with some form of disability [16].

Our contribution is a practice-led analysis that links front-end exclusion not to technical limits, but to organisational, procurement, and regulatory structures. We focus on retail AI front-ends used for customer engagement—conversational assistants, virtual try-on, and hyper-personalised recommendations—and show how interaction assumptions marginalise users with vision, hearing, motor, cognitive, speech, and sensory differences, as well as age-related variation in interaction norms. We then examine why these failures persist and propose front-end assurance as a missing governance layer for ethical AI.

2 The "Ideal User" as a Default Design Assumption

2.1 Real-world design drivers

Front-end interaction design is shaped by market optimisation, vendor defaults, and inherited visual paradigms. Success metrics such as engagement and conversion tend to prioritise sighted and neurotypical interaction patterns [15]. Adaptive AI components, often procured as third-party modules, are integrated "as-is," with accessibility features optional rather than enforced [14].
As accessibility is rarely specified in procurement or tested in deployment, inclusive interaction is treated as a late-stage retrofit rather than a design requirement. These assumptions also extend to age: interfaces are frequently described as "intuitive" based on familiarity within a narrow age band, implicitly assuming shared knowledge of interaction patterns, symbols, and gestures that may not hold for younger or older users.

2.2 Regulatory drivers

The European Union is currently ahead of other regions in mandating digital accessibility. The European Accessibility Act (EAA) establishes a harmonised, legally binding baseline for accessible digital products and services across the EU from 28 June 2025, including e-commerce and customer-facing platforms [5]. However, these obligations apply to digital services in general rather than to AI systems specifically. In contrast, the EU Artificial Intelligence Act (AI Act) adopts a risk-based approach and does not directly apply to most customer-facing retail AI front-ends. Its obligations phase in over time and primarily target high-risk AI systems [6].

2.3 Implications

Together, these dynamics mean that while brands are under immediate accessibility obligations for their digital interfaces under the EAA, there is currently limited AI-specific regulatory pressure to ensure accessible AI front-ends. In this gap, organisational incentives and vendor defaults embed a narrow "ideal user" model into AI interfaces—even where inclusive interaction is technically possible.

3 Illustrative Ethical Fault Lines in Retail AI Front-Ends

The ideal user assumption materialises in three widely deployed retail AI front-ends; across them, assumptions about sight and speed are compounded by assumptions about age-specific familiarity with interface conventions.
3.1 Virtual Try-On: Exclusion by Design

Virtual try-on tools based on computer vision and generative models are widely deployed by apparel and eyewear brands to simulate fit and appearance and reduce returns [1]. These systems are often presented as improving accessibility by enabling remote shopping for users who face barriers to visiting physical stores. However, current implementations remain predominantly visual, relying on photorealistic overlays and body or face mapping. Blind and low-vision users—and some older users unfamiliar with gesture- or image-driven interfaces—cannot access, query, or verify what the system is "showing," revealing a default assumption of sight that is neither tested nor audited at the interface. This illustrates a broader ethical tension: an intervention that improves access for some users can simultaneously create new forms of exclusion for others.

This limitation is not inherent to AI. Voice-first assistants can provide structured descriptions of products (e.g., fit, colour, fabric behaviour) and support non-visual shopping journeys [2]. Integrating voice or text-based descriptions of try-on outputs, alongside auditable semantic metadata describing what the model inferred and why, would allow these systems to extend accessibility benefits more equitably.

3.2 Hyper-Personalised Ranking: Invisible Coercion

Retail platforms increasingly use machine learning to dynamically reorder products and highlight "recommended," "popular," or "limited time" items [10]. These signals are conveyed through visual hierarchies, badges, and colour cues. Age further complicates this dynamic: such cues are often treated as self-explanatory, yet their meaning may be opaque to older users unfamiliar with platform-specific conventions, or to younger users encountering them outside their expected context. In many deployed systems, these cues are not programmatically exposed to assistive technologies.
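As a minimal sketch of what programmatic exposure of a ranking cue could look like, the following server-side rendering helper pairs a visual badge with a textual explanation that assistive technologies can read. The helper, its CSS class names, and the explanation strings are illustrative assumptions, not drawn from any specific retail platform.

```python
# Minimal sketch: a ranking cue exposed both visually and to assistive
# technologies. All names and strings here are illustrative assumptions.

def render_badge(cue: str) -> str:
    """Render a ranking badge whose meaning is programmatically exposed.

    The visual badge is hidden from screen readers (aria-hidden="true")
    and paired with visually hidden text explaining *why* the item is
    highlighted, so the persuasion cue is perceivable without sight.
    """
    explanations = {
        "recommended": "Recommended for you based on your browsing history.",
        "popular": "Frequently bought by other customers.",
        "limited time": "Limited-time promotion; availability may change.",
    }
    # Fall back to the raw cue text if no explanation is registered.
    explanation = explanations.get(cue, cue)
    return (
        f'<span class="badge" aria-hidden="true">{cue.title()}</span>'
        f'<span class="visually-hidden">{explanation}</span>'
    )

print(render_badge("recommended"))
```

The design point is the pairing: the colour-and-position styling stays for sighted users, while the hidden text carries the promotion logic that audits (Section 3.2) find missing from deployed widgets.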
Accessibility audits of AI chat and recommendation widgets report missing semantic markup, poor focus management, and unannounced content updates that obscure the reasons behind ranking [14, 12]. What appears as a subtle nudge for some users becomes a command by default for others who cannot perceive the persuasion cues. These effects can be mitigated through semantic labelling of ranking cues, accessible explanations of promotion logic, and evaluation with assistive technologies and diverse users.

3.3 Conversational AI for Support & Redress

AI chatbots and voice assistants now mediate customer service, returns, and complaints for many retailers [4]. Common accessibility barriers include unlabeled controls, keyboard traps, focus jumping, and screen readers failing to announce new messages [14]. While conversational interfaces are often positioned as more accessible, they can still assume age-specific norms around phrasing, pacing, and turn-taking that are not equally intuitive across generations. When access to refunds, complaints, or escalation depends on navigating an inaccessible interface, routine support becomes a rights failure. These barriers can be countered using established inclusive design patterns: robust focus management, reliable announcements of new content, fully keyboard-navigable controls, and a clearly labelled human escalation path.

Across these fault lines, differently abled users expose where AI front-ends assume sight, rapid cognition, and visual persuasion. The countermeasures above reframe the interface as a site of ethical accountability rather than a neutral delivery layer.

4 Why These Failures Persist

These failures persist due to a combination of structural and governance factors:

• Performance optimisation metrics: AI systems are trained and evaluated on majority-user behaviour (e.g., click times, dwell times, conversion), embedding narrow assumptions about age, ability, and interaction speed.
• Inherited design systems: Retail front-ends often reuse interaction components from mainstream consumer platforms (badges, colour cues, infinite scroll) without inclusive re-evaluation. For example, Microsoft Windows Copilot has exhibited a persistent issue where screen readers announce only part of a response, with spoken output diverging from the visible chat history and leaving blind users with incomplete or conflicting information [13].

• Procurement norms: Off-the-shelf AI modules are widely adopted, yet accessibility is rarely a contractual requirement or acceptance criterion.

• Regulatory ambiguity: Laws like the EAA apply broadly to accessible products and services, but guidance for adaptive and generative systems is still emerging, creating uncertainty for teams implementing AI features.

This misalignment between technical capability, organisational practice, and regulatory interpretation allows exclusionary design assumptions to persist.

5 Front-End Assurance: Towards Ethical Interfaces

Inclusive AI front-ends are technically feasible when accessibility is treated as a primary design goal rather than a retrofit. Applications such as Be My Eyes and InnoSearch.ai demonstrate how conversational interaction can support blind and low-vision users in interpreting images, asking follow-up questions, searching across retailers, and placing orders through accessible web or phone-based AI interfaces [3, 7]. At the same time, these intermediary solutions expose an ethical tension: by compensating for inaccessible retail platforms, they risk becoming a substitute for—rather than a catalyst for—accessible design by primary brands. If the interface is where ethical consequences emerge, it must also be a site of accountability.
We propose front-end assurance as a practical extension of AI governance, encompassing:

• Access testing: Can differently abled users perceive explanations, consent controls, and interaction alternatives?
• Transparency checks: Are uncertainty and persuasion cues semantically exposed?
• Agency audit: Can users override defaults or request human intervention?
• Consistency evaluation: Do disclosures and options vary with personalisation?

As an illustration, access testing could draw on established AI fairness metrics frameworks (e.g., [8, 9]), adapting group-based disparity measures from model outputs to interaction outcomes (e.g., task completion parity or error rate balance across ability groups). However, this adaptation requires careful attention to group taxonomy—disability categories are more fluid and intersectional than typical protected attributes such as gender or origin—and to data collection methodology, as interaction-level metrics require representative user testing rather than computational evaluation on held-out datasets. Developing standardised protocols for accessibility fairness measurement represents a valuable direction for future work.

Overall, treating front-end behaviour as an assurance surface—testable, measurable, and enforceable—is important to ensure that claims of intelligence, personalisation, and multimodality hold for the diversity of real users.

6 Conclusion

AI front-ends increasingly determine how people encounter brands, make choices, and seek support. Differently abled users reveal the narrow assumptions embedded in these interfaces—assumptions about sight, hearing, movement, and cognition that shape who can meaningfully engage with systems presented as intelligent or multimodal. When interaction works only for a subset of users, adaptability and intelligence become performative rather than real.
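The group-based disparity adaptation proposed for access testing in Section 5 can be sketched concretely. In this minimal sketch, the ability-group labels, session records, and the `completion_parity` helper are all hypothetical illustrations; real figures would come from representative user testing, not from a held-out dataset.

```python
# Minimal sketch: task completion parity across ability groups, adapting a
# group-based disparity measure from model outputs to interaction outcomes.
# Group labels and session data below are hypothetical illustrations.
from collections import defaultdict


def completion_parity(sessions):
    """Return per-group task-completion rates and the worst-case gap.

    `sessions` is an iterable of (group, completed) pairs collected from
    representative user testing with assistive technologies.
    """
    totals = defaultdict(int)
    completed = defaultdict(int)
    for group, done in sessions:
        totals[group] += 1
        completed[group] += int(done)
    rates = {g: completed[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap


sessions = [
    ("sighted", True), ("sighted", True), ("sighted", False), ("sighted", True),
    ("screen_reader", True), ("screen_reader", False),
    ("screen_reader", False), ("screen_reader", True),
]
rates, gap = completion_parity(sessions)
print(rates)  # per-group completion rates: 3/4 sighted, 2/4 screen-reader
print(gap)    # disparity gap: 0.75 - 0.5 = 0.25
```

An assurance process would set an acceptance threshold on the gap (analogous to error-rate-balance checks on models), while respecting the caveat above that disability categories are fluid and intersectional rather than fixed strata.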
These assumptions are often justified through claims of "intuitive" design, yet intuitiveness is frequently age-specific and culturally learned rather than universal. Importantly, accessibility is not a niche concern. Design choices that improve accessibility—clear explanations, consistent interaction patterns, multimodal feedback, and user control—often improve usability, trust, and resilience for all users, including those in situational or temporary states of limitation. Given that a significant proportion of the global population experiences disability at some point in their lives, inclusive front-end design is inseparable from designing for real-world use at scale [16].

Ethical AI cannot be achieved without ethical interfaces. Shifting accessibility from a late-stage accommodation to a core design, procurement, and governance requirement is therefore foundational. Front-end assurance offers a practical pathway to make claims of intelligence, personalisation, and multimodality meaningful by aligning AI systems with the diversity of real users.

ACKNOWLEDGMENTS

This paper was informed by a Vision Awareness Seminar conducted by Tim Dixon, Head of IT Architecture at Intertek Group plc, in November 2025. The seminar introduced a user lens grounded in lived experience of vision loss, highlighting interaction challenges and potential design responses relevant to AI-enabled systems. This perspective helped shape the analytical framing adopted in this work.

REFERENCES

[1] Accenture. 2024. Reinventing Retail with Generative AI. Retrieved from https://www.accenture.com/us-en/insights/retail/generative-ai-retail
[2] Andrea Amores-Falconi and Carlos Coronel-Silva. 2023. Design Proposal for a Virtual Shopping Assistant for People with Vision Problems Applying Artificial Intelligence Techniques. Big Data and Cognitive Computing 7, 2. https://doi.org/10.3390/bdcc7020096
[3] Be My Eyes. 2023. Introducing: Be My AI. Retrieved from https://www.bemyeyes.com/blog/introducing-be-my-ai/
[4] Deloitte. 2024. The State of AI in the Retail Industry. Retrieved from https://www2.deloitte.com/global/en/insights/industry/retail-distribution/ai-in-retail.html
[5] European Union. 2019. Directive (EU) 2019/882 on the Accessibility Requirements for Products and Services (European Accessibility Act). Official Journal of the European Union. Retrieved from https://eur-lex.europa.eu/eli/dir/2019/882/oj
[6] European Union. 2024. Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). Official Journal of the European Union. Retrieved from https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
[7] InnoSearch AI. 2025. InnoSearch AI: Accessible AI-Powered Shopping. Retrieved from https://innosearch.ai/
[8] ISO/IEC. 2021. ISO/IEC TR 24027:2021 Information Technology — Artificial Intelligence (AI) — Bias in AI Systems and AI Aided Decision Making. International Organization for Standardization/International Electrotechnical Commission. Retrieved from https://www.iso.org/standard/77607.html
[9] ISO/IEC. 2024. ISO/IEC TS 12791:2024 Information Technology — Artificial Intelligence — Treatment of Unwanted Bias in Classification and Regression Machine Learning Tasks. International Organization for Standardization/International Electrotechnical Commission. Retrieved from https://www.iso.org/standard/84110.html
[10] McKinsey & Company. 2023. The State of AI in Retail. Retrieved from https://www.mckinsey.com/industries/retail/our-insights/the-state-of-ai-in-retail
[11] Dorian Peters et al. 2026. Ethics at the Front-End: Responsible User-Facing Design for AI Systems. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. ACM.
[12] S. Rajmohan, R. Patel, and A. Singh. 2025. Accessibility Barriers in Deployed AI Chat Interfaces. arXiv preprint. Retrieved from https://arxiv.org/abs/2506.04659
[13] The Idea Place. 2025. Windows Copilot Serves at Best Half an Answer to Screen Reading Users. The Idea Place (November 6, 2025). Retrieved from https://theideaplace.net/windows-copilot-serves-at-best-half-an-answer-to-screen-reading-users/
[14] UsableNet. 2024. AI Chat Bot Accessibility Issues from a Blind User's Perspective. Retrieved from https://blog.usablenet.com/ai-chat-bot-accessibility-issues-from-a-blind-users-perspective
[15] Michael Wald. 2021. AI Data-Driven Personalisation and Disability Inclusion. Universal Access in the Information Society. Retrieved from https://pmc.ncbi.nlm.nih.gov/articles/PMC7861332/
[16] World Health Organization. 2023. Global Report on Health Equity for Persons with Disabilities. World Health Organization, Geneva, Switzerland. Retrieved from https://www.who.int/publications/i/item/9789240063600