Enabling human-centered AI: A new junction and shared journey between AI and HCI communities
Artificial intelligence (AI) has brought benefits, but it may also cause harm if it is not developed appropriately. Current development is driven mainly by a "technology-centered" approach, causing many failures. For example, the AI Incident Database has documented over a thousand AI-related accidents. To address these challenges, a human-centered AI (HCAI) approach has been promoted and has received growing acceptance over the last few years. HCAI calls for combining AI with user experience (UX) design to enable the development of AI systems (e.g., autonomous vehicles, intelligent user interfaces, or intelligent decision-making systems) that achieve design goals such as usable/explainable AI, human-controlled AI, and ethical AI. While HCAI promotion continues, it has not specifically addressed collaboration between the AI and human-computer interaction (HCI) communities, leaving uncertainty about what actions both sides should take to apply HCAI in developing AI systems. This Viewpoint focuses on the collaboration between the AI and HCI communities and offers nine recommendations for effective collaboration to enable HCAI in developing AI systems.
💡 Research Summary
The paper opens by documenting the growing number of AI‑related mishaps—over a thousand incidents recorded in the AI Incident Database—and argues that these failures stem largely from a technology‑centric development mindset that neglects human values, usability, and ethical considerations. In response, the authors champion a Human‑Centered AI (HCAI) paradigm, which seeks to fuse artificial intelligence with the principles of user experience (UX) design, thereby delivering systems that are usable, explainable, controllable, and ethically sound. While HCAI has gained traction in recent years, the authors note a critical gap: the lack of concrete collaboration between the AI research community and the Human‑Computer Interaction (HCI) community. This disconnect leaves practitioners uncertain about how to operationalize HCAI in real‑world AI products such as autonomous vehicles, intelligent interfaces, and decision‑support tools.
To bridge this divide, the paper proposes nine actionable recommendations aimed at fostering sustained, bidirectional cooperation between AI and HCI scholars, educators, industry stakeholders, and policymakers.
- Joint Research Initiatives – Establish interdisciplinary project grants, co‑authored publications, and shared testbeds that require both AI algorithmic expertise and HCI design insight.
- Integrated Curriculum Development – Embed HCI modules (e.g., user‑centered design, usability testing) into AI degree programs and conversely introduce machine‑learning fundamentals into HCI courses, creating a new generation of “dual‑skill” researchers.
- Standardized Interfaces and Shared Datasets – Develop open specifications for AI‑HCI interaction APIs and curate benchmark datasets that include both performance metrics and human‑centric annotations (e.g., trust scores, perceived fairness).
- Co‑Created Ethical‑Legal Frameworks – Align emerging AI regulations (EU AI Act, US AI Bill of Rights) with HCI’s accessibility and privacy standards, producing joint guidelines that address accountability, transparency, and user rights.
- Participatory Design Processes – Involve end‑users, domain experts, and affected communities from the earliest conceptual stages, using methods such as co‑design workshops, scenario‑based prototyping, and iterative feedback loops.
- Unified Evaluation Methodologies – Combine quantitative AI performance measures (accuracy, latency) with qualitative HCI assessments (task success, satisfaction, mental workload) to produce holistic evaluation reports.
- Sustainable Open‑Source Ecosystem – Release collaborative toolkits, simulation environments, and documentation under permissive licenses, encouraging community contributions and long‑term maintenance.
- Policy and Industry Alignment – Create liaison bodies that translate academic findings into industry best practices and inform policy drafts, ensuring that HCAI principles become contractual or regulatory requirements.
- Continuous Monitoring and Feedback Loops – Implement post‑deployment monitoring infrastructures that capture real‑world usage data, user complaints, and ethical breaches, feeding these insights back into the design cycle for ongoing improvement.
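To make the "Unified Evaluation Methodologies" recommendation concrete, a holistic evaluation report might pair quantitative AI metrics (accuracy, latency) with qualitative HCI measures (task success, satisfaction, mental workload) in a single record. The sketch below is purely illustrative; the class name, field names, and scale conventions (a 1–5 Likert satisfaction score, a 0–100 NASA-TLX-style workload score) are assumptions, not part of the paper.

```python
from dataclasses import dataclass

@dataclass
class HolisticEvaluation:
    """Hypothetical record pairing AI performance metrics with
    human-centric HCI measures in one evaluation report."""
    model_accuracy: float     # fraction of correct predictions (0-1)
    latency_ms: float         # mean inference latency in milliseconds
    task_success_rate: float  # fraction of users completing the task (0-1)
    satisfaction: float       # mean survey score on a 1-5 Likert scale
    mental_workload: float    # e.g., NASA-TLX score, 0-100 (lower is better)

    def summary(self) -> dict:
        # Normalize the human-centric measures to 0-1 so AI and HCI
        # criteria can sit side by side in one report.
        return {
            "accuracy": self.model_accuracy,
            "latency_ms": self.latency_ms,
            "task_success": self.task_success_rate,
            "satisfaction_norm": (self.satisfaction - 1) / 4,
            "workload_norm": 1 - self.mental_workload / 100,
        }

report = HolisticEvaluation(
    model_accuracy=0.94, latency_ms=120.0,
    task_success_rate=0.81, satisfaction=4.2, mental_workload=35.0,
).summary()
print(report)
```

A report like this makes trade-offs visible: a model with high accuracy but poor normalized satisfaction or workload scores would fail a holistic evaluation even if it passes a performance-only one.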
Each recommendation is accompanied by concrete implementation steps, potential funding sources, and expected impact on HCAI outcomes. The authors emphasize that user participation and joint evaluation are especially pivotal for achieving trustworthy, controllable AI systems.
The paper also outlines future research directions, including meta‑analyses of interdisciplinary case studies, longitudinal studies of user trust dynamics, and the development of adaptive systems that can modify their behavior in real time based on human feedback. By positioning these as open challenges, the authors invite the broader community to extend the proposed roadmap.
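The monitoring-and-feedback idea above can be sketched in miniature: a post-deployment monitor that aggregates user feedback signals and flags when the system should be routed back into the design cycle. This is a minimal illustration under assumed conventions (+1 for approval, -1 for a complaint, a sliding window, and a hypothetical alert threshold), not a description of any infrastructure the paper specifies.

```python
from collections import deque

class FeedbackMonitor:
    """Hypothetical post-deployment monitor: keeps a sliding window of
    user feedback signals and flags when aggregate sentiment drops low
    enough that the system needs design-cycle review."""

    def __init__(self, window: int = 100, alert_threshold: float = -0.2):
        self.events = deque(maxlen=window)  # oldest signals fall off
        self.alert_threshold = alert_threshold

    def record(self, signal: int) -> None:
        """Record one feedback event: +1 (approval) or -1 (complaint)."""
        self.events.append(signal)

    def needs_review(self) -> bool:
        """True when mean feedback falls below the alert threshold."""
        if not self.events:
            return False
        return sum(self.events) / len(self.events) < self.alert_threshold

monitor = FeedbackMonitor(window=5)
for signal in [1, -1, -1, -1, 1]:
    monitor.record(signal)
print(monitor.needs_review())
```

In a real deployment the signals would come from usage logs, complaint channels, or ethics audits rather than a hand-coded list, and the review flag would feed the iterative design loop the recommendation describes.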
In conclusion, the article delivers a strategic blueprint that reframes AI development from a siloed, performance‑first pursuit to a collaborative, human‑centric engineering effort. The nine recommendations serve as a practical guide for academia, industry, and policymakers to co‑create AI technologies that are not only technically proficient but also aligned with societal values, user needs, and ethical standards. If adopted, this roadmap promises to reduce AI‑related incidents, enhance public trust, and accelerate the responsible diffusion of intelligent systems across domains.