AI Phenomenology for Understanding Human-AI Experiences Across Eras

Notice: This research summary and analysis were generated automatically using AI technology. For full accuracy, please refer to the original arXiv source.

There is no ‘ordinary’ when it comes to AI. The human-AI experience is extraordinarily complex and specific to each person, yet dominant measures such as usability scales and engagement metrics flatten away nuance. We argue for AI phenomenology: a research stance that asks “How did it feel?” beyond the standard “How well did it perform?” when interacting with AI systems. AI phenomenology acts as a paradigm for bidirectional human-AI alignment because it foregrounds users’ first-person perceptions and interpretations of AI systems over time. We motivate AI phenomenology as a framework that captures how alignment is experienced, negotiated, and updated between users and AI systems. Tracing a lineage from Husserl through postphenomenology to Actor-Network Theory, and grounding our argument in three studies (two longitudinal studies with “Day,” an AI companion, and a multi-method study of agentic AI in software engineering), we contribute a replicable methodological toolkit for conducting AI phenomenology research: instruments for capturing lived experience across personal and professional contexts, three design concepts (translucent design, agency-aware value alignment, temporal co-evolution tracking), and a concrete research agenda. We offer this toolkit not as a new paradigm but as a practical scaffold that researchers can adapt as AI systems, and the humans who live alongside them, continue to co-evolve.


💡 Research Summary

The paper introduces “AI phenomenology” as a research stance that foregrounds the lived, first‑person experience of interacting with AI systems, asking “How did it feel?” rather than the usual performance‑oriented questions. Drawing on Husserl’s phenomenology, postphenomenology (Ihde, Verbeek), and Actor‑Network Theory, the authors argue that agency in human‑AI interaction is not a fixed property of either party but emerges through the entanglement of human expectations, system behavior, and contextual cues. To operationalize this stance, they develop a methodological toolkit that includes longitudinal instruments, progressive transparency interviews, and multi‑method data collection across personal and professional settings.

Three empirical studies conducted in the summer of 2025 illustrate the approach. The first two studies involve “Day,” a human‑like chatbot, and focus respectively on agency negotiation and value alignment. In the agency study participants engage with Day for a month and then undergo a three‑stage “progressive transparency interview.” Stage 1 asks them to review their own conversation history; Stage 2 introduces anonymized excerpts from other participants; Stage 3 reveals Day’s internal architecture (user profiles, memory, goal models). This design mirrors Husserl’s epoché, allowing participants to bracket and then un‑bracket their assumptions and to observe how their perception of Day’s agency shifts as transparency increases. Findings show a fluid spectrum of relationships—tool, companion, quasi‑other—and a phenomenon the authors call “pragmatic anthropomorphism,” where users treat the AI as a social actor while maintaining awareness of its artificial nature.
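
The staged protocol can be written down as a simple ordered structure. The sketch below is a hypothetical Python encoding of the three stages described above; the stage prompts are illustrative paraphrases, not the authors’ actual interview instruments.

```python
from dataclasses import dataclass

@dataclass
class TransparencyStage:
    """One stage of the progressive transparency interview."""
    name: str
    material_revealed: str
    prompt: str

# Hypothetical encoding of the three-stage design; prompts are
# illustrative, not the authors' exact interview questions.
PROGRESSIVE_TRANSPARENCY_PROTOCOL = [
    TransparencyStage(
        name="Stage 1: self-review",
        material_revealed="participant's own conversation history with Day",
        prompt="How did these exchanges feel? What did Day seem to be doing?",
    ),
    TransparencyStage(
        name="Stage 2: peer excerpts",
        material_revealed="anonymized excerpts from other participants' chats",
        prompt="Does Day behave with others the way you expected?",
    ),
    TransparencyStage(
        name="Stage 3: architecture reveal",
        material_revealed="Day's internals: user profiles, memory, goal models",
        prompt="Knowing this, has your sense of Day's agency changed?",
    ),
]

def run_interview(protocol):
    """Walk an interviewer through the staged disclosure order."""
    for stage in protocol:
        print(f"{stage.name}: show {stage.material_revealed}")
        print(f"  Ask: {stage.prompt}")

if __name__ == "__main__":
    run_interview(PROGRESSIVE_TRANSPARENCY_PROTOCOL)
```

Encoding the protocol this way makes the epoché-like structure explicit: each stage widens what the participant can see before the same perceptual question is asked again.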

The value‑alignment study builds a “Value‑Alignment Perception Toolkit” (VAPT). Participants first explore a Topic‑Context Graph that visualizes the themes discussed with Day and associated sentiment scores. Next they evaluate four AI‑generated personas (chat‑based, survey‑based, anti‑persona, random) on how well each reflects their own voice in value‑laden dilemmas. Finally they compare radar charts of their self‑reported Schwartz values with LLM‑inferred values, accompanied by detailed reasoning logs. Participants describe the experience as looking into a “third‑person mirror,” feeling both exposed and validated. Quantitatively, AI‑inferred values correlate moderately with self‑reports (Spearman ρ≈0.58), but the richer insight is that the AI can reshape users’ self‑understanding in real time, a risk the authors label “weaponized empathy.”
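
For readers who want to see what the quantitative comparison amounts to, it reduces to a rank correlation between two value profiles. The sketch below uses SciPy’s spearmanr on made-up placeholder scores for Schwartz’s ten basic values; it illustrates only the computation, not the study’s data.

```python
# Minimal sketch of the VAPT's final comparison step: correlating
# self-reported Schwartz values with LLM-inferred values.
from scipy.stats import spearmanr

SCHWARTZ_VALUES = [
    "self-direction", "stimulation", "hedonism", "achievement", "power",
    "security", "conformity", "tradition", "benevolence", "universalism",
]

# Hypothetical importance ratings on a 1-6 scale (placeholders, not study data).
self_reported = [5.5, 4.0, 3.5, 4.5, 2.0, 5.0, 3.0, 2.5, 5.5, 5.0]
llm_inferred  = [5.0, 3.5, 4.0, 5.0, 2.5, 4.5, 3.5, 2.0, 5.0, 4.5]

rho, p_value = spearmanr(self_reported, llm_inferred)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# The paper reports a moderate correlation of roughly rho ≈ 0.58 across
# participants; this toy example only demonstrates the computation.
```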

The third study moves beyond chatbots into professional software engineering. While prior work measures AI impact in terms of speed or defect reduction, this study treats the workplace as a lived arena where developers negotiate identity, code ownership, and career trajectories with agentic AI tools (e.g., code assistants, automated reviewers). Interviews and diary entries reveal divergent attitudes between junior and senior engineers regarding control, trust, and the perceived intrusion of AI into professional agency.

From these studies the authors distill three design concepts: (1) Translucent design – staged disclosure of AI internals to support users’ phenomenological re‑orientation; (2) Agency‑aware value alignment – recognizing that AI’s mediation of values can both empower and manipulate users; (3) Temporal co‑evolution tracking – longitudinal mapping of how human‑AI relationships evolve over time. The accompanying toolkit (progressive transparency interview protocol, VAPT, workplace diary framework) is presented not as a new paradigm but as a practical scaffold that can be adapted across domains as AI systems and their human partners co‑evolve.
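
Temporal co-evolution tracking, in particular, lends itself to a simple longitudinal schema. The sketch below shows one hypothetical way to record periodic “relationship snapshots” compatible with the diary framework; the field names and scales are assumptions, not the authors’ instruments.

```python
# Hypothetical schema for temporal co-evolution tracking: periodic
# snapshots of how a participant currently relates to the AI.
from dataclasses import dataclass
from datetime import date

@dataclass
class RelationshipSnapshot:
    participant_id: str
    captured_on: date
    relational_stance: str   # e.g. "tool", "companion", "quasi-other"
    perceived_agency: int    # assumed 1 (mere tool) .. 7 (autonomous actor)
    trust: int               # assumed 1 .. 7
    notes: str               # free-text diary excerpt

def trajectory(snapshots):
    """Order one participant's snapshots to expose drift over time."""
    return sorted(snapshots, key=lambda s: s.captured_on)
```

Keeping stance, perceived agency, and trust as repeated measures is what turns the fluid tool/companion/quasi-other spectrum from a one-off observation into a trackable trajectory.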

The paper contributes a philosophically grounded yet empirically tractable methodology for capturing the nuanced, affective dimensions of human‑AI interaction, highlighting the importance of agency negotiation, value mediation, and temporal dynamics. Limitations include relatively small, culturally homogeneous samples and the nascent state of quantitative analysis for phenomenological data. The authors call for future work that scales the toolkit, diversifies participant pools, and develops mixed‑methods analytic pipelines capable of integrating rich first‑person accounts with system‑level metrics.

