Navigating Algorithmic Opacity: Folk Theories and User Agency in Semi-Autonomous Vehicles
As semi-autonomous vehicles (AVs) become prevalent, drivers must collaborate with AI systems whose decision-making processes remain opaque. This study examines how drivers of AVs develop folk theories to interpret algorithmic behavior that contradicts their expectations. Through 16 semi-structured interviews with drivers in the United States, we investigate the explanatory frameworks drivers construct to make sense of AI decisions, the strategies they employ when systems behave unexpectedly, and their experiences with control handoffs and feedback mechanisms. Our findings reveal that drivers develop sophisticated folk theories – often using anthropomorphic metaphors describing systems that "see," "hesitate," or become "overwhelmed" – yet lack informational resources to validate these theories or meaningfully participate in algorithmic governance. We identify contexts where algorithmic opacity manifests acutely, including complex intersections, adverse weather, and rural environments. Current AV designs position drivers as passive data sources rather than epistemic agents, creating accountability gaps that undermine trust and safety. Drawing on critical data studies and algorithmic accountability literature, we propose a framework for participatory algorithmic governance that would provide drivers with transparency into AI decision-making and meaningful channels for contributing to system improvement. This research contributes to understanding how users navigate datafied sociotechnical systems in safety-critical contexts.
💡 Research Summary
This paper investigates how drivers of semi‑autonomous vehicles (SAE Level 2‑3) cope with algorithmic opacity and construct “folk theories” to make sense of unexpected system behavior. Using 16 semi‑structured interviews with U.S. drivers who own vehicles equipped with advanced driver‑assistance systems (primarily Tesla’s Autopilot/FSD, but also Polestar, Chevrolet, and Mitsubishi models), the authors explore the explanatory frameworks, coping strategies, and perceived agency of drivers when the AI’s decisions diverge from their expectations.
The theoretical framing draws on three strands: (1) Burrell’s taxonomy of algorithmic opacity (intentional secrecy, technical illiteracy, and fundamental inscrutability), (2) critical data studies concepts such as Haraway’s “cyborg” metaphor, Hind’s “semantic colonization” of driving data, and Lupton’s “digital companion species,” and (3) Schulz‑Schaeffer’s distinction between designed technology and machine‑learning‑based AI, highlighting a “domestication gap” where users must integrate inscrutable systems into everyday practice. The authors also situate folk theories within the broader literature on algorithmic transparency, emphasizing that in safety‑critical contexts like driving, inaccurate intuitions can have immediate physical consequences.
Analysis of the interview data reveals four dominant anthropomorphic metaphors that drivers use to describe the vehicle’s behavior: (a) “seeing” – the system is perceived as having visual perception; (b) “hesitating” – the AI appears to pause or deliberate; (c) “overwhelmed” – the vehicle seems to be overloaded in complex intersections or adverse weather; and (d) “intentional” – the system is treated as an agent with its own goals. These folk theories are generated from lived experience, not from any technical documentation, and remain largely unverifiable. Consequently, drivers adopt a set of pragmatic coping mechanisms: manual takeover, speed reduction, heightened monitoring, and reliance on auditory or visual cues. However, current vehicle interfaces support only one‑way data flows: the vehicle continuously collects driver and environment data, while drivers have little opportunity to give structured feedback, contest decisions, or influence algorithmic updates. This design positions drivers as passive data sources rather than epistemic agents, creating accountability gaps when accidents occur.
To address these gaps, the authors propose a participatory algorithmic governance framework for semi‑autonomous vehicles. The framework includes: (1) real‑time exposure of decision logs and explainable visualizations to the driver; (2) a systematic feedback portal where drivers can report edge‑case failures and annotate system behavior; (3) co‑design workshops involving manufacturers, regulators, and driver communities to incorporate contextual knowledge into system design; and (4) a clear responsibility matrix that delineates liability among manufacturers, software developers, and drivers. By embedding transparency and participation into the vehicle’s lifecycle, the framework aims to reduce cognitive load, improve trust, and enhance safety in human‑AI driving assemblages.
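To make components (1) and (2) of the proposed framework concrete, the sketch below shows one possible shape for a driver-facing decision-log entry and a structured feedback report. This is a minimal illustration under our own assumptions: all class names, fields, and values are hypothetical and are not drawn from the paper or from any vehicle manufacturer’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class Maneuver(Enum):
    LANE_KEEP = "lane_keep"
    LANE_CHANGE = "lane_change"
    BRAKE = "brake"
    HANDOFF_REQUEST = "handoff_request"


@dataclass
class DecisionLogEntry:
    """One explainable record of an AV decision, exposed to the driver (component 1)."""
    timestamp: datetime
    maneuver: Maneuver
    confidence: float              # model confidence in [0, 1]
    triggering_inputs: list[str]   # e.g. ["camera: occluded lane marking"]
    plain_language_reason: str     # short explanation rendered in the cabin UI


@dataclass
class DriverFeedbackReport:
    """A driver's structured annotation of an edge-case failure (component 2)."""
    log_entry: DecisionLogEntry
    driver_assessment: str         # the driver's folk-theory description
    expected_behavior: str         # what the driver thinks should have happened
    context_tags: list[str] = field(default_factory=list)  # e.g. ["rural", "heavy rain"]
    escalate_to_manufacturer: bool = True


# Example: a driver annotates an apparent "hesitation" at a four-way stop.
entry = DecisionLogEntry(
    timestamp=datetime.now(),
    maneuver=Maneuver.BRAKE,
    confidence=0.42,
    triggering_inputs=["camera: cross-traffic detected"],
    plain_language_reason="Yielding: uncertain about cross-traffic intent",
)
report = DriverFeedbackReport(
    log_entry=entry,
    driver_assessment="System hesitated although the intersection was clear",
    expected_behavior="Proceed after a normal stop",
    context_tags=["four-way stop", "rural"],
)
```

A schema along these lines would let a feedback portal aggregate reports by context tag, turning drivers’ folk-theory observations (“hesitated,” “overwhelmed”) into structured signals that engineers and regulators could audit, rather than leaving them as unverifiable intuitions.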
Overall, the study contributes a nuanced understanding of how users navigate opaque, safety‑critical AI systems, highlights the insufficiency of current transparency measures, and offers concrete design and policy recommendations to empower drivers as active participants in algorithmic governance.