The Dynamics of Attention across Automated and Manual Driving Modes: A Driving Simulation Study
The integration of autonomous vehicles (AVs) into transportation systems has introduced critical safety concerns, particularly regarding driver re-engagement during mode transitions. Past accidents underscore the risks of overreliance on automation and highlight the need to understand dynamic attention allocation to support safety in autonomous driving. This study explores the dynamics of driver attention to various zones, including the road, the central mirror, the embedded Human-Machine Interface (HMI), and the speedometer, across different driving modes in AVs. A high-fidelity driving simulation was conducted in which eye-tracking technology measured fixation duration, fixation count, and time to first fixation across distinct driving modes (automated, manual, and transition); these metrics were then used to assess how drivers allocated attention to various areas of interest (AOIs). Findings show that drivers’ attention varies significantly across driving modes: in manual mode, attention consistently focuses on the road, while in automated mode, prolonged fixation on the embedded HMI was observed. During the handover and takeover phases, attention shifts dynamically between environmental and technological elements, indicating that attention allocation is mode-dependent. These findings inform the design of adaptive HMIs in AVs that align with drivers’ attention patterns: by presenting relevant information according to the driving context, such systems can enhance driver-vehicle interaction, support effective transitions, and improve overall safety. The findings from the generalized linear mixed model (GLMM) can be directly applied to the design of adaptive HMIs and driver training programs that enhance attention and improve safety.
💡 Research Summary
This paper investigates how drivers allocate visual attention across different zones—road, central mirror, embedded Human‑Machine Interface (HMI), and speedometer—when operating in automated, manual, and transition (handover and takeover) driving modes. Using a high‑fidelity Level‑3 driving simulator equipped with a 10‑inch Android tablet HMI and Tobii Pro Glasses 2 eye‑tracking (50 Hz), the authors recorded fixation duration, fixation count, and time‑to‑first‑fixation for 34 participants (average age 33 years, 15 M/19 F). The experimental scenario involved a sequence of rural, suburban, and highway segments with multiple automated‑to‑manual switches prompted by visual, auditory, and haptic cues.
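The three gaze metrics named above can be derived from an eye tracker's fixation log. The sketch below is illustrative only, assuming a simplified event format (`aoi`, `start`, `duration` in seconds); the field names and structure are assumptions, not the export format of Tobii Pro Glasses 2 or the authors' pipeline.

```python
# Hypothetical sketch: per-AOI fixation duration, fixation count, and
# time-to-first-fixation from a list of fixation events. The event
# format is an assumption for illustration.
from collections import defaultdict

def gaze_metrics(fixations):
    """fixations: list of dicts with 'aoi', 'start' (s), 'duration' (s)."""
    metrics = defaultdict(lambda: {"fixation_count": 0,
                                   "total_duration": 0.0,
                                   "time_to_first_fixation": None})
    for fx in sorted(fixations, key=lambda f: f["start"]):
        m = metrics[fx["aoi"]]
        m["fixation_count"] += 1
        m["total_duration"] += fx["duration"]
        if m["time_to_first_fixation"] is None:
            m["time_to_first_fixation"] = fx["start"]  # onset of first fixation
    return dict(metrics)

# Toy trace: two windshield fixations bracketing one tablet fixation.
demo = [
    {"aoi": "Windshield", "start": 0.0, "duration": 1.2},
    {"aoi": "Tablet",     "start": 1.4, "duration": 0.6},
    {"aoi": "Windshield", "start": 2.2, "duration": 0.9},
]
print(gaze_metrics(demo))
```

In practice these metrics would be computed per participant, per driving mode, and per AOI before entering the statistical model.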
Data were aggregated into two functional Areas of Interest (AOIs): “Windshield” (road, landscape, mirror) and “Tablet” (HMI). A Generalized Linear Mixed Model (GLMM) treated driving mode, AOI, and self‑reported trust in automation as fixed effects, with random intercepts for participants. Results show a clear mode‑dependent pattern: in automated mode, drivers spent significantly longer on the Tablet AOI (mean fixation ≈ 1.8 s) and less on the windshield (≈ 0.9 s). In manual mode, the opposite pattern emerged (windshield ≈ 2.3 s, tablet ≈ 0.7 s). During handover, attention initially remained on the windshield but shifted toward the Tablet as takeover cues appeared; in takeover, the time‑to‑first‑fixation on the Tablet decreased sharply, indicating rapid re‑orientation.
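The model structure described above (fixed effects for driving mode, AOI, and trust; random intercepts per participant) can be sketched as follows. This is a stand-in, not the authors' analysis: statsmodels' `MixedLM` fits a Gaussian linear mixed model, whereas the paper's GLMM family and link are not stated here, and the simulated data and variable names are assumptions for illustration.

```python
# Illustrative mixed-model fit mirroring the reported design:
# fixed effects mode, AOI, trust (plus mode x AOI), random intercept
# per participant. Data are simulated; effect sizes are arbitrary.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for p in range(10):                              # 10 simulated participants
    intercept_p = rng.normal(0, 0.3)             # random intercept
    trust = rng.uniform(1, 7)                    # simulated trust score
    for _ in range(12):                          # 12 trials each
        mode = rng.choice(["automated", "manual", "transition"])
        aoi = rng.choice(["Tablet", "Windshield"])
        # Longer windshield fixations in manual mode, as in the findings.
        base = 2.0 if (mode == "manual" and aoi == "Windshield") else 1.0
        rows.append({"participant": p, "mode": mode, "aoi": aoi,
                     "trust": trust,
                     "fix_dur": base + intercept_p + rng.normal(0, 0.2)})
df = pd.DataFrame(rows)

model = smf.mixedlm("fix_dur ~ mode * aoi + trust", df,
                    groups=df["participant"])
result = model.fit()
print(result.summary())
```

With this coding, the mode-by-AOI interaction term captures the mode-dependent attention pattern the study reports, while the participant grouping absorbs between-driver baseline differences.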
Trust analysis revealed a non‑linear interaction: participants with high trust levels exhibited a 30 % reduction in windshield fixation and a 45 % increase in Tablet fixation during automation, whereas low‑trust participants maintained more road‑focused gaze even when the vehicle was autonomous. This aligns with prior findings that excessive trust can lead to attentional disengagement from primary driving tasks.
The authors argue that these dynamics have direct implications for adaptive HMI design. Reducing visual load on the HMI during automation, and providing salient, road‑oriented cues just before takeover, could mitigate the observed attention shift toward non‑driving information. Moreover, a “trust‑adaptive” interface that modulates alert intensity based on real‑time trust estimates could further enhance safety.
Limitations include the laboratory nature of the simulator (absence of real‑world vibration, lighting changes, and external distractions), the exclusion of non‑driving secondary tasks (which are common in everyday driving), and reliance on a pre‑experiment trust questionnaire rather than continuous trust monitoring. Future work should validate findings in on‑road tests, incorporate real‑time trust metrics, and explore a broader set of secondary tasks.
In conclusion, visual attention allocation varies markedly across driving modes, with automated driving encouraging prolonged HMI fixation and manual driving maintaining road‑centered gaze. The identified attention patterns and their relationship with driver trust provide actionable insights for designing adaptive HMIs and driver‑training programs aimed at improving safety during automated‑manual transitions.