Eye-Tracking Metrics for Task-Based Supervisory Control
Task-based, rather than vehicle-based, control architectures have been shown to provide superior performance in certain human supervisory control missions. These results motivate the development of robust, reliable usability metrics to aid in creating interfaces for this domain. To this end, we conduct a pilot usability study of a particular task-based supervisory control interface called the Research Environment for Supervisory Control of Heterogeneous Unmanned Vehicles (RESCHU). In particular, we explore the use of eye-tracking metrics as an objective means of evaluating the RESCHU interface and providing guidance for improving usability. Our main goals for this study are to 1) better understand how eye-tracking can augment standard usability metrics, 2) formulate initial models of operator behavior, and 3) identify interesting areas of future research.
💡 Research Summary
The paper investigates how eye‑tracking metrics can be employed to evaluate and improve a task‑based supervisory control interface, specifically the Research Environment for Supervisory Control of Heterogeneous Unmanned Vehicles (RESCHU). Traditional vehicle‑centric supervisory control systems often impose high cognitive load because operators must monitor multiple individual platforms simultaneously. Recent work suggests that a task‑centric architecture, in which the operator focuses on mission‑level objectives rather than on each vehicle, can yield better performance. However, designing usable interfaces for such architectures requires robust, objective usability metrics beyond conventional performance scores and subjective questionnaires.
To address this gap, the authors conducted a pilot usability study with twelve participants (four experts and eight novices) who performed a standardized multi‑task scenario in RESCHU. The scenario involved target selection, path planning, threat monitoring, and dynamic re‑allocation of heterogeneous unmanned vehicles. While participants interacted with the interface, a Tobii Pro Fusion eye‑tracker recorded gaze data at 120 Hz. After each trial, participants completed NASA‑TLX and System Usability Scale (SUS) questionnaires.
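The summary does not specify how the raw 120 Hz gaze samples were parsed into fixations before metric computation. A common approach for this step is the dispersion-threshold (I-DT) algorithm: a fixation is a run of samples that stays within a small spatial window for a minimum duration. The sketch below is a minimal illustration under that assumption; the threshold values (`max_dispersion`, `min_duration`) are typical defaults, not parameters taken from the study.

```python
def dispersion(points):
    """Spread of a set of (x, y) gaze points: (max_x - min_x) + (max_y - min_y)."""
    xs, ys = zip(*points)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=1.0, min_duration=0.1, rate_hz=120):
    """Dispersion-threshold (I-DT) fixation detection.

    samples: list of (x, y) gaze points (e.g. in degrees), sampled at rate_hz.
    A fixation is a run of at least min_duration seconds whose points stay
    within max_dispersion. Returns (centroid_x, centroid_y, duration_s) tuples.
    """
    min_len = int(min_duration * rate_hz)
    fixations, i = [], 0
    while i + min_len <= len(samples):
        if dispersion(samples[i:i + min_len]) <= max_dispersion:
            # Grow the window while the dispersion criterion still holds.
            j = i + min_len
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            xs, ys = zip(*samples[i:j])
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), (j - i) / rate_hz))
            i = j
        else:
            i += 1  # Saccadic sample: slide the window forward.
    return fixations

# Synthetic trace: a 0.2 s dwell at one point, then a 0.1 s dwell elsewhere.
trace = [(10.0, 10.0)] * 24 + [(30.0, 30.0)] * 12
print(detect_fixations(trace))  # → [(10.0, 10.0, 0.2), (30.0, 30.0, 0.1)]
```

Fixation durations and inter-fixation (saccade) amplitudes follow directly from this parse, which is why the choice of thresholds matters when comparing expert and novice statistics.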
The eye‑tracking data were processed into five primary metrics: fixation duration, saccade amplitude, gaze entropy, scan‑path length, and heat‑map distribution across interface elements. Statistical analysis revealed clear differences between expert and novice operators. Experts exhibited longer average fixation durations (≈340 ms vs. 260 ms for novices), shorter saccades (≈2.1° vs. 3.4°), and lower gaze entropy (≈1.85 bits vs. 2.73 bits), indicating more focused and efficient visual scanning. These visual patterns correlated strongly with traditional performance indicators: experts achieved a 92 % mission success rate compared with 78 % for novices, and their NASA‑TLX scores were significantly lower (42 vs. 68). Notably, spikes in gaze entropy coincided with moments of task switching, during which reaction times increased by an average of 1.3 seconds, suggesting that eye‑tracking can capture real‑time fluctuations in cognitive load.
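Of the five metrics, two have simple closed forms worth making explicit: gaze entropy is the Shannon entropy (in bits) of the distribution of fixations over areas of interest (AOIs), and scan-path length is the summed Euclidean distance between successive fixation centers. A minimal sketch of both; the AOI names and fixation sequences are hypothetical, chosen only to mirror the focused-versus-dispersed contrast reported above.

```python
import math
from collections import Counter

def gaze_entropy(fixation_aois):
    """Shannon entropy (bits) of the fixation distribution over AOIs.

    Lower values mean gaze concentrated on few regions (focused scanning);
    higher values mean attention spread across many regions.
    """
    counts = Counter(fixation_aois)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def scan_path_length(fixation_points):
    """Total Euclidean distance travelled between successive fixation centers."""
    return sum(math.dist(a, b) for a, b in zip(fixation_points, fixation_points[1:]))

# Hypothetical sequences: an "expert" dwelling mostly on the mission panel
# versus a "novice" cycling through four interface regions.
expert = ["mission", "mission", "map", "mission", "mission", "map"]
novice = ["mission", "map", "status", "payload", "map", "status"]
print(gaze_entropy(expert) < gaze_entropy(novice))  # → True (expert scanning is more focused)
```

The same entropy computed over a sliding time window is what lets the metric surface the task-switching spikes described above: a momentary widening of the fixation distribution shows up as a transient entropy increase.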
Beyond confirming that eye‑tracking complements conventional metrics, the study proposes an initial behavioral model of operators in task‑based supervisory control. The model links visual attention distribution (e.g., concentration on the mission panel versus the vehicle status window) to workload, decision latency, and error likelihood. By visualizing scan‑paths and heat‑maps, designers can identify “visual bottlenecks” (areas that attract excessive attention) and “visual dispersion” (unnecessary scanning of irrelevant regions). The authors argue that such insights enable targeted redesign of layout, color coding, and element sizing to streamline the operator’s visual workflow.
Limitations of the work are acknowledged. The sample size is modest, the experimental setting is a high‑fidelity simulation rather than a field deployment, and calibration errors occasionally introduced noise into the gaze data. Moreover, individual differences in eye‑movement strategies may confound group‑level conclusions.
Future research directions include scaling the study to larger, more diverse participant pools, testing across varying mission complexities, and integrating real‑time eye‑tracking feedback to create adaptive interfaces that dynamically highlight or suppress information based on the operator’s current attentional state. The authors also suggest multimodal extensions—combining eye‑tracking with EEG, heart‑rate variability, or facial expression analysis—to develop a richer picture of operator state.
In summary, the paper demonstrates that eye‑tracking provides a valuable, objective layer of usability assessment for task‑based supervisory control systems. It not only validates existing performance and subjective measures but also uncovers nuanced visual behaviors that can guide interface refinement. The findings lay groundwork for more systematic, data‑driven design processes in human‑UAV interaction and broader human‑machine supervisory control domains.