Continuous Mental Effort Evaluation during 3D Object Manipulation Tasks based on Brain and Physiological Signals
Designing 3D User Interfaces (UI) requires adequate evaluation tools to ensure good usability and user experience. While many evaluation tools are already available and widely used, existing approaches generally cannot provide continuous and objective measures of usability qualities during interaction without interrupting the user. In this paper, we propose to use brain (with ElectroEncephaloGraphy) and physiological (ElectroCardioGraphy, Galvanic Skin Response) signals to continuously assess the mental effort made by the user to perform 3D object manipulation tasks. We first show how this mental effort (a.k.a. mental workload) can be estimated from such signals, and then measure it on 8 participants during an actual 3D object manipulation task with an input device known as the CubTile. Our results suggest that monitoring workload enables us to continuously assess the 3DUI and/or interaction technique ease-of-use. Overall, this suggests that this new measure could become a useful addition to the repertoire of available evaluation tools, enabling a finer-grained assessment of the ergonomic qualities of a given 3D user interface.
💡 Research Summary
The paper addresses a critical gap in the evaluation of three‑dimensional user interfaces (3D UIs): the lack of continuous, objective metrics that can be gathered without interrupting the user. Traditional usability methods—questionnaires such as SUS or NASA‑TLX, interviews, and post‑hoc performance logs—provide valuable insights but are inherently discrete and often rely on self‑report, which can be biased or delayed. To overcome these limitations, the authors propose a multimodal physiological monitoring framework that estimates the user’s mental workload (a proxy for cognitive effort) in real time while the user manipulates 3D objects with a specialized input device called the CubTile.
Signal Acquisition and Feature Extraction
Eight participants (mixed gender, average age 27) wore a 32‑channel EEG cap, two ECG electrodes, and a GSR sensor on the fingers. The EEG was band‑pass filtered (0.5–45 Hz) and power was computed for the theta (4–7 Hz), alpha (8–12 Hz), beta (13–30 Hz), and gamma (31–45 Hz) bands on each channel. Inter‑channel coherence and scalp potential differences were also extracted. From the ECG, the authors derived heart‑rate variability (HRV) metrics: low‑frequency (LF) power, high‑frequency (HF) power, and the LF/HF ratio, after detecting R‑peaks. GSR features included tonic level, number of phasic peaks, and peak rise time. All features were calculated in 30‑second sliding windows with 50 % overlap, providing a near‑continuous stream of data.
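To make the feature-extraction step concrete, the sketch below computes EEG band power in 30-second windows with 50 % overlap for a single band-pass-filtered channel. The sampling rate (256 Hz) and the use of Welch's method for spectral estimation are assumptions for illustration, not details from the paper; the full pipeline (all 32 channels, inter-channel coherence, ECG and GSR features) is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 256  # assumed EEG sampling rate (not stated in the summary)
BANDS = {"theta": (4, 7), "alpha": (8, 12), "beta": (13, 30), "gamma": (31, 45)}

def bandpass(signal, low=0.5, high=45.0, fs=FS, order=4):
    """Band-pass filter one EEG channel (0.5-45 Hz, as described)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def band_powers(window, fs=FS):
    """Welch PSD, then sum the PSD bins inside each band (approximate band power)."""
    freqs, psd = welch(window, fs=fs, nperseg=min(len(window), 2 * fs))
    df = freqs[1] - freqs[0]  # frequency resolution of the PSD
    return {name: psd[(freqs >= lo) & (freqs <= hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

def sliding_features(signal, fs=FS, win_s=30, overlap=0.5):
    """30-second windows with 50% overlap -> one band-power dict per window."""
    size = int(win_s * fs)
    step = int(size * (1 - overlap))
    filtered = bandpass(signal, fs=fs)
    return [band_powers(filtered[start:start + size], fs)
            for start in range(0, len(filtered) - size + 1, step)]
```

With this windowing, a 90-second recording yields five overlapping feature vectors, which is what gives the method its near-continuous temporal resolution.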
Modeling Mental Workload
Two modeling strategies were explored. A regression pipeline (multiple linear regression and random‑forest regression) predicted a continuous workload score, while a classification pipeline (support vector machine and multilayer perceptron) assigned each window to one of three discrete levels: low, medium, or high. Because physiological responses vary widely across individuals, each participant’s data were split into five folds for cross‑validation, and a short calibration session was used to personalize the models. The best classifier achieved an average accuracy of 78 % and the regression model yielded a root‑mean‑square error (RMSE) of 0.42 on a normalized workload scale. Feature importance analysis revealed that the EEG theta/alpha ratio and the ECG LF/HF ratio were the strongest predictors of increased workload, while GSR contributed mainly to detecting abrupt spikes in effort.
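The classification pipeline can be sketched with scikit-learn: an SVM trained on standardized window features, evaluated with within-participant 5-fold cross-validation as described above. The synthetic three-class data below merely stands in for real EEG/ECG/GSR features, so the resulting accuracy is not comparable to the paper's 78 %.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-participant data: one feature vector per 30-s window,
# labelled low/medium/high workload (0/1/2). Real features would come from
# the physiological pipeline; here we draw separable synthetic clusters.
n_windows, n_features = 90, 12
labels = rng.integers(0, 3, size=n_windows)
features = rng.normal(size=(n_windows, n_features)) + 2.0 * labels[:, None]

# Within-participant 5-fold cross-validation, mirroring the per-subject split.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(
    clf, features, labels,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Keeping the folds within a single participant's data, plus a short calibration session, is what lets the model adapt to individual physiological baselines.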
Experimental Design
Participants performed a series of 3D manipulation tasks using the CubTile, which provides six degrees of freedom (translation, rotation, scaling). Task difficulty was systematically varied along three dimensions: object complexity (simple cube vs. intricate mesh), precision requirement (±5 mm vs. ±1 mm), and time pressure (no limit vs. 10‑second deadline). The order of difficulty conditions was randomized to avoid learning effects. After each trial, participants completed a NASA‑TLX questionnaire, supplying a subjective workload rating that served as a ground‑truth reference.
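Crossing the three difficulty factors gives a 2 × 2 × 2 design with eight conditions. A minimal sketch of how such a randomized condition order might be generated per participant (the seeding scheme is hypothetical, not from the paper):

```python
import itertools
import random

# The 2x2x2 factorial difficulty manipulation described above.
complexity = ["simple cube", "intricate mesh"]
precision = ["\u00b15 mm", "\u00b11 mm"]
time_pressure = ["no limit", "10 s deadline"]

conditions = list(itertools.product(complexity, precision, time_pressure))

def randomized_order(participant_seed):
    """Independent random condition order per participant, to avoid
    systematic learning effects across the group."""
    order = conditions[:]
    random.Random(participant_seed).shuffle(order)
    return order

for pid in range(1, 3):  # e.g. the first two of the eight participants
    print(pid, randomized_order(pid))
```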
Results and Validation
Statistical analysis showed a strong positive correlation (Pearson r = 0.71, p < 0.01) between the physiological‑based workload estimates and the NASA‑TLX scores, confirming that the sensor‑driven model captures perceived effort. In high‑difficulty trials, average EEG theta power increased by 35 %, the ECG LF/HF ratio rose from 0.9 to 1.4, and the number of GSR peaks more than doubled, illustrating consistent multimodal responses to cognitive load. These findings demonstrate that continuous mental‑effort monitoring can reveal moment‑by‑moment fluctuations in user strain that are invisible to post‑hoc questionnaires.
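The validation step, correlating continuous workload estimates with per-trial NASA-TLX ratings, can be reproduced in outline with `scipy.stats.pearsonr`. The paired scores below are synthetic and constructed to be positively related; they are not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)

# Hypothetical paired scores: one model estimate and one NASA-TLX rating
# per trial (48 trials assumed here purely for illustration).
tlx = rng.uniform(20, 80, size=48)                    # subjective rating, 0-100
estimate = 0.01 * tlx + rng.normal(0, 0.1, size=48)   # normalized model output

r, p = pearsonr(estimate, tlx)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```

A significant positive r, as reported in the paper (r = 0.71, p < 0.01), is what licenses treating the sensor-driven estimate as a proxy for perceived effort.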
Contributions
- Introduces a real‑time, objective workload metric for 3D UI evaluation, complementing existing qualitative tools.
- Demonstrates the value of a multimodal approach (EEG + ECG + GSR) where each modality contributes complementary information.
- Provides empirical evidence linking physiological workload to UI ease‑of‑use, opening the door to workload‑driven design refinements and adaptive interfaces.
Limitations and Future Work
The study’s sample size (n = 8) limits statistical generalizability, and the laboratory setup with wired sensors may not translate directly to field deployments. Individual variability necessitates a calibration phase; future research should explore adaptive algorithms that learn on‑the‑fly and reduce or eliminate the need for pre‑session calibration. Moreover, mental workload is only one dimension of user experience; affective states, motivation, and fatigue also modulate physiological signals and should be incorporated into richer multimodal models. The authors propose extending the framework to other interaction techniques (hand gestures, haptic feedback), employing wearable‑friendly sensors (headband EEG, wrist‑mounted PPG), and ultimately integrating real‑time workload feedback into adaptive UI systems that can, for example, simplify visualizations or adjust interaction difficulty on the fly.
In summary, this work establishes that brain and peripheral physiological signals can be harnessed to continuously estimate mental effort during 3D object manipulation. By providing a fine‑grained, objective measure of cognitive load, the approach enriches the toolbox available to researchers and designers of 3D user interfaces, paving the way for more ergonomic, user‑centered, and dynamically adaptable interaction experiences.