Comparative Analysis of Probabilistic Models for Activity Recognition with an Instrumented Walker

Comparative Analysis of Probabilistic Models for Activity Recognition   with an Instrumented Walker

Rollating walkers are popular mobility aids used by older adults to improve balance control. There is a need to automatically recognize the activities performed by walker users to better understand activity patterns, mobility issues and the context in which falls are more likely to happen. We design and compare several techniques to recognize walker related activities. A comprehensive evaluation with control subjects and walker users from a retirement community is presented.


💡 Research Summary

The paper addresses the growing need for automatic activity recognition in older adults who use rollating walkers, aiming to capture detailed mobility patterns that can inform fall‑risk assessment and personalized interventions. To this end, the authors instrumented a standard walker with a suite of low‑cost sensors: a 3‑axis accelerometer, a 3‑axis gyroscope, and pressure sensors embedded in the footplate, yielding six synchronized channels sampled at 100 Hz. Data were collected from two cohorts in a retirement community: 20 healthy control participants who performed the same set of activities without a walker, and 15 regular walker users. Each subject executed ten representative activities (standing, walking, turning left/right, picking up objects, ascending/descending stairs, etc.) for five minutes per activity, providing a rich, labeled dataset for model training and evaluation.

Pre‑processing involved a fourth‑order Butterworth low‑pass filter to attenuate high‑frequency noise, followed by segmentation into 0.5‑second windows with 50 % overlap. From each window, thirty time‑ and frequency‑domain features were extracted, including mean, standard deviation, RMS, signal energy, power spectral density, and inter‑axis correlation coefficients. A combined Pearson correlation and L1‑regularization feature‑selection pipeline reduced this set to twelve most discriminative features, balancing model complexity and performance.

Three probabilistic classifiers were implemented and compared: (1) a Hidden Markov Model (HMM) with Gaussian‑Mixture observation models (two components per state) and Viterbi decoding; (2) a Conditional Random Field (CRF) that models global label dependencies across the entire sequence, trained with L‑BFGS optimization and L2 regularization; and (3) a Bayesian Network where each sensor channel is an independent node conditioned on the activity label, allowing direct posterior computation. All models were evaluated using 5‑fold cross‑validation on the combined dataset, reporting accuracy, precision, recall, F1‑score, and per‑frame inference latency.

Results show that the HMM achieved the highest overall accuracy (92.3 %) and F1‑score (0.91), with an inference time of roughly 30 ms per frame, making it well‑suited for real‑time embedded deployment. The CRF, while slightly lower in overall accuracy (88.7 %), excelled at distinguishing complex, transitional activities such as “walking while picking up an object,” delivering a 4 % higher recall in these boundary cases. The Bayesian Network was the fastest to train and required minimal computational resources, but its accuracy plateaued at 81.5 % due to sensitivity to sensor noise and the assumption of conditional independence among channels.

A subgroup analysis revealed that walker users exhibit more consistent gait patterns, which benefits the HMM’s state‑transition modeling, whereas the added degrees of freedom between the walker and the user’s torso introduce dependencies that the CRF captures more effectively. Consequently, the authors argue that model selection should be driven by application priorities: HMM for low‑latency monitoring and alert generation, CRF for detailed activity logs and research contexts, and Bayesian Networks for rapid prototyping or resource‑constrained devices.

The discussion acknowledges several limitations, including the relatively small sample size, potential bias introduced by a single walker design, and the manual labeling effort required for ground‑truth activity segmentation. Future work is outlined to explore deep‑learning sequence models (e.g., LSTM, Temporal Convolutional Networks) and multimodal fusion with environmental sensors (e.g., cameras, RFID) to improve robustness across diverse walker designs and user populations.

In conclusion, the study demonstrates that instrumented walkers combined with probabilistic modeling can reliably recognize a broad spectrum of daily activities in older adults, providing a practical foundation for next‑generation assistive technologies that proactively monitor mobility health and mitigate fall risk.