Exploratory studies of human gait changes using depth cameras and considering measurement errors

Notice: This research summary and analysis were generated automatically using AI. For full accuracy, please refer to the original ArXiv source.

This research aims to quantify human walking patterns using depth cameras in order to (1) detect changes in a person's walking pattern with and without a motion-restricting device or a walking aid, and (2) distinguish walking patterns of different persons with similar physical attributes. Microsoft Kinect devices, often used for video games, tracked the coordinates of 25 skeletal joints over time to form a human skeleton. Multiple machine learning (ML) models were then applied to Sample Entropy (SE) datasets from ten college-age subjects - five males and five females. First, ML models classified subjects into two categories: normal walking and abnormal walking (i.e., with motion-restricting devices). The best ML model, K-nearest neighbors (KNN), achieved 97.3% accuracy under 10-fold cross-validation. ML models were then applied to classify five gait conditions: walking normally, wearing an ankle brace, wearing an ACL brace, using a cane, and using a walker. The best ML model was again KNN, at a 98.7% accuracy rate.


💡 Research Summary

The thesis investigates the use of low‑cost depth cameras, specifically Microsoft Kinect™ devices, combined with Sample Entropy (SE) analysis to quantify human gait and to detect subtle changes caused by motion‑restricting devices or walking aids. Two primary studies were conducted with ten healthy college‑age participants (five males, five females). In the first study, each subject walked a 10‑foot straight path multiple times under five conditions: normal walking, wearing an ankle brace, wearing an ACL (knee) brace, using a cane, and using a four‑legged walker. Two Kinect units captured the three‑dimensional coordinates of 25 skeletal joints from frontal and sagittal views.

Data processing employed two complementary statistical approaches. The first extracted six gait parameters (e.g., spine tilt, hip tilt, shoulder tilt) from selected joints, generated time‑series for each, and computed Sample Entropy to quantify the regularity/complexity of each parameter. The second approach applied SE directly to the raw (X, Y, Z) trajectories of 15 selected joints, bypassing any intermediate parameter calculation. Both methods revealed statistically significant differences between normal and device‑assisted walking for all subjects, with particular joints (e.g., right knee, pelvis) showing the largest SE changes depending on the device.
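The thesis does not publish code, but the Sample Entropy statistic it relies on has a standard definition: SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates lying within tolerance r of each other (Chebyshev distance) and A counts the same for length m+1. A minimal pure-Python sketch of that textbook definition (function name and defaults are illustrative, not from the thesis):

```python
import math
from statistics import pstdev

def sample_entropy(series, m=2, r_frac=0.2):
    """SampEn(m, r) = -ln(A/B): B counts matching template pairs of length m,
    A of length m+1 (self-matches excluded, Chebyshev distance).
    Lower values indicate a more regular signal. O(n^2) reference sketch."""
    n = len(series)
    tol = r_frac * pstdev(series)  # common tolerance choice: r = 0.2 * std

    def matches(length):
        # Both template sets use the same n - m start points so A/B compare fairly.
        tpl = [series[i:i + length] for i in range(n - m)]
        return sum(
            1
            for i in range(len(tpl))
            for j in range(i + 1, len(tpl))
            if max(abs(a - b) for a, b in zip(tpl[i], tpl[j])) <= tol
        )

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

Applied to a joint's coordinate trace, a regular (e.g., smoothly periodic) signal yields a low SE, while a noisier or more irregular one yields a higher SE, which is what makes SE differences between normal and device-assisted walking detectable.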

The second study examined whether SE‑derived gait signatures could differentiate two individuals with similar physical attributes. By averaging SE values across three gait parameters and visualizing them with a star‑glyph, the authors demonstrated clear separation between the two subjects, suggesting that gait can serve as a biometric identifier.

Machine‑learning classification was then applied to the SE feature sets. Four classifiers—K‑Nearest Neighbor (KNN), Support Vector Machine (SVM), Random Forest, and Decision Tree—were evaluated using 10‑fold cross‑validation. For the binary task (normal vs. any abnormal condition), KNN achieved the highest accuracy of 97.3%. For the multi‑class task (five gait conditions), KNN again performed best with 98.7% accuracy. Notably, comparable performance was retained when the feature set was truncated to a subset of joints, indicating that a reduced sensor configuration could still support reliable real‑time monitoring.
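The evaluation protocol described above—majority-vote KNN scored with k-fold cross-validation—can be sketched in a few lines of pure Python. This is a generic illustration of the technique, not the thesis's code; the data layout (a list of `(feature_vector, label)` pairs) and the default `k=3` are assumptions for the example:

```python
import math
import random
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify query by majority vote of its k nearest training points."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def kfold_accuracy(data, n_folds=10, k=3, seed=0):
    """Estimate classification accuracy with n-fold cross-validation:
    each fold is held out once as the test set, the rest train the model."""
    data = data[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::n_folds] for i in range(n_folds)]
    correct = total = 0
    for i, test in enumerate(folds):
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        for features, label in test:
            correct += knn_predict(train, features, k) == label
            total += 1
    return correct / total
```

In the thesis's setting, each feature vector would be the SE values computed per joint (or per gait parameter) for one walking trial, and the label the gait condition.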

The thesis also includes a Gauge R&R (repeatability and reproducibility) study to assess Kinect’s measurement precision. Results showed low coordinate variability across days and subjects, confirming that Kinect provides sufficiently stable joint tracking for gait analysis, though occasional tracking failures and software limitations were acknowledged.
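The core idea of a Gauge R&R study is to split observed measurement variation into repeatability (scatter among repeated readings under identical conditions) and reproducibility (shift in the average reading when conditions such as the measurement day change). A deliberately simplified variance decomposition along those lines—not the full crossed-ANOVA Gauge R&R the thesis would have used, and with a hypothetical data layout—might look like:

```python
from statistics import mean, variance

def gauge_rr(measurements):
    """measurements: {(subject, day): [repeated readings]}.
    Repeatability  = pooled variance of repeats within each (subject, day) cell.
    Reproducibility = pooled variance of the per-day cell means within each subject.
    A simplified sketch, not a full crossed-ANOVA Gauge R&R."""
    # Repeatability: average within-cell variance (same subject, same day).
    repeatability = mean(variance(reads) for reads in measurements.values())
    # Reproducibility: how much cell means drift from day to day per subject.
    by_subject = {}
    for (subj, _day), reads in measurements.items():
        by_subject.setdefault(subj, []).append(mean(reads))
    reproducibility = mean(
        variance(day_means) for day_means in by_subject.values() if len(day_means) > 1
    )
    return repeatability, reproducibility
```

A small repeatability value relative to the gait differences of interest is what justifies the conclusion that Kinect's joint tracking is stable enough for SE-based analysis.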

Limitations of the work include the small sample size, short walking distance, and the exclusive focus on healthy young adults, which restricts generalization to older populations, patients with neurological disorders, or long‑term gait monitoring scenarios. The study also did not account for fatigue or learning effects across repeated trials.

Future research directions proposed by the author involve: (1) employing newer depth sensors (e.g., Azure Kinect, LiDAR) and multi‑camera setups to improve spatial resolution and reduce occlusion; (2) extending the participant pool to diverse age groups and clinical populations; (3) integrating the SE‑based features into deep‑learning time‑series models for enhanced classification and prediction; and (4) developing real‑time feedback systems for rehabilitation, fall‑risk assessment, and personalized gait biometrics. Overall, the work demonstrates that inexpensive depth cameras combined with nonlinear entropy measures can reliably detect gait alterations and distinguish individuals, offering a promising foundation for low‑cost, portable gait analysis solutions.

