Assessing Low Back Movement with Motion Tape Sensor Data Through Deep Learning
Back pain is a pervasive issue affecting a significant portion of the population, often worsened by certain movements of the lower back. Assessing these movements is important for helping clinicians prescribe appropriate physical therapy. However, it can be difficult to monitor patients’ movements remotely outside the clinic. High-fidelity data from motion capture sensors can be used to classify different movements, but these sensors are costly and impractical for use in free-living environments. Motion Tape (MT), a new fabric-based wearable sensor, addresses these issues by being low cost and portable. Despite these advantages, the sensor’s novelty and variability in sensor stability mean that available MT datasets are small and inherently noisy. In this work, we propose the Motion-Tape Augmentation Inference Model (MT-AIM), a deep learning classification pipeline trained on MT data. To address the challenges of limited sample size and noise in the MT dataset, MT-AIM leverages conditional generative models to generate synthetic MT data for a desired movement, and predicts joint kinematics as additional features. This combination of synthetic data generation and feature augmentation enables MT-AIM to achieve state-of-the-art accuracy in classifying lower back movements, bridging the gap between physiological sensing and movement analysis.
💡 Research Summary
The paper presents a novel deep‑learning pipeline, the Motion‑Tape Augmentation Inference Model (MT‑AIM), designed to classify six fundamental low‑back movements using data from Motion Tape (MT), a low‑cost, fabric‑based wearable strain sensor. The authors first highlight the clinical need for remote monitoring of low‑back pain (LBP) patients, noting that high‑fidelity motion‑capture (MoCap) or inertial measurement unit (IMU) systems are accurate but prohibitively expensive and impractical for everyday home use. MT offers a lightweight, inexpensive alternative, yet its data suffer from high variability due to sensor placement, skin adherence, and the fact that only a small, noisy dataset is currently available.
To overcome these challenges, MT‑AIM incorporates both feature‑level and data‑level augmentation. Ten healthy participants performed three repetitions of six movements (standing extension, forward flexion, left/right lateral bending, and seated left/right rotation), yielding 180 six‑second time‑series recordings from six MT sensors placed in a 3 × 2 matrix along the lumbar spine. Simultaneous optical MoCap provided ground‑truth joint angles. Raw resistance signals were normalized to baseline resistance, filtered with a Hampel outlier detector, and scaled to a fixed range.
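The preprocessing chain described above (baseline normalization, Hampel outlier rejection, range scaling) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the window size, threshold, and the assumed [0, 1] target range are hypothetical choices, since the paper's exact parameters are not given here.

```python
import numpy as np

def hampel_filter(x, half_window=5, n_sigmas=3.0):
    """Replace outliers with the local rolling median (Hampel identifier).

    A sample is an outlier if it deviates from the window median by more
    than n_sigmas * (scaled median absolute deviation).
    """
    x = np.asarray(x, dtype=float)
    y = x.copy()
    k = 1.4826  # scales MAD to a std-dev estimate under a Gaussian assumption
    for i in range(len(x)):
        lo, hi = max(0, i - half_window), min(len(x), i + half_window + 1)
        window = x[lo:hi]
        med = np.median(window)
        mad = k * np.median(np.abs(window - med))
        if mad > 0 and abs(x[i] - med) > n_sigmas * mad:
            y[i] = med
    return y

def preprocess(resistance, r_baseline):
    """Baseline-normalize, despike, and min-max scale one MT channel."""
    # Normalize raw resistance to the baseline value (relative change).
    strain = (np.asarray(resistance, dtype=float) - r_baseline) / r_baseline
    # Remove transient spikes with the Hampel outlier detector.
    filtered = hampel_filter(strain)
    # Scale to [0, 1] (assumed target range for illustration).
    lo, hi = filtered.min(), filtered.max()
    return (filtered - lo) / (hi - lo) if hi > lo else np.zeros_like(filtered)
```

For a six-channel recording, each channel would be passed through `preprocess` independently with its own baseline resistance before the signals are stacked into the model's input tensor.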