Dynamic prediction of locomotor capacity after stroke could enable more individualized rehabilitation, yet current assessments largely provide static impairment scores and do not indicate whether patients can perform specific tasks such as slope walking or stair climbing. Here, we present a wearable-informed data-physics hybrid generative framework that reconstructs a stroke survivor's locomotor control from wearable inertial sensing and predicts task-conditioned post-stroke locomotion in new environments. From a single 20 m level-ground walking trial recorded by five IMUs, the framework personalizes a physics-based digital avatar using a healthy-motion prior and hybrid imitation learning, generating dynamically feasible, patient-specific movements for inclined walking and stair negotiation. Across 11 stroke inpatients, predicted postures reached 82.2% similarity for slopes and 69.9% for stairs, substantially exceeding a physics-only baseline. In a multicentre pilot randomized study (n = 21; 28 days), access to scenario-specific locomotion predictions to support task selection and difficulty titration was associated with larger gains in Fugl-Meyer lower-extremity scores than standard care (mean change 6.0 vs 3.7 points; $p < 0.05$). These results suggest that wearable-informed generative digital avatars may augment individualized gait rehabilitation planning and provide a pathway toward dynamically personalized post-stroke motor recovery strategies.
Stroke is a leading cause of long-term motor disability and frequently results in persistent gait impairments such as hemiplegia, reduced joint mobility, and asymmetric movement patterns [1–3]. These motor deficits substantially limit independence, increase fall risk, and contribute to long-term healthcare burden. Globally, approximately 11.9 million people experience a stroke each year, and more than one-third of survivors retain chronic motor dysfunction despite months of rehabilitation [4,5]. In many developing regions, disability rates can rise to 60–80%, reflecting not only the severity of post-stroke impairments but also the difficulty of delivering timely, intensive, and personalized rehabilitation [6,7].
Motor rehabilitation remains the most effective pathway for restoring function [3,8–10], but its impact depends on how well task selection and dosing are matched to an individual patient’s changing capacity. Clinicians continuously adjust cadence, incline, step height, and repetition to elicit targeted neuromuscular engagement while avoiding excessive fatigue, frustration, or injury risk [11–13]. To allow fine-grained quantification of gait and impairment in both laboratory and routine settings, advanced wearable sensing technologies have been introduced [14,15]. Yet most existing pipelines compress rich time-series signals into static, descriptive outputs such as standardized clinical rating scales [16–18], kinematic movement-quality indicators [15,19–21], and predicted recovery curves or performance scores [22,23]. These summaries describe how a patient walked under an observed condition, but they fail to translate motion data into a dynamic, task-conditioned forecast of how the same individual is likely to move in a new rehabilitation scenario, which compensatory strategies will emerge, and where stability and safety limits are likely to be reached. Consequently, even when extensive wearable-generated assessments are available, therapists still rely on cautious trial and error when deciding, for example, whether a patient can safely progress to slope walking or stair training, which can lead to mismatched difficulty, suboptimal dosing, and delayed use of the critical rehabilitation window. The missing capability is not a new way to measure gait, but a way to convert a brief, wearable-captured gait sequence into task-level, patient-specific predictions.
To address this gap, we introduce a wearable-informed generative framework that reconstructs each patient’s locomotor control from a single 20 m level-ground walking trial recorded with body-worn inertial sensors, and uses the resulting control signature to predict task-conditioned post-stroke locomotion in new scenarios. The framework instantiates a patient-specific digital avatar by coupling a proportional-derivative physics controller with a Healthy Motion Atlas that captures normative gait coordination, and a goal-conditioned deep reinforcement learning policy that synthesizes individualized, dynamically feasible movements for two common yet challenging tasks: slope ascent and stair climbing. In this formulation, a brief wearable recording provides a minimal, scalable input from which the avatar can generate locomotion beyond the observed baseline while preserving patient-specific characteristics.
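As a concrete illustration of the control layer described above, the sketch below shows how a proportional-derivative (PD) controller converts policy-proposed joint-angle targets into joint torques for a simulated avatar. All gain values, joint counts, and angles here are hypothetical placeholders for illustration; the study's actual controller parameters are not reproduced.

```python
import numpy as np

def pd_torque(theta_target, theta, theta_dot, kp, kd):
    """PD joint torque: drive current joint angles toward the targets
    proposed by the control policy, damped by joint velocity."""
    return kp * (theta_target - theta) - kd * theta_dot

# Illustrative example: three lower-limb joints (hip, knee, ankle).
kp = np.array([300.0, 200.0, 100.0])       # stiffness gains (hypothetical)
kd = np.array([30.0, 20.0, 10.0])          # damping gains (hypothetical)

theta_target = np.array([0.4, -0.6, 0.1])  # policy-proposed angles (rad)
theta = np.array([0.3, -0.5, 0.0])         # current joint angles (rad)
theta_dot = np.array([1.0, -0.5, 0.2])     # joint velocities (rad/s)

tau = pd_torque(theta_target, theta, theta_dot, kp, kd)
```

In a physics simulator, torques of this form would be applied at each control step, so the deep reinforcement learning policy only needs to output target joint angles rather than raw torques.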
The avatar predicts task-specific locomotion after observing only the baseline walk: patient data drive personalization, whereas task training is obtained through large-scale exploration in the physics simulator, removing the need for task-specific demonstrations from patients. In a multicentre study of 21 post-stroke inpatients across five rehabilitation centres, the model generated stable task-conditioned predictions for both slope and stair scenarios and reproduced individualized locomotion patterns, with predicted motions showing high similarity to measured task kinematics. To explore potential clinical utility, the cohort was further randomized into a model-assisted group (n = 11) and a control group (n = 10) for a 28-day pilot study; therapists in the model-assisted arm used the avatar’s task-conditioned predictions to inform task selection and difficulty titration. Weekly Fugl-Meyer Assessment lower-extremity scores increased from 25.6 to 31.6 in the model-assisted group and from 26.0 to 29.7 in controls (p < 0.05). Together, these results suggest that task-conditioned locomotion predictions derived from a brief wearable recording may support individualized rehabilitation planning and reduce reliance on trial-and-error.
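The reported posture-similarity percentages imply a frame-wise comparison between predicted and measured kinematics. The exact metric is not specified in this excerpt; the sketch below uses mean per-frame cosine similarity between joint-angle vectors as a plausible stand-in, with synthetic data in place of real patient recordings.

```python
import numpy as np

def posture_similarity(pred, meas):
    """Mean per-frame cosine similarity between predicted and measured
    joint-angle arrays of shape (frames, joints), as a percentage.
    Note: an assumed metric, not necessarily the one used in the study."""
    num = np.sum(pred * meas, axis=1)
    den = np.linalg.norm(pred, axis=1) * np.linalg.norm(meas, axis=1)
    return 100.0 * np.mean(num / den)

rng = np.random.default_rng(0)
meas = rng.normal(size=(100, 6))               # synthetic measured angles
pred = meas + 0.1 * rng.normal(size=(100, 6))  # close synthetic prediction

sim = posture_similarity(pred, meas)           # high similarity expected
```

Under this kind of metric, a prediction that tracks the measured joint trajectories closely scores near 100%, while systematic compensatory deviations lower the score.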
Together, these results establish that the proposed generative model can transform wearable gait measurements into support for scenario-specific rehabilitation decisions. By turning a short walking clip into actionable forecasts of task performance, compensatory strategy, and safety limits, the framework provides a mechanism for using wearable data for individualized planning rather than static description.