Online Discrimination of Nonlinear Dynamics with Switching Differential Equations

How can we recognise whether an observed person walks or runs? We consider a dynamic environment where observations (e.g. the posture of a person) are caused by different dynamic processes (walking or running) which are active one at a time and which may transition from one to another at any time. For this setup, switching dynamic models have been suggested previously, mostly for linear and nonlinear dynamics in discrete time. Motivated by basic principles of computations in the brain (dynamic, internal models) we suggest a model for switching nonlinear differential equations. The switching process in the model is implemented by a Hopfield network and we use parametric dynamic movement primitives to represent arbitrary rhythmic motions. The model generates observed dynamics by linearly interpolating the primitives weighted by the switching variables and it is constructed such that standard filtering algorithms can be applied. In two experiments with synthetic planar motion and a human motion capture data set we show that inference with the unscented Kalman filter can successfully discriminate several dynamic processes online.


💡 Research Summary

The paper addresses the problem of online identification of which nonlinear dynamical process is currently generating observed data, a task relevant to human activity recognition, robot motion planning, and adaptive control. While previous work on switching dynamical systems has largely focused on discrete‑time linear or nonlinear models, the authors propose a continuous‑time framework that can handle arbitrary nonlinear differential equations. The core of the model consists of two components. First, a set of switching variables is generated by a Hopfield network with mutual inhibition and self‑excitation. This network implements a soft winner‑take‑all mechanism: at any moment the energy landscape has a single dominant attractor, corresponding to the active dynamical regime, and a transition corresponds to a rapid shift of the network state to a different attractor. Second, each possible dynamical regime is represented by a parametric Dynamic Movement Primitive (DMP). DMPs decompose a desired trajectory into a linear spring‑damper system plus a nonlinear forcing term that can approximate any rhythmic or discrete movement. The observed signal is produced by linearly interpolating the DMP outputs using the Hopfield‑derived switching weights, guaranteeing that the overall system remains a smooth nonlinear differential equation.
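The competition mechanism can be illustrated with a minimal sketch. The network below is a generic soft winner-take-all with self-excitation and mutual inhibition; the particular update rule, gain values, and cubic saturation term are illustrative assumptions, not the paper's exact energy function.

```python
import numpy as np

def wta_step(s, dt=0.01, self_exc=2.0, inhib=1.0, noise=0.0, rng=None):
    """One Euler step of a soft winner-take-all network: each unit
    excites itself (bounded by a cubic saturation term) and inhibits
    its competitors; a small noise term allows stochastic transitions
    between attractors."""
    others = s.sum() - s                       # total activity of competitors
    ds = s * (self_exc - s**2) - inhib * others
    s = s + dt * ds
    if noise > 0.0:
        rng = rng or np.random.default_rng(0)
        s = s + np.sqrt(dt) * noise * rng.standard_normal(s.shape)
    return np.clip(s, 0.0, None)               # activities stay non-negative

# Starting from a small head start, unit 0 wins the competition.
s = np.array([0.6, 0.4, 0.4])
for _ in range(500):
    s = wta_step(s)
w = s / s.sum()                                 # normalized switching weights
```

With zero noise the state settles into a single dominant attractor, so the normalized weight vector `w` approaches a one-hot indicator of the active regime, which is the role the switching variables play in the model.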

Mathematically the continuous‑time state equation can be written as
dx/dt = Σ_i s_i f_i(x,θ_i), y = Cx + v,
where f_i denotes the i‑th DMP dynamics, θ_i are its parameters, and s_i are the normalized outputs of the Hopfield network (Σ_i s_i = 1). The dynamics of s follow a gradient descent on an energy function E(s) plus a small noise term, ensuring convergence to a single active mode while allowing stochastic transitions.
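As a concrete toy instance of this state equation, the sketch below mixes two hypothetical mode dynamics f_i, planar harmonic oscillators at different frequencies standing in for the learned DMPs, with normalized weights s. The functions and parameters are illustrative assumptions, not the paper's movement primitives.

```python
import numpy as np

# Two stand-in mode dynamics f_i (the paper uses learned DMPs here):
# planar harmonic oscillators with angular frequencies 1 and 3.
def f_slow(x):
    return np.array([x[1], -1.0 * x[0]])

def f_fast(x):
    return np.array([x[1], -9.0 * x[0]])

def switched_step(x, s, dt=1e-3):
    """Euler step of dx/dt = sum_i s_i f_i(x), with the weights
    normalized so that sum_i s_i = 1 as in the state equation."""
    s = np.asarray(s, dtype=float)
    s = s / s.sum()
    dx = s[0] * f_slow(x) + s[1] * f_fast(x)
    return x + dt * dx

# With s = (1, 0) the trajectory is the pure slow oscillation:
# integrating one period (2*pi) returns the state near its start.
x = np.array([1.0, 0.0])
for _ in range(int(2 * np.pi / 1e-3)):
    x = switched_step(x, (1.0, 0.0))
```

Because the mixture is a convex combination of smooth vector fields, the combined system remains a smooth differential equation even while the weights move between modes, which is what makes standard filtering applicable.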

For inference, the authors discretise the model and apply the Unscented Kalman Filter (UKF). The UKF propagates a set of sigma points through the nonlinear transition function, capturing second‑order statistics of the joint distribution over the physical state x and the switching variables s. At each measurement update, the posterior over s is sharpened, and the mode with the highest probability is taken as the current active dynamical process. Because the UKF only requires the ability to evaluate f_i and the Hopfield dynamics, no bespoke particle‑filter or variational scheme is needed, making the approach computationally lightweight and suitable for real‑time applications.
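The core UKF operation, propagating a Gaussian through the nonlinear dynamics via sigma points, can be sketched generically. This is a textbook unscented transform with a simple kappa parameterization, not the authors' specific implementation.

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Generate 2n+1 sigma points and weights for an n-dim Gaussian."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w

def unscented_transform(f, mean, cov, kappa=0.0):
    """Propagate (mean, cov) through a nonlinear map f: push each
    sigma point through f, then recover the transformed mean and
    covariance as weighted sample statistics."""
    pts, w = sigma_points(mean, cov, kappa)
    fp = np.array([f(p) for p in pts])
    m = w @ fp
    d = fp - m
    P = (w[:, None] * d).T @ d
    return m, P
```

In the model, `f` would be the discretised joint transition over the physical state x and the switching variables s; since the transform only evaluates `f` at sample points, no derivatives of the DMP or Hopfield dynamics are required, which is why standard UKF machinery suffices.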

The experimental evaluation consists of two parts. In the first synthetic experiment, three planar motions (a sinusoidal trajectory, a circular orbit, and a composite waveform) are generated sequentially with random switching times. Gaussian observation noise of varying magnitude is added. The UKF detects switches within 10–20 ms on average, and the misclassification rate stays below 5 % even at signal‑to‑noise ratios as low as 0 dB. In the second experiment, a motion‑capture dataset containing human walking, running, and jumping is used. DMP parameters for each action are learned offline from a short calibration segment. During online streaming, the filter identifies the current action with 96 % accuracy and an average latency of 30 ms, outperforming a baseline Hidden Markov Model that operates on the same raw joint angles.

The authors highlight three main contributions: (1) a continuous‑time switching framework that naturally accommodates arbitrary nonlinear dynamics; (2) a biologically inspired competition mechanism implemented by a Hopfield network, providing a plausible neural substrate for mode selection; and (3) a demonstration that standard UKF inference is sufficient for real‑time discrimination of complex motions. Limitations include the dependence on pre‑trained DMP parameters and the need to design an appropriate Hopfield energy function for large sets of behaviours, which may limit scalability. Suggested future directions include learning transition probabilities in a Bayesian manner, integrating deep networks to estimate DMP parameters on the fly, and extending the model to multimodal sensory streams (e.g., vision and proprioception) for more robust online switching.