Multi-Momentum Observer Contact Estimation for Bipedal Robots


As bipedal robots become increasingly common in commercial and industrial settings, the ability to control them with a high degree of reliability is critical. To that end, this paper considers how to accurately estimate which feet are currently in contact with the ground so as to avoid improper control actions that could jeopardize the stability of the robot. Additionally, modern algorithms for estimating the position and orientation of a robot’s base frame rely heavily on such contact mode estimates. Dedicated contact sensors on the feet can be used to estimate this contact mode, but these sensors are prone to noise, time delays, damage/yielding from repeated impacts with the ground, and are not available on every robot. To overcome these limitations, we propose a momentum observer based method for contact mode estimation that does not rely on such contact sensors. Often, momentum observers assume that the robot’s base frame can be treated as an inertial frame. However, since many humanoids’ legs represent a significant portion of the overall mass, the proposed method instead utilizes multiple simultaneous dynamic models. Each of these models assumes a different contact condition. A given contact assumption is then used to constrain the full dynamics in order to avoid assuming that either the body is an inertial frame or that a fully accurate estimate of body velocity is known. The (dis)agreement between each model’s estimates and measurements is used to determine which contact mode is most likely using a Markov-style fusion method. The proposed method produces contact detection accuracy of up to 98.44% with a low noise simulation and 77.12% when utilizing data collected on the Sarcos Guardian XO robot (a hybrid humanoid/exoskeleton).


💡 Research Summary

The paper addresses the critical problem of determining which foot of a bipedal robot is in contact with the ground, a prerequisite for safe control and accurate whole‑body state estimation. Traditional approaches either rely on dedicated foot‑mounted contact sensors (switches, force‑torque transducers) or infer contact from joint torques and encoder data. The former suffers from hardware wear, latency, and added cost, while the latter assumes the robot’s torso can be treated as an inertial frame and that the legs are negligibly light—assumptions that break down for large humanoids whose legs may comprise up to half of the total mass.

To overcome these limitations, the authors propose a multi‑momentum‑observer framework that does not require any foot‑level sensors. The core idea is to treat the ground as the inertial frame and to construct several reduced‑order dynamic models, each corresponding to a different contact hypothesis: left‑foot single support, right‑foot single support, and double support. By imposing the holonomic constraint that the contacting foot’s pose is stationary (A·q̇ = 0), the full dynamics M q̈ + C q̇ + G + Aᵀλ = Bᵀ(τ_mot + τ_ext) are projected onto a lower‑dimensional coordinate set y, yielding constrained dynamics that no longer contain the unknown constraint forces λ. For each hypothesis i, a momentum observer is built using the reduced mass matrix M̃_i, Coriolis term C̃_i, and gravity term G̃_i. The observer integrates the predicted momentum p̂_i and compares it with the measured momentum p_i = M̃_i ẏ_i; the discrepancy, scaled by an observer gain K_O, provides an estimate of the external torque τ̂_i,ext.
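The per-hypothesis observer described above can be sketched as a standard discrete-time generalized-momentum observer operating on the reduced coordinates. The sketch below is a minimal illustration, not the paper's implementation: the function name and the specific discretization are assumptions, and the reduced-model terms M̃, C̃, G̃ are taken as given inputs.

```python
import numpy as np

def momentum_observer_step(r, p_int, M, C, G, tau, y_dot, K_O, dt):
    """One discrete step of a generalized-momentum observer for a single
    contact hypothesis (hypothetical sketch, not the paper's exact code).

    r      : current external-torque residual estimate (tau_hat_ext)
    p_int  : running integral of the predicted momentum derivative
    M,C,G  : reduced-model mass matrix, Coriolis matrix, gravity vector
    tau    : actuator torques mapped into the reduced coordinates
    y_dot  : measured reduced-coordinate velocities
    K_O    : observer gain matrix
    dt     : timestep
    """
    p = M @ y_dot                           # measured generalized momentum
    p_dot_hat = tau + C.T @ y_dot - G + r   # predicted momentum derivative
    p_int = p_int + p_dot_hat * dt          # integrate the prediction
    r = K_O @ (p - p_int)                   # residual tracks tau_ext
    return r, p_int
```

With this structure the residual dynamics are approximately ṙ = K_O(τ_ext − r), so r converges to the true external torque at a rate set by K_O; one such observer would be run per contact hypothesis.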

When a leg is truly in stance, the corresponding observer’s external‑torque estimate should be near zero, while the observer for the swinging leg will report a large torque. In double‑support, both observers produce comparable non‑zero torques. These torque patterns are fed into a Markov‑style probabilistic fusion module that updates the posterior probability of each contact mode based on the current torque estimates and the previous mode’s probability.
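One way to realize such a Markov-style fusion is a predict-then-weight update over the three modes, where each hypothesis's residual magnitude feeds a likelihood (small residual means the hypothesis is consistent with the data). This is a hedged sketch: the Gaussian likelihood form, the transition matrix, and the noise scale `sigma` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fuse_contact_modes(prior, residual_norms, T, sigma=1.0):
    """Markov-style update of contact-mode probabilities (illustrative).

    prior          : previous probabilities over modes [left, right, double]
    residual_norms : ||tau_hat_ext|| from each hypothesis's observer
    T              : transition matrix, T[i, j] = P(next mode j | mode i)
    sigma          : assumed residual noise scale (tuning parameter)
    """
    predicted = T.T @ prior                               # Markov prediction
    likelihood = np.exp(-0.5 * (residual_norms / sigma) ** 2)
    posterior = likelihood * predicted                    # weight by agreement
    return posterior / posterior.sum()                    # normalize
```

The transition matrix encodes that contact modes persist between samples, which suppresses chattering between modes when residuals are momentarily ambiguous.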

Because torque estimates alone are insensitive to foot lift‑off events, the authors augment the fusion with a relative‑velocity constraint between the two feet: v_rel = (A_l − A_r)·q̇. During double support v_rel should be zero; a non‑zero value signals that one foot has left the ground. This additional cue improves detection of lift‑off and complements the torque‑based touchdown detection, enabling low‑latency identification of contact transitions.
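The relative-velocity cue is cheap to compute from quantities the observers already use. Below is a minimal sketch under stated assumptions: the threshold value and the `liftoff_detected` helper are hypothetical, and in practice the threshold would be tuned to the platform's encoder noise.

```python
import numpy as np

def feet_relative_velocity(A_l, A_r, q_dot):
    """Inter-foot relative velocity v_rel = (A_l - A_r) q_dot.

    A_l, A_r : contact-constraint Jacobians of the left and right feet
    q_dot    : measured generalized velocities
    Near-zero v_rel is consistent with double support; a growing
    value indicates one foot has left the ground.
    """
    return (A_l - A_r) @ q_dot

def liftoff_detected(v_rel, threshold=0.02):
    """Flag lift-off when relative speed exceeds a platform-tuned
    threshold (the value here is a placeholder)."""
    return bool(np.linalg.norm(v_rel) > threshold)
```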

The method was evaluated in two settings. In a low‑noise simulation, it achieved 98.44% contact‑detection accuracy. On the real Sarcos Guardian XO platform—a hybrid humanoid/exoskeleton with roughly 50% of its mass in the legs—the approach attained 77.12% accuracy, demonstrating robustness despite sensor noise, modeling errors, and uneven terrain. The performance gap highlights the challenges of real‑world implementation but also confirms that reliable contact estimation is possible without any dedicated hardware.

Key contributions include: (1) a novel reduction of full‑body dynamics that avoids the inertial‑base assumption, (2) simultaneous deployment of multiple momentum observers for distinct contact hypotheses, (3) a fusion scheme that combines external‑torque estimates with inter‑foot relative velocity to probabilistically infer the current contact mode, and (4) validation on a large‑scale biped where leg mass is non‑negligible. The approach is scalable to other legged platforms where adding foot sensors is impractical, and it opens avenues for adaptive gain tuning, handling irregular terrain, and integrating directly with whole‑body state estimators and model‑based controllers for more dynamic and robust locomotion.
