Radar-Inertial Odometry For Computationally Constrained Aerial Navigation
Recent progress in radar sensing technology, namely the miniaturization of sensor packages and increases in measurement precision, has drawn the interest of the robotics research community. A crucial task enabling autonomy in robotics is precisely determining the pose of the robot in space. To fulfill this task, sensor fusion algorithms are often used, in which data from one or several exteroceptive sensors, for example LiDAR, cameras, laser ranging sensors, or GNSS, are fused with Inertial Measurement Unit (IMU) measurements to obtain an estimate of the navigation states of the robot. Nonetheless, owing to their particular sensing principles, some exteroceptive sensors are often incapacitated by extreme environmental conditions, such as extreme illumination or the presence of fine particles like smoke or fog. Radars are largely immune to the aforementioned factors thanks to the characteristics of the electromagnetic waves they use. In this thesis, we present Radar-Inertial Odometry (RIO) algorithms that fuse information from an IMU and a radar to estimate the navigation states of an Uncrewed Aerial Vehicle (UAV), running in real time on a portable, resource-constrained embedded computer and making use of inexpensive, consumer-grade sensors. We present novel RIO approaches relying on a multi-state tightly-coupled Extended Kalman Filter (EKF) and on Factor Graphs (FG), fusing the instantaneous velocities of, and distances to, 3D points delivered by a lightweight, low-cost, off-the-shelf Frequency Modulated Continuous Wave (FMCW) radar with IMU readings. We also show a novel way to exploit advances in deep learning to retrieve 3D point correspondences in sparse and noisy radar point clouds.
💡 Research Summary
This dissertation presents a comprehensive solution for real‑time state estimation of resource‑constrained unmanned aerial vehicles (UAVs) using only a low‑cost frequency‑modulated continuous‑wave (FMCW) radar and an inertial measurement unit (IMU). The work is motivated by the limitations of vision‑based and LiDAR‑based odometry in GNSS‑denied or visually degraded environments such as fog, smoke, extreme illumination, or feature‑poor scenes. Radar, operating at millimeter‑wave frequencies, is largely immune to these conditions, making it an attractive alternative for robust perception.
The author first details the physics and signal‑processing pipeline of FMCW radar, covering range, Doppler, and bearing extraction via fast Fourier transforms (FFT) and multi‑antenna beamforming. A lightweight, off‑the‑shelf 77 GHz radar is used to generate sparse 3‑D point clouds consisting of range‑bearing‑velocity tuples.
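The range-extraction step described above can be illustrated with a minimal sketch: a simulated beat signal from a single point target is windowed and passed through a range FFT, and the peak bin is mapped back to a distance. The chirp parameters (bandwidth, duration, sample count) are illustrative assumptions, not the values of the radar used in the thesis.

```python
import numpy as np

# Assumed chirp parameters (not from the thesis hardware).
c = 3e8            # speed of light [m/s]
B = 1e9            # sweep bandwidth [Hz]
T_chirp = 50e-6    # chirp duration [s]
n_samples = 256    # ADC samples per chirp
fs = n_samples / T_chirp  # ADC sampling rate

# Simulate the beat signal of a single point target at 10 m.
# For a linear FMCW chirp, the beat frequency is proportional to range.
r_true = 10.0
f_beat = 2 * B * r_true / (c * T_chirp)
t = np.arange(n_samples) / fs
beat = np.cos(2 * np.pi * f_beat * t)

# Range FFT with a Hanning window; each bin maps to r = f * c * T / (2B).
spectrum = np.abs(np.fft.rfft(beat * np.hanning(n_samples)))
peak_bin = int(np.argmax(spectrum))
f_peak = peak_bin * fs / n_samples
r_est = f_peak * c * T_chirp / (2 * B)
print(f"estimated range: {r_est:.2f} m")  # close to 10 m, within one range bin
```

Doppler extraction follows the same pattern with a second FFT across consecutive chirps, and bearing is obtained from phase differences across the antenna array.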
Two tightly‑coupled sensor‑fusion frameworks are then introduced. The first is a Multi‑State Extended Kalman Filter (MS‑EKF) that augments the classic EKF state with a sliding window of past IMU poses and persistent landmarks (fixed corner reflectors). The EKF directly incorporates radar distance and Doppler measurements as nonlinear observation models, and an online extrinsic calibration routine continuously refines the radar‑IMU transformation. Experiments show that the MS‑EKF achieves a mean position root‑mean‑square error (RMSE) of 0.12 m and attitude RMSE of 0.35 deg at 10 Hz, even during aggressive maneuvers.
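To make the Doppler observation model concrete, here is a hedged sketch of a single EKF update, deliberately reduced to a toy state: only the body-frame velocity is estimated, and each radar return at unit direction `d` is assumed to report the radial speed `-d·v`. The full filter in the thesis additionally carries pose, biases, a clone window, and landmarks; none of that is reproduced here.

```python
import numpy as np

def doppler_ekf_update(v_hat, P, d, v_r_meas, sigma_r=0.1):
    """One EKF update with a single radar Doppler measurement.

    Toy model: state is body velocity v (3,), measurement h(v) = -d @ v,
    i.e. a static point appears to move opposite to the sensor's motion.
    """
    d = d / np.linalg.norm(d)
    H = -d.reshape(1, 3)                  # measurement Jacobian (linear here)
    z_pred = (H @ v_hat).item()
    S = H @ P @ H.T + sigma_r**2          # innovation covariance (1x1)
    K = P @ H.T / S                       # Kalman gain (3x1)
    v_new = v_hat + (K * (v_r_meas - z_pred)).ravel()
    P_new = (np.eye(3) - K @ H) @ P
    return v_new, P_new

# Example: vehicle moving forward at 1 m/s, one return straight ahead
# reports radial speed -1 m/s; the update pulls the estimate toward truth.
v_hat, P = np.zeros(3), np.eye(3)
v_hat, P = doppler_ekf_update(v_hat, P, d=np.array([1.0, 0.0, 0.0]), v_r_meas=-1.0)
print(v_hat)  # first component close to 1.0
```

In the actual filter the same innovation machinery applies, but the Jacobian additionally involves attitude, gyro-compensated lever-arm terms, and the radar-IMU extrinsics being calibrated online.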
The second framework is a Factor Graph (FG) based sliding‑window optimizer. IMU pre‑integration factors, radar range‑velocity factors, and landmark factors are assembled into a graph; partial marginalization removes the oldest states while preserving information. This approach reduces linearization error compared with EKF and offers greater flexibility for asynchronous sensor streams. In benchmark tests the FG yields roughly 8 % lower position error than the EKF while maintaining comparable computational load.
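The structure of such a graph can be sketched on a deliberately tiny 1-D problem, which is not the thesis implementation: scalar positions p0..p3 form the window, "IMU" odometry factors constrain consecutive differences, and "radar" factors measure the offset to a landmark at a known position. Because every factor is linear here, a single weighted least-squares solve stands in for the iterative optimizer used in practice.

```python
import numpy as np

l = 5.0                         # known landmark position (assumed)
odo = [1.0, 1.0, 1.0]           # p_{k+1} - p_k measurements, sigma 0.1
rng = {0: -5.0, 3: -2.0}        # p_k - l measurements, sigma 0.05

rows, rhs, w = [], [], []
# Prior factor anchoring p0 at the origin (sigma 0.01).
e = np.zeros(4); e[0] = 1.0
rows.append(e); rhs.append(0.0); w.append(1 / 0.01)
# Odometry factors between consecutive states.
for k, u in enumerate(odo):
    e = np.zeros(4); e[k + 1] = 1.0; e[k] = -1.0
    rows.append(e); rhs.append(u); w.append(1 / 0.1)
# Radar-landmark factors on individual states.
for k, z in rng.items():
    e = np.zeros(4); e[k] = 1.0
    rows.append(e); rhs.append(l + z); w.append(1 / 0.05)

# Whiten each factor row by its inverse sigma and solve jointly.
A = np.array(rows) * np.array(w)[:, None]
b = np.array(rhs) * np.array(w)
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p)  # ≈ [0, 1, 2, 3]
```

Sliding-window marginalization then corresponds to eliminating the oldest columns of the (whitened) Jacobian and folding the resulting dense prior back onto the remaining states, which is where the flexibility over the EKF comes from.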
A major challenge addressed in the thesis is the sparsity and high noise level of radar point clouds, which hampers direct feature matching. To overcome this, a deep neural network is trained to predict correspondence likelihoods between successive radar scans using intensity and Doppler cues. The network outputs a probability matrix that is fed to a Hungarian algorithm for optimal assignment. Compared with traditional RANSAC‑based matching, the learning‑based method improves match success rate by over 30 % and reduces overall position RMSE by about 15 %.
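The assignment step described above can be sketched in isolation: given a correspondence-probability matrix (here hand-made for illustration, standing in for the network output), the Hungarian algorithm, available as `scipy.optimize.linear_sum_assignment`, returns the globally optimal one-to-one matching, after which low-probability pairs can be gated out.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative probability matrix: rows are points in scan k,
# columns are points in scan k+1 (values are made up, not network output).
prob = np.array([
    [0.90, 0.05, 0.05],
    [0.10, 0.80, 0.10],
    [0.20, 0.10, 0.70],
])

# linear_sum_assignment minimizes total cost, so negate the probabilities
# to obtain the maximum-probability assignment.
rows, cols = linear_sum_assignment(-prob)

# Gate out weak correspondences before passing matches downstream.
matches = [(int(r), int(c)) for r, c in zip(rows, cols) if prob[r, c] > 0.5]
print(matches)  # [(0, 0), (1, 1), (2, 2)]
```

Unlike greedy nearest-neighbour matching, the assignment is jointly optimal over the whole matrix, which matters when several sparse radar returns have similar intensity and Doppler signatures.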
Implementation is performed on an ARM Cortex‑A53 system‑on‑chip (e.g., Raspberry Pi Compute Module) running Ubuntu 20.04 and ROS 2. C++ code with optional CUDA acceleration keeps the average processing time at ~45 ms per frame (10 Hz), satisfying real‑time control loop requirements.
Extensive validation is conducted across indoor and outdoor scenarios. Indoor tests introduce variable lighting and artificial fog; vision‑only VIO fails, while the radar‑inertial system maintains stable estimates. Outdoor flights include high‑speed, high‑acceleration trajectories and long‑duration closed‑loop missions; both EKF and FG maintain low drift, and the persistent landmark strategy further suppresses long‑term errors.
In summary, the dissertation demonstrates that a tightly‑coupled radar‑inertial odometry pipeline—implemented with either a multi‑state EKF or a factor‑graph optimizer—can deliver accurate, low‑latency navigation for UAVs operating on modest embedded hardware. The integration of deep‑learning‑based point‑cloud correspondence further enhances robustness against radar noise and sparsity. These contributions advance the state of the art in GNSS‑denied aerial navigation and open pathways for deploying affordable, resilient autonomous UAVs in challenging real‑world environments.