Head-to-Head autonomous racing at the limits of handling in the A2RL challenge
Autonomous racing presents a complex challenge involving multi-agent interactions between vehicles operating at the limits of handling. As such, it provides a valuable research and testing environment for advancing autonomous driving technology and improving road safety. This article presents the algorithms and deployment strategies developed by the TUM Autonomous Motorsport team for the inaugural Abu Dhabi Autonomous Racing League (A2RL). We showcase how our software emulates human driving behavior, pushing vehicle handling to its limits and managing multi-vehicle interactions to win the A2RL. Finally, we highlight the key enablers of our success and share our most significant learnings.
💡 Research Summary
This paper presents the complete autonomous racing system developed by the Technical University of Munich (TUM) Autonomous Motorsport team that secured victory in the inaugural Abu Dhabi Autonomous Racing League (A2RL) in 2024. The authors first describe the competition format and the hardware of the race cars (EAV24), which are heavily modified Dallara Super Formula SF23 chassis equipped with a 2.0 L turbocharged 4‑cylinder engine (550 hp), limited‑slip differential, carbon brakes, Yokohama slicks, an AMD EPYC 7313P CPU, an NVIDIA RTX 6000 GPU, and a rich sensor suite (three LiDARs, four radars, seven high‑resolution cameras, GNSS‑RTK, IMU, wheel speed, and optical velocity).
The software architecture is divided into offline and online components. Offline modules generate the optimal raceline, a high‑resolution grip map that encodes spatially varying friction and acceleration limits, and a three‑dimensional point‑cloud map of the track. These assets are created once before competition and are used to guide the online stack.
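The paper does not give the grip map's exact construction, but the idea of a spatially varying grip estimate can be sketched as follows: bin logged accelerations along the raceline arc-length and keep the largest combined acceleration observed in each bin. The 5 m bin size and the max statistic are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Toy offline grip-map construction (a sketch, not the paper's algorithm):
# bin logged longitudinal/lateral accelerations by raceline arc-length s and
# keep the per-bin maximum combined acceleration as the local grip estimate.
def build_grip_map(s_log, ax_log, ay_log, bin_m=5.0):
    s_log, ax_log, ay_log = map(np.asarray, (s_log, ax_log, ay_log))
    a_comb = np.hypot(ax_log, ay_log)           # combined acceleration [m/s^2]
    bins = (s_log // bin_m).astype(int)         # arc-length bin index
    grip = {}
    for b, a in zip(bins, a_comb):
        grip[b] = max(grip.get(b, 0.0), a)      # max observed usage per bin
    return grip                                  # {bin index -> grip estimate}
```

In practice such a map would also need smoothing and outlier rejection before it could safely bound an online planner; this sketch only shows the spatial-binning idea.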
The online stack follows a classic Sense‑Plan‑Act hierarchy, with each layer implemented as a set of interchangeable modules. The Sense layer performs sensor pre‑processing, object detection, and tracking, as well as ego‑state estimation. LiDAR point clouds are filtered to a fixed size of 4 000 points, dramatically reducing latency. A lightweight adaptation of OpenPCDet’s PointRCNN runs in 8 ms while preserving detection accuracy. Radar data are clustered in the x‑y‑velocity space to complement LiDAR blind spots. Detections from both modalities are fused and tracked using a Kalman‑filter‑based algorithm, achieving reliable tracking of dynamic objects up to 200 m away and 70 m/s relative speed.
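The fusion-and-tracking step described above can be illustrated with a minimal constant-velocity Kalman filter over fused position detections. The state layout, noise values, and 20 Hz cycle time are placeholder assumptions; the team's tracker is certainly more elaborate.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one tracked object in 2-D.
# State: [x, y, vx, vy]; measurement: fused LiDAR/radar position [x, y].
class CVKalmanTracker:
    def __init__(self, x0, y0, dt=0.05):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.diag([1.0, 1.0, 25.0, 25.0])       # initial uncertainty
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float) # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # measurement model
        self.Q = np.eye(4) * 0.1                        # process noise (assumed)
        self.R = np.eye(2) * 0.5                        # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x  # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```

Fed consistent position detections, the filter recovers the object's velocity, which is what allows tracking opponents at up to 70 m/s relative speed from position-only detections.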
Localization combines GNSS, high‑rate IMU, wheel‑speed, and optical‑velocity measurements in a three‑dimensional Extended Kalman Filter. Because GNSS coverage is intermittent around the Yas Marina circuit, map‑based LiDAR/Radar localization using a pre‑built point‑cloud map (generated offline with KISS‑ICP) fills the gaps, delivering centimeter‑level pose accuracy even under vibration and engine noise.
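One way to picture the GNSS/LiDAR hand-off is a per-cycle gating rule that picks which position source the EKF fuses. The threshold value and the function below are hypothetical illustrations of the fallback logic, not the paper's implementation.

```python
# Toy gating logic for the EKF position update source: prefer GNSS when its
# reported accuracy is good, fall back to map-based LiDAR localization, and
# otherwise dead-reckon on IMU, wheel-speed, and optical-velocity data.
GNSS_STD_MAX_M = 0.10   # accept GNSS fixes better than 10 cm (assumed value)

def select_position_source(gnss_std_m, lidar_pose_available):
    """Return which position measurement the EKF should fuse this cycle."""
    if gnss_std_m is not None and gnss_std_m <= GNSS_STD_MAX_M:
        return "gnss"
    if lidar_pose_available:
        return "lidar_map"
    return "dead_reckoning"
```

A real system would blend sources by their covariances inside the EKF rather than hard-switching, but the sketch captures why the pre-built point-cloud map matters: it keeps a position fix available through the circuit's GNSS-denied sections.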
The Plan layer leverages the grip map to enforce physical limits while planning trajectories at the handling boundary. A high‑frequency (≥20 Hz) optimization‑based planner computes feasible trajectories that respect the grip constraints and can be replanned in real time. In multi‑agent scenarios, predicted trajectories of opponent vehicles are incorporated, enabling overtaking and defensive maneuvers. The planner outputs both a nominal racing trajectory and an emergency trajectory for rapid response to unexpected events.
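The grip constraint the planner enforces can be sketched as a friction-circle check against the local grip limit: a trajectory point is feasible only if its combined longitudinal and lateral acceleration stays within the grip map's bound at that location. The sample grip values, lookup scheme, and 5% safety margin below are assumptions for illustration.

```python
import math

# Friction-circle feasibility check against a spatially varying grip map.
# grip_map maps raceline arc-length [m] to the maximum combined acceleration
# available there (mu * g); the sample values are illustrative only.
G = 9.81
grip_map = {0: 1.8 * G, 50: 1.6 * G, 100: 1.9 * G}

def is_feasible(s, a_long, a_lat, margin=0.95):
    """Check a trajectory point's combined acceleration at arc-length s."""
    nearest = min(grip_map, key=lambda k: abs(k - s))   # nearest map sample
    a_max = grip_map[nearest]
    return math.hypot(a_long, a_lat) <= margin * a_max  # friction circle
```

An optimization-based planner would impose this as a hard constraint per discretization point rather than checking it after the fact; the sketch only shows the physical limit being encoded.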
The Act layer implements a hierarchical control architecture. A Model Predictive Controller (MPC) tracks the planned trajectory at the vehicle dynamics level, while low‑level steering, longitudinal acceleration, and brake controllers (PID with feed‑forward terms) execute the commands. The vehicle model parameters are continuously adapted using learning‑based techniques that account for tire temperature, wear, and varying friction, allowing the controller to operate consistently at the grip limit, akin to a human Formula‑1 driver.
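The low-level "PID with feed-forward" structure can be illustrated with a minimal longitudinal controller: a feed-forward term on the planned acceleration plus PI feedback on the speed error. Gains, time step, and the reduction to PI are placeholder simplifications, not the team's tuned controller.

```python
# Minimal longitudinal PI controller with an acceleration feed-forward term,
# sketching the low-level structure described above. All gains are assumed.
class LongitudinalController:
    def __init__(self, kp=0.8, ki=0.1, kff=0.05, dt=0.01):
        self.kp, self.ki, self.kff, self.dt = kp, ki, kff, dt
        self.integral = 0.0

    def step(self, v_ref, v_meas, a_ref):
        """Return a throttle/brake command from speed error and planned accel."""
        err = v_ref - v_meas
        self.integral += err * self.dt
        # Feed-forward handles the bulk of the demand; feedback corrects
        # model mismatch (e.g. tire temperature, wear, varying friction).
        return self.kff * a_ref + self.kp * err + self.ki * self.integral
```

The adaptive element described in the text would correspond to updating the feed-forward model online as the identified vehicle parameters change, so the feedback terms only need to absorb residual error.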
System validation proceeded in three stages: (1) extensive simulation to verify each module in isolation, (2) on‑track testing that demonstrated autonomous tire warm‑up, sensor fault detection, and recovery without human intervention, and (3) full competition runs comprising time‑trial, attack‑and‑defend, and the four‑car final race. The TUM team’s software successfully handled cold‑tire conditions, GNSS‑denied sections, and high‑speed multi‑vehicle interactions, achieving the fastest lap times and ultimately winning the final race.
The contribution of this work lies in delivering a fully integrated, modular, and robust autonomous racing stack that has been proven in a real‑world, high‑speed, multi‑agent environment. The paper highlights key enablers—offline grip‑map generation, multi‑sensor fusion for perception and localization, high‑frequency constrained trajectory planning, and adaptive model‑based control—and discusses lessons learned that can guide future research toward even higher speeds, more complex tracks, and greater autonomy in competitive racing.