Exploring Unstructured Environments using Minimal Sensing on Cooperative Nano-Drones
Recent advances have improved autonomous navigation and mapping under payload constraints, but current multi-robot inspection algorithms are unsuitable for nano-drones due to their reliance on heavy sensors and high computational resources. To address these challenges, we introduce ExploreBug, a novel hybrid frontier range bug algorithm designed to handle limited sensing capabilities for a swarm of nano-drones. This system includes three primary components: a mapping subsystem, an exploration subsystem, and a navigation subsystem. Additionally, an intra-swarm collision avoidance system is integrated to prevent collisions between drones. We validate the efficacy of our approach through extensive exploration experiments involving up to seven drones in simulation and three in real-world settings, across various obstacle configurations and with a maximum navigation speed of 0.75 m/s. Our tests demonstrate that the algorithm efficiently completes exploration tasks, even with minimal sensing, across different swarm sizes and obstacle densities. Furthermore, our frontier allocation heuristic ensures an equal distribution of explored areas and paths traveled by each drone in the swarm. We publicly release the source code of the proposed system to foster further developments in mapping and exploration using autonomous nano-drones.
💡 Research Summary
The paper addresses the challenge of autonomous exploration with nano‑drones, which are severely limited in payload (≈50 g) and computational resources, making conventional multi‑robot exploration methods that rely on heavy sensors (depth cameras, 3‑D LiDAR) infeasible. The authors introduce ExploreBug, a hybrid frontier‑range “bug” algorithm specifically designed for swarms of nano‑drones equipped with only four single‑beam time‑of‑flight range sensors (front, back, left, right) with a maximum range of 4 m and a 20 Hz update rate.
System Architecture
The solution is organized into four subsystems:
- Mapping Subsystem – each drone streams its range measurements to a central server, which fuses them into a 2‑D occupancy‑grid map (free, occupied, unknown cells). Because the sensors only provide planar data, the drones keep a fixed altitude; a motion‑capture system supplies a common reference frame, and communication range is assumed to be unlimited.
- Exploration Subsystem – implemented as a finite‑state machine (FSM) with states Init → Sense → Navigate‑to‑Frontier → Drone‑Avoidance → End. In the Sense state the drone rotates by π/2 rad and continuously scans its surroundings, performing a cautious spiral search similar to CautiousBug. When the front‑range measurement falls below a threshold, the drone proceeds to the “Navigate‑to‑Frontier” state.
- Navigation Subsystem – built on Aerostack2, it uses a local 2‑D path planner (A*/D*‑like) to travel to the selected frontier while staying within already‑mapped space, thus avoiding the need for altitude changes or complex heading control.
- Intra‑Swarm Collision Avoidance – monitors inter‑drone distances; if two drones become closer than a safety radius, the one nearer its goal is temporarily halted, lifted slightly, and then allowed to resume once the safety distance is restored. This design minimizes pause time while guaranteeing safety.
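The intra-swarm rule above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pairwise check, the safety-radius value, and the `resolve_conflicts` helper name are all assumptions.

```python
import itertools
import math

SAFETY_RADIUS = 0.5  # metres; illustrative value, not taken from the paper


def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])


def resolve_conflicts(drones):
    """For every pair of drones closer than the safety radius, pause the
    one nearer its own goal; the other keeps flying. Each drone is a dict
    with 'pos', 'goal', and 'paused' keys."""
    for a, b in itertools.combinations(drones, 2):
        if dist(a["pos"], b["pos"]) < SAFETY_RADIUS:
            # The drone closer to its goal yields (is halted and lifted).
            yielding = a if dist(a["pos"], a["goal"]) <= dist(b["pos"], b["goal"]) else b
            yielding["paused"] = True
    # Paused drones resume once separation from every other drone is restored.
    for d in drones:
        if d["paused"] and all(
            dist(d["pos"], o["pos"]) >= SAFETY_RADIUS for o in drones if o is not d
        ):
            d["paused"] = False
```

Resolving the conflict in favor of the drone farther from its goal keeps the drone with more remaining work moving, which is one way to minimize total pause time as the paper's design intends.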
Frontier Generation & Allocation
From the global occupancy grid, free cells are extracted, Canny edge detection is applied, and resulting edge pixels are clustered into contiguous frontier regions. Each frontier fᵢ is described by its centroid cᵢ, orientation ψᵢ, and area Aᵢ. Small frontiers below a lower area threshold are discarded; overly large ones are split to keep the size manageable.
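A minimal sketch of the frontier-extraction step is shown below. For simplicity it replaces the Canny-based edge detection with the standard direct test (a frontier cell is a free cell bordering an unknown cell) and clusters with BFS; the cell encoding, area threshold, and function names are assumptions, and frontier splitting is omitted.

```python
from collections import deque

FREE, OCCUPIED, UNKNOWN = 0, 1, -1
MIN_AREA = 2  # discard frontiers smaller than this many cells (illustrative)


def extract_frontiers(grid):
    """Return frontier regions, each with a centroid and area, from a
    2-D occupancy grid given as a list of lists of cell values."""
    rows, cols = len(grid), len(grid[0])

    def is_frontier(r, c):
        # A frontier cell is a free cell with at least one unknown 4-neighbour.
        if grid[r][c] != FREE:
            return False
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN:
                return True
        return False

    seen = set()
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in seen or not is_frontier(r, c):
                continue
            # BFS over 8-connected frontier cells to form one contiguous region.
            region, queue = [], deque([(r, c)])
            seen.add((r, c))
            while queue:
                cr, cc = queue.popleft()
                region.append((cr, cc))
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and (nr, nc) not in seen and is_frontier(nr, nc)):
                            seen.add((nr, nc))
                            queue.append((nr, nc))
            area = len(region)
            if area >= MIN_AREA:  # small frontiers are discarded, as in the paper
                centroid = (sum(p[0] for p in region) / area,
                            sum(p[1] for p in region) / area)
                frontiers.append({"centroid": centroid, "area": area})
    return frontiers
```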
Frontier allocation is formulated as an optimization problem that assigns each frontier to a drone so that the explored area and the path length traveled are distributed evenly across the swarm.
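Since the paper's exact objective is not reproduced here, the following is a purely illustrative greedy allocation: each drone claims its nearest unassigned frontier centroid, which tends to spread the workload across the swarm. The function name and the distance-only cost are assumptions, not the paper's optimization.

```python
import math


def allocate_frontiers(drone_positions, frontiers):
    """Greedy assignment: each drone takes its nearest unclaimed frontier
    centroid. Returns {drone_index: frontier_index}. Illustrative only;
    the paper optimizes a different objective."""
    remaining = list(range(len(frontiers)))
    assignment = {}
    for i, pos in enumerate(drone_positions):
        if not remaining:
            break  # more drones than frontiers; extras stay unassigned
        best = min(remaining,
                   key=lambda j: math.dist(pos, frontiers[j]["centroid"]))
        assignment[i] = best
        remaining.remove(best)  # each frontier is claimed at most once
    return assignment
```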