An Ontology-driven Dynamic Knowledge Base for Uninhabited Ground Vehicles
In this paper, the concept of Dynamic Contextual Mission Data (DCMD) is introduced to develop an ontology-driven dynamic knowledge base for Uninhabited Ground Vehicles (UGVs) at the tactical edge. The dynamic knowledge base with DCMD is added to the UGVs to support enhanced situation awareness, improve autonomous decision making, and facilitate agility within complex and dynamic environments. Because UGVs rely heavily on a priori information loaded pre-mission, unexpected occurrences during a mission can cause identification ambiguities and demand increased levels of user input; updating this a priori information with contextual information can help UGVs realise their full potential. To address this, the dynamic knowledge base was designed using an ontology-driven representation, supported by near real-time information acquisition and analysis, to provide in-mission, on-platform DCMD updates. The approach was implemented on a team of four UGVs that executed a laboratory-based surveillance mission. The results showed that the ontology-driven dynamic representation of the UGV operational environment was machine actionable, producing contextual information that supported a successful and timely mission and contributed directly to situation awareness.
💡 Research Summary
This paper introduces the concept of Dynamic Contextual Mission Data (DCMD) and demonstrates how an ontology‑driven dynamic knowledge base can be integrated on board unmanned ground vehicles (UGVs) to improve situation awareness, autonomous decision‑making, and operational agility at the tactical edge. Recognizing that traditional UGVs rely heavily on static a priori mission data, the authors propose augmenting this information with real‑time multimodal sensor inputs to create mission‑specific, actionable knowledge. The knowledge base is built upon the Basic Formal Ontology (BFO) as an upper‑level ontology, the Common Core Ontologies (CCO) as a mid‑level semantic layer, and the Relation Ontology Core (RO Core) for standardized relationships, providing a reusable semantic backbone.
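The value of that layering is that every domain term bottoms out in a shared upper- and mid-level vocabulary, so all robots interpret the same class the same way. A toy Python sketch of the idea follows; the class names merely echo BFO and CCO terms (these are plain Python stand-ins, not the actual OWL artefacts), and `GroundVehicle` and the triple identifiers are hypothetical:

```python
# Illustration only: a caricature of the semantic layering, not the real ontologies.

class Entity:                    # BFO upper-level root
    pass

class MaterialEntity(Entity):    # BFO: a material thing persisting through time
    pass

class Artifact(MaterialEntity):  # CCO mid-level: a deliberately made object
    pass

class GroundVehicle(Artifact):   # hypothetical domain extension for the UGV KB
    pass

# RO-Core-style standardized relations, expressed here as simple
# (subject, relation, object) triples so domain facts reuse one vocabulary.
triples = [
    ("ugv-1", "instance_of", "GroundVehicle"),
    ("ugv-1", "located_in", "waypoint-3"),
]
```

Because `GroundVehicle` inherits through `Artifact` and `MaterialEntity` up to `Entity`, any reasoning written against the upper levels applies to the domain class for free.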
The system pipeline consists of image acquisition (RealSense D435i, LiDAR, IMU), YOLOv11‑based object detection with depth estimation, identity matching against the a priori database, and Bayesian Network (BN) inference to assess whether an object is known, unknown, or hazardous. The BN uses discrete evidence variables and Variable Elimination for exact, low‑cost inference suitable for limited onboard compute. Results from the inference, together with object attributes and spatio‑temporal context, are encoded as DCMD updates and inserted into a TypeDB (formerly Grakn) knowledge graph using TypeQL data pipelines.
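The paper's exact network structure and probabilities are not reproduced in this summary, but the inference step can be sketched. The snippet below performs exact posterior computation by enumeration over a tiny discrete network (for a network this small, enumeration and Variable Elimination coincide and both are cheap); the status values, evidence variables, and all CPT numbers are illustrative assumptions, not values from the paper:

```python
# Hypothetical network: Status -> (Match, HighConf), all values illustrative.
# Status is the queried variable; the two evidence variables are binary.

P_status = {"known": 0.6, "unknown": 0.3, "hazard": 0.1}   # prior
P_match = {"known": 0.9, "unknown": 0.1, "hazard": 0.2}    # P(db match | status)
P_conf = {"known": 0.8, "unknown": 0.3, "hazard": 0.6}     # P(high detector conf | status)

def posterior(match: bool, high_conf: bool) -> dict:
    """Exact P(status | evidence): multiply factors, then normalize."""
    joint = {}
    for s in P_status:
        pm = P_match[s] if match else 1.0 - P_match[s]
        pc = P_conf[s] if high_conf else 1.0 - P_conf[s]
        joint[s] = P_status[s] * pm * pc
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

# A confident detection that fails to match the a priori database
# shifts belief toward "unknown" under these toy numbers.
beliefs = posterior(match=False, high_conf=True)
flag = max(beliefs, key=beliefs.get)
```

The output distribution (and hence the known/unknown/hazard flag) is what would be packaged into a DCMD update alongside the object's attributes.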
A laboratory experiment employed four TurtleBot3 Waffle Pi robots equipped with NVIDIA Jetson AGX Orin compute modules. The robots were divided into an Explorer team (two agents) that scanned predefined waypoints and a Verifier team (two agents) that confirmed detected hazards. On a 6 × 2 m testbed representing a town near an airfield, the Explorer team successfully identified all known objects and potential hazards, while the Verifier team validated each hazard, leading to mission completion. Throughout the mission, DCMD updates were propagated in real time across the team’s knowledge bases, ensuring a shared, up‑to‑date situational picture and reducing the need for human operator intervention.
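The propagation mechanism is not detailed in this summary; one simple way to keep four knowledge bases consistent is a latest-write-wins merge per object, sketched below. The `DCMDUpdate` fields, identifiers, and merge policy are assumptions for illustration, not the paper's design:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DCMDUpdate:
    """Hypothetical DCMD record: object identity, inferred status, pose, time."""
    obj_id: str
    status: str          # e.g. "known" | "unknown" | "hazard"
    position: tuple      # (x, y) in metres on the testbed
    timestamp: float     # mission clock, seconds

class KnowledgeBase:
    """Per-robot store; merge keeps only the newest record per object."""
    def __init__(self):
        self.objects = {}

    def merge(self, update: DCMDUpdate) -> None:
        current = self.objects.get(update.obj_id)
        if current is None or update.timestamp > current.timestamp:
            self.objects[update.obj_id] = update

def broadcast(team, update: DCMDUpdate) -> None:
    """Propagate one update to every robot's knowledge base."""
    for kb in team:
        kb.merge(update)

# An Explorer flags a hazard; all four robots converge on the same record,
# and a stale earlier observation is ignored by the timestamp check.
team = [KnowledgeBase() for _ in range(4)]
broadcast(team, DCMDUpdate("obj-07", "hazard", (3.2, 1.1), 42.0))
broadcast(team, DCMDUpdate("obj-07", "known", (3.2, 1.1), 40.0))  # stale
```

Timestamp-ordered merging is only one possible policy; a fielded system would also need to handle clock skew and lossy links between robots.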
The study demonstrates that an ontology‑driven dynamic knowledge base can provide machine‑actionable, context‑rich information under constrained computational resources, thereby enhancing UGV autonomy and collaborative mission performance. Future work will explore scaling the approach to larger multi‑robot formations and field deployments, as well as addressing security and robustness concerns in contested environments.