Quality of Geographic Information: Ontological approach and Artificial Intelligence Tools
The objective is to present one important aspect of the European IST-FET project “REV!GIS”: the methodology developed for translating (interpreting) the quality of the data into “fitness for use” information that can be confronted with the user's needs in a given application. This methodology is based on the notion of “ontologies” as a conceptual framework able to capture the explicit and implicit knowledge involved in the application. We do not address the general problem of formalizing such ontologies; instead, we illustrate the approach with three applications, each a particular case of the more general “data fusion” problem. In each application, we show how to deploy our methodology by comparing several possible solutions, and we try to highlight where the quality issues lie and what kind of solution to favour, even at the expense of a highly complex computational approach. The expectation of the REV!GIS project is that computationally tractable solutions will become available among the next generation of AI tools.
💡 Research Summary
The paper presents a methodology developed within the European IST‑FET project “REV!GIS” for translating raw geographic data quality into a “fitness for use” metric that can be directly compared with user requirements. Central to this approach is the use of ontologies as a conceptual framework that captures both explicit and implicit knowledge about the application domain, allowing quality attributes such as accuracy, completeness, and timeliness to be semantically linked to specific user needs. Rather than focusing on the generic problem of formal ontology construction, the authors illustrate how an existing domain knowledge base can be structured, enriched, and operationalized for quality assessment.
The methodology proceeds in several steps: (1) elicitation of quality‑related concepts from domain experts; (2) organization of these concepts into a hierarchical ontology that defines entities, attributes, relationships, and constraints; (3) definition of quality‑evaluation rules that may be deterministic (IF‑THEN) or probabilistic (Bayesian networks) to handle uncertainty; and (4) integration of the ontology and rules into a reasoning engine that produces a fitness‑for‑use score for each dataset or data fusion product.
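To make steps (3) and (4) concrete, here is a minimal sketch of a deterministic (IF-THEN) rule engine producing a fitness-for-use score. None of this code appears in the paper; the attribute names, thresholds, and the conservative `min()` combination are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class QualityRecord:
    """Quality attributes attached to a dataset by the ontology (step 2)."""
    positional_accuracy_m: float   # RMS positional error, in metres
    completeness: float            # fraction of real-world features captured
    age_years: float               # time since last survey

def fitness_for_use(q: QualityRecord, needs: dict) -> float:
    """Return a fitness-for-use score in [0, 1] against the user's needs.

    Each IF-THEN rule yields a partial score; min() combines them
    conservatively, so one failing attribute dominates the result.
    """
    scores = [
        # Rule 1: positional error within the user's tolerance.
        min(1.0, needs["max_error_m"] / max(q.positional_accuracy_m, 1e-9)),
        # Rule 2: completeness used directly as a partial score.
        q.completeness,
        # Rule 3: fitness decays linearly with data age.
        max(0.0, 1.0 - q.age_years / needs["max_age_years"]),
    ]
    return min(scores)

record = QualityRecord(positional_accuracy_m=5.0, completeness=0.92, age_years=3.0)
user_needs = {"max_error_m": 10.0, "max_age_years": 10.0}
print(f"fitness for use: {fitness_for_use(record, user_needs):.2f}")  # -> 0.70
```

The `min()` combination is a deliberate design choice in this sketch: a dataset unfit in one respect is treated as unfit overall. The probabilistic variant mentioned in step (3) would replace these crisp rules with conditional probabilities in a Bayesian network.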
Three concrete case studies are used to demonstrate the approach. The first case involves fusing a coarse land‑cover map with high‑resolution satellite imagery. Here, the ontology captures notions of spatial resolution and temporal consistency, and the reasoning engine automatically applies scale‑matching rules to reconcile the datasets, yielding a consistency score for each pixel. The second case merges traffic‑accident records with demographic statistics. Quality issues include error propagation and varying source reliability. The ontology incorporates confidence levels and error‑propagation models; a Bayesian network computes a posterior reliability for each accident record, and the resulting risk map is weighted by these reliability estimates. The third case evaluates input data for a hydrological model (rainfall, soil moisture, terrain). The ontology defines sensitivity coefficients and uncertainty‑propagation rules; a rule‑based system flags inputs that fall outside acceptable quality thresholds and suggests alternative data sources.
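The reliability weighting in the second case study can be illustrated with a one-node Bayesian update. The priors, likelihoods, and record data below are invented for illustration; the paper's actual network structure is not reproduced here.

```python
def posterior_reliability(prior: float,
                          p_agree_if_reliable: float = 0.9,
                          p_agree_if_unreliable: float = 0.3,
                          agrees: bool = True) -> float:
    """One-node Bayes update: P(record reliable | cross-check evidence)."""
    if agrees:
        num = p_agree_if_reliable * prior
        den = num + p_agree_if_unreliable * (1.0 - prior)
    else:
        num = (1.0 - p_agree_if_reliable) * prior
        den = num + (1.0 - p_agree_if_unreliable) * (1.0 - prior)
    return num / den

# Each record's contribution to the risk map is weighted by its posterior
# reliability, given agreement (or not) with an independent demographic check.
records = [("r1", 0.7, True), ("r2", 0.7, False), ("r3", 0.5, True)]
weights = {rid: posterior_reliability(prior, agrees=ok) for rid, prior, ok in records}
print(weights)  # r1 = 0.875, r2 = 0.25, r3 = 0.75
```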
For each scenario the authors compare several possible solutions, ranging from simple aggregate quality scores to the full ontology‑driven reasoning pipeline. The analysis shows that while the richer ontology‑based solutions provide higher fidelity, better alignment with user needs, and more transparent decision support, they also incur greater computational overhead and require more effort to maintain. Consequently, the paper emphasizes the trade‑off between the depth of quality representation and computational tractability, recommending that practitioners select the level of ontological detail that matches their specific fitness‑for‑use requirements and available resources.
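The trade-off can be seen by contrasting the simple-aggregate baseline with the rule-based sketch above. The following hypothetical example shows how a weighted-average score can mask a disqualifying defect that a conservative rule engine would expose:

```python
def aggregate_score(attrs: dict, weights: dict) -> float:
    """Baseline: weighted average of normalized quality attributes."""
    total = sum(weights.values())
    return sum(weights[k] * attrs[k] for k in attrs) / total

# A dataset with one disqualifying defect (very poor accuracy).
attrs = {"accuracy": 0.2, "completeness": 0.95, "currency": 0.9}
weights = {"accuracy": 1.0, "completeness": 1.0, "currency": 1.0}
print(f"{aggregate_score(attrs, weights):.2f}")
# -> 0.68: looks acceptable, whereas a conservative min()-style rule
# engine would score this dataset 0.20 and flag it as unfit.
```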
The broader vision of the REV!GIS project is to embed this ontology‑based quality framework into next‑generation AI tools, such as knowledge graphs, deep‑learning‑enhanced semantic inference, and automated ontology generation. By doing so, the authors anticipate that real‑time data streams can be continuously evaluated for quality, and fitness‑for‑use assessments can be produced on‑the‑fly, enabling more responsive and user‑centric GIS applications. In summary, the paper proposes a pragmatic, ontology‑centered methodology for bridging the gap between raw geographic data quality and concrete user requirements, illustrates its application through three diverse data‑fusion problems, and outlines a path toward scalable AI‑driven implementations that could reshape quality management in the GIS domain.