Prototyping Information Visualization in 3D City Models: a Model-based Approach

When creating 3D city models, selecting relevant visualization techniques is a particularly difficult user interface design task. A first obstacle is that current geodata-oriented tools, e.g. ArcGIS, have limited 3D capabilities and limited sets of visualization techniques. Another important obstacle is the lack of a unified description of information visualization techniques for 3D city models. Although many techniques have been devised for different types of data or information (wind flows, air quality fields, historic or legal texts, etc.), they are generally described in individual articles rather than formally specified. In this paper we address the problem of visualizing information in (rich) 3D city models by presenting a model-based approach for the rapid prototyping of visualization techniques. We propose to represent visualization techniques as compositions of graph transformations. We show that these transformations can be specified with SPARQL CONSTRUCT operations over RDF graphs. These specifications can then be used in a prototype generator to produce 3D scenes that contain the 3D city model augmented with data represented using the desired technique.


💡 Research Summary

The paper tackles the persistent challenge of visualizing heterogeneous information within rich three‑dimensional (3D) city models. Traditional geographic information system (GIS) tools such as ArcGIS are primarily 2D‑oriented, offering only limited 3D capabilities and a narrow set of built‑in visualisation techniques. Moreover, existing visualisation methods are scattered across the literature, described informally in articles, and lack a unified, formal representation that would enable rapid prototyping, reuse, and systematic comparison. To address these gaps, the authors propose a model‑based approach that treats a visualisation technique as a composition of graph transformations applied to a semantic representation of the data and the city model.

Core Concept – Graph‑Based Visualisation Modelling
The central idea is to encode both the 3D city model (e.g., CityGML LOD2/LOD3 geometry) and the auxiliary data (wind fields, air‑quality measurements, historical texts, legal documents, etc.) as Resource Description Framework (RDF) graphs. Each visualisation step—data mapping, visual property assignment, geometric object creation, and scene integration—is expressed as a SPARQL CONSTRUCT query that consumes an input RDF sub‑graph and produces a new set of triples describing the visual artefacts. By chaining these queries, a complete visualisation pipeline is built declaratively, without writing imperative rendering code.
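A minimal sketch of such a transformation step is shown below: a SPARQL CONSTRUCT query that consumes observation triples and emits triples describing a visual artefact. The `ex:` and `vis:` vocabularies and all term names here are illustrative assumptions, not the paper's actual ontology.

```sparql
# Hypothetical pipeline step: each raw observation yields a visual
# artefact node that records which observation it represents.
PREFIX ex:  <http://example.org/data#>
PREFIX vis: <http://example.org/vis#>

CONSTRUCT {
  ?artefact a vis:VisualArtefact ;
            vis:represents ?obs ;
            vis:hasValue   ?value .
}
WHERE {
  ?obs a ex:Observation ;
       ex:hasValue ?value .
  # Mint a new IRI for the artefact from the observation's IRI.
  BIND(IRI(CONCAT(STR(?obs), "_art")) AS ?artefact)
}
```

Chaining several such queries, each reading the triples produced by the previous one, yields the declarative pipeline described above.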

Ontology‑Driven Specification
A lightweight visualisation ontology underpins the approach. It defines classes such as VisualizationTechnique, VisualProperty, and GeometryType, and relationships that link raw observations to visual attributes (e.g., colour, opacity, size) and to concrete 3D primitives (points, arrows, volumes, text labels). The ontology also captures domain‑specific mapping rules—for instance, mapping high particulate‑matter concentrations to a red semi‑transparent volume, or encoding wind speed and direction as arrow length and orientation. Because the mapping rules are expressed in SPARQL, they can be edited by domain experts without programming knowledge, fostering rapid iteration.
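The particulate-matter rule mentioned above might be written as follows. The vocabulary, the threshold, and the property names are assumptions chosen for illustration; the paper's own ontology terms may differ.

```sparql
# Illustrative mapping rule: PM2.5 observations above a threshold become
# red, semi-transparent volumes anchored to the building they concern.
PREFIX ex:  <http://example.org/data#>
PREFIX vis: <http://example.org/vis#>

CONSTRUCT {
  ?art a vis:Volume ;
       vis:colour  "red" ;
       vis:opacity 0.4 ;
       vis:anchor  ?building .
}
WHERE {
  ?obs ex:pollutant     ex:PM25 ;
       ex:concentration ?c ;
       ex:locatedAt     ?building .
  FILTER(?c > 25)   # threshold in µg/m³, chosen here for illustration
  BIND(IRI(CONCAT(STR(?obs), "_vol")) AS ?art)
}
```

Because the threshold, colour, and opacity are literal values in the query text, a domain expert can retune the rule by editing the query rather than touching rendering code.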

Prototype Generator Architecture
The authors implement a prototype generator that accepts three inputs: (1) a 3D city model file, (2) a data file containing the information to visualise, and (3) a set of SPARQL transformation scripts. The workflow proceeds as follows:

  1. RDF Conversion – Raw data (CSV, NetCDF, Shapefile, etc.) is transformed into RDF triples, enriched with metadata (timestamp, units, provenance).
  2. Rule Application – Each SPARQL CONSTRUCT rule is executed sequentially, producing triples that describe visual properties and associated geometry.
  3. Geometry Synthesis – The generated triples are translated into concrete 3D objects using a geometry engine that outputs formats such as glTF or CityGML extensions.
  4. Scene Assembly – The new visual objects are merged with the original city model graph, yielding a unified RDF representation of the augmented scene.
  5. Export – The final scene is exported to a web‑compatible 3D viewer (e.g., Cesium) where users can explore the city model enriched with the visualised information.
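The scene-assembly step (4) can itself be sketched as one more CONSTRUCT query that unites the original city-model triples with the generated artefacts under a single scene node. Again, the `city:` and `vis:` terms and the scene IRI are hypothetical.

```sparql
# Sketch of scene assembly: link both buildings and generated visual
# artefacts to one scene resource in the merged graph.
PREFIX vis:  <http://example.org/vis#>
PREFIX city: <http://example.org/city#>

CONSTRUCT {
  ?scene city:contains ?building , ?art .
}
WHERE {
  BIND(<http://example.org/scene1> AS ?scene)
  { ?building a city:Building }
  UNION
  { ?art a vis:VisualArtefact }
}
```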

Empirical Evaluation
Three case studies demonstrate the method’s versatility:

  • Wind Flow Visualisation – Simulated wind vectors are rendered as coloured arrows whose length encodes speed and orientation encodes direction. The entire city area is processed automatically, reducing manual effort by roughly 80 % compared with a conventional GIS workflow.
  • Air‑Quality Mapping – PM2.5 concentration data are visualised as semi‑transparent volumetric blobs attached to building rooftops. Colour gradients and opacity are derived from the ontology‑based mapping rules, and the same rule set is reused for other pollutants with minimal adjustment.
  • Historical/Legal Text Annotation – Building façades are annotated with time‑stamped textual labels. Font size and colour reflect the importance of the document, and the labels are generated from RDF‑encoded text metadata.
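The wind-flow case study's speed-to-length and direction-to-orientation encodings fit the same rule pattern. The sketch below assumes a hypothetical `ex:WindVector` vocabulary and an arbitrary scale factor.

```sparql
# Illustrative wind-flow rule: arrow length is proportional to wind
# speed; orientation is copied from the vector's direction.
PREFIX ex:  <http://example.org/data#>
PREFIX vis: <http://example.org/vis#>

CONSTRUCT {
  ?arrow a vis:Arrow ;
         vis:length      ?len ;
         vis:orientation ?dir ;
         vis:position    ?pos .
}
WHERE {
  ?v a ex:WindVector ;
     ex:speed     ?s ;    # m/s
     ex:direction ?dir ;  # degrees from north
     ex:position  ?pos .
  BIND(?s * 2.0 AS ?len)  # scale factor chosen for visibility, assumed
  BIND(IRI(CONCAT(STR(?v), "_arrow")) AS ?arrow)
}
```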

User feedback from twelve domain experts indicated that the model‑based visualisations were more intuitive, conveyed information more efficiently, and maintained visual consistency across different data types. Quantitatively, the prototype generator produced complete 3D scenes in a matter of minutes, whereas traditional manual methods required several hours of tedious GIS editing.

Limitations and Future Work
The authors acknowledge two primary limitations. First, SPARQL CONSTRUCT operations on very large RDF graphs (hundreds of millions of triples) can become computationally expensive; performance optimisation through streaming SPARQL engines or graph partitioning is required for city‑scale deployments. Second, highly detailed simulation outputs (e.g., high‑resolution CFD fields) generate an overwhelming number of primitives, necessitating level‑of‑detail (LOD) strategies or pre‑sampling before transformation.

Future research directions include:

  • Streaming Transformations – Developing a real‑time pipeline that ingests sensor streams, continuously updates RDF graphs, and re‑applies visualisation rules to produce dynamic 3D scenes.
  • Extended Ontology – Incorporating interaction, animation, and user‑customisable styling into the visualisation ontology, enabling richer, immersive experiences.
  • Collaborative Authoring – Building a web‑based interface where multiple stakeholders can co‑author SPARQL visualisation rules, preview results instantly, and version‑control the semantic visualisation specifications.

Conclusion
By formalising visualisation techniques as a series of declarative graph transformations, the paper delivers a powerful, reusable framework for rapid prototyping of 3D city model visualisations. The approach bridges the gap between rich semantic data and immersive 3D rendering, allowing designers, planners, and analysts to experiment with diverse visualisation strategies without deep programming expertise. Empirical results confirm substantial time savings, improved user perception, and the feasibility of extending the method to additional domains. With further optimisation and integration of interactive capabilities, the model‑based paradigm has the potential to become a cornerstone of next‑generation urban visual analytics platforms.

