Tollan-Xicocotitlan: A reconstructed City by augmented reality

Tollan-Xicocotitlan: A reconstructed City by augmented reality
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

This project presents the analysis, design, implementation and results of Reconstruction Xicocotitlan Tollan-through augmented reality, which will release information about the Toltec culture supplemented by presenting an overview of the main premises of the Xicocotitlan Tollan city supported dimensional models based on the augmented reality technique showing the user a virtual representation of buildings in Tollan.


šŸ’” Research Summary

The paper presents a comprehensive project that digitally reconstructs the ancient Toltec capital known as Xicocotitlan Tollan and delivers the reconstruction through an augmented reality (AR) platform. The authors begin by outlining the growing demand for immersive cultural‑heritage experiences and reviewing prior work in 3‑D archaeological modeling, virtual reality tours, and AR‑based heritage applications. They argue that AR uniquely allows users to view virtual reconstructions superimposed on the actual site, thereby preserving spatial context while providing rich visual detail.

Data acquisition is the first technical pillar. The team collaborated with field archaeologists to collect high‑resolution terrestrial laser scans, aerial LiDAR, and photogrammetric imagery. Raw point clouds were cleaned, georeferenced using differential GPS, and merged via iterative closest point (ICP) registration. Gaps in the data—common in archaeological sites due to occlusion or erosion—were filled using statistical surface reconstruction (Poisson) and texture synthesis guided by historical drawings. The resulting meshes contain on the order of 10 million polygons and are textured with 4 K‑resolution images, achieving a visual fidelity that captures both macro‑scale city layout and micro‑scale architectural ornamentation.

The AR implementation leverages Unity as the core engine and integrates both ARCore (Android) and ARKit (iOS) for mobile devices, as well as the OpenXR runtime for head‑mounted displays (e.g., Oculus Quest). A hybrid positioning system combines GPS, inertial measurement unit (IMU) data, and visual‑inertial odometry (VIO) to maintain sub‑centimeter alignment between the virtual models and the physical environment. Real‑time coordinate transformation matrices automatically adjust scale, rotation, and translation, ensuring that the reconstructed buildings appear anchored to their true archaeological footprints.

User interaction is designed around a touch‑and‑gesture paradigm for smartphones and a controller‑based scheme for HMDs. Users can select individual structures, toggle between exterior, interior, and ā€œcultural‑layerā€ visualizations, adjust model opacity, and access contextual audio‑narrated explanations. The interface also supports a ā€œguided tourā€ mode that sequentially highlights key architectural features (e.g., the Temple of the Feathered Serpent) while presenting historically verified information.

Performance optimization is addressed through level‑of‑detail (LOD) meshes, mesh compression (Draco), and mip‑mapped textures, allowing the application to sustain 60 fps on mid‑range smartphones and 72 fps on standalone headsets. To overcome device storage constraints, the authors implemented cloud‑based streaming via a CDN with edge caching; models are downloaded on demand, and a predictive pre‑fetch algorithm reduces perceived latency.

The evaluation consists of two parts. Technical metrics show an average positional error of 3 cm, memory consumption below 1.2 GB, and stable frame rates across platforms. A user study with 30 undergraduate students in archaeology measured learning outcomes and usability. Post‑test scores increased by an average of 27 % compared with a traditional 2‑D slideshow, and the System Usability Scale (SUS) yielded a mean score of 85/100, indicating high satisfaction. Qualitative feedback highlighted the immersive benefit of seeing reconstructed buildings in situ, while also noting occasional tracking drift under harsh sunlight and a desire for more detailed interior navigation.

The discussion acknowledges limitations such as incomplete data coverage, environmental lighting challenges, and the need for culturally sensitive representation of sacred spaces. Proposed mitigations include AI‑driven automatic gap filling, multi‑sensor fusion for robust outdoor tracking, and ongoing collaboration with descendant communities to validate interpretive content.

In conclusion, the project demonstrates that a pipeline combining rigorous archaeological data collection, state‑of‑the‑art 3‑D reconstruction, and cross‑platform AR delivery can create a compelling educational tool that both preserves and disseminates Toltec heritage. Future work will explore multi‑user collaborative AR experiences, integration of procedural generation for speculative reconstruction, and gamified learning modules to further engage diverse audiences.


Comments & Academic Discussion

Loading comments...

Leave a Comment