Tollan-Xicocotitlan: A Reconstructed City through Augmented Reality
This project presents the analysis, design, implementation, and results of the reconstruction of Tollan-Xicocotitlan through augmented reality. The application presents information about the Toltec culture together with an overview of the main buildings of the city of Tollan-Xicocotitlan, supported by three-dimensional models based on the augmented reality technique, showing the user a virtual representation of the buildings of Tollan.
Research Summary
The paper presents a comprehensive project that digitally reconstructs the ancient Toltec capital of Tollan-Xicocotitlan and delivers the reconstruction through an augmented reality (AR) platform. The authors begin by outlining the growing demand for immersive cultural-heritage experiences and reviewing prior work in 3-D archaeological modeling, virtual reality tours, and AR-based heritage applications. They argue that AR uniquely allows users to view virtual reconstructions superimposed on the actual site, thereby preserving spatial context while providing rich visual detail.
Data acquisition is the first technical pillar. The team collaborated with field archaeologists to collect high-resolution terrestrial laser scans, aerial LiDAR, and photogrammetric imagery. Raw point clouds were cleaned, georeferenced using differential GPS, and merged via iterative closest point (ICP) registration. Gaps in the data, common in archaeological sites due to occlusion or erosion, were filled using statistical (Poisson) surface reconstruction and texture synthesis guided by historical drawings. The resulting meshes contain on the order of 10 million polygons and are textured with 4K-resolution images, achieving a visual fidelity that captures both the macro-scale city layout and micro-scale architectural ornamentation.
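The ICP registration step mentioned above can be illustrated with a minimal point-to-point sketch in NumPy. This is a toy version (brute-force nearest neighbours, no outlier rejection or multi-scan graph optimization), not the production pipeline the authors would have used on real point clouds:

```python
import numpy as np

def best_fit_transform(A, B):
    """Kabsch/SVD: rigid (R, t) minimizing sum ||R a_i + t - b_i||^2."""
    cA, cB = A.mean(0), B.mean(0)
    H = (A - cA).T @ (B - cB)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(src, dst, iters=20):
    """Point-to-point ICP with brute-force nearest-neighbour matching."""
    cur = src.copy()
    for _ in range(iters):
        # match each source point to its closest destination point
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        matched = dst[d.argmin(1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
    # recover the total src -> cur transform in one step
    return best_fit_transform(src, cur)
```

Real pipelines add k-d-tree acceleration, distance-based outlier rejection, and coarse pre-alignment, but the alternation above (match, solve, apply) is the core of ICP.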
The AR implementation uses Unity as the core engine and integrates ARCore (Android) and ARKit (iOS) for mobile devices, as well as the OpenXR runtime for head-mounted displays (e.g., Oculus Quest). A hybrid positioning system combines GPS, inertial measurement unit (IMU) data, and visual-inertial odometry (VIO) to maintain sub-centimeter alignment between the virtual models and the physical environment. Real-time coordinate transformation matrices automatically adjust scale, rotation, and translation, ensuring that the reconstructed buildings appear anchored to their true archaeological footprints.
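The scale/rotation/translation adjustment described here amounts to composing a 4x4 homogeneous transform per anchored model. A simplified NumPy sketch (yaw about a single vertical axis only; the function names and axis convention are ours, not the paper's):

```python
import numpy as np

def trs_matrix(translation, yaw_deg, scale):
    """Compose a 4x4 matrix: scale, then yaw about the vertical (z) axis,
    then translate. Engines like Unity build the same kind of matrix
    internally from a transform's scale/rotation/position."""
    th = np.radians(yaw_deg)
    c, s = np.cos(th), np.sin(th)
    R = np.array([[c, -s, 0],
                  [s,  c, 0],
                  [0,  0, 1]])
    M = np.eye(4)
    M[:3, :3] = R * scale        # uniform scale folded into rotation
    M[:3, 3] = translation
    return M

def apply(M, p):
    """Apply a 4x4 transform to a 3-D point via homogeneous coordinates."""
    v = M @ np.array([*p, 1.0])
    return v[:3]
```

For example, `apply(trs_matrix((1, 2, 0), 90, 2.0), (1, 0, 0))` scales the point to (2, 0, 0), rotates it 90 degrees to (0, 2, 0), and translates it to (1, 4, 0).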
User interaction is designed around a touch-and-gesture paradigm for smartphones and a controller-based scheme for HMDs. Users can select individual structures, toggle between exterior, interior, and "cultural-layer" visualizations, adjust model opacity, and access contextual audio-narrated explanations. The interface also supports a "guided tour" mode that sequentially highlights key architectural features (e.g., the Temple of the Feathered Serpent) while presenting historically verified information.
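The paper does not specify the tour API, but a "guided tour" of this kind can be modelled as a simple sequence of narrated stops. A hypothetical sketch (all names are ours):

```python
from dataclasses import dataclass

@dataclass
class TourStop:
    structure: str      # which building to highlight
    narration: str      # contextual explanation to play

class GuidedTour:
    """Steps through tour stops in order, clamping at the last stop."""
    def __init__(self, stops):
        self.stops = list(stops)
        self.i = 0

    def current(self):
        return self.stops[self.i]

    def advance(self):
        self.i = min(self.i + 1, len(self.stops) - 1)
        return self.current()
```

In a real app, `advance()` would also trigger the highlight overlay and audio playback for the new stop.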
Performance optimization is addressed through level-of-detail (LOD) meshes, mesh compression (Draco), and mip-mapped textures, allowing the application to sustain 60 fps on mid-range smartphones and 72 fps on standalone headsets. To overcome device storage constraints, the authors implemented cloud-based streaming via a CDN with edge caching; models are downloaded on demand, and a predictive pre-fetch algorithm reduces perceived latency.
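Distance-based LOD selection of the kind described can be sketched as follows; the distance thresholds here are illustrative assumptions, not values from the paper:

```python
def select_lod(distance_m, thresholds=(10.0, 30.0, 80.0)):
    """Pick a LOD index from camera-to-building distance in meters.
    0 = full-detail mesh; each threshold crossed drops one level,
    down to len(thresholds) = lowest-detail mesh."""
    for lod, limit in enumerate(thresholds):
        if distance_m < limit:
            return lod
    return len(thresholds)
```

Engines typically add hysteresis around each threshold so a building hovering near a boundary does not flicker between meshes; that refinement is omitted here for brevity.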
The evaluation consists of two parts. Technical metrics show an average positional error of 3 cm, memory consumption below 1.2 GB, and stable frame rates across platforms. A user study with 30 undergraduate archaeology students measured learning outcomes and usability. Post-test scores increased by an average of 27% compared with a traditional 2-D slideshow, and the System Usability Scale (SUS) yielded a mean score of 85/100, indicating high satisfaction. Qualitative feedback highlighted the immersive benefit of seeing reconstructed buildings in situ, while also noting occasional tracking drift under harsh sunlight and a desire for more detailed interior navigation.
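For reference, the 85/100 figure follows the standard SUS scoring rule: each odd-numbered item contributes its score minus 1, each even-numbered item contributes 5 minus its score, and the sum is multiplied by 2.5 to map onto a 0-100 scale:

```python
def sus_score(responses):
    """Score one 10-item SUS questionnaire (Likert responses 1-5).
    Odd items (1st, 3rd, ...) are positively worded: contribute r - 1.
    Even items are negatively worded: contribute 5 - r.
    The total (0-40) is scaled by 2.5 to the 0-100 range."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5
```

A study-level SUS result like the reported 85 is simply the mean of this per-participant score across all respondents.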
The discussion acknowledges limitations such as incomplete data coverage, environmental lighting challenges, and the need for culturally sensitive representation of sacred spaces. Proposed mitigations include AI-driven automatic gap filling, multi-sensor fusion for robust outdoor tracking, and ongoing collaboration with descendant communities to validate interpretive content.
In conclusion, the project demonstrates that a pipeline combining rigorous archaeological data collection, state-of-the-art 3-D reconstruction, and cross-platform AR delivery can create a compelling educational tool that both preserves and disseminates Toltec heritage. Future work will explore multi-user collaborative AR experiences, integration of procedural generation for speculative reconstruction, and gamified learning modules to further engage diverse audiences.