Two Dimensions for Organizing Immersive Analytics: Toward a Taxonomy for Facet and Position
As immersive analytics continues to grow as a discipline, so too must its underlying methodological support. Taxonomies play an important role in information visualization and human-computer interaction: by organizing the techniques used in a particular domain, they enable researchers to describe their work, discover existing methods, and identify gaps in the literature. Existing taxonomies in related fields, however, do not capture the unique paradigms employed in immersive analytics. We conceptualize a taxonomy that organizes immersive analytics along two dimensions: spatial position and visual presentation. Each intersection of these dimensions represents a distinct design paradigm which, when thoroughly explored, can aid the design and research of new immersive analytics applications.
💡 Research Summary
The paper addresses a critical gap in the emerging field of Immersive Analytics (IA) by proposing a systematic taxonomy that captures the unique design space of immersive visual‑analytic systems. Traditional taxonomies in information visualization and human‑computer interaction primarily organize techniques along dimensions such as data type, visual encoding, or interaction modality, assuming a flat 2‑D screen as the primary display. IA, however, leverages virtual, augmented, and mixed reality environments where data can be positioned anywhere in a three‑dimensional physical space and can be perceived through a variety of visual and multimodal cues. To reflect these characteristics, the authors introduce two orthogonal axes: Spatial Position and Visual Presentation.
Spatial Position describes how analytic content is anchored relative to the user’s physical location. Three categories are defined, illustrated in the code sketch that follows the list:
- Fixed – content remains at a constant physical location regardless of user movement (e.g., a floating dashboard).
- Dynamic – content follows the user’s gaze or head pose, updating its location in real time (e.g., a head‑tracked data panel).
- Embedded – data is fused with real‑world objects or surfaces, allowing direct manipulation of the physical artifact (e.g., a holographic overlay on a physical model).
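To make these anchoring modes concrete, the following minimal TypeScript sketch shows how a renderer might resolve content placement each frame. Every type and name here is an illustrative invention for this summary, not an API from the paper or any real XR framework.

```typescript
// Minimal stand-in types; a real engine would use its own math library.
type Vec3 = { x: number; y: number; z: number };
type Pose = { position: Vec3; forward: Vec3 };

// The three spatial-position categories as a tagged union.
type SpatialPosition =
  | { kind: "fixed"; worldPosition: Vec3 }              // constant world location
  | { kind: "dynamic"; headOffset: number }             // follows the head pose
  | { kind: "embedded"; anchorObjectPose: () => Pose }; // fused with a tracked object

// Resolve where the content should be rendered on this frame.
function resolvePosition(mode: SpatialPosition, head: Pose): Vec3 {
  switch (mode.kind) {
    case "fixed":
      // Ignores user movement entirely, like a wall-mounted dashboard.
      return mode.worldPosition;
    case "dynamic":
      // Re-anchor at a fixed distance along the user's current view direction.
      return {
        x: head.position.x + head.forward.x * mode.headOffset,
        y: head.position.y + head.forward.y * mode.headOffset,
        z: head.position.z + head.forward.z * mode.headOffset,
      };
    case "embedded":
      // Follow the tracked physical artifact the data is overlaid on.
      return mode.anchorObjectPose().position;
  }
}
```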
Visual Presentation captures the modality and dimensionality of the visual encoding. Four families are distinguished, again with a sketch after the list:
- Facet – traditional 2‑D panels or small multiples displayed within the immersive environment.
- 3‑D Model – explicit geometric representations (meshes, point clouds).
- Volume Rendering – dense scalar fields visualized through opacity and color transfer functions, common in medical or scientific volumetric data.
- Multimodal – combinations of visual cues with auditory, haptic, or even olfactory feedback, exploiting the full sensorium of immersive hardware.
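The four families could likewise be modeled as a tagged union in which each variant carries the parameters that distinguish it. This is a hypothetical encoding chosen for this summary, not a structure the authors propose:

```typescript
// Illustrative encoding of the four visual-presentation families.
type VisualPresentation =
  | { kind: "facet"; panelCount: number }   // 2-D panels or small multiples
  | { kind: "model"; meshUri: string }      // explicit geometry (mesh, point cloud)
  | {
      kind: "volume";                       // dense scalar field
      opacity: (value: number) => number;   // opacity transfer function
      color: (value: number) => [number, number, number]; // color transfer function
    }
  | {
      kind: "multimodal";                   // visual cues plus other senses
      channels: Array<"visual" | "audio" | "haptic" | "olfactory">;
    };
```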
By intersecting the three spatial categories with the four visual families, the taxonomy yields twelve conceptual design paradigms. The authors focus on six representative intersections, providing literature examples, prototype implementations, and a critical discussion of strengths and weaknesses for each. For instance, the Fixed‑Facet paradigm mirrors conventional dashboards: it is easy to learn and supports collaborative viewing, but offers limited depth perception and immersion. Conversely, Dynamic‑Volume enables analysts to walk through large scientific datasets, gaining superior spatial insight, yet it imposes higher cognitive load, requires precise tracking, and can cause motion sickness. Embedded‑Multimodal integrates data directly onto physical artifacts and augments them with sound or haptic cues, delivering highly intuitive interaction at the cost of complex hardware integration and potentially steep development effort.
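Because the two axes are orthogonal, the twelve paradigms are simply the Cartesian product of the category sets. A small sketch of the enumeration (the labels are this summary's shorthand):

```typescript
const spatialPositions = ["Fixed", "Dynamic", "Embedded"] as const;
const visualPresentations = ["Facet", "3D Model", "Volume Rendering", "Multimodal"] as const;

type Paradigm = {
  spatial: (typeof spatialPositions)[number];
  visual: (typeof visualPresentations)[number];
};

// Cross the two axes to enumerate all twelve design paradigms.
const paradigms: Paradigm[] = spatialPositions.flatMap((spatial) =>
  visualPresentations.map((visual) => ({ spatial, visual }))
);

console.log(paradigms.length); // 12, from Fixed-Facet to Embedded-Multimodal
```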
A systematic literature mapping reveals that the majority of existing IA work clusters in the Fixed‑Facet and Fixed‑3‑D Model cells, while Dynamic‑Volume and Embedded‑Multimodal remain under‑explored. This observation motivates a research agenda that prioritizes prototyping in the sparsely populated cells, rigorous user studies, and the development of evaluation metrics tailored to immersive contexts. The authors propose four key assessment dimensions: cognitive load, task efficiency, collaborative affordances, and hardware constraints. They outline an experimental framework that can quantify these factors across different paradigm combinations, facilitating evidence‑based design decisions.
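One way to operationalize the four assessment dimensions is as a per‑paradigm scorecard that supports side‑by‑side comparison. The sketch below is a minimal illustration of that idea; the type, field names, and scores are invented for this summary, not measurements or an API from the paper.

```typescript
// One scorecard per paradigm; scores are normalized to [0, 1], higher is
// better, so "load"-style dimensions are stored inverted.
type Assessment = {
  paradigm: string;
  cognitiveEase: number;       // inverse of cognitive load
  taskEfficiency: number;
  collaborativeAffordance: number;
  hardwareFeasibility: number; // inverse of hardware constraints
};

// Hypothetical illustration values, not results reported in the paper.
const fixedFacet: Assessment = {
  paradigm: "Fixed-Facet",
  cognitiveEase: 0.9,
  taskEfficiency: 0.7,
  collaborativeAffordance: 0.8,
  hardwareFeasibility: 0.9,
};

// Unweighted mean; a real study would weight dimensions by task context.
function overallScore(a: Assessment): number {
  return (a.cognitiveEase + a.taskEfficiency +
          a.collaborativeAffordance + a.hardwareFeasibility) / 4;
}

console.log(overallScore(fixedFacet)); // 0.825
```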
In the discussion, the paper emphasizes the taxonomy’s role as a decision‑support tool during the early design phase. By explicitly selecting a spatial‑visual combination that aligns with the target analytic tasks, user expertise, and deployment environment, designers can reduce unnecessary complexity and focus development resources on the most promising interaction patterns. Moreover, the taxonomy offers a common vocabulary for the IA community, enabling clearer communication, systematic literature reviews, and identification of “research blind spots.” The authors also argue that the framework is extensible: as new display technologies (e.g., eye‑tracked foveated rendering, ultra‑high‑resolution AR glasses) and interaction modalities (e.g., mid‑air haptics, speech‑driven commands) emerge, they can be incorporated as additional sub‑categories without disrupting the overall structure.
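The claimed extensibility is easy to picture in type terms: if each axis is an open union rather than a closed enumeration, a new display or interaction technology becomes an additional sub‑category that widens the union without breaking code written against the original categories. A hypothetical sketch:

```typescript
// The four families from the paper as a closed core...
type CoreVisual = "Facet" | "3D Model" | "Volume Rendering" | "Multimodal";

// ...extended with hypothetical new sub-categories (these examples are
// this summary's own, not additions proposed by the authors).
type ExtendedVisual = CoreVisual | "Foveated Rendering" | "Mid-Air Haptics";

// Code written against CoreVisual still type-checks: every CoreVisual
// value is also a valid ExtendedVisual, so widening is safe.
const legacy: CoreVisual = "Facet";
const extended: ExtendedVisual = legacy;
```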
The conclusion reiterates that a well‑defined, two‑dimensional taxonomy provides both theoretical clarity and practical guidance for the rapidly evolving field of Immersive Analytics. It helps researchers articulate contributions, educators structure curricula, and tool developers create modular platforms that can be reconfigured across the spatial‑visual spectrum. Ultimately, the taxonomy aims to accelerate innovation by making the design space of immersive analytics transparent, navigable, and systematically improvable.