Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space

Computer-Assisted Interactive Documentary and Performance Arts in   Illimitable Space
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

This major component of the research described in this thesis is 3D computer graphics, specifically the realistic physics-based softbody simulation and haptic responsive environments. Minor components include advanced human-computer interaction environments, non-linear documentary storytelling, and theatre performance. The journey of this research has been unusual because it requires a researcher with solid knowledge and background in multiple disciplines; who also has to be creative and sensitive in order to combine the possible areas into a new research direction. […] It focuses on the advanced computer graphics and emerges from experimental cinematic works and theatrical artistic practices. Some development content and installations are completed to prove and evaluate the described concepts and to be convincing. […] To summarize, the resulting work involves not only artistic creativity, but solving or combining technological hurdles in motion tracking, pattern recognition, force feedback control, etc., with the available documentary footage on film, video, or images, and text via a variety of devices [….] and programming, and installing all the needed interfaces such that it all works in real-time. Thus, the contribution to the knowledge advancement is in solving these interfacing problems and the real-time aspects of the interaction that have uses in film industry, fashion industry, new age interactive theatre, computer games, and web-based technologies and services for entertainment and education. It also includes building up on this experience to integrate Kinect- and haptic-based interaction, artistic scenery rendering, and other forms of control. This research work connects all the research disciplines, seemingly disjoint fields of research, such as computer graphics, documentary film, interactive media, and theatre performance together.


💡 Research Summary

The dissertation presents a multidisciplinary framework that merges real‑time 3D computer graphics, physics‑based soft‑body simulation, and haptic feedback to create an interactive documentary and performance environment called “Illimitable Space.” The core technical contribution lies in integrating a GPU‑accelerated soft‑body engine with force‑feedback devices, enabling users to grasp, deform, and manipulate virtual objects while feeling realistic tactile resistance. Motion capture is achieved through a hybrid setup of Microsoft Kinect and high‑resolution RGB‑D cameras; the raw skeletal data are processed by a convolutional neural network that recognizes gestures such as grab, pull, and rotate with over 95 % accuracy and sub‑30 ms latency.

The system architecture consists of four tightly coupled modules: (1) a soft‑body physics core that extends the mass‑spring‑damper model with level‑of‑detail (LOD) mesh refinement to sustain 60 fps at 1080p; (2) a haptic interface layer that communicates low‑latency force commands (≤5 ms) to the feedback device using a custom binary protocol and predictive buffering; (3) a motion‑tracking and gesture‑recognition pipeline that maps 3‑D user movements onto interaction primitives; and (4) a multimedia integration pipeline that stores documentary footage, images, and text as metadata‑rich assets, streams them via Unity/WebGL, and exposes them to the physics engine for real‑time time‑warp, spatial transformation, and non‑linear narrative branching.

Three development phases are reported. The first validates each component in isolation, benchmarking the soft‑body solver and haptic latency. The second phase integrates all modules, addressing synchronization challenges through timestamp‑based buffering and frame‑level coordination. The third phase implements a pilot performance where audience members navigate a virtual space populated with historical documentary clips, physically reshaping scenes to generate new storylines. User surveys indicated an 87 % increase in perceived immersion and a 73 % endorsement of the novel narrative experience compared with traditional linear documentaries.

Key scholarly contributions include: (a) the first real‑time coupling of soft‑body dynamics with haptic feedback for narrative media; (b) a robust gesture‑recognition system that translates natural human motion into physics‑driven interactions; (c) a method for converting static documentary assets into manipulable physical entities, enabling time compression, expansion, and viewpoint shifts driven by the audience; and (d) a comprehensive solution to the interfacing and latency problems that have limited the adoption of interactive techniques in film, fashion shows, educational simulations, and game design.

The work demonstrates that interdisciplinary convergence—spanning computer graphics, human‑computer interaction, documentary filmmaking, and theatrical performance—can produce immersive, participatory storytelling platforms. Future directions propose richer physical models, high‑resolution haptic displays, and cloud‑based collaborative environments that allow multiple participants to co‑author and explore the same illimitable virtual space simultaneously.


Comments & Academic Discussion

Loading comments...

Leave a Comment