Design and implementation of a user interface for a multi-device spatially-aware mobile system
The aim of this thesis was the design and development of an interactive system enhancing collaborative sensemaking. The system design was based on related work research and preliminary user study. The system is based on multiple spatially-aware mobile devices. The spatial awareness is established with a motion tracking system. The information about position of tablets is received by a server. The server communicates with the devices and manages the content of each device. The implemented system supports managing the elements of information across the devices basing on their relative spatial arrangement. The evaluation of the system was done by a user study.
💡 Research Summary
The thesis presents the design, implementation, and evaluation of a spatially‑aware multi‑device mobile system that supports collaborative sense‑making. The authors begin by observing the proliferation of smartphones and tablets and the growing tendency for users to operate several devices simultaneously. While existing applications treat each device in isolation, the authors argue that leveraging the physical arrangement of devices can enrich interaction and improve group work.
A comprehensive literature review covers four research areas: data exploration, multi‑device systems, multiple‑surface environments, and spatially‑aware computing. The review highlights prior work on cross‑device interaction frameworks (e.g., Conductor), tangible tabletop interfaces, and systems that use spatial information as an input modality (e.g., MochaTop, AD‑binning). These studies motivate the central research question: can real‑time tracking of device positions be used to manage and transfer information across devices in a way that supports collaborative analysis?
The system architecture consists of three layers. The motion‑tracking subsystem employs external Oqus cameras and passive markers to capture six‑degree‑of‑freedom (6‑DoF) pose data for each tablet. The tracking data are streamed through Qualisys Track Manager to a QTM Real‑Time Server, which forwards the information to a central application server. The server, written in Java, maintains a persistent Interaction Model that records which content objects (texts, images, graphs) reside on which device and how they are spatially related. It also handles device registration, database persistence (MySQL), and command generation.
The mobile client runs on Android tablets. It maintains a persistent TCP connection to the server, receives pose updates, and updates its user interface accordingly. The UI is rendered with OpenGL ES, allowing content objects to be automatically positioned according to the tablet’s current location and orientation. Interaction techniques include:
- Position‑Based Transfer – dragging a content object toward another tablet’s physical location triggers a “pass” command; the server updates the Interaction Model and the target tablet renders the object.
- Relation Visualization – lines or arcs are drawn between related objects on different tablets, providing a visual cue of the underlying data graph.
- Spatial Gestures – simultaneous gestures on two devices (e.g., a two‑hand pinch spanning two tablets) are recognized as a synchronous operation, enabling bulk actions such as grouping or filtering.
Implementation details emphasize low latency (sub‑50 ms round‑trip) and high tracking accuracy (< 2 cm error). The authors also discuss a custom binary protocol that distinguishes “DeviceMessage” (pose, status) from “ServerCommand” (update UI, transfer object).
For evaluation, a within‑subjects user study with 12 university participants was conducted. Each participant performed a 30‑minute collaborative data‑analysis task under two conditions: (a) the spatially‑aware system (experimental condition) and (b) a baseline setup where each tablet operated independently with manually synchronized content. Objective metrics (task completion time, error count) and subjective measures (NASA‑TLX cognitive load, SUS usability, post‑study questionnaire) were collected. Results showed a statistically significant improvement for the experimental condition: average task time reduced by 18 %, errors decreased by 22 %, perceived workload dropped from 58 to 45 TLX points, and SUS scores increased from 71 to 84. Qualitative feedback highlighted the intuitiveness of moving information by physically moving devices and the usefulness of visualizing cross‑device relations.
The thesis contributes three main advances: (1) a concrete interaction model that maps real‑world spatial relationships to UI operations across multiple mobile devices; (2) a robust client‑server implementation that integrates high‑frequency motion capture with responsive Android interfaces; and (3) empirical evidence that spatial awareness can materially improve collaborative sense‑making. Limitations include reliance on an external camera‑based tracking system (which limits deployment to controlled indoor environments) and a relatively small, homogeneous participant pool.
Future work proposes replacing the camera system with infrastructure‑agnostic techniques such as Wi‑Fi Round‑Trip Time or BLE beacons, integrating machine‑learning‑based hand‑gesture recognition to enrich the interaction vocabulary, and conducting longitudinal field studies in real workplaces to assess long‑term adoption and scalability.
In summary, the paper demonstrates that embedding spatial awareness into multi‑device mobile ecosystems can create more natural, efficient, and cognitively light collaborative environments, opening new avenues for research in distributed HCI and context‑aware interaction design.
Comments & Academic Discussion
Loading comments...
Leave a Comment