Experimental Evaluation of ROS-Causal in Real-World Human-Robot Spatial Interaction Scenarios

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original ArXiv source.

Deploying robots in human-shared environments requires a deep understanding of how nearby agents and objects interact. Employing causal inference to model cause-and-effect relationships facilitates the prediction of human behaviours and enables the anticipation of robot interventions. However, a significant challenge arises because existing causal discovery methods have no implementation within the ROS ecosystem, the de facto standard framework in robotics, hindering their effective use on real robots. To bridge this gap, in our previous work we proposed ROS-Causal, a ROS-based framework designed for onboard data collection and causal discovery in human-robot spatial interactions. In this work, we present an experimental evaluation of ROS-Causal, both in simulation and on a new dataset of human-robot spatial interactions recorded in a lab scenario, to assess its performance and effectiveness. Our analysis demonstrates the efficacy of this approach, showing how causal models can be extracted directly onboard by robots during data collection. The online causal models generated in simulation are consistent with those obtained from the lab experiments. These findings can help researchers enhance the performance of robotic systems in shared environments: first by studying the causal relations between variables in simulation, without involving real people, and then by facilitating actual robot deployment in real human environments. ROS-Causal: https://lcastri.github.io/roscausal


💡 Research Summary

The paper presents ROS‑Causal, a ROS‑based framework that integrates data acquisition, preprocessing, and causal discovery directly into the robot’s software stack, enabling on‑board generation of causal models during human‑robot spatial interactions (HRSI). The authors first motivate the need for causal reasoning in shared workspaces, noting that existing causal discovery tools are largely offline and lack ROS integration, which hampers their deployment on real robots.

ROS‑Causal consists of four main ROS nodes: roscausal_robot and roscausal_human collect robot and human state topics (position, velocity, target, etc.), merge them into a single topic, and forward the combined stream to roscausal_data. The data node buffers the merged stream until a predefined time‑series length (e.g., 150 s at 10 Hz) is reached, then writes the batch to CSV files. An asynchronous roscausal_discovery node reads these CSV batches and runs a causal discovery algorithm—currently PCMCI or its extension F‑PCMCI—publishing the resulting causal graph on a dedicated topic. The pipeline allows simultaneous data collection for the next batch while the current batch is being analyzed, supporting near‑real‑time model updates.
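The buffer-until-full, then write-and-reset behaviour attributed to roscausal_data above can be sketched in a few lines. This is a minimal, ROS-free Python sketch, not the actual node implementation; the column names and the CSV layout are assumptions, while the batch length (150 s) and rate (10 Hz) come from the example quoted above.

```python
import csv
import io

class BatchBuffer:
    """Accumulate merged human/robot samples and emit CSV batches,
    mimicking the buffer-then-write behaviour described for roscausal_data."""

    def __init__(self, duration_s=150.0, rate_hz=10.0):
        self.batch_size = int(duration_s * rate_hz)  # e.g. 1500 samples
        self.fields = ["t", "v", "dg", "r"]          # assumed column names
        self.samples = []

    def add(self, sample):
        """Add one merged sample; return a CSV string when a batch is full."""
        self.samples.append(sample)
        if len(self.samples) < self.batch_size:
            return None
        out = io.StringIO()
        writer = csv.DictWriter(out, fieldnames=self.fields)
        writer.writeheader()
        writer.writerows(self.samples)
        self.samples = []  # immediately start collecting the next batch
        return out.getvalue()

buf = BatchBuffer(duration_s=150.0, rate_hz=10.0)
batches = 0
for i in range(3000):  # 300 s of samples at 10 Hz -> two full batches
    row = {"t": i / 10.0, "v": 0.0, "dg": 0.0, "r": 0.0}
    if buf.add(row) is not None:
        batches += 1
print(batches)  # -> 2
```

Returning the finished batch while clearing the buffer is what lets collection of the next batch continue while the previous one is handed to the discovery node.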

The evaluation proceeds in two stages. In simulation, the authors use ROS‑Causal HRISim, a Gazebo‑based environment that simulates a TIAGo robot and a tele‑operated human avatar. They extract three variables: human velocity (v), distance to goal (dg), and collision risk (r). Using a 1‑step lag, Gaussian‑process‑based conditional independence test (GPDC) and a significance level of α = 0.05, F‑PCMCI correctly recovers the expected causal structure: v → dg, dg → v ← r, and v → r. This confirms that the framework can reconstruct known causal graphs from simulated sensor data.
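The three variables can be derived from raw trajectories with simple geometry. Below is a minimal sketch assuming 2-D positions sampled at 10 Hz; the summary does not spell out how ROS-Causal defines collision risk, so the inverse-distance proxy here is purely illustrative, not the paper's definition.

```python
import math

DT = 0.1  # sampling period at 10 Hz

def velocity(p_prev, p_curr, dt=DT):
    """Speed v estimated by finite differences between consecutive positions."""
    return math.hypot(p_curr[0] - p_prev[0], p_curr[1] - p_prev[1]) / dt

def dist_to_goal(p, goal):
    """Euclidean distance dg between the human and the current goal."""
    return math.hypot(goal[0] - p[0], goal[1] - p[1])

def collision_risk(p_human, p_robot, eps=1e-6):
    """Illustrative risk proxy r: inverse of the human-robot distance
    (an assumption for this sketch, not the paper's definition)."""
    return 1.0 / (math.hypot(p_robot[0] - p_human[0],
                             p_robot[1] - p_human[1]) + eps)

v = velocity((0.0, 0.0), (0.05, 0.0))          # 0.05 m in 0.1 s -> 0.5 m/s
dg = dist_to_goal((0.05, 0.0), (3.05, 4.0))    # 3-4-5 triangle -> 5.0 m
r = collision_risk((0.05, 0.0), (2.05, 0.0))   # 2 m apart -> ~0.5
```

A time series of such (v, dg, r) triples, one per 0.1 s sample, is exactly the kind of input a lagged conditional-independence test such as GPDC operates on.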

For real‑world validation, the authors deploy a physical TIAGo robot equipped with a Velodyne VLP‑16 LiDAR in a lab corridor. Fifteen participants each perform a series of trials in which they must reach four sequential goal positions while avoiding the robot. Human and robot trajectories, velocities, and derived variables (dg, r) are streamed over ROS topics, processed by ROS‑Causal, and fed to the F‑PCMCI module. The resulting causal graph matches the simulated one, demonstrating consistency between simulation and physical experiments.

Quantitative analysis shows that a time series of roughly 60 seconds (≈ 600 samples at 10 Hz) is sufficient to recover the correct causal links with high confidence, and the average computation time per batch is about 2.3 seconds on a standard laptop, indicating feasibility for near‑real‑time operation. The asynchronous design permits continuous data collection while previous batches are being analyzed, enabling the robot to update its causal model during prolonged deployments.
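The collect-while-analysing pattern described above can be sketched with a background worker thread. This is a minimal Python sketch of the asynchronous design, not the actual node code; the ~2.3 s causal discovery step is stood in by a placeholder function that just counts samples.

```python
import queue
import threading

def analyse(batch):
    """Placeholder for the causal discovery step (~2.3 s per batch in the
    reported measurements); here it simply counts the samples."""
    return len(batch)

batch_queue = queue.Queue()
results = []

def discovery_worker():
    # Consume finished batches while new data keeps arriving.
    while True:
        batch = batch_queue.get()
        if batch is None:        # sentinel: shut down cleanly
            break
        results.append(analyse(batch))

worker = threading.Thread(target=discovery_worker)
worker.start()

# Producer side: keep collecting and hand over each full batch.
for _ in range(3):
    batch = [0.0] * 600          # ~60 s of samples at 10 Hz
    batch_queue.put(batch)       # analysis proceeds in the background

batch_queue.put(None)
worker.join()
print(results)  # -> [600, 600, 600]
```

Because analysis (~2.3 s) is far shorter than batch collection (~60 s), a single worker consuming a FIFO queue never falls behind the producer, which is what makes the near-real-time model updates feasible.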

The paper contributes (1) the first ROS‑native, runtime causal discovery framework, (2) an extensive experimental validation in both simulation and real environments with 15 human subjects, and (3) a publicly released dataset of human‑goal and human‑robot trajectories captured from a 3‑D LiDAR perspective. Limitations include the current restriction to PCMCI/F‑PCMCI (excluding newer methods such as DYNOTEARS, VARLiNGAM, etc.) and the focus on a relatively simple indoor scenario with a single robot and single human.

Future work is outlined as extending ROS‑Causal with a plug‑in architecture for additional causal discovery algorithms, leveraging GPU acceleration for faster on‑board inference, and scaling the approach to multi‑robot, multi‑human, and more dynamic environments. Such extensions aim to enhance robot decision‑making, safety, and adaptability in complex shared spaces.

