We present a decentralized, agent-agnostic, and safety-aware control framework for human-robot collaboration based on Virtual Model Control (VMC). In our approach, both humans and robots are embedded in the same workspace, shaped by virtual components, where motion results from interaction with virtual springs and dampers rather than from explicit trajectory planning. A decentralized, force-based stall detector identifies deadlocks, which are resolved through negotiation; this reduces the probability of robots getting stuck in the block placement task from up to 61.2% to zero in our experiments. Thanks to the distributed implementation, the framework scales without structural changes: in experiments we demonstrate safe collaboration with up to two robots and two humans, and in simulation with up to four robots, maintaining an inter-agent separation of around 20 cm. Results show that the method shapes robot behavior intuitively through adjustable control parameters and achieves deadlock-free operation across team sizes in all tested scenarios.
Human-robot collaboration (HRC) aims to bridge the gap between human dexterity and robot precision. Compared to multi-robot collaboration, HRC in shared workspaces must account for (i) the unpredictability and complexity of human motion and (ii) the life-critical cost of failure and injury [1]. Hence, the HRC literature mainly focuses on ensuring human safety via rich sensing [2], human intent prediction [3], and optimization- and learning-based algorithms [4]–[6]. Nonetheless, these methods can be computationally expensive, model- or data-dependent, and tailored mainly to humans.
In shared workspaces, agents should be able to enter and leave freely, sharing workloads and roles. However, current approaches often treat humans and robots differently, applying unique rules for each, with robots typically adapting to human actions [7], [8]. In this paper, we challenge this distinction. Our key question is: can we move towards a lightweight, agent-agnostic, and scalable human-robot collaboration while guaranteeing human safety?
We start from the idea that robot and human agents can take shared or complementary collaborative roles. We search for a decentralized control architecture that can shape the collective behavior without relying on accurate models or large datasets. The architecture should have a low computational load and seamlessly integrate new agents without the need to redesign the control strategy. The goal is to derive a strategy that allows any number of agents to participate in the task, or withdraw from it, as required, while guaranteeing safe multi-agent collaboration among humans and robots.
Y. Zhang and O. Faris are supported by the Engineering and Physical Sciences Research Council [EP/S023917/1]. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101034337. The authors are with the Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom {yz892, of292, csh66, kfc35, fi224}@cam.ac.uk, f.forni@eng.cam.ac.uk
For this purpose, we propose a safety-aware control framework based on Virtual Model Control (VMC). In our approach, both humans and robots are embedded in a common workspace shaped with virtual components. Robot motion is regulated by virtual springs and dampers, without explicit trajectory planning. By adjusting the virtual component parameters, we can vary the interaction behavior from fast to slow and adjust how cautious the robot is around other agents. Human behavior is then influenced through their natural adaptation to the workspace. With nonlinear virtual springs, we bound the force applied by the robot upon potential physical interaction.
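To illustrate the force-bounding idea, the following is a minimal sketch of a saturating virtual spring combined with a linear damper. The tanh saturation is one possible choice of nonlinearity, and all gains and the force limit are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def virtual_spring_damper_force(x, x_rest, v, k=100.0, b=20.0, f_max=15.0):
    """Saturating (nonlinear) virtual spring plus linear damper.

    For small displacements the spring behaves linearly with stiffness k,
    while the tanh saturation bounds each force component at f_max, so
    the force applied upon unexpected physical contact stays limited.
    Parameter values are illustrative assumptions.
    """
    x = np.asarray(x, dtype=float)
    displacement = x_rest - x               # pull toward the rest position
    spring = f_max * np.tanh(k * displacement / f_max)
    damper = -b * np.asarray(v, dtype=float)
    return spring + damper
```

Regardless of how far the robot is displaced, the spring contribution never exceeds `f_max`, while near the rest position the behavior matches an ordinary linear spring.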
In this paper, we validate our approach in a collaborative pick-and-place task involving both humans and robots, as seen in Fig. 1. Our contributions include:
• An agent-agnostic approach to multi-agent collaboration in which robots and humans are treated on the same footing. The robot's behavior and its interaction with other robots and with humans are shaped by intuitive mechanical parameters. A separation distance of about 20 cm is maintained through active avoidance and compliance.
• A VMC-based implementation with conflict resolution, which detects deadlocks via force balance and negotiates priority through minimal communication. This eliminates the failure mode of robots getting stuck during the block placement task, reducing its occurrence from a maximum of 61.2% to zero in our experiments.
• Scalability: the decentralized approach allows seamless swapping of agents and smooth adaptation to changes in team composition, demonstrated for up to two robots and two humans in experiments and up to four robots in simulation.
VMC offers an intuitive control framework in which virtual mechanical components (e.g., springs and dampers) are used to design predictable, physics-driven robot motion. The relevance of VMC for this paper is that it moves beyond preprogrammed or planned motion: robot behavior is a direct consequence of the robot's interaction with virtual mechanical components. This enables a design approach that is particularly well suited to interactive tasks involving a variety of low-predictability scenarios. VMC was originally introduced for bipedal locomotion in [9] and later adopted for quadruped locomotion [10], [11]. More recently, VMC has been used in robot manipulation scenarios such as reaching under uncertainty [12], robot-assisted laparoscopic surgery [13], and robotic cutting [14].
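The principle that motion emerges from virtual components rather than from a planned trajectory can be illustrated with a point mass pulled toward a goal by a virtual spring-damper and deflected by a short-range virtual repulsion. Gains, the repulsion radius, and the 2D point-mass abstraction are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def simulate_vmc_point(goal, obstacle, steps=4000, dt=0.002):
    """Minimal VMC illustration: no trajectory is planned; the path
    emerges from integrating virtual spring/damper and repulsion forces.
    Gains and the repulsion radius are illustrative assumptions.
    """
    x = np.zeros(2)   # position of a unit point mass
    v = np.zeros(2)
    for _ in range(steps):
        f = 8.0 * (goal - x) - 4.0 * v       # virtual spring to goal + damper
        d = x - obstacle
        dist = np.linalg.norm(d)
        if 1e-9 < dist < 0.5:                 # short-range virtual repulsion
            f += 20.0 * (0.5 - dist) * d / dist
        v += f * dt                           # explicit Euler integration
        x += v * dt
    return x
```

The resulting path bends around the obstacle and settles at the goal, even though no waypoint or trajectory was ever computed: the behavior is entirely a consequence of the virtual mechanics.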
In HRC, VMC was used in a collaborative surgical bone-drilling task to increase drill alignment accuracy [15]. In this work, we expand the use of VMC towards multi-agent scenarios involving humans and robots. Additionally, we present the use of VMC on position-controlled robotic manipulators, which has not been addressed before. With VMC, all agents can be abstracted as virtual mechanisms, resulting in a decentralized control architecture that enables scalability.
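One common way to apply force-based control on a position-controlled manipulator is an admittance-style outer loop that converts virtual forces into position setpoints. The following sketch shows one step of such a loop; the virtual mass and damping values are illustrative assumptions, and the paper's actual interface may differ.

```python
import numpy as np

def admittance_step(x_ref, v, f_virtual, dt=0.002, m=2.0, b=30.0):
    """One step of an admittance-style outer loop: virtual forces are
    mapped through virtual point-mass dynamics into a position reference
    that a stiff position-controlled manipulator can track.
    m (virtual mass) and b (virtual damping) are illustrative values.
    """
    a = (f_virtual - b * v) / m       # virtual point-mass dynamics
    v_next = v + a * dt
    x_next = x_ref + v_next * dt      # next position setpoint for the robot
    return x_next, v_next
```

Running this loop at the control rate lets the virtual springs and dampers shape the motion even though the low-level robot controller only accepts position commands.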