Adaptive Vision-Based Control of Redundant Robots with Null-Space Interaction for Human-Robot Collaboration


Human-robot collaboration aims to extend human ability through cooperation with robots. This technology currently helps people with physical disabilities, has transformed manufacturing processes, improved surgical performance, and will likely reshape everyday life in the future. Enhancing the performance of both sides, such that the collaboration outperforms either a robot or a human working alone, remains an open problem. For safer and more effective collaboration, this paper proposes a new control scheme for redundant robots, consisting of an adaptive vision-based control term in task space and an interactive control term in null space. This formulation allows the robot to autonomously carry out tasks in an unknown environment without prior calibration, while also interacting with humans to handle unforeseen changes (e.g., potential collisions, temporary needs) under the redundant configuration. The decoupling between task space and null space makes it possible to exploit the collaboration safely and effectively without affecting the main task of the robot end-effector. The stability of the closed-loop system is rigorously proved with Lyapunov methods, guaranteeing both the convergence of the position error in task space and that of the damping model in null space. Experimental results on a robot manipulator guided through augmented reality (AR) technology illustrate the performance of the control scheme.


💡 Research Summary

The paper addresses the challenge of enabling safe and efficient human‑robot collaboration (HRC) with redundant manipulators operating in uncalibrated visual environments. It proposes a two‑layer control architecture that separates the task‑space controller, which drives the robot’s end‑effector to a desired pixel location, from a null‑space controller that allows a human operator to influence the robot’s redundant joints without disturbing the primary task.

In the task space, the authors adopt an eye‑to‑hand visual configuration. The depth of the visual feature and the image Jacobian are both linearly parameterized by regressor matrices (Yz, Yk) and unknown parameter vectors (θz, θk). Adaptive laws (13) and (14) update the estimates of these parameters online, driven by the visual error (x – xd). The control input u_T uses the estimated depth and the pseudo‑inverse of the estimated image Jacobian to generate a proportional feedback term Kp(x – xd). This design guarantees convergence of the visual error even when the camera is completely uncalibrated.
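The structure of this task-space law can be sketched in a few lines of numpy. The regressor shapes, gains, and update forms below are hypothetical stand-ins (the summary does not reproduce the paper's laws (13)-(14) exactly); the sketch only illustrates how the estimated depth and estimated image Jacobian enter the control input, and how error-driven gradient updates adapt the parameter estimates:

```python
import numpy as np

def adaptive_visual_servo_step(x, x_d, theta_z, theta_k, Yz, Yk, Kp, Lz, Lk, dt):
    """One illustrative step of the adaptive task-space law (not the paper's exact form).

    Linear parameterizations: depth = Yz @ theta_z, vec(J) = Yk @ theta_k.
    """
    e = x - x_d                                   # visual error in pixels
    z_hat = float(Yz @ theta_z)                   # estimated feature depth
    n_joints = Yk.shape[0] // 2
    J_hat = (Yk @ theta_k).reshape(2, n_joints)   # estimated image Jacobian
    # Control input: pseudo-inverse of the estimated Jacobian applied to a
    # depth-scaled proportional feedback term Kp (x - x_d)
    u_T = -z_hat * np.linalg.pinv(J_hat) @ (Kp @ e)
    # Error-driven gradient updates (illustrative stand-ins for the adaptive laws)
    theta_z_new = theta_z - dt * Lz @ (Yz.flatten() * (e @ Kp @ e))
    theta_k_new = theta_k - dt * Lk @ (Yk.T @ np.tile(e, n_joints))
    return u_T, theta_z_new, theta_k_new

# Toy demo: 2-D image feature, 3-joint arm, small hypothetical regressors
x, x_d = np.array([320.0, 240.0]), np.array([300.0, 250.0])
Yz = np.array([[1.0, 0.5]])                       # 1 x 2 depth regressor
Yk = np.arange(1.0, 19.0).reshape(6, 3) / 10.0    # (2*3) x 3 Jacobian regressor
theta_z, theta_k = np.array([0.8, 0.2]), np.array([0.5, -0.3, 0.1])
Kp = 0.01 * np.eye(2)
Lz, Lk = 1e-6 * np.eye(2), 1e-6 * np.eye(3)
u_T, theta_z, theta_k = adaptive_visual_servo_step(
    x, x_d, theta_z, theta_k, Yz, Yk, Kp, Lz, Lk, dt=0.01)
```

Note that the pseudo-inverse of the estimated Jacobian, rather than an analytic inverse, is what lets the same structure apply to redundant arms (more joints than image-feature dimensions).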

The null‑space controller exploits the redundancy of the robot (n > m). By constructing the null‑space projector N(q) = I – J⁺J, the authors formulate a desired damping model N(q)(c_d·q̇ – d) = 0, where c_d > 0 is a tunable damping coefficient and d represents the human's effort (a force, a torque, or a command from an AR interface). The null‑space control input is u_N = N(q)(c_d⁻¹ d). Because J·N(q) = 0, this term does not affect the end‑effector motion, yet it directly shapes the redundant joint velocities according to the human's intent, providing compliance and safety.
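The decoupling property is easy to verify numerically. The sketch below (with a made-up 2×3 task Jacobian and hypothetical numbers) projects a human effort d into the null space of J with damping c_d; because J(I – J⁺J) = 0, the resulting command produces no end-effector velocity:

```python
import numpy as np

def null_space_command(J, d, c_d):
    """Map a joint-space human effort d into the null space of J.

    N(q) = I - pinv(J) @ J annihilates any end-effector effect (J @ N = 0);
    dividing by the damping coefficient c_d realizes the damping model.
    """
    n = J.shape[1]
    N = np.eye(n) - np.linalg.pinv(J) @ J
    return N @ (d / c_d)

# Toy demo: redundant arm with n=3 joints and an m=2 task (hypothetical values)
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.5]])
d = np.array([0.0, 0.0, 2.0])      # human pushes on the third joint
u_N = null_space_command(J, d, c_d=4.0)
# J @ u_N is (numerically) zero: the end-effector task is undisturbed
```

A larger c_d makes the null-space motion more sluggish (safer but less responsive); a smaller c_d makes the redundant joints follow the human effort more readily, which matches the tuning trade-off noted in the paper's limitations.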

Stability is proven using a Lyapunov candidate that combines the visual error norm and the parameter estimation errors weighted by positive‑definite gain matrices Lz and Lk. Substituting the adaptive laws yields V̇ = –(x – xd)ᵀKp(x – xd) ≤ 0, guaranteeing that V is bounded and that the visual error converges to zero. The null‑space dynamics are shown to exactly satisfy the desired damping model, ensuring that the human‑generated effort is faithfully reproduced in the redundant joints.
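In the summary's notation, the argument can be written compactly as follows; the exact weighting and error definitions in the paper may differ, so this is a sketch consistent with the description above (with Δx = x – x_d and Δθ_z, Δθ_k the parameter estimation errors):

```latex
V = \tfrac{1}{2}\,\Delta x^{\top}\Delta x
  + \tfrac{1}{2}\,\Delta\theta_z^{\top} L_z^{-1}\,\Delta\theta_z
  + \tfrac{1}{2}\,\Delta\theta_k^{\top} L_k^{-1}\,\Delta\theta_k ,
\qquad
\dot{V} = -\,\Delta x^{\top} K_p\,\Delta x \;\le\; 0 .
```

Substituting the adaptive laws cancels the parameter-error cross terms, leaving only the negative semi-definite quadratic in Δx; boundedness of V then bounds all errors, and a Barbalat-type argument drives Δx to zero, as stated above.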

Experimental validation is performed on a UR5 robot equipped with an ArUco marker on its wrist, a fixed but uncalibrated Basler camera, and a mixed‑reality HoloLens 2 interface. The operator manipulates virtual sliders in the AR view to apply forces or motion commands to a selected joint while the robot simultaneously tracks a target pixel location. Results demonstrate rapid convergence of the visual error, smooth and stable null‑space responses to human inputs, and overall robustness of the adaptive scheme despite the lack of camera calibration.

The contributions of the work are threefold: (1) an adaptive vision‑based task‑space controller that works with unknown depth and image Jacobian; (2) a null‑space interaction scheme that renders the redundant joints passive yet responsive to human effort, enabling on‑the‑fly role switching; and (3) a rigorous Lyapunov‑based stability proof covering both task‑space convergence and null‑space damping. Limitations include sensitivity of parameter convergence speed to initial guesses and the need for careful tuning of the damping coefficient c_d to balance responsiveness and safety. Future research directions suggested are multi‑camera fusion, predictive modeling of human intent, and extension to force‑controlled or hybrid task scenarios.

