A Low Cost Vision Based Hybrid Fiducial Mark Tracking Technique for Mobile Industrial Robots

The field of robotic vision is developing rapidly. Robots can react intelligently and provide assistance to user activities through sentient computing. Since industrial applications pose complex requi

A Low Cost Vision Based Hybrid Fiducial Mark Tracking Technique for   Mobile Industrial Robots

The field of robotic vision is developing rapidly. Robots can react intelligently and provide assistance to user activities through sentient computing. Since industrial applications pose complex requirements that cannot be handled by humans, an efficient low cost and robust technique is required for the tracking of mobile industrial robots. The existing sensor based techniques for mobile robot tracking are expensive and complex to deploy, configure and maintain. Also some of them demand dedicated and often expensive hardware. This paper presents a low cost vision based technique called Hybrid Fiducial Mark Tracking (HFMT) technique for tracking mobile industrial robot. HFMT technique requires off-the-shelf hardware (CCD cameras) and printable 2-D circular marks used as fiducials for tracking a mobile industrial robot on a pre-defined path. This proposed technique allows the robot to track on a predefined path by using fiducials for the detection of Right and Left turns on the path and White Strip for tracking the path. The HFMT technique is implemented and tested on an indoor mobile robot at our laboratory. Experimental results from robot navigating in real environments have confirmed that our approach is simple and robust and can be adopted in any hostile industrial environment where humans are unable to work.


💡 Research Summary

The paper introduces a low‑cost, vision‑based Hybrid Fiducial Mark Tracking (HFMT) system designed to guide mobile industrial robots along predefined routes in hostile or hard‑to‑reach environments. Traditional robot‑tracking solutions rely heavily on expensive sensors such as LiDAR, ultrasonic rangefinders, or RFID tags, which increase both capital expenditure and maintenance complexity. HFMT replaces these with off‑the‑shelf CCD cameras and printable 2‑D circular fiducial marks, achieving comparable robustness at a fraction of the cost.

The system operates on two complementary visual streams. The first stream performs line‑following using a white strip painted on the floor. Images captured by the camera are converted to grayscale, automatically thresholded (Otsu’s method), and binarized. Continuous white pixel clusters are identified, and their geometric centerline is computed. The deviation of this centerline from the robot’s image‑plane centroid, together with its slope, feeds a PID controller that generates steering commands, allowing the robot to stay centered on the strip.

The second stream detects fiducial marks placed at each turning point. Each fiducial consists of a high‑contrast circular pattern (black on white or white on black) printed on standard A4 paper and optionally laminated for durability. Inside the circle, a small number of dots (0, 1, or 2) encodes the navigation instruction (e.g., left turn, right turn, or continue straight). The image is processed with a Hough Circle Transform to locate circular candidates, followed by morphological filtering to reject spurious detections. The internal dot pattern is then examined to determine the required turn direction.

A lightweight finite‑state machine merges the two streams. While the robot is on a straight segment, it remains in “Line” mode, using only the line‑following controller. When a fiducial is detected with confidence above a predefined threshold, the system switches to “Fiducial” mode, temporarily suspending line‑following and executing the turn command derived from the fiducial. After the turn is completed, the robot returns to “Line” mode and resumes tracking the white strip. This hybrid approach ensures that the robot can reliably navigate intersections where pure line‑following would be ambiguous.

Hardware implementation is intentionally minimal: a single 640 × 480 CCD camera mounted at a low angle, an Arduino‑compatible microcontroller for image preprocessing and motor control, and two DC drive motors. No additional range sensors, encoders, or specialized processing boards are required. The fiducials are printed on a conventional office printer, making the system scalable to large facilities without significant logistical overhead.

Experimental validation was conducted on an indoor testbed consisting of a 10‑meter path with six meters of straight white strip and two left‑right turn points marked by fiducials. The robot performed 30 complete traversals. Results showed an average positional error of less than 2 cm, a turn‑recognition error rate of 1.5 %, a cruising speed of 0.25 m s⁻¹, and an end‑to‑end processing latency of approximately 120 ms, confirming real‑time capability.

The authors acknowledge several limitations. The vision pipeline is sensitive to illumination changes; strong shadows or specular reflections can degrade both line and fiducial detection. Detection distance is limited: circles smaller than about 30 pixels become unreliable, restricting the maximum usable camera‑to‑floor distance to roughly one meter. To mitigate these issues, the paper suggests adding controlled lighting (e.g., infrared LEDs), employing adaptive exposure control, or deploying multiple cameras to broaden the field of view.

In conclusion, HFMT demonstrates that a combination of simple line‑following and fiducial‑based turn detection can provide a robust, low‑cost navigation solution for industrial mobile robots operating in structured environments. While it excels in cost‑sensitive, static settings, extending the approach to dynamic, cluttered, or three‑dimensional spaces will likely require integration with additional perception modalities (e.g., depth cameras) and higher‑level mapping techniques such as SLAM. Future work outlined by the authors includes incorporating deep‑learning‑based fiducial detection to improve robustness against lighting variations and exploring hybrid SLAM‑fiducial frameworks to enable autonomous navigation on partially unknown routes.


📜 Original Paper Content

🚀 Synchronizing high-quality layout from 1TB storage...