Comprehensive Machine Learning Benchmarking for Fringe Projection Profilometry with Photorealistic Synthetic Data


Machine learning approaches for fringe projection profilometry (FPP) are hindered by the lack of large, diverse datasets and standardized benchmarking protocols. This paper introduces the first open-source, photorealistic synthetic dataset for FPP, generated using NVIDIA Isaac Sim, comprising 15,600 fringe images and 300 depth reconstructions across 50 objects. We apply this dataset to single-shot FPP, where models predict 3D depth maps directly from individual fringe images without temporal phase shifting. Through systematic ablation studies, we identify optimal learning configurations for long-range (1.5-2.1 m) depth prediction. We compare three depth normalization strategies and show that individual normalization, which decouples object shape from absolute scale, yields a 9.1x improvement in object reconstruction accuracy over raw depth. We further show that removing background fringe patterns severely degrades performance across all normalizations, demonstrating that background fringes provide essential spatial phase reference rather than noise. We evaluate six loss functions and identify Hybrid L1 loss as optimal. Using the best configuration, we benchmark four architectures and find UNet achieves the strongest performance, though errors remain far above the sub-millimeter accuracy of classical FPP. The small performance gap between architectures indicates that the dominant limitation is information deficit rather than model design: single fringe images lack sufficient information for accurate depth recovery without explicit phase cues. This work provides a standardized benchmark and evidence motivating hybrid approaches combining phase-based FPP with learned refinement. The dataset is available at https://huggingface.co/datasets/aharoon/fpp-ml-bench and code at https://github.com/AnushLak/fpp-ml-bench.


💡 Research Summary

This paper tackles two long‑standing bottlenecks in learning‑based fringe projection profilometry (FPP): the absence of a large, high‑quality dataset with perfect ground‑truth geometry, and the lack of standardized benchmarking protocols. Using NVIDIA Isaac Sim, the authors generate a photorealistic synthetic dataset—dubbed VIR‑TUS‑FPP—containing 15,600 fringe images and 300 depth maps for 50 diverse objects drawn from the YCB and NVIDIA AI Warehouse collections. The virtual system reproduces realistic optical phenomena (multi‑bounce illumination, surface reflectivity variations, occlusions, and background scattering) and is calibrated to sub‑pixel accuracy (stereo reprojection error ≈0.055 px). Data are split at the object level (80 % train, 10 % validation, 10 % test) to guarantee that test objects are never seen during training.
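The object-level split described above can be sketched as follows. This is an illustrative implementation, not the paper's actual split code; the seed and the shuffling scheme are assumptions, but the key property matches the protocol: every image of a given object lands in exactly one partition, so no test object is ever seen during training.

```python
import random

def object_level_split(object_ids, seed=0, train_frac=0.8, val_frac=0.1):
    """Partition whole objects (not individual images) into train/val/test.

    All fringe images of an object inherit its partition, preventing
    leakage of test-object appearance into training.
    """
    ids = sorted(object_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

# 50 objects -> 40 train, 5 validation, 5 test
train_ids, val_ids, test_ids = object_level_split(range(50))
```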

The core research problem is single‑shot depth reconstruction: predicting a full 3D depth map from a single fringe image (the first frame of an 18‑step phase‑shifting sequence) without any temporal or spatial unwrapping. Because a single sinusoidal fringe encodes depth ambiguously (each 2π phase cycle can correspond to many depth values), a neural network must rely on learned shape priors and statistical regularities rather than explicit geometric cues.
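The phase ambiguity can be made concrete with a small numerical sketch. Assuming the usual FPP intensity model $I(x) = A + B\cos\phi(x)$ with phase proportional to depth, only the wrapped phase $\phi \bmod 2\pi$ is observable in a single image, so two depths separated by one full fringe period are indistinguishable. The sensitivity constant `k` below is a made-up illustrative value, not a parameter from the paper.

```python
import numpy as np

k = 0.05  # illustrative phase sensitivity in rad/mm (not from the paper)

def wrapped_phase(depth_mm):
    """Observable phase from a single fringe image: phi wrapped to (-pi, pi]."""
    phi = k * depth_mm
    return np.angle(np.exp(1j * phi))

d1 = 100.0
d2 = d1 + 2 * np.pi / k  # exactly one fringe period deeper

# Both depths produce the same wrapped phase -> a single image cannot
# distinguish them; the network must fall back on learned shape priors.
print(np.isclose(wrapped_phase(d1), wrapped_phase(d2)))  # True
```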

The authors conduct a three‑phase ablation study.
Phase 1 – Depth Normalization. Three strategies are compared: (i) raw depth in millimetres, (ii) globally normalized depth (mm → m), and (iii) individual normalization, where each depth map is scaled using its own minimum and maximum so that the network predicts object shape independently of absolute scale.
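The three strategies can be sketched as below. Per-map min-max scaling to [0, 1] for the individual case is an assumption consistent with the stated goal of decoupling shape from absolute scale; the paper's exact scaling may differ.

```python
import numpy as np

def normalize_depth(depth_mm, mode):
    """Sketch of the three depth-normalization strategies from Phase 1."""
    if mode == "raw":
        return depth_mm                          # (i) raw depth in mm
    if mode == "global":
        return depth_mm / 1000.0                 # (ii) global unit change, mm -> m
    if mode == "individual":
        lo, hi = depth_mm.min(), depth_mm.max()  # (iii) per-map min-max scaling
        return (depth_mm - lo) / (hi - lo)       # shape only, no absolute scale
    raise ValueError(f"unknown mode: {mode}")
```

Note that only the individual mode discards absolute scale: raw and global normalization are invertible unit changes, whereas each individually normalized map must be re-scaled with its own (lo, hi) to recover metric depth.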

