Depth-Copy-Paste: Multimodal and Depth-Aware Compositing for Robust Face Detection
Data augmentation is crucial for improving the robustness of face detection systems, especially under challenging conditions such as occlusion, illumination variation, and complex environments. Traditional copy-paste augmentation often produces unrealistic composites due to inaccurate foreground extraction, inconsistent scene geometry, and mismatched background semantics. To address these limitations, we propose Depth-Copy-Paste, a multimodal and depth-aware augmentation framework that generates diverse and physically consistent training samples for face detection by copying full-body person instances and pasting them into semantically compatible scenes. Our approach first employs BLIP and CLIP to jointly assess semantic and visual coherence, enabling automatic retrieval of the most suitable background images for a given foreground person. To ensure high-quality foreground masks that preserve facial detail, we integrate SAM3 for precise segmentation and Depth-Anything to extract only the non-occluded, visible person regions, preventing corrupted facial textures from being used in augmentation. For geometric realism, we introduce a depth-guided sliding-window placement mechanism that searches the background depth map for paste locations with optimal depth continuity and scale alignment. The resulting composites exhibit natural depth relationships and improved visual plausibility. Extensive experiments show that Depth-Copy-Paste provides more diverse and realistic training data, yielding significant performance gains in downstream face detection compared with traditional copy-paste and depth-free augmentation methods.
💡 Research Summary
The paper “Depth-Copy-Paste: Multimodal and Depth-Aware Compositing for Robust Face Detection” presents a novel data augmentation framework designed to generate highly realistic and physically plausible composite images for training robust face detection models. It directly addresses the key shortcomings of traditional copy-paste augmentation, which often yields unrealistic composites due to semantic mismatch between foreground and background, inaccurate foreground extraction (including occluded regions), and a lack of geometric consistency leading to floating objects or incorrect scaling.
The proposed Depth-Copy-Paste framework integrates multimodal semantic understanding and monocular depth estimation into a cohesive three-stage pipeline. The first stage, Multimodal Background Retrieval, leverages both BLIP and CLIP models. BLIP generates a textual caption describing the context of the foreground face image (e.g., “a person smiling indoors”). This caption and the foreground image are then encoded via CLIP’s text and image encoders, respectively. These embeddings are used to retrieve the most semantically and visually compatible background images from a large pool, ensuring the pasted person contextually belongs in the new scene.
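The retrieval step can be illustrated with a minimal NumPy sketch. It assumes the BLIP caption and all images have already been encoded into CLIP-style embedding vectors; the function name `retrieve_background` and the mixing weight `alpha` are hypothetical choices for illustration, not names from the paper.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_background(fg_image_emb, caption_text_emb, bg_image_embs, alpha=0.5):
    """Rank candidate backgrounds by a weighted mix of visual similarity
    (foreground image vs. background image) and semantic similarity
    (BLIP caption vs. background image). `alpha` is a hypothetical
    mixing weight; return the index of the best background and all scores."""
    scores = [
        alpha * cosine(fg_image_emb, bg) + (1.0 - alpha) * cosine(caption_text_emb, bg)
        for bg in bg_image_embs
    ]
    return int(np.argmax(scores)), scores
```

In practice the embeddings would come from CLIP's image and text encoders; the sketch only shows how the two similarity signals could be combined into one retrieval score.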
The second stage, the Foreground Extraction Module (FEM), combines precise instance segmentation with visibility reasoning. It uses SAM3 to obtain an initial high-quality mask of the person. To avoid copying occluded body parts (e.g., face regions hidden behind hair or hands), it employs Depth-Anything to estimate a depth map. By analyzing local depth discontinuities within the SAM3 mask, it filters out occluded regions, resulting in a final mask containing only the visible, non-occluded parts of the person. This prevents corrupted textures from being pasted.
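A minimal sketch of the visibility filtering, assuming the SAM3 mask and the Depth-Anything map are given as NumPy arrays: pixels inside the person mask whose depth deviates sharply from the person's dominant depth (e.g. a closer occluding hand) are dropped. The tolerance `tol` is a hypothetical parameter, not a value from the paper.

```python
import numpy as np

def visible_person_mask(mask, depth, tol=0.15):
    """Filter an instance mask down to its visible, non-occluded pixels.

    mask  : boolean array, True where SAM3 segments the person.
    depth : depth map aligned with the mask (e.g. from Depth-Anything).
    tol   : hypothetical relative tolerance around the person's
            dominant (median) depth; larger deviations are treated
            as occluders and removed from the mask.
    """
    person_depths = depth[mask]
    ref = np.median(person_depths)          # dominant depth of the person
    keep = np.abs(depth - ref) <= tol * ref  # pixels near that depth level
    return mask & keep
```

This is a coarse stand-in for the paper's local depth-discontinuity analysis, but it captures the idea: occluded regions betray themselves by lying at a noticeably different depth than the rest of the body.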
The third and most innovative stage is Depth-Guided Placement (DGP). This module ensures the pasted foreground blends geometrically into the background. It normalizes the depth maps of both the foreground and the selected background. Then, using a sliding-window approach over the background depth map, it searches for the optimal paste location. The search is guided by a scoring function that evaluates three criteria at each candidate window: similarity in mean depth level and depth variance between the foreground and the background patch, and the local smoothness of the background depth (avoiding edges or cluttered areas). The location maximizing this score ensures the person is placed at a depth-consistent scale and appears naturally integrated without floating artifacts.
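The sliding-window search can be sketched directly from the three criteria above. Both depth maps are assumed already normalized to a common range; the equal weighting of the three terms and the `stride` parameter are hypothetical simplifications, not values from the paper.

```python
import numpy as np

def placement_score(fg_depth, bg_patch, weights=(1.0, 1.0, 1.0)):
    """Score one candidate window (higher is better) using the three
    criteria described in the text: mean-depth similarity, depth-variance
    similarity, and local smoothness of the background patch."""
    mean_term = abs(fg_depth.mean() - bg_patch.mean())
    var_term = abs(fg_depth.var() - bg_patch.var())
    gy, gx = np.gradient(bg_patch)               # depth edges in the patch
    smooth_term = np.hypot(gx, gy).mean()        # mean gradient magnitude
    w1, w2, w3 = weights
    return -(w1 * mean_term + w2 * var_term + w3 * smooth_term)

def best_paste_location(fg_depth, bg_depth, stride=4):
    """Slide a foreground-sized window over the background depth map and
    return the top-left corner (y, x) of the best-scoring window."""
    h, w = fg_depth.shape
    H, W = bg_depth.shape
    best_score, best_yx = -np.inf, (0, 0)
    for y in range(0, H - h + 1, stride):
        for x in range(0, W - w + 1, stride):
            s = placement_score(fg_depth, bg_depth[y:y + h, x:x + w])
            if s > best_score:
                best_score, best_yx = s, (y, x)
    return best_yx, best_score
```

A flat background region at the same depth level as the foreground scores highest, since all three penalty terms vanish there; windows straddling depth edges are penalized by the smoothness term, which is what prevents floating or half-embedded composites.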
The authors conduct extensive experiments on the challenging WIDER Face dataset. They train a standard face detector using a mix of original training data and synthetic images generated by their method. The results are compared against baselines including traditional random copy-paste and other depth-agnostic augmentation methods. The evaluation across the Easy, Medium, and Hard subsets demonstrates that models trained with Depth-Copy-Paste augmented data achieve significantly higher Average Precision (AP), with particularly notable gains on the Medium and Hard sets, which contain small, occluded faces and faces in complex poses. This validates that the framework's core contributions—multimodal background matching, occlusion-aware foreground extraction, and depth-guided geometric placement—collectively generate more diverse and realistic training data, leading to substantially improved robustness and generalization in downstream face detection tasks. The work highlights the critical role of integrating 3D geometric reasoning and high-level semantics for advancing synthetic data generation in computer vision.