Planar Geometry and Image Recovery from Motion-Blur

Notice: This research summary and analysis were generated automatically using AI. For complete accuracy, please refer to the original arXiv source.

Existing works on motion deblurring either ignore the effects of depth-dependent blur or assume a multi-layered scene in which each layer is modeled as a fronto-parallel plane. In this work, we consider 3D scenes with piecewise planar structure, i.e., scenes that can be modeled as a combination of multiple planes with arbitrary orientations. We first propose an approach for estimating the normal of a planar scene from a single motion-blurred observation. We then develop an algorithm for automatically recovering the number of planes, the parameters of each plane, and the camera motion from a single motion-blurred image of a multiplanar 3D scene. Finally, we propose a first-of-its-kind approach that recovers the planar geometry and latent image of the scene through an alternating-minimization framework built on these findings. Experiments on synthetic and real data show that the proposed method achieves state-of-the-art results.


💡 Research Summary

The paper introduces a novel framework for jointly estimating planar geometry and recovering a latent sharp image from a single motion‑blurred observation of a 3D scene composed of multiple arbitrarily oriented planes. The authors first show that the spatially varying point spread functions (PSFs) extracted from a blurred image encode the surface normal of the underlying plane. By approximating camera motion with three degrees of freedom (in‑plane translation and rotation about the optical axis) and linearizing the homography for small rotations, they derive simple linear relations between pixel shifts in the PSFs and the normal components. Using PSFs from at least three locations, a least‑squares solution yields the normal up to scale, requiring minimal correspondence.
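The least-squares step can be sketched as follows. This is a rough illustration only: the exact linearized relations, the coordinate normalization, and the proportionality constant are simplifying assumptions here, not the paper's derivation. The key idea it demonstrates is that a quantity linear in pixel position (here, a PSF shift) determines the plane normal up to scale from as few as three observations:

```python
import numpy as np

def estimate_normal(pixels, shifts):
    """Recover a plane normal (up to scale) by least squares, assuming
    PSF shifts vary linearly with pixel position:
        shift_i ∝ n1*u_i + n2*v_i + n3
    pixels: (N, 2) normalized image coordinates; shifts: (N,) PSF shifts."""
    u, v = pixels[:, 0], pixels[:, 1]
    A = np.stack([u, v, np.ones_like(u)], axis=1)   # N x 3 design matrix
    n, *_ = np.linalg.lstsq(A, shifts, rcond=None)  # least-squares solve
    return n / np.linalg.norm(n)                    # normal up to scale

# Synthetic check with a known normal (illustrative values only)
true_n = np.array([0.2, -0.1, 1.0])
true_n /= np.linalg.norm(true_n)
pts = np.array([[0.1, 0.2], [-0.3, 0.4], [0.5, -0.2], [0.0, 0.0]])
obs = pts @ true_n[:2] + true_n[2]   # shifts linear in the normal
n_hat = estimate_normal(pts, obs)
```

With four noiseless observations and three unknowns, the least-squares solve recovers the normal exactly; with noisy PSFs, more locations simply tighten the estimate.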

Building on this normal estimation, the method recovers the full scene geometry: the number of planes, each plane’s depth scale, segmentation masks, and the transformation spread function (TSF) encoding the distribution of camera poses over the exposure. Inlier PSFs are identified via RANSAC, and a linear system k = Mω links the observed PSFs to the unknown camera motion ω. An alternating optimization scheme then refines ω (using an L1-sparsity prior solved by ADMM) and the depth-scale factors by exhaustive search around a reference depth.
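The sparsity-regularized motion solve can be sketched with a simpler proximal-gradient method. The paper uses ADMM; the ISTA variant below, with a synthetic stand-in for the PSF-to-pose system k = Mω, is only meant to show how the L1 prior selects a sparse set of camera-pose weights:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def solve_tsf(M, k, lam=0.01, iters=500):
    """Minimize 0.5*||M w - k||^2 + lam*||w||_1 via ISTA
    (illustrative substitute for the paper's ADMM solver)."""
    step = 1.0 / np.linalg.norm(M, 2) ** 2   # 1 / Lipschitz constant
    w = np.zeros(M.shape[1])
    for _ in range(iters):
        grad = M.T @ (M @ w - k)             # gradient of the data term
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Synthetic stand-in: a sparse set of pose weights generates the PSFs
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 10))
w_true = np.zeros(10)
w_true[[2, 7]] = [1.0, -0.5]                 # only two active poses
k = M @ w_true
w_hat = solve_tsf(M, k)
```

The soft-thresholding step zeroes out pose weights the data does not support, which is exactly the role the L1 prior plays in the TSF estimate.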

Finally, an alternating minimization loop jointly updates the latent sharp image and the plane masks/depths while keeping the current motion estimate fixed. This loop minimizes the discrepancy between the observed blurred image and the forward blur model, effectively exploiting the depth‑dependent blur cues.
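The alternating structure can be illustrated with a deliberately simplified 1D toy: a circular box blur stands in for the depth-dependent blur model, and a scalar blur width stands in for the per-plane depth scale. None of this is the paper's actual forward model; it only mirrors the loop of geometry update (exhaustive search) followed by latent-image update (regularized deconvolution):

```python
import numpy as np

def box_blur_matrix(n, width):
    """Toy forward model: circular box blur of a given width."""
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(width):
            K[i, (i + j) % n] = 1.0 / width
    return K

def deblur(K, b, lam):
    """Regularized least-squares latent-signal update."""
    return np.linalg.solve(K.T @ K + lam * np.eye(K.shape[0]), K.T @ b)

def alternate_minimize(b, widths, lam=1e-4, iters=3):
    n = len(b)
    def fit(c):
        K = box_blur_matrix(n, c)
        xc = deblur(K, b, lam)
        return np.sum((K @ xc - b) ** 2) + lam * np.sum(xc ** 2)
    x, w = b.copy(), widths[0]
    for _ in range(iters):
        # geometry-style step: exhaustive search over candidate widths
        w = min(widths, key=fit)
        # image-style step: deblur under the currently selected width
        x = deblur(box_blur_matrix(n, w), b, lam)
    return x, w

rng = np.random.default_rng(1)
x_true = rng.random(32)
b = box_blur_matrix(32, 5) @ x_true           # blur with "true" width 5
x_hat, w_hat = alternate_minimize(b, widths=[3, 5, 7])
```

Each pass refits the geometry parameter against the current model and then re-estimates the latent signal under it, so the discrepancy with the observed blurred signal shrinks, which is the same fixed-motion alternation the paper applies to plane masks, depths, and the latent image.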

Experiments on synthetic datasets and real handheld photographs demonstrate that the proposed approach outperforms state‑of‑the‑art uniform‑blur deblurring, depth‑aware deblurring, and learning‑based depth estimation methods, especially in scenarios involving camera rotation and inclined planes. The method requires only a single blurred image, avoids dependence on large training datasets, and provides accurate normal, depth, and deblurred image estimates, making it suitable for mobile imaging, robotics, and augmented‑reality applications.

