Non-Line-of-Sight Reconstruction using Efficient Transient Rendering

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Being able to see beyond the direct line of sight is an intriguing prospect and could benefit a wide variety of important applications. Recent work has demonstrated that time-resolved measurements of indirect diffuse light contain valuable information for reconstructing shape and reflectance properties of objects located around a corner. In this paper, we introduce a novel reconstruction scheme that, by design, produces solutions that are consistent with state-of-the-art physically-based rendering. Our method combines an efficient forward model (a custom renderer for time-resolved three-bounce indirect light transport) with an optimization framework to reconstruct object geometry in an analysis-by-synthesis sense. We evaluate our algorithm on a variety of synthetic and experimental input data, and show that it gracefully handles uncooperative scenes with high levels of noise or non-diffuse material reflectance.


💡 Research Summary

The paper tackles the problem of reconstructing the shape of an object that is hidden from direct view—so‑called non‑line‑of‑sight (NLOS) imaging—by exploiting time‑resolved measurements of indirect, three‑bounce light that has scattered off a diffuse wall. Traditional approaches in this domain rely on ellipsoidal back‑projection: each measured photon arrival time defines an ellipsoidal locus of possible scattering points, and all measurements vote on these loci. While computationally cheap, back‑projection treats the hidden object as a volumetric scatterer, ignores surface orientation and self‑occlusion, and does not correspond to the adjoint of a physically plausible forward light‑transport operator. Consequently, reconstructions are low‑resolution, noisy, and limited to diffuse‑like objects.

The authors propose an analysis‑by‑synthesis framework that directly optimizes a physically based forward model against the measured transient data. Their pipeline consists of three key components:

  1. Scene Representation – The hidden geometry is encoded implicitly as an isosurface of a scalar field \(\mathcal{M}_P(\mathbf{x})\). This field is a sum of globally supported Gaussian basis functions, each defined by a center \(\mathbf{x}_i\) and a scale \(\sigma_i\). The parameter vector \(P = \{(\mathbf{x}_i, \sigma_i)\}_{i=1}^{m}\) is low‑dimensional, making the subsequent optimization tractable while still allowing complex shapes.

  2. Efficient Transient Renderer – A custom GPU‑accelerated renderer simulates the three‑bounce light path “laser spot → hidden object → wall → detector”. The laser spot on the wall acts as an area light; each wall pixel is treated as an omnidirectional detector that records a time‑of‑flight histogram. The renderer performs per‑triangle temporal filtering and a shadow‑test to correctly handle occlusions. By focusing exclusively on the three‑bounce contribution and exploiting the regular geometry of the wall, the renderer produces physically plausible transient images in milliseconds, a dramatic speed‑up compared with general‑purpose transient renderers that require hours or days.

  3. Global Non‑Linear Optimization – The reconstruction problem is cast as a non‑linear least‑squares minimization of the discrepancy between rendered and measured transients,
    \[
    P^{*} = \arg\min_{P} \sum_{i} \big\| \mathcal{R}_i(P) - \mathcal{T}_i \big\|_2^{2},
    \]
    where \(\mathcal{R}_i(P)\) is the transient rendered from the current geometry estimate for the \(i\)-th measurement configuration and \(\mathcal{T}_i\) is the corresponding measured transient. Because the objective depends only on the low‑dimensional parameter vector \(P\), a global non‑linear optimizer can search the space of candidate shapes directly, with the fast renderer evaluated in the inner loop.
