Single-shot incoherent imaging with extended and engineered field of view using coded phase apertures
A large field of view is needed in many applications, yet optical systems with high magnification often suffer from a limited field of view because of the finite size of the camera sensor. This study proposes a technique for engineering the field of view of an optical system without compromising the magnification. In the proposed method, an object response pattern is recorded on a camera by introducing a coded phase mask (CPM) into the imaging system. The CPM is a multiplex of N distinct scattering phases, where N − 1 is the number of isolated object areas to be brought within the field of view. Each scattering phase yields a point spread function consisting of a unique sparse dot pattern on the camera. With the CPM in place, the images of objects that would otherwise lie outside the system's inherent field of view are brought onto the region of the camera sensor. To reconstruct the original object plane with the N objects at their respective locations, the zero-padded object response pattern is deconvolved with the system's zero-padded and shifted point spread functions. A simulation study followed by experimental results for N = 2 and N = 3 is presented in this article.
💡 Research Summary
The paper addresses the long‑standing problem of limited field‑of‑view (FOV) in high‑magnification optical systems whose sensor size constrains the observable scene. The authors propose a single‑shot incoherent imaging technique that expands the effective FOV without sacrificing magnification by inserting a coded phase mask (CPM) into the optical train. The CPM is a multiplexed mask comprising N distinct scattering phase patterns; N‑1 of these correspond to isolated object regions that would otherwise lie outside the camera’s native FOV. Each scattering phase generates a unique sparse dot pattern (K dots) in the camera plane, effectively folding the off‑axis object information back into the sensor area.
Mask design is performed with a modified Gerchberg–Saxton algorithm (GSA). Starting from a uniform amplitude and random phase, the algorithm iteratively enforces the desired sparse dot amplitude in the Fourier (camera) plane and a uniform, phase-only amplitude in the mask plane, retaining the computed phase in each plane, and converges to a phase distribution that yields the target PSF. Separate phase masks for each object region are generated, multiplied by binary spatial sections that isolate them, and summed to produce the final multiplexed CPM. The CPM is displayed on a spatial light modulator (SLM) and combined with a diffractive lens (L1) to form the entrance aperture; a second lens (L2) creates a Fourier relationship between the CPM and the sensor.
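The mask-design loop above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the grid size, iteration count, and the simple `fft2` propagation model are assumptions, and `design_mask`/`multiplex` are hypothetical helper names.

```python
import numpy as np

def design_mask(target_dots, shape=(64, 64), iters=100, seed=0):
    """Modified Gerchberg-Saxton loop: find a phase-only mask whose
    Fourier transform concentrates energy at a sparse set of dots."""
    rng = np.random.default_rng(seed)
    target_amp = np.zeros(shape)
    for (r, c) in target_dots:
        target_amp[r, c] = 1.0          # desired sparse dot pattern
    # Start from uniform amplitude with random phase in the mask plane.
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, shape))
    for _ in range(iters):
        far = np.fft.fft2(field)
        # Fourier-plane constraint: impose dot amplitude, keep phase.
        far = target_amp * np.exp(1j * np.angle(far))
        field = np.fft.ifft2(far)
        # Mask-plane constraint: unit amplitude (phase-only mask).
        field = np.exp(1j * np.angle(field))
    return np.angle(field)

def multiplex(masks, sections):
    """Combine per-region phase masks gated by binary spatial sections
    (sections should tile the aperture without overlapping)."""
    return sum(s * m for m, s in zip(masks, sections))
```

After convergence, most of the PSF energy sits on the requested dot locations, and the gated sum reproduces each mask within its own aperture section.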
Mathematically, a point source at position rₙ in the nth object area passes through the CPM, acquiring the nth scattering phase, and is Fourier‑transformed by the L1‑L2 system into a point‑spread function (PSF) consisting of K replicas. The overall image formation is a linear convolution of the object intensity with a composite PSF that contains shifted copies of each region’s PSF. To recover the full object plane, the recorded object response pattern (ORP) is zero‑padded to the size of the magnified object field, and the shifted PSFs are placed at the corresponding locations in a large matrix. Wiener deconvolution (with a user‑defined noise‑to‑signal ratio) is then applied to retrieve the original scene.
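The recovery step can be sketched as a standard frequency-domain Wiener filter. This is a schematic under stated assumptions, not the paper's implementation: the padding sizes, the centered-PSF convention, and the default `nsr` value are all placeholders, and the paper's PSF placement into the large matrix is done here by the caller before invoking the function.

```python
import numpy as np

def wiener_deconvolve(orp, psf, nsr=1e-3):
    """Wiener deconvolution of the object response pattern (ORP).
    Both arrays must already be zero-padded to the full (magnified)
    object-plane size, with each region's PSF shifted to its assigned
    location; `nsr` is the user-defined noise-to-signal ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf))    # PSF assumed centered
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(orp) * W))
```

With a composite PSF made of sparse dot replicas, forward convolution scatters each object point into K dots, and the filter collapses them back to a single point per source.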
Simulation in MATLAB uses an object plane of 1080 × 1080 px, a camera of 500 × 500 px (8 µm pixel size), distances zs = zh = 20 cm, zo = 10 cm, and focal lengths f₁ = 20 cm, f₂ = 10 cm. For N = 3, the authors vary K from 3 to 12, adding Poisson noise to both PSF and ORP. Signal‑to‑noise ratio (SNR) versus K shows a peak at K = 11, indicating the optimal sparsity level. The results demonstrate that with the CPM, objects that would be outside the sensor’s native FOV appear as central replicas, and Wiener deconvolution successfully separates and restores each object with high fidelity.
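The noise study above boils down to two small operations: injecting Poisson shot noise at a chosen photon budget and scoring the reconstruction against the noise-free reference. The paper states only that Poisson noise was added to the PSF and ORP; the photon-budget parameter and the SNR definition below are assumptions for illustration.

```python
import numpy as np

def add_poisson(img, photons=1e4, seed=0):
    """Scale a non-negative image to a mean photon budget per pixel and
    apply Poisson shot noise; rescale back to the original level."""
    rng = np.random.default_rng(seed)
    scale = photons / img.mean()
    return rng.poisson(img * scale) / scale

def snr_db(ref, rec):
    """Reconstruction SNR in dB against the noise-free reference."""
    err = rec - ref
    return 10 * np.log10(np.sum(ref ** 2) / np.sum(err ** 2))
```

Sweeping K then amounts to repeating the design/convolve/deconvolve cycle with `add_poisson` applied to both PSF and ORP and recording `snr_db` for each K; the paper reports a maximum at K = 11 for its N = 3 configuration.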
Experimental validation uses a real SLM to implement CPMs for N = 2 and N = 3. In the first experiment, two identical objects are placed at different transverse positions; without the CPM only the central object is captured, whereas with the CPM both objects are folded into the sensor and recovered after deconvolution. The second experiment further reduces the native FOV, yet the CPM still brings the peripheral object into view. Measured reconstructions match the simulated predictions, confirming the practicality of the approach.
The authors highlight several advantages: (1) no additional bulk optics are required beyond the programmable mask; (2) a single exposure suffices, making the method suitable for dynamic scenes; (3) magnification and resolution are preserved because the imaging geometry is unchanged and only the PSF is engineered through the phase mask. Limitations include increased computational load for larger N and K, potential overlap of sparse dot patterns leading to cross‑talk, and the need for precise calibration of the CPM‑to‑sensor Fourier relationship. Future work is suggested on optimizing dot distributions, integrating deep‑learning‑based deconvolution, and extending the concept to multi‑spectral or depth‑extended imaging. Overall, the paper presents a compelling blend of optical engineering and computational reconstruction that effectively decouples field‑of‑view from sensor size in high‑magnification incoherent imaging systems.