Maximum-likelihood reconstruction of photon returns from simultaneous analog and photon-counting lidar measurements
We present a novel method for combining the analog and photon-counting measurements of lidar transient recorders into reconstructed photon returns. The method takes into account the statistical properties of the two measurement modes and estimates the most likely number of arriving photons and the most likely values of acquisition parameters describing the two measurement modes. It extends and improves the standard combining (“gluing”) methods and does not rely on any ad hoc definitions of the overlap region nor on any background subtraction methods.
💡 Research Summary
The paper addresses a fundamental limitation in modern lidar transient recorders, which typically acquire back‑scattered signals simultaneously with a fast analog‑to‑digital converter (ADC) and a photon‑counting module. Conventional “gluing” techniques combine these two traces by first correcting the photon‑counting data for dead‑time effects, then calibrating the analog trace to the photon counts over a manually selected overlap region, and finally stitching the two traces together at an arbitrarily chosen threshold. This approach suffers from several drawbacks: it discards background information, relies on multiple regression steps and ad‑hoc overlap definitions, suffers from singularities in the inverse dead‑time correction, and often leaves a substantial portion of the measured data unused.
The authors propose a statistically rigorous maximum‑likelihood (ML) framework that simultaneously exploits the full information content of both measurement modes. They model the analog signal as a linear conversion a = αp + β, with a constant electronic noise variance γ², and the photon‑counting channel as a non‑paralyzable (dead‑time) counter described by m = p/(1 + δp), where δ = τ/Δt is the dead‑time fraction. For the analog channel they assume a Gaussian distribution centered on the linear model, while for the photon‑counting channel they approximate the probability with a Poisson distribution whose mean is the dead‑time‑affected count C(p). The joint likelihood for a single time bin i is therefore the product of these two probabilities.
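The per-bin likelihood described above can be sketched in a few lines. This is a minimal illustration, not the paper's code: the function names are hypothetical, and the Poisson contribution is written in its standard deviance form 2(C − m + m ln(m/C)), which equals the negative log-likelihood up to constants.

```python
import math

def deadtime_count(p, delta):
    """Expected registered count for p arriving photons with a
    non-paralyzable counter; delta = tau/dt is the dead-time fraction."""
    return p / (1.0 + delta * p)

def bin_deviance(a, m, p, alpha, beta, gamma2, delta):
    """Deviance contribution of one range bin: a Gaussian (chi-square)
    term for the analog sample a and a Poisson term for the counts m,
    both evaluated at the candidate true photon number p."""
    mu_a = alpha * p + beta            # linear analog model a = alpha*p + beta
    gauss = (a - mu_a) ** 2 / gamma2   # Gaussian term with variance gamma^2
    c = deadtime_count(p, delta)       # dead-time-affected Poisson mean C(p)
    if m > 0:
        pois = 2.0 * (c - m + m * math.log(m / c))
    else:                              # m = 0: the m*ln(m/C) term vanishes
        pois = 2.0 * c
    return gauss + pois
```

When the data lie exactly on both models (a = αp + β and m = C(p)), the deviance is zero; any mismatch in either channel increases it, which is what the optimization in the next step exploits.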
Taking the negative logarithm yields a deviance D that is the sum of a Gaussian term (equivalent to a χ² contribution) and a Poisson term. The total deviance depends on the four system parameters (α, β, γ², δ) and on the unknown true photon numbers p_i for each range bin. To make the problem tractable, the authors split the optimization into an inner loop (estimating each p_i given the current system parameters) and an outer loop (optimizing the system parameters). The inner problem reduces to finding the root of a fourth‑order polynomial for each p_i, which they solve efficiently with Newton‑Raphson iterations, enforcing the physical constraint p_i ≥ 0. The outer loop minimizes the summed deviance over all bins using standard nonlinear optimization algorithms (e.g., Levenberg‑Marquardt). The electronic noise variance γ² is fixed after an initial estimate from the low‑signal region.
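The inner-loop estimate of each p_i can be sketched as a Newton-Raphson search on the deviance gradient. This is a simplified stand-in for the paper's approach: the authors reduce the stationarity condition to a fourth-order polynomial, whereas the hypothetical helper below iterates directly on the analytic gradient (with a numerical second derivative) and clamps the iterate to the physical constraint p ≥ 0.

```python
def estimate_p(a, m, alpha, beta, gamma2, delta, iters=50):
    """Inner-loop ML estimate of the true photon number p for one bin,
    given fixed system parameters (alpha, beta, gamma2, delta).
    Sketch only: the paper instead solves an equivalent quartic."""
    def grad(p):
        c = p / (1.0 + delta * p)               # dead-time-affected mean C(p)
        dcdp = 1.0 / (1.0 + delta * p) ** 2     # dC/dp
        g_analog = -2.0 * alpha * (a - alpha * p - beta) / gamma2
        g_photon = 2.0 * (1.0 - m / c) * dcdp   # d/dp of the Poisson deviance
        return g_analog + g_photon

    # Start from whichever channel's naive inversion is larger
    p = max(m, (a - beta) / alpha, 1e-6)
    h = 1e-4
    for _ in range(iters):
        g = grad(p)
        dg = (grad(p + h) - grad(p - h)) / (2.0 * h)  # numerical derivative
        if dg == 0.0:
            break
        p = max(p - g / dg, 1e-9)   # enforce the physical constraint p >= 0
    return p
```

With consistent synthetic data the estimate recovers the true p; with a slightly perturbed analog sample it settles between the two single-channel inversions, as expected for a joint likelihood.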
A key practical addition is the inclusion of a relative acquisition delay t_offset between the analog and photon‑counting traces, reflecting hardware‑induced timing mismatches. By scanning t_offset and observing the deviance, they locate the optimal offset (four sampling intervals, ≈100 ns) that aligns the two traces, thereby improving feature consistency (e.g., a thin haze layer at 13.5 km appears at the same range in both channels after correction).
Because lidar returns are heavily skewed toward low‑signal values, the authors recognize that a naïve likelihood maximization would be biased toward the densely populated lower‑left region of the (a, m) plane. To mitigate this, they introduce a binning scheme that partitions the (a, m) space into fan‑shaped sectors and assigns weights inversely proportional to the number of points in each sector, ensuring that the high‑signal region (critical for estimating the dead‑time fraction δ) contributes appropriately to the likelihood.
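The reweighting idea can be sketched as follows. This simplified version partitions the first quadrant of the (a, m) plane into equal angular sectors and weights each point by the inverse of its sector's population; the paper's actual fan-shaped partition may differ, and nonnegative (background-dominated but baseline-shifted) samples are assumed.

```python
import math
from collections import Counter

def sector_weights(a_vals, m_vals, n_sectors=16):
    """Weight each (a, m) sample inversely to the population of its
    angular sector, so the sparse high-signal points (which constrain
    the dead-time fraction delta) are not swamped by the dense
    low-signal cloud. Sketch of the paper's fan-shaped binning."""
    half_pi = math.pi / 2
    sectors = [int(n_sectors * math.atan2(m, a) / half_pi * 0.999)
               for a, m in zip(a_vals, m_vals)]
    counts = Counter(sectors)
    return [1.0 / counts[s] for s in sectors]
```

These weights then multiply the per-bin deviance terms, so a sector with many points contributes roughly as much to the total as a sector with few.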
The method is validated on real data collected with a 355 nm back‑scatter lidar (20 Hz pulse repetition, 20‑fold trace summation). Initial parameter estimates are obtained from linear fits in the low‑signal region (for α, β, γ²) and from the saturation plateau in the high‑signal region (for δ). After the ML optimization, the reconstructed photon‑return profile exhibits a seamless transition between the analog‑dominated and photon‑counting‑dominated regimes, without the discontinuities or artifacts typical of gluing methods. Moreover, the approach yields an internally consistent estimate of the dead‑time fraction, eliminating the need for manufacturer‑provided values.
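The initial parameter estimates can be sketched like this. The helper is hypothetical and deliberately crude: α and β come from an ordinary least-squares line through the low-signal bins (where dead time is negligible, so m ≈ p), and δ from the counter's saturation plateau, since m = p/(1 + δp) → 1/δ as p → ∞.

```python
def initial_estimates(a_vals, m_vals, low_cut):
    """Starting values for the outer-loop optimization (sketch):
    alpha, beta from a least-squares fit a = alpha*m + beta over
    low-signal bins (m < low_cut), delta from the saturation count."""
    lo = [(m, a) for a, m in zip(a_vals, m_vals) if m < low_cut]
    n = len(lo)
    sm = sum(m for m, _ in lo)
    sa = sum(a for _, a in lo)
    smm = sum(m * m for m, _ in lo)
    sma = sum(m * a for m, a in lo)
    alpha = (n * sma - sm * sa) / (n * smm - sm * sm)
    beta = (sa - alpha * sm) / n
    m_max = max(m_vals)        # saturation plateau of the counter
    delta = 1.0 / m_max        # since m -> 1/delta for large p
    return alpha, beta, delta
```

The electronic noise variance γ², fixed after an initial estimate in the paper, could likewise be seeded from the residual variance of this low-signal fit.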
In summary, the paper delivers a comprehensive, statistically sound solution for merging analog and photon‑counting lidar data. By modeling both measurement processes, incorporating timing offsets, and addressing data‑distribution bias, the proposed maximum‑likelihood reconstruction extends the usable dynamic range, preserves background information, and removes the reliance on arbitrary overlap definitions. This framework promises to improve the accuracy and reliability of lidar atmospheric profiling, especially in applications requiring precise quantitative retrievals over several orders of magnitude in signal strength.