Optimization of mapping modes for heterodyne instruments
Astronomical line mapping with single-pixel instruments is usually performed in an on-the-fly (OTF) or a raster-mapping mode, depending on the capabilities of the telescope and the instrument. The observing efficiency can be increased by combining several source-point integrations with a common reference measurement. This is implemented at many telescopes, but a thorough investigation of the optimum calibration of the modes and the best way of performing these observations is still lacking. We use knowledge of the instrumental stability obtained from an Allan variance measurement to derive a mathematical formalism for optimizing the setup of mapping observations. Special attention has to be paid to minimizing the impact of correlated noise introduced by the common OFF integrations and to correcting instrumental drifts. Both aspects can be covered by a calibration scheme that interpolates between two OFF measurements, combined with an appropriate OFF integration time. The total uncertainty of the calibrated data, consisting of radiometric noise and drift noise, can be minimized by adjusting the source integration time and the number of data points observed between two OFF measurements. It turns out that OTF observations are very robust: they provide low relative noise even if their setup deviates considerably from the optimum. Fast data readouts are often essential to minimize the drift contributions. In particular, continuum measurements may easily be spoiled by instrumental drifts. The main drawback of the described mapping modes is that the measured data are of limited use at different spatial or spectral resolutions obtained by additional rebinning.
💡 Research Summary
The paper addresses the long‑standing problem of how to calibrate and optimise single‑pixel heterodyne line‑mapping observations that are usually carried out in either on‑the‑fly (OTF) or raster‑mapping mode. While many observatories already employ a “common OFF” strategy—using a single reference measurement for several source integrations—the optimal way to set the integration times, the number of source points per OFF, and the handling of instrumental drifts has never been rigorously quantified.
The authors start by characterising the instrumental stability through an Allan variance measurement. The Allan time, τA, separates two regimes: for times shorter than τA the noise is essentially white (radiometric noise), while for longer times systematic drifts dominate. By modelling the total uncertainty of a calibrated data point as the sum of radiometric noise and drift noise, they derive an analytical expression that depends on three controllable parameters: the OFF integration time (toff), the source integration time per point (tsrc), and the number of source points observed between two OFF measurements (N).
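The two-sample Allan variance described above can be estimated directly from a recorded time series: average the data into contiguous bins of a given length and take half the mean squared difference of adjacent bin means. A minimal sketch in Python (the function name and the simulated white-noise input are illustrative, not from the paper); in the white-noise regime below τA the result falls roughly in proportion to the averaging time:

```python
import random

def allan_variance(samples, bin_size):
    """Two-sample Allan variance for a given averaging time (in samples).

    Averages the series into contiguous bins of length `bin_size` and
    returns half the mean squared difference of adjacent bin means.
    """
    n_bins = len(samples) // bin_size
    means = [sum(samples[i * bin_size:(i + 1) * bin_size]) / bin_size
             for i in range(n_bins)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n_bins - 1)]
    return 0.5 * sum(diffs) / len(diffs)

random.seed(0)
# Pure white noise: the Allan variance keeps decreasing with bin size.
# For a drifting instrument it would level off and rise again beyond
# the Allan time.
white = [random.gauss(0.0, 1.0) for _ in range(100000)]
av1 = allan_variance(white, 10)
av2 = allan_variance(white, 100)
print(av2 < av1)  # longer averaging -> lower variance for white noise
```

On real data, the bin size at which the curve stops decreasing and turns upward marks the Allan time τA.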
A key insight is that the optimal calibration scheme is a linear interpolation between two OFF measurements taken before and after a block of source data. This interpolation cancels the first‑order drift component, provided the interval between the two OFFs is not much larger than τA. The total variance then becomes
σ²total = σ²rad / N + σ²drift(toff, N),
where σrad is the radiometric noise per integration (itself decreasing with longer tsrc, following the radiometer equation) and σdrift is a function that grows with the OFF interval and with N. Minimising σ²total with respect to tsrc and N yields the simple closed‑form solution
tsrc* = √(2 τA toff), N* = toff / tsrc*.
Thus, for a given instrument stability (τA) and a chosen OFF integration time, the source integration time and the number of points per OFF are uniquely defined. The authors verify these relations with Monte‑Carlo simulations that include realistic noise and drift models.
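The drift-cancelling property of the dual-OFF interpolation can be illustrated with a toy simulation. The sketch below assumes a noise-free instrument whose baseline drifts purely linearly in time; the function names and numbers are hypothetical, not from the paper:

```python
def interpolated_off(on_time, off0, off1):
    """Linearly interpolate two OFF measurements to the ON time.

    off0 and off1 are (time, value) pairs taken before and after
    the block of source integrations.
    """
    (t0, v0), (t1, v1) = off0, off1
    w = (on_time - t0) / (t1 - t0)
    return (1.0 - w) * v0 + w * v1

# Noise-free instrument with a purely linear baseline drift: a*t.
a, true_signal = 0.05, 10.0
drifting = lambda t, src: src + a * t

t0, t1 = 0.0, 8.0
off0 = (t0, drifting(t0, 0.0))   # OFF points look at blank sky (0.0)
off1 = (t1, drifting(t1, 0.0))
on_times = [1.0, 3.0, 5.0, 7.0]

dual = [drifting(t, true_signal) - interpolated_off(t, off0, off1)
        for t in on_times]
single = [drifting(t, true_signal) - off0[1] for t in on_times]

print(dual)    # linear drift cancels exactly: all points recover 10.0
print(single)  # residual bias grows with the time since the single OFF
```

With a single preceding OFF the bias grows linearly across the block, whereas the interpolation removes the first-order drift entirely; only curvature of the drift (significant once the OFF interval exceeds τA) survives.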
When the optimisation is applied to OTF scanning, the performance surface is very flat: even if the actual tsrc or N deviate substantially from the theoretical optimum, the increase in total noise is modest. This robustness stems from the fact that OTF continuously samples the sky, so many source points share the same pair of OFFs, effectively averaging out residual drift. In contrast, raster mapping, which alternates between discrete rows or columns, is more sensitive to the choice of toff and N because the OFF‑ON intervals can become long, allowing drift to accumulate.
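The flatness of this performance surface can be reproduced with a toy noise model. The functional form below (a radiometric term ∝ 1/tsrc plus a drift term growing quadratically with the cycle length relative to τA) and all parameter values are illustrative assumptions, not the paper's exact expression:

```python
import math

def total_noise(t_src, n_pts, t_off, tau_a):
    """Toy noise model: radiometric term ~ 1/t_src plus a drift term
    growing with the ON-OFF cycle length relative to the Allan time.
    Illustrative functional form, not the paper's exact expression."""
    radiometric = 1.0 / t_src
    drift = ((n_pts * t_src + t_off) / tau_a) ** 2
    return math.sqrt(radiometric + drift)

tau_a, t_off, n_pts = 30.0, 5.0, 10

# Locate the optimum source integration time on a fine grid.
grid = [0.05 * k for k in range(1, 200)]
t_best = min(grid, key=lambda t: total_noise(t, n_pts, t_off, tau_a))
best = total_noise(t_best, n_pts, t_off, tau_a)

# Deviating from the optimum by a factor of two costs well under
# a factor of 1.5 in total noise for these parameters.
for factor in (0.5, 2.0):
    penalty = total_noise(factor * t_best, n_pts, t_off, tau_a) / best
    print(f"factor {factor}: {penalty:.2f}x the optimum noise")
```

Even halving or doubling tsrc raises the total noise by only a few tens of percent in this model, consistent with the robustness of OTF mapping described above.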
The paper also discusses the special case of continuum observations. Since a continuum signal does not benefit from line‑averaging, any drift directly biases the measured intensity. The authors therefore recommend very short readout cycles (≤10 ms) and short OFF intervals to keep drift contributions below the radiometric noise floor.
A practical limitation highlighted by the authors is the reduced flexibility after data acquisition. The optimisation assumes a fixed spatial and spectral resolution; if the data are later rebinned to coarser resolution, the original OFF calibration no longer matches the new sampling, and the noise advantage can be lost. Consequently, observers who anticipate multiple resolutions should either repeat the optimisation for each desired binning or accept a sub‑optimal noise level.
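The correlated-noise penalty behind this limitation can be made concrete with a small Monte-Carlo sketch: when several ON points share a single OFF, averaging them (as spatial rebinning effectively does) cannot reduce the noise below the OFF's own uncertainty, so the expected 1/√n gain is lost. All parameters below are illustrative:

```python
import math
import random

random.seed(1)

def averaged_noise(n_pts, trials=20000):
    """Empirical std of the average of n_pts ON-OFF differences that
    all share one common OFF (unit white noise on every reading)."""
    means = []
    for _ in range(trials):
        off = random.gauss(0.0, 1.0)          # one OFF shared by the block
        ons = [random.gauss(0.0, 1.0) for _ in range(n_pts)]
        means.append(sum(on - off for on in ons) / n_pts)
    m = sum(means) / trials
    return math.sqrt(sum((x - m) ** 2 for x in means) / (trials - 1))

n = 8
shared = averaged_noise(n)                 # -> sqrt(1 + 1/n), about 1.06
independent = math.sqrt(2.0 / n)           # fully independent ON-OFF pairs
print(shared, independent)
```

The average of the shared-OFF block has a standard deviation of about √(1 + 1/n) ≈ 1, dominated by the common OFF noise, instead of the √(2/n) that independent calibrations would give.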
In summary, the study provides a rigorous, Allan‑variance‑based framework for planning heterodyne mapping observations with a common OFF reference. It shows that OTF mode is intrinsically tolerant to non‑optimal settings, while raster mode benefits more from careful tuning of toff and N. Fast data acquisition and short OFF intervals are essential to suppress drift, especially for continuum work. Finally, the authors caution that post‑processing re‑binning can diminish the gains achieved by the optimal OFF strategy, and they advise incorporating the optimisation step into the overall observing plan.