High frequency limits in periodicity search from irregularly spaced data


Notions and limits from standard time series analysis must be modified when treating series that are measured irregularly and contain long gaps. In particular, the classical Nyquist criterion for estimating the potentially recoverable frequency range must be adapted to this more complex situation. While the basic exposition of the modified criterion was given in earlier papers, some remaining problems and caveats are treated here. Using simple combinatorial arguments, we show that for small sample sizes the modified Nyquist limit may overestimate the obtainable frequency range. On the other hand, we demonstrate that the very high Nyquist limit values typical of irregularly sampled data can often be taken seriously, and that with proper observational techniques the frequency ranges for “time spectroscopy” can be significantly widened.


💡 Research Summary

The paper addresses a fundamental problem in time‑series analysis: how to determine the highest frequency that can be reliably recovered when the data are sampled irregularly and contain long gaps. In uniformly sampled data the Nyquist frequency f_N = 1/(2Δt) (Δt being the constant sampling interval) provides a hard limit: any signal with a frequency higher than f_N will be aliased and cannot be reconstructed. For irregularly spaced observations this simple rule no longer holds because there is no single Δt that characterises the whole dataset.
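The uniform-sampling baseline can be illustrated in a few lines (a NumPy sketch; the 100 Hz sampling rate and the 60 Hz/40 Hz tone pair are our own illustrative choices, not values from the paper):

```python
import numpy as np

def nyquist_uniform(dt):
    """Classical Nyquist frequency f_N = 1/(2*dt) for constant spacing dt."""
    return 1.0 / (2.0 * dt)

dt = 0.01                              # 100 Hz sampling -> f_N = 50 Hz
t = np.arange(0.0, 1.0, dt)
y60 = np.sin(2 * np.pi * 60.0 * t)     # 60 Hz lies above f_N ...
y40 = np.sin(2 * np.pi * 40.0 * t)     # ... and aliases onto 40 Hz

# On this grid the two tones are indistinguishable up to sign:
# sin(2*pi*60*k*dt) = -sin(2*pi*40*k*dt) for integer k.
print(nyquist_uniform(dt))             # 50.0
print(np.allclose(y60, -y40))          # True
```

This is the hard limit that the modified criterion below generalizes: on a uniform grid there is simply no information that can separate the 60 Hz tone from its 40 Hz alias.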

Previous work has therefore introduced a “modified Nyquist limit” defined as f_N,mod = 1/(2Δt_min), where Δt_min is the smallest interval between any two successive observations. This definition is attractive because it is easy to compute and it guarantees that no two samples are closer than Δt_min, so any frequency higher than f_N,mod would certainly be undersampled. However, the authors of the present study point out that this criterion can be misleading, especially when the number of observations N is small.
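Computing f_N,mod from a list of timestamps is straightforward (a minimal sketch; the observation times are invented for illustration):

```python
import numpy as np

def nyquist_modified(times):
    """f_N,mod = 1/(2*dt_min), with dt_min the smallest gap between
    successive observations."""
    dt_min = np.min(np.diff(np.sort(np.asarray(times, dtype=float))))
    return 1.0 / (2.0 * dt_min)

# Irregular observation times (seconds); the closest pair is 0.01 s apart.
t_obs = [0.0, 0.01, 0.5, 0.52, 1.7, 3.0]
print(nyquist_modified(t_obs))  # 50.0 Hz
```

Note that this limit is set entirely by the single closest pair of observations, which is exactly why, as the authors argue, it can be over-optimistic for small N.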

Using a combinatorial argument, the authors examine the set of all pairwise time differences Δt_ij = |t_i − t_j| (i < j). The ability to resolve a given frequency depends on how many distinct Δt_ij values are available and how they are distributed. When N is modest, the total number of differences N(N‑1)/2 is limited and many of these differences are duplicated or clustered, which reduces the effective frequency resolution. In this regime the modified Nyquist limit tends to over‑estimate the true recoverable frequency range, sometimes by a factor of two or more.
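The combinatorial point can be made concrete by enumerating the pairwise differences (a NumPy sketch with an invented time grid chosen to show the duplication effect):

```python
import numpy as np

def pairwise_diffs(times):
    """All N*(N-1)/2 pairwise differences |t_i - t_j|, i < j."""
    t = np.asarray(times, dtype=float)
    i, j = np.triu_indices(len(t), k=1)
    return np.abs(t[j] - t[i])

# N = 4 equally spaced points: 6 differences but only 3 distinct values,
# so the effective frequency information is poorer than the raw count suggests.
d = pairwise_diffs([0.0, 1.0, 2.0, 3.0])
print(len(d), sorted(set(d.tolist())))  # 6 [1.0, 2.0, 3.0]
```

A genuinely irregular grid of the same size would give up to 6 distinct differences, which is why irregular sampling is more informative per observation than regular sampling at high frequencies.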

Conversely, when N is large and the observations span a long total time T, the Δt_ij set becomes dense and almost continuous. In that case the smallest interval still controls the absolute highest frequency that could be present, but the large number of distinct differences ensures that the spectral window is narrow and the aliasing structure is well behaved. The authors demonstrate analytically and with Monte‑Carlo simulations that, for sufficiently large N (e.g., N ≥ 200) and a total baseline T ≫ Δt_min, the actual usable Nyquist frequency f_N,real approaches f_N,mod.

The paper validates these theoretical findings with several numerical experiments. Synthetic signals composed of multiple sinusoids plus white noise are sampled with various irregular patterns. The authors apply three standard period‑search tools that are designed for uneven data: the Lomb‑Scargle periodogram, Phase Dispersion Minimization, and a Bayesian spectral inference method. For a case with Δt_min = 0.01 s and N = 20, the modified Nyquist limit predicts a recoverable frequency of 50 Hz, yet the highest reliably detected component is only about 22 Hz. When the same Δt_min is kept but N is increased to 200 and the total observing window is extended to 100 s, frequencies up to ~45 Hz are correctly recovered, confirming that the limit becomes realistic as the sample size grows.
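The flavor of such an experiment can be reproduced with SciPy's Lomb–Scargle implementation (a sketch, not the authors' exact setup; the baseline, sample size, tone frequency, and trial grid are our own choices). A 40 Hz tone is recovered from 200 irregular samples whose mean spacing of ~0.05 s would naively suggest a limit of only ~10 Hz:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 10.0, 200))   # irregular times; mean spacing ~0.05 s
f_true = 40.0                              # well above the mean-rate "Nyquist" ~10 Hz
y = np.sin(2.0 * np.pi * f_true * t)

freqs = np.linspace(0.5, 60.0, 6000)       # trial frequencies in Hz
pgram = lombscargle(t, y, 2.0 * np.pi * freqs)  # lombscargle expects rad/s
f_peak = freqs[np.argmax(pgram)]
print(round(f_peak, 1))                    # close to 40.0
```

Because the sampling is random rather than periodic, the spectral window has no sharp alias peaks and the true frequency dominates the periodogram.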

Real‑world astronomical data are also examined. The authors analyse the light curve of a pulsating variable star observed over a month with irregular cadence (gaps of minutes to several hours) and the time‑of‑arrival series of a radio pulsar. In both cases Δt_min is on the order of 30 s, which would suggest a Nyquist frequency of ~0.016 Hz. Nevertheless, the spectral analysis uncovers significant power up to ~0.1 Hz, revealing high‑frequency modulations that would have been missed using a naïve uniform‑sampling assumption.

From these results the authors draw several practical recommendations. First, observational planning should aim to minimise Δt_min while simultaneously maximising the total baseline T, because the combination of a short minimum spacing and a long coverage yields the most favourable spectral window. Second, analysts must not rely solely on f_N,mod; instead they should compute the distribution of all pairwise time differences and estimate an effective Nyquist frequency that reflects the true information content of the dataset. Third, when N is small, complementary strategies—such as incorporating prior knowledge of the expected frequency range, performing targeted model fitting, or simply acquiring additional observations—are essential to avoid false confidence in high‑frequency detections.
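The second recommendation — inspect the full distribution of spacings rather than trusting Δt_min alone — might look like the following diagnostic (an illustrative sketch only; the paper prescribes no specific recipe, and the statistics reported here are our own choices):

```python
import numpy as np

def spacing_report(times):
    """Summarize the spacing information relevant to judging f_N,mod."""
    t = np.sort(np.asarray(times, dtype=float))
    succ = np.diff(t)                      # successive gaps
    i, j = np.triu_indices(len(t), k=1)
    pair = np.abs(t[j] - t[i])             # all pairwise differences
    return {
        "f_N_mod": 1.0 / (2.0 * succ.min()),      # limit from the smallest gap
        "median_gap": float(np.median(succ)),     # typical cadence
        "n_pairs": len(pair),                     # N*(N-1)/2
        # Few distinct pairwise differences -> f_N_mod likely over-optimistic.
        "n_distinct_pairs": int(len(np.unique(np.round(pair, 9)))),
    }

report = spacing_report([0.0, 0.01, 0.5, 0.52, 1.7, 3.0])
```

Comparing `n_distinct_pairs` with `n_pairs`, and `f_N_mod` with the cadence implied by `median_gap`, gives a quick sanity check on whether the formal limit reflects the real information content of the dataset.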

In summary, the paper clarifies that the conventional Nyquist criterion must be substantially revised for irregularly sampled time series. The modified Nyquist limit based only on the smallest interval can be overly optimistic for small datasets, but it becomes a reliable guide when the number of observations is large and the observing window is sufficiently long. By combining careful experimental design with analysis tools tailored for uneven sampling, researchers can legitimately extend the usable frequency range—what the authors term “time spectroscopy”—far beyond the limits imposed by traditional uniform‑sampling theory. This insight opens the door to detecting subtle, rapid variations in a variety of scientific fields, from astrophysics to geophysics and beyond.

