Deeper detection limits in astronomical imaging using self-supervised spatiotemporal denoising

Notice: This research summary and analysis were automatically generated using AI technology. For the authoritative text, please refer to the original arXiv source.

The detection limit of astronomical imaging observations is set by several noise sources. Some of that noise is correlated between neighbouring image pixels and exposures, so in principle it can be learned and corrected. We present an astronomical self-supervised transformer-based denoising algorithm (ASTERIS) that integrates spatiotemporal information across multiple exposures. Benchmarking on mock data indicates that ASTERIS improves detection limits by 1.0 magnitude at 90% completeness and purity, while preserving the point spread function and photometric accuracy. Observational validation using data from the James Webb Space Telescope (JWST) and the Subaru telescope reveals previously undetectable features, including low-surface-brightness galaxy structures and gravitationally lensed arcs. Applied to deep JWST images, ASTERIS identifies three times more redshift > 9 galaxy candidates, with rest-frame ultraviolet luminosities 1.0 magnitude fainter, than previous methods.


💡 Research Summary

The paper introduces ASTERIS (Astronomical Self‑supervised Transformer‑based Image Denoising), a novel self‑supervised deep‑learning framework that leverages spatiotemporal correlations across multiple exposures to push the detection limits of astronomical imaging beyond what traditional co‑addition can achieve. Building on the Noise2Noise (N2N) concept, ASTERIS requires a set of 16 aligned exposures of the same field, which are split into an input set and a target set (each containing eight exposures). Because the underlying astronomical signal is common to both sets while the noise realizations are independent, the network can be trained without any external clean reference. The loss function combines a mean‑squared error (MSE) between the co‑added input and target stacks with a mean‑absolute error (MAE) computed on a per‑exposure basis, encouraging both global signal recovery and local noise suppression.
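The split-and-compare training objective described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the stack shape `(16, H, W)`, the `model` callable, the random split, and the equal weighting of the two loss terms are all placeholders.

```python
import numpy as np

def n2n_style_loss(exposures, model, rng=None):
    """Sketch of an ASTERIS-style self-supervised objective.

    Assumes `exposures` is a stack of 16 aligned frames of shape
    (16, H, W) and `model` maps an (8, H, W) input half to denoised
    exposures of the same shape. The signal is common to both halves
    while the noise realizations are independent, so no clean
    reference image is needed.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.permutation(len(exposures))
    inp, tgt = exposures[idx[:8]], exposures[idx[8:]]  # disjoint halves
    pred = model(inp)                                  # denoised exposures
    tgt_coadd = tgt.mean(axis=0)
    # Global term: MSE between the co-added prediction and the
    # co-added target stack (encourages signal recovery).
    mse = np.mean((pred.mean(axis=0) - tgt_coadd) ** 2)
    # Local term: MAE on a per-exposure basis against the target
    # co-add (encourages noise suppression in each frame).
    mae = np.mean(np.abs(pred - tgt_coadd))
    return mse + mae
```

With an identity "model" this simply measures the residual noise between the two halves, which is a useful sanity check before training a real network.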

The architecture employs a transformer‑based attention mechanism that processes three‑dimensional voxels (pixel × pixel × exposure) rather than conventional 2‑D convolutions. This design captures long‑range spatial patterns (e.g., PSF wings, structured background) and temporal consistency across exposures simultaneously. During inference, only pixels below a 3σ flux threshold are fed to the network; brighter pixels are temporarily clipped, median‑combined, and later re‑inserted, preserving the original dynamic range while focusing the model’s capacity on the low‑S/N regime where faint source detection matters most.
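The bright-pixel handling at inference time can be sketched as follows. This is a minimal stand-in, assuming a single co-added threshold estimate from the median absolute deviation and an arbitrary `denoiser` callable; the paper's exact masking and re-insertion details may differ.

```python
import numpy as np

def denoise_faint_regime(exposures, denoiser, sigma_clip=3.0):
    """Sketch of the clip / denoise / re-insert inference scheme.

    Pixels above a `sigma_clip` flux threshold (estimated here via a
    robust MAD-based sigma, an assumption) are held out, the faint
    regime is passed to `denoiser`, and the bright pixels are restored
    from a median combine of the original exposures.
    """
    coadd = exposures.mean(axis=0)
    med = np.median(coadd)
    sigma = 1.4826 * np.median(np.abs(coadd - med))  # robust noise estimate
    bright = coadd > med + sigma_clip * sigma
    # Temporarily replace bright pixels with the background median so
    # the network only sees the low-S/N regime.
    masked = np.where(bright, med, coadd)
    denoised = denoiser(masked)
    # Re-insert the bright pixels as a median combine of the exposures,
    # preserving the original dynamic range.
    return np.where(bright, np.median(exposures, axis=0), denoised), bright
```

The key design point mirrored here is that the network's capacity is spent entirely on faint pixels, while saturated or high-S/N sources bypass the model untouched.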

Performance is evaluated on both simulated and real data. Mock tests inject 50 000 synthetic point sources (magnitudes 27.5–31.5) into real JWST NIRCam F115W backgrounds, generating 2000 independent background realizations from 168 available exposures. ASTERIS reduces image histogram width by ~20 % relative to simple co‑addition, improves the 5σ sensitivity by ~0.9 mag over N2N and 1.0 mag over co‑addition, and shifts the 90 % completeness limit by ~1.0 mag. At 90 % purity, the gain is ~1.5 mag versus co‑addition and ~1.0 mag versus N2N, reflecting a substantial reduction in false positives. The combined F‑score (completeness × purity) shows a depth improvement of 1.7 mag over co‑addition and 1.4 mag over N2N. Crucially, Kolmogorov–Smirnov tests confirm that the point‑spread function (PSF) after ASTERIS processing is statistically indistinguishable from the original co‑added PSF (p = 0.9), whereas N2N degrades the PSF (p < 0.05). Photometric accuracy shows no systematic bias, and faint‑source photometric precision is modestly better than co‑addition.
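The completeness and purity metrics quoted above can be computed as sketched below. The source-matching step and variable names are assumptions for illustration; only the definitions (recovered fraction per magnitude bin, fraction of detections matching an injected source, and their product as the combined score) follow the summary.

```python
import numpy as np

def completeness_purity(injected_mags, recovered, detection_is_real, bins):
    """Per-bin completeness, overall purity, and their product.

    `injected_mags`: magnitudes of injected synthetic sources.
    `recovered`: boolean mask, True where an injected source was detected.
    `detection_is_real`: boolean mask over all detections, True where a
    detection matches an injected source (False = false positive).
    `bins`: magnitude bin edges. Matching of detections to injections is
    assumed to have been done upstream.
    """
    completeness = np.array([
        recovered[(injected_mags >= lo) & (injected_mags < hi)].mean()
        if ((injected_mags >= lo) & (injected_mags < hi)).any() else np.nan
        for lo, hi in zip(bins[:-1], bins[1:])
    ])
    purity = detection_is_real.mean()
    return completeness, purity, completeness * purity
```

Sweeping the magnitude bins fainter until completeness (or the combined score) drops below 90% yields the depth limits compared in the text.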

Real‑world validation uses JWST GLIMPSE program data. With only eight exposures, ASTERIS recovers 97 of the 169 sources identified in a deep 168‑exposure stack, whereas standard eight‑exposure co‑addition finds only 50. Low‑surface‑brightness structures—stellar disks, outer spiral arms, and a gravitationally lensed arc—are dramatically clearer: structural similarity (SSIM) improves from 0.37 to 0.67 for a faint galaxy disk and from 0.59 to 0.81 for the lensed arc. Applying the same JWST‑trained model to Subaru/MOIRCS K‑short data initially yields some spurious detections, but retraining on Subaru exposures restores performance, demonstrating that ASTERIS can be transferred across instruments with appropriate fine‑tuning.
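The SSIM figures cited above compare a processed image against a deep reference stack. A minimal whole-image SSIM can be written directly in NumPy, as sketched here; note this omits the sliding Gaussian window of the standard formulation, and the paper's exact SSIM settings are not reproduced.

```python
import numpy as np

def ssim_global(a, b, data_range=None, k1=0.01, k2=0.03):
    """Whole-image (unwindowed) structural similarity index.

    A simplified variant of SSIM using global means, variances, and
    covariance; `data_range` defaults to the larger observed range of
    the two images (an assumption).
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    if data_range is None:
        data_range = max(a.max() - a.min(), b.max() - b.min())
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2)
    )
```

SSIM is bounded above by 1 (identical images); a rise from 0.37 to 0.67 against the deep reference indicates substantially better recovery of faint structure.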

Finally, the method is applied to the search for very high‑redshift galaxies (z ≳ 9). ASTERIS identifies three times more candidate objects than conventional pipelines and reaches rest‑frame UV luminosities ~1 mag fainter, effectively expanding the observable volume for early‑universe studies without additional telescope time.

In summary, ASTERIS exploits transformer‑based self‑supervision to learn and remove correlated noise across multiple exposures, delivering a ~1 mag gain in detection depth, preserving PSF fidelity, and maintaining photometric integrity. Its ability to reveal previously hidden low‑surface‑brightness features and to boost high‑z galaxy searches makes it a powerful new tool for forthcoming deep surveys with JWST, Roman, Euclid, and ground‑based facilities.

