Noise Injection: Improving Out-of-Distribution Generalization for Limited Size Datasets

Reading time: 2 minutes

📝 Original Info

  • Title: Noise Injection: Improving Out-of-Distribution Generalization for Limited Size Datasets
  • ArXiv ID: 2511.03855
  • Date: 2025-11-05
  • Authors: Author information was not provided in the paper metadata. (Based on the GitHub repository URL, the first author may be Duong Mai, but please consult the original paper for the exact author list.)

📝 Abstract

Deep learning (DL) models for image recognition have been shown to fail to generalize to data from different devices, populations, etc. COVID-19 detection from chest X-rays (CXRs), in particular, has been shown to fail to generalize to out-of-distribution (OOD) data from new clinical sources not covered in the training set. This occurs because models learn to exploit shortcuts - source-specific artifacts that do not translate to new distributions - rather than reasonable biomarkers, in order to maximize performance on in-distribution (ID) data. To render the models more robust to distribution shifts, our study investigates the use of fundamental noise injection techniques (Gaussian, Speckle, Poisson, and Salt and Pepper) during training. Our empirical results demonstrate that this technique can significantly reduce the performance gap between ID and OOD evaluation from 0.10-0.20 to 0.01-0.06, based on results averaged over ten random seeds across key metrics such as AUC, F1, accuracy, recall, and specificity. Our source code is publicly available at https://github.com/Duongmai127/Noisy-ood
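The four noise types named in the abstract are standard image-corruption models. The sketch below, in plain NumPy, shows one plausible way to apply them as a training-time augmentation; the parameter values (noise standard deviation, salt-and-pepper fraction, Poisson scaling) are illustrative assumptions, not the paper's settings - the linked GitHub repository is the authoritative implementation.

```python
import numpy as np

def inject_noise(image, kind="gaussian", rng=None):
    """Apply one of four basic noise types to a float image in [0, 1].

    A minimal sketch of training-time noise injection; parameter values
    are illustrative and may differ from the paper's implementation.
    """
    if rng is None:
        rng = np.random.default_rng()
    if kind == "gaussian":
        # Additive zero-mean Gaussian noise.
        noisy = image + rng.normal(0.0, 0.1, image.shape)
    elif kind == "speckle":
        # Multiplicative noise: each pixel scaled by (1 + noise).
        noisy = image * (1.0 + rng.normal(0.0, 0.1, image.shape))
    elif kind == "poisson":
        # Photon-counting noise: sample counts proportional to intensity.
        scale = 255.0
        noisy = rng.poisson(image * scale) / scale
    elif kind == "salt_pepper":
        # Flip a small fraction of pixels to 0 (pepper) or 1 (salt).
        noisy = image.copy()
        mask = rng.random(image.shape)
        noisy[mask < 0.025] = 0.0  # pepper
        noisy[mask > 0.975] = 1.0  # salt
    else:
        raise ValueError(f"unknown noise kind: {kind}")
    return np.clip(noisy, 0.0, 1.0)
```

In a training loop, such a function would typically be applied to each batch (or each sample, with a randomly chosen noise type) before the forward pass, so the model never sees the clean source-specific artifacts it might otherwise exploit as shortcuts.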

Reference

This content is AI-processed based on open access ArXiv data.
