Calibrating Tabular Anomaly Detection via Optimal Transport
Tabular anomaly detection (TAD) remains challenging due to the heterogeneity of tabular data: features lack natural relationships, vary widely in distribution and scale, and exhibit diverse types. Consequently, each TAD method makes implicit assumptions about anomaly patterns that work well on some datasets but fail on others, and no single method consistently outperforms the others across diverse scenarios. We present CTAD (Calibrating Tabular Anomaly Detection), a model-agnostic post-processing framework that enhances any existing TAD detector through sample-specific calibration. Our approach characterizes normal data via two complementary distributions: an empirical distribution obtained by random sampling and a structural distribution formed from K-means centroids. It then measures how adding a test sample disrupts their compatibility using Optimal Transport (OT) distance. Normal samples cause little disruption while anomalies cause high disruption, providing a calibration signal that amplifies detection. We prove that the OT distance has a lower bound proportional to the test sample's distance from the centroids, and establish that anomalies systematically receive higher calibration scores than normal samples in expectation, explaining why the method generalizes across datasets. Extensive experiments on 34 diverse tabular datasets with 7 representative detectors spanning all major TAD categories (density estimation, classification, reconstruction, and isolation-based methods) demonstrate that CTAD consistently improves performance with statistical significance. Remarkably, CTAD enhances even state-of-the-art deep learning methods and shows robust performance across diverse hyperparameter settings, requiring no additional tuning for practical deployment.
💡 Research Summary
The paper tackles the long‑standing problem that no single tabular anomaly detection (TAD) method works well across the wide variety of tabular datasets. Heterogeneity in feature types, scales, and distributions means each detector implicitly assumes a particular anomaly pattern, leading to dataset‑specific performance. Rather than designing a new detector, the authors propose a model‑agnostic post‑processing framework called CTAD (Calibrating Tabular Anomaly Detection) that can be attached to any existing TAD model to improve its scores in a sample‑specific way.
CTAD’s core idea is to represent the normal data distribution from two complementary perspectives. First, an empirical distribution P is built by randomly sampling M normal training points; this captures local variability. Second, a structural distribution Q is obtained by applying K‑means clustering to the training set and treating the K centroids as a coarse representation of the normal manifold. Because both P and Q aim to describe the same underlying normal population, they should be highly compatible when only normal points are involved.
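The two views described above can be sketched as follows. This is a minimal illustration assuming NumPy arrays and SciPy's K-means; the function name `build_views` and the default values of M and K are illustrative choices, not specifics from the paper:

```python
import numpy as np
from scipy.cluster.vq import kmeans

def build_views(X_train, M=32, K=4, rng=None):
    """Build the two complementary views of the normal training data:
    P: the empirical view, M randomly sampled training points;
    Q: the structural view, the K centroids found by K-means."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X_train), size=M, replace=False)
    P = X_train[idx]                          # support of the empirical distribution
    Q, _ = kmeans(X_train.astype(float), K)   # support of the structural distribution
    return P, Q

# Example: 500 normal points in 2-D
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))
P, Q = build_views(X_train, M=32, K=4, rng=0)
print(P.shape, Q.shape)
```

Note that SciPy's `kmeans` may return fewer than K centroids if a cluster becomes empty during iteration, so Q's row count is at most K.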
When a test sample x is added to P, the compatibility between P ∪ {x} and Q may change. This change is quantified using the optimal transport (OT) distance:

Δ(x) = OT(P ∪ {x}, Q) − OT(P, Q),

where OT(·, ·) denotes the optimal transport distance between the two discrete distributions. Normal samples leave the compatibility largely intact (low Δ), whereas anomalies disrupt it (high Δ), so Δ(x) serves as a sample-specific calibration signal that amplifies the base detector's anomaly scores.
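A sketch of this disruption score follows, solving the discrete OT problem between the two uniform distributions as a linear program with SciPy. The function names (`ot_distance`, `disruption`) are illustrative, and the paper's exact cost function and weighting scheme may differ:

```python
import numpy as np
from scipy.cluster.vq import kmeans
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def ot_distance(X, Y):
    """OT distance between uniform discrete distributions supported on the
    rows of X and Y, solved as a linear program over transport plans T."""
    m, k = len(X), len(Y)
    C = cdist(X, Y)                 # pairwise transport costs
    # Equality constraints: each row of T sums to 1/m, each column to 1/k.
    # Variables are T flattened row-major: T[i, j] -> index i*k + j.
    A_eq = np.zeros((m + k, m * k))
    for i in range(m):
        A_eq[i, i * k:(i + 1) * k] = 1.0
    for j in range(k):
        A_eq[m + j, j::k] = 1.0
    b_eq = np.concatenate([np.full(m, 1.0 / m), np.full(k, 1.0 / k)])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun

def disruption(x, P, Q):
    """Change in OT distance when the test sample x is added to P."""
    return ot_distance(np.vstack([P, x]), Q) - ot_distance(P, Q)

# Toy check: a normal-looking point vs. a far-away anomaly
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 2))
P = X_train[rng.choice(300, size=30, replace=False)]
Q, _ = kmeans(X_train, 3)
print(disruption(np.zeros(2), P, Q), disruption(np.array([8.0, 8.0]), P, Q))
```

In this toy setting the far-away point must ship its share of probability mass a long way to reach any centroid, so its disruption score dominates, consistent with the paper's lower bound tying the OT distance to the test sample's distance from the centroids. In practice an entropic OT solver (e.g., Sinkhorn iterations) would replace the exact linear program for speed.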