An Efficient Variant of One-Class SVM with Lifelong Online Learning Guarantees


We study outlier (a.k.a. anomaly) detection for single-pass non-stationary streaming data. In the well-studied offline or batch outlier detection problem, traditional methods such as kernel One-Class SVM (OCSVM) are both computationally heavy and prone to large false-negative (Type II) errors under non-stationarity. To remedy this, we introduce SONAR, an efficient SGD-based OCSVM solver with strongly convex regularization. We show novel theoretical guarantees on the Type I/II errors of SONAR, superior to those known for OCSVM, and further prove that SONAR ensures favorable lifelong learning guarantees under benign distribution shifts. For the more challenging problem of adversarial non-stationary data, we show that SONAR can be used within an ensemble method and equipped with changepoint detection to achieve adaptive guarantees, ensuring small Type I/II errors on each phase of the data. We validate our theoretical findings on synthetic and real-world datasets.


💡 Research Summary

The paper addresses the challenging problem of outlier (anomaly) detection in a single‑pass, non‑stationary streaming setting, where traditional batch kernel One‑Class SVM (OCSVM) is both computationally prohibitive and prone to high false‑negative (Type II) rates under distribution shift. The authors propose SONAR, an SGD‑based solver that leverages Random Fourier Features (RFF) to approximate shift‑invariant kernels (e.g., Gaussian) with a finite‑dimensional linear map. By doing so, they avoid the need for the full Gram matrix and enable unbiased single‑sample stochastic gradients.
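The RFF construction the summary refers to is the standard Rahimi–Recht approximation: sample random frequencies from the spectral density of a shift-invariant kernel, so that an inner product of finite-dimensional cosine features approximates the kernel value. A minimal sketch for the Gaussian kernel (function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def rff_features(X, D=200, gamma=1.0, seed=0):
    """Map rows of X to D-dimensional Random Fourier Features whose
    inner products approximate the Gaussian kernel
    k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density: N(0, 2*gamma*I)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    # Random phase offsets, uniform on [0, 2*pi)
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    # z(x) = sqrt(2/D) * cos(W^T x + b); then z(x)·z(y) ≈ k(x, y)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```

Because the feature map is explicit and finite-dimensional, the learner never forms the n×n Gram matrix, and a stochastic gradient computed on a single sample's features is an unbiased estimate of the population gradient.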

A key theoretical obstacle is that the standard OCSVM objective is not strongly convex in the parameters (w, ρ), preventing standard convergence guarantees for SGD. To overcome this, the authors introduce a strongly convex modification of the OCSVM objective:
F(w, ρ) = ‖w‖² + ρ² − λ·ρ + E[max(0, ρ − ⟨w, z(x)⟩)]
where z(x) is the RFF feature map and the expectation (over the data distribution) is the usual OCSVM hinge term; the added ρ² term makes F strongly convex in (w, ρ).
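Strong convexity lets plain single-sample SGD with an O(1/t) step size converge at the fast rate. A sketch of one such update on the objective above, assuming the schedule η_t = 1/(2t) (an illustrative choice, not necessarily the paper's exact parameters):

```python
import numpy as np

def sgd_step(w, rho, z, lam, t):
    """One single-sample subgradient step on
    F(w, rho) = ||w||^2 + rho^2 - lam*rho + E[max(0, rho - <w, z>)].
    Illustrative sketch; eta_t = 1/(2t) is an assumed schedule."""
    active = (rho - w @ z) > 0                 # hinge term is active
    g_w = 2.0 * w - (z if active else 0.0)     # subgradient in w
    g_rho = 2.0 * rho - lam + (1.0 if active else 0.0)  # subgradient in rho
    eta = 1.0 / (2.0 * t)                      # O(1/t) step from strong convexity
    return w - eta * g_w, rho - eta * g_rho

def is_outlier(w, rho, z):
    """Flag a point as an outlier when its score <w, z> falls below rho."""
    return (w @ z - rho) < 0
```

Each arriving point thus costs one feature map and one O(D) update, which is what makes the method suitable for single-pass streams.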

