Asymmetric conformal prediction with penalized kernel sum-of-squares

Conformal prediction (CP) is a distribution-free method for constructing reliable prediction intervals that has gained significant attention in recent years. Despite its success and various proposed extensions, a significant practical feature that has been overlooked in previous research is the potentially skewed nature of the noise, or of the residuals when the predictive model exhibits bias. In this work, we leverage recent developments in CP to propose a new asymmetric procedure that bridges the gap between skewed and non-skewed noise distributions, while still maintaining adaptivity of the prediction intervals. We introduce a new statistical learning problem to construct adaptive and asymmetric prediction bands, with a unique feature based on a penalty which promotes symmetry: as its intensity varies, the intervals smoothly change from symmetric to asymmetric ones. This learning problem is based on reproducing kernel Hilbert spaces and the recently introduced kernel sum-of-squares framework. First, we establish representer theorems to make our problem tractable in practice, and derive dual formulations which are essential for scalability to larger datasets. Second, the intensity of the penalty is chosen using a novel data-driven method which automatically identifies the symmetric nature of the noise. We show that allowing some asymmetry lets the learned prediction bands better adapt to small-sample regimes or biased predictive models.
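
Schematically, and with notation that is ours rather than the paper's exact objective, the learning problem can be pictured as a data-fit term coupled with a penalty on the gap between the lower and upper band functions,

$$
\min_{f_{\text{low}},\, f_{\text{up}} \,\ge\, 0} \ \widehat{\mathcal{L}}\big(f_{\text{low}}, f_{\text{up}}\big) \;+\; \lambda\, \big\| f_{\text{low}} - f_{\text{up}} \big\|^{2},
$$

so that $\lambda \to \infty$ forces $f_{\text{low}} = f_{\text{up}}$ (symmetric intervals), while $\lambda = 0$ leaves both sides free (fully asymmetric intervals).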


💡 Research Summary

This paper addresses a notable gap in the conformal prediction (CP) literature: the handling of asymmetric noise or biased residuals while preserving the distribution‑free marginal coverage guarantee. Standard split‑CP uses absolute residuals as a score, yielding symmetric prediction intervals that can be overly conservative or under‑cover in the presence of skewed errors. Existing asymmetric extensions either modify the calibration step (e.g., signed scores) or replace the score with quantile‑regression‑based functions (CQR, DCP). Both approaches suffer when the true noise is symmetric, because they discard useful information and often produce wider intervals.
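
To see the trade-off concretely, the following minimal sketch (illustrative only; all data and names are toy stand-ins, not the paper's method) compares standard split-CP on absolute residuals with a simple signed-score variant that calibrates each tail separately at level $\alpha/2$:

```python
# Toy comparison: symmetric split-CP (absolute residuals) versus a simple
# asymmetric variant that calibrates the two signed tails separately.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1  # target miscoverage

def sample(n):
    """Toy data with right-skewed noise around a known mean function."""
    x = rng.uniform(-2, 2, n)
    y = np.sin(x) + rng.exponential(0.5, n) - 0.5  # skewed residuals
    return x, y

x_cal, y_cal = sample(500)
mu = np.sin  # stand-in for the fitted point predictor
res = y_cal - mu(x_cal)
n = len(res)

# Symmetric split-CP: one quantile of |residual|, same width on both sides.
q_sym = np.quantile(np.abs(res), np.ceil((n + 1) * (1 - alpha)) / n)

# Asymmetric variant: each signed tail calibrated at level alpha/2, so
# skewed noise produces unequal half-widths.
q_lo = np.quantile(-res, np.ceil((n + 1) * (1 - alpha / 2)) / n)
q_up = np.quantile(res, np.ceil((n + 1) * (1 - alpha / 2)) / n)

x_test, y_test = sample(2000)
cov_sym = np.mean(np.abs(y_test - mu(x_test)) <= q_sym)
cov_asym = np.mean((y_test >= mu(x_test) - q_lo) & (y_test <= mu(x_test) + q_up))
print(f"symmetric:  width={2 * q_sym:.3f}, coverage={cov_sym:.3f}")
print(f"asymmetric: width={q_lo + q_up:.3f}, coverage={cov_asym:.3f}")
```

On skewed noise the asymmetric variant reallocates width toward the heavy tail; when the noise is actually symmetric, splitting the calibration budget across two tails typically widens the interval, which is exactly the dilemma the paper's symmetry penalty is designed to resolve.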

The authors propose a unified, data-driven framework that learns an asymmetric score function within the kernel sum-of-squares (kSoS) paradigm introduced by Allain et al. (2025). Two non-negative functions, $f_{\text{low}}(x)$ and $f_{\text{up}}(x)$, are modeled as kernel-based quadratic forms, $f_{\text{low}}(x) = \langle \phi_{\text{low}}(x),\, A_{\text{low}}\, \phi_{\text{low}}(x) \rangle$ and similarly for the upper side, where $A_{\text{low}}$ and $A_{\text{up}}$ are positive-semidefinite operators on reproducing kernel Hilbert spaces (RKHSs). The asymmetric score then takes the standard normalized form

$$
s(x, y) \;=\; \max\!\left( \frac{\hat{\mu}(x) - y}{f_{\text{low}}(x)},\; \frac{y - \hat{\mu}(x)}{f_{\text{up}}(x)} \right),
$$

where $\hat{\mu}$ is the fitted point predictor, so that calibrating $s$ on held-out data at the $(1-\alpha)$ level yields the band $[\hat{\mu}(x) - q\, f_{\text{low}}(x),\; \hat{\mu}(x) + q\, f_{\text{up}}(x)]$ with $q$ the calibrated quantile.
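
To make this parameterization concrete, here is a minimal finite-dimensional sketch, assuming a Gaussian (RBF) kernel and a representer-style reduction in which the PSD operator acts on the span of the training features. All names (`rbf_gram`, `f_width`, the shapes, the random `C`) are illustrative stand-ins, not the paper's solver:

```python
# Sketch of the kernel quadratic-form parameterization of one width function:
# f(x) = <phi(x), A phi(x)> with A positive semidefinite, hence f(x) >= 0.
# After a representer-theorem reduction, A acts on the span of the training
# features, so f collapses to k_x^T B k_x for a PSD matrix B.
import numpy as np

def rbf_gram(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_train = rng.uniform(-2.0, 2.0, (50, 1))

# Parameterize B = C C^T so that B is PSD by construction; in a real fit,
# C (or B under a PSD constraint) would be the optimization variable.
C = 0.1 * rng.standard_normal((len(X_train), 5))
B = C @ C.T

def f_width(X_new):
    """Non-negative width f(x) = k_x^T B k_x with k_x = k(x, X_train)."""
    K = rbf_gram(X_new, X_train)               # (m, n) cross-Gram matrix
    return np.einsum("mi,ij,mj->m", K, B, K)   # batched quadratic form

print(f_width(np.array([[0.0], [1.5]])))       # nonnegative by PSD-ness of B
```

The PSD structure guarantees $f(x) \ge 0$ for every $x$, not merely on the training points; the paper's representer theorems and dual formulations play the role of the finite reduction assumed here.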

