Sparse Support Vector Infinite Push
In this paper, we address the problem of embedded feature selection for ranking at the top of the list. We pose this problem as a regularized empirical risk minimization with a $p$-norm push loss function ($p=\infty$) and sparsity-inducing regularizers. We tackle the difficulties of this challenging optimization problem through an alternating direction method of multipliers (ADMM) algorithm built upon the proximal operators of the loss function and the regularizer. Our main technical contribution is thus a numerical scheme for computing the proximal operator of the infinite push loss. Experimental results on toy, DNA microarray, and BCI problems show that our novel algorithm compares favorably to competitors at ranking on top while using fewer variables in the scoring function.
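To make the prox-based ADMM strategy concrete, here is a minimal linearized-ADMM sketch for an objective of the form $\|w\|_1 + C\sum_k \max(0, 1 - (Aw)_k)$, where each row of $A$ is a hypothetical pairwise difference $x_i^+ - x_j^-$. Note the swap: the elementwise hinge prox below is a simple stand-in for the infinite push loss prox, whose numerical scheme is the paper's actual contribution and is not reproduced here; `rho`, `mu`, and `n_iter` are illustrative parameters, not the paper's settings.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_hinge(v, t):
    # Elementwise prox of t * max(0, 1 - v). A stand-in surrogate for the
    # infinite push prox computed in the paper.
    out = v.copy()
    out[(v < 1.0) & (v > 1.0 - t)] = 1.0
    low = v <= 1.0 - t
    out[low] = v[low] + t
    return out

def linearized_admm(A, C=1.0, rho=1.0, n_iter=200):
    """Minimize ||w||_1 + C * sum_k max(0, 1 - (A w)_k) by linearized ADMM.

    Rows of A are pairwise differences x_i^+ - x_j^-, so (A w)_k is the
    score margin of one positive/negative pair. This is a toy surrogate
    for the paper's infinite-push objective, not the paper's algorithm.
    """
    n_pairs, dim = A.shape
    mu = 0.9 / (rho * np.linalg.norm(A, 2) ** 2)  # step size bound for linearization
    w, z, u = np.zeros(dim), np.zeros(n_pairs), np.zeros(n_pairs)
    for _ in range(n_iter):
        # w-step: gradient step on the quadratic coupling, then l1 prox
        grad = rho * A.T @ (A @ w - z + u)
        w = soft_threshold(w - mu * grad, mu)
        # z-step: prox of the (surrogate) loss
        z = prox_hinge(A @ w + u, C / rho)
        # scaled dual update
        u = u + A @ w - z
    return w
```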
💡 Research Summary
This paper tackles the problem of simultaneous feature selection and top‑ranking learning, a setting often encountered when only the highest‑scored items matter (e.g., recommendation, medical diagnosis, brain‑computer interfaces). The authors formulate the task as a regularized empirical risk minimization problem that combines the ∞‑norm push loss (also called infinite push) with sparsity‑inducing regularizers such as the ℓ₁ norm or the ℓ₁/ℓ₂ group‑lasso.
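Both regularizers mentioned above have closed-form proximal operators, which is what makes a prox-based ADMM practical. A minimal NumPy sketch of these standard operators (function names are ours, not from the paper):

```python
import numpy as np

def prox_l1(v, t):
    # prox of t * ||v||_1: componentwise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_group_l1l2(v, t, groups):
    # prox of t * sum_g ||v_g||_2 (l1/l2 group lasso): block soft-thresholding.
    # `groups` is a list of index arrays partitioning the coordinates,
    # so whole groups of features are zeroed out jointly.
    out = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm <= t else (1.0 - t / norm) * v[g]
    return out
```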
The infinite push loss is defined as
\[
R_\infty(f) \;=\; \max_{1 \le j \le n} \; \frac{1}{m} \sum_{i=1}^{m} \mathbb{1}\!\left[\, f(x_i^+) \le f(x_j^-) \,\right],
\]
where $x_1^+,\dots,x_m^+$ denote the $m$ positive examples and $x_1^-,\dots,x_n^-$ the $n$ negative ones. In words, it is the fraction of positives ranked below the highest-scoring negative, so minimizing it pushes negatives away from the very top of the list.
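As a concrete illustration, this empirical risk can be computed directly for a linear scorer $f(x) = w^\top x$. A short NumPy sketch (names are ours, not from the paper):

```python
import numpy as np

def infinite_push_risk(w, X_pos, X_neg):
    """Empirical infinite-push risk of f(x) = w.x: for the worst
    (highest-scoring) negative, the fraction of positives it outranks."""
    s_pos = X_pos @ w  # scores of the m positive examples
    s_neg = X_neg @ w  # scores of the n negative examples
    # fraction of positives scored at or below each negative,
    # then the max over negatives
    frac_below = (s_pos[:, None] <= s_neg[None, :]).mean(axis=0)
    return frac_below.max()
```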