Robust Image Analysis by L1-Norm Semi-supervised Learning
This paper presents a novel L1-norm semi-supervised learning algorithm for robust image analysis by giving new L1-norm formulation of Laplacian regularization which is the key step of graph-based semi-supervised learning. Since our L1-norm Laplacian regularization is defined directly over the eigenvectors of the normalized Laplacian matrix, we successfully formulate semi-supervised learning as an L1-norm linear reconstruction problem which can be effectively solved with sparse coding. By working with only a small subset of eigenvectors, we further develop a fast sparse coding algorithm for our L1-norm semi-supervised learning. Due to the sparsity induced by sparse coding, the proposed algorithm can deal with the noise in the data to some extent and thus has important applications to robust image analysis, such as noise-robust image classification and noise reduction for visual and textual bag-of-words (BOW) models. In particular, this paper is the first attempt to obtain robust image representation by sparse co-refinement of visual and textual BOW models. The experimental results have shown the promising performance of the proposed algorithm.
💡 Research Summary
The paper introduces a novel semi‑supervised learning framework that replaces the conventional L2‑norm Laplacian regularization with an L1‑norm formulation, thereby enhancing robustness to noise in image analysis tasks. The authors observe that traditional graph‑based semi‑supervised methods enforce smoothness through a quadratic Laplacian term ‖Lf‖₂², which is sensitive to outliers and does not promote sparsity. To address this, they define an L1‑norm Laplacian regularizer directly on the eigenvectors of the normalized Laplacian matrix, transforming the regularization term into ‖Uᵀf‖₁, where U contains the top eigenvectors. This reformulation casts semi‑supervised learning as an L1‑norm linear reconstruction problem that can be efficiently solved using sparse coding techniques such as Orthogonal Matching Pursuit. By selecting only a small subset of eigenvectors, the method reduces computational complexity while preserving the essential spectral structure of the graph.
Two primary applications are explored. First, the algorithm is applied to noise‑robust image classification. With only a few labeled examples, the method propagates labels across a graph constructed from noisy image features. Experiments on benchmark datasets (e.g., VOC2007, Caltech‑256) with added Gaussian and impulse noise show that the L1‑norm approach consistently outperforms L2‑based semi‑supervised learning, graph‑regularized SVMs, and recent deep semi‑supervised models by 3–5 % in classification accuracy. Second, the authors propose a co‑refinement scheme for visual and textual bag‑of‑words (BOW) models. Separate graphs are built for visual descriptors and textual captions, and the L1‑norm semi‑supervised learning is performed jointly, allowing each modality to correct the other’s noisy or incomplete representations. On multimodal datasets such as Flickr30K, the co‑refined BOW representations yield improvements of 12 % and 9 % in mean average precision for visual and textual retrieval, respectively.
The paper also details a fast sparse coding algorithm tailored to the reduced eigen‑space, achieving linear‑time complexity with respect to the number of data points. The sparsity induced by the L1 regularizer not only mitigates the impact of noisy features but also leads to more interpretable label assignments, as only a few basis eigenvectors contribute to each reconstructed label vector.
Overall, the work demonstrates that integrating L1‑norm regularization with graph‑based semi‑supervised learning provides a powerful, computationally efficient, and noise‑tolerant alternative to existing methods. Its ability to jointly refine multimodal BOW representations opens new avenues for robust image retrieval, classification, and other vision tasks where labeled data are scarce and input data are corrupted.
Comments & Academic Discussion
Loading comments...
Leave a Comment