Recklessly Approximate Sparse Coding
It has recently been observed that certain extremely simple feature encoding techniques are able to achieve state of the art performance on several standard image classification benchmarks, outperforming more elaborate models including deep belief networks, convolutional nets, factored RBMs, mcRBMs, convolutional RBMs, sparse autoencoders and several others. Moreover, these “triangle” or “soft threshold” encodings are extremely efficient to compute. Several intuitive arguments have been put forward to explain this remarkable performance, yet no mathematical justification has been offered. The main result of this report is to show that these features are realized as an approximate solution to a non-negative sparse coding problem. Using this connection we describe several variants of the soft threshold features and demonstrate their effectiveness on two image classification benchmark tasks.
💡 Research Summary
The paper “Recklessly Approximate Sparse Coding” investigates why extremely simple feature‑encoding schemes—most notably the “triangle” and “soft‑threshold” encodings—have been able to match or surpass state‑of‑the‑art performance on a variety of image‑classification benchmarks, despite their computational thrift. The authors provide the first rigorous mathematical justification: these encodings are in fact approximate solutions to a non‑negative sparse coding (NNSC) problem.
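For concreteness, the two encoders discussed in the summary can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the triangle encoding follows the standard distance-to-atom definition from the single-layer-networks literature, and the threshold `lam` in the soft-threshold encoder is a free hyperparameter.

```python
import numpy as np

def soft_threshold_encode(x, D, lam):
    """Soft-threshold encoding: rectified, thresholded dictionary projections.

    x   : (d,) input vector
    D   : (d, K) dictionary with atoms as columns
    lam : scalar threshold (hyperparameter)
    Returns a (K,) non-negative code.
    """
    return np.maximum(0.0, D.T @ x - lam)

def triangle_encode(x, D):
    """'Triangle' encoding: each atom's activation is how far below the
    mean distance that atom lies; atoms farther than the mean get zero."""
    z = np.linalg.norm(D - x[:, None], axis=0)  # distance from x to each atom
    return np.maximum(0.0, z.mean() - z)
```

Both encodings are non-iterative (a single matrix product or distance computation), which is the "computational thrift" the summary refers to.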
Problem formulation.
Given an input vector $x \in \mathbb{R}^d$ and a dictionary $D \in \mathbb{R}^{d \times K}$ whose columns are the dictionary atoms, the NNSC objective seeks a non-negative code $s \in \mathbb{R}^K$ minimizing a reconstruction error plus an $\ell_1$ sparsity penalty:

$$\min_{s \ge 0} \; \tfrac{1}{2}\lVert x - D s \rVert_2^2 + \lambda \lVert s \rVert_1$$
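The connection to the soft-threshold features can be illustrated numerically: one step of proximal gradient descent (ISTA) on this NNSC objective, started from $s = 0$ with unit step size, lands exactly on $\max(0, D^\top x - \lambda)$. The sketch below is illustrative only; the step size and iteration count are demonstration choices, not settings from the paper.

```python
import numpy as np

def nnsc_prox_grad(x, D, lam, step=1.0, n_iter=50):
    """Proximal gradient (ISTA) for non-negative sparse coding:

        min_{s >= 0}  0.5 * ||x - D s||^2 + lam * ||s||_1

    The prox operator of the non-negative l1 penalty is a one-sided
    soft threshold, so each iteration is a gradient step followed by
    shrink-and-rectify.
    """
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ s - x)                 # gradient of the quadratic term
        s = np.maximum(0.0, s - step * (grad + lam))  # prox of non-negative l1
    return s
```

With `n_iter=1` and `step=1.0` the update from zero reduces to `np.maximum(0, D.T @ x - lam)`, i.e. the soft-threshold encoding; running more iterations (with a suitably small step) refines this toward the true NNSC solution.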