Few-Shot Learning with Surrogate Gradient Descent on a Neuromorphic Processor
📝 Original Paper Info
- Title: On-chip Few-shot Learning with Surrogate Gradient Descent on a Neuromorphic Processor
- ArXiv ID: 1910.04972
- Date: 2019-11-06
- Authors: Kenneth Stewart, Garrick Orchard, Sumit Bam Shrestha, Emre Neftci
📝 Abstract
Recent work suggests that synaptic plasticity dynamics in biological models of neurons and neuromorphic hardware are compatible with gradient-based learning (Neftci et al., 2019). Gradient-based learning requires iterating several times over a dataset, which is both time-consuming and constrains the training samples to be independently and identically distributed. This is incompatible with learning systems that do not have boundaries between training and inference, such as in neuromorphic hardware. One approach to overcome these constraints is transfer learning, where a portion of the network is pre-trained and mapped into hardware and the remaining portion is trained online. Transfer learning has the advantage that pre-training can be accelerated offline if the task domain is known, and few samples of each class are sufficient for learning the target task at reasonable accuracies. Here, we demonstrate on-line surrogate gradient few-shot learning on Intel's Loihi neuromorphic research processor using features pre-trained with spike-based gradient backpropagation-through-time. Our experimental results show that the Loihi chip can learn gestures online using a small number of shots and achieve results that are comparable to the models simulated on a conventional processor.
💡 Summary & Analysis
This paper explores the integration of gradient-based learning with neuromorphic hardware to perform few-shot learning on Intel's Loihi processor. The problem it addresses is that traditional gradient-based learning iterates over a dataset many times and requires the training samples to be independently and identically distributed, which is incompatible with systems such as neuromorphic hardware that have no clear boundary between training and inference. To overcome this, the authors use transfer learning: a portion of the network is pre-trained offline and mapped into hardware, while the remaining portion continues to learn online. The key idea is to train the feature-extraction layers offline with spike-based gradient backpropagation-through-time and then transfer them to Loihi, where the final layer is trained online. Experiments show that Loihi learns gestures from a small number of shots with accuracy comparable to the same models simulated on a conventional processor, indicating that neuromorphic hardware can support fast, efficient online learning for real-time data processing applications.
📄 Full Paper Content (ArXiv Source)
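The two ingredients of the scheme described above, a frozen pre-trained feature extractor and a readout trained online through a non-differentiable spiking threshold via a surrogate gradient, can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's Loihi implementation: the frozen extractor is a random rectified projection standing in for the network pre-trained with spike-based BPTT, the "gesture" data are synthetic clusters, and the fast-sigmoid surrogate and learning rate are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen stand-in for the pre-trained spiking feature extractor
# (in the paper this part is trained offline with spike-based BPTT
# and then fixed in hardware; here it is a random rectified projection).
W_feat = rng.normal(size=(8, 16)) / 4.0

def features(x):
    return np.maximum(W_feat @ x, 0.0)  # never updated

def spikes(v, theta=1.0):
    # Forward pass: hard (non-differentiable) threshold.
    return (v >= theta).astype(float)

def surrogate_grad(v, theta=1.0, beta=10.0):
    # Backward pass: derivative of a fast sigmoid stands in for the
    # Heaviside's zero-almost-everywhere derivative.
    return beta / (beta * np.abs(v - theta) + 1.0) ** 2

# Synthetic few-shot "gesture" data: 2 classes, 5 shots each.
protos = rng.normal(size=(2, 16))
X = np.concatenate([protos[c] + 0.1 * rng.normal(size=(5, 16)) for c in range(2)])
y = np.array([0] * 5 + [1] * 5)

# Online training of the readout only, via the surrogate gradient:
# updates occur only while the spike output disagrees with the target.
W_out = np.zeros((2, 8))
lr = 0.1
for _ in range(200):
    for xi, yi in zip(X, y):
        h = features(xi)
        v = W_out @ h                      # membrane potentials, one per class
        err = spikes(v) - np.eye(2)[yi]    # spike output vs. one-hot target
        W_out -= lr * np.outer(err * surrogate_grad(v), h)

preds = np.array([np.argmax(W_out @ features(xi)) for xi in X])
```

Only `W_out` is ever updated, mirroring the paper's split between offline pre-training and on-chip online learning; the surrogate gradient concentrates updates on samples whose membrane potential sits near the firing threshold.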
📊 Figures
