An Internal Clock Based Space-time Neural Network for Motion Speed Recognition

Reading time: 5 minutes

📝 Original Info

  • Title: An Internal Clock Based Space-time Neural Network for Motion Speed Recognition
  • ArXiv ID: 2001.10159
  • Date: 2020-01-29
  • Authors: Author information was not provided in the original source.

📝 Abstract

In this work we present a novel internal-clock-based space-time neural network for motion speed recognition. The developed system has a spike train encoder, a Spiking Neural Network (SNN) with internal clocking behaviors, a pattern transformation block and a Network Dynamic Dependent Plasticity (NDDP) learning block. The core principle is that the developed SNN automatically tunes its network pattern frequency (internal clock frequency) to recognize human motions in the speed domain. We employed both cartoon and real-world videos as training benchmarks; results demonstrate that our system can recognize not only motions with considerable speed differences (e.g. run, walk, jump, wonder (think) and standstill), but also motions with subtle speed gaps such as run and fast walk. The inference accuracy reaches up to 83.3% (cartoon videos) and 75% (real-world videos). Meanwhile, the system requires only six video datasets in the learning stage and at most 42 training trials. Hardware performance estimation indicates a training time of 0.84-4.35 s and a power consumption of 33.26-201 mW (based on an ARM Cortex-M4 processor). Therefore, our system offers the unique learning advantages of a small dataset requirement, quick learning and low power consumption, and shows great potential for edge or scalable AI-based applications.

💡 Deep Analysis

Figure 1

📄 Full Content

Nowadays Artificial Neural Networks (ANNs) [1] have achieved huge successes and become one of the key factors leading to the next generation industrial revolution. ANNs are game-changing players in industrial fields such as face recognition [2], auto-driving and natural language processing [3]. The field progresses rapidly, but meanwhile it suffers from several main constraints, such as the requirement for large amounts of training data, low fault tolerance and the absence of cognitive computing functions [4]. This is fundamentally different from how our brains process information [5], and these issues are not solved yet. Therefore, a small portion of researchers follow another path to overcome this dilemma: Spiking Neural Networks (SNNs) have come of age [6], using temporal-spatial processing and event-driven mechanisms [7][8]. The core principle of SNNs is to replicate fascinating brain computing behaviours [9][10]: ultra-low power consumption, self-learning and strong fault tolerance. Unfortunately, up to now there is still a considerable gap between ANNs and SNNs at the application level. Based on our limited knowledge, we summarize several issues below:

The mainstream SNN training algorithms such as spike-timing-dependent plasticity (STDP) are widely used in the neuromorphic computing field. For example, ODIN [11] develops a 10-neuron SNN and employs the SDSP learning algorithm for MNIST dataset testing; the system demonstrates its capability with 84.5% classification accuracy. Meanwhile, [12][13][14] show similar results using SNNs with STDP-based learning algorithms. However, STDP is a local training algorithm, which strongly limits its applications. Also, a large number of groups investigate SNN-based backpropagation or gradient descent algorithms similar to the ANN training framework [15][16]. However, these kinds of algorithms seem feeble and do not fit the natural computing features of SNNs.
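The locality of STDP mentioned above comes from the fact that each weight update depends only on the relative timing of the pre- and postsynaptic spikes across that single synapse. A minimal sketch of the textbook pair-based exponential STDP rule (not the specific variant used in the cited works; the constants `a_plus`, `a_minus` and `tau` are illustrative assumptions):

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: strengthen the synapse when the presynaptic spike
    precedes the postsynaptic one, weaken it otherwise. Times in ms."""
    dt = t_post - t_pre
    if dt > 0:                           # pre before post -> potentiation
        dw = a_plus * np.exp(-dt / tau)
    else:                                # post before pre -> depression
        dw = -a_minus * np.exp(dt / tau)
    return float(np.clip(w + dw, 0.0, 1.0))  # keep weight in [0, 1]
```

Because `dw` depends only on one spike pair at one synapse, no global error signal propagates through the network, which is exactly why STDP alone struggles with tasks that need network-wide credit assignment.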

Simulation of brain computing can start either from a highly bio-plausible level, the Hodgkin-Huxley neuron model [17], or from a highly mathematical level, the leaky integrate-and-fire neuron model [18].
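The leaky integrate-and-fire model sits at the mathematical end of that spectrum: the membrane potential leaks toward rest, integrates input current, and emits a spike with a reset when it crosses a threshold. A minimal discrete-time sketch (all parameters are illustrative, not taken from the paper):

```python
def lif_simulate(input_current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """Leaky integrate-and-fire neuron: returns a 0/1 spike train,
    one entry per input sample."""
    v = v_rest
    spikes = []
    for i in input_current:
        # Euler step: leak toward v_rest, integrate the input current
        v += dt * (-(v - v_rest) + i) / tau
        if v >= v_thresh:      # threshold crossing -> spike and reset
            spikes.append(1)
            v = v_rest
        else:
            spikes.append(0)
    return spikes
```

With this update rule a constant input below the threshold asymptote never fires, while a sufficiently strong input fires periodically, which is what makes spike rate a usable code for input intensity.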

Similarly at the network level, modelling a small neural network can exhibit plasticity, adaptation and compensation [19][20][21], while formulating a large-scale network (100,000 neurons) takes advantage of cognitive computing features [22][23]. It remains unclear at which level a neuromorphic system should learn from the brain. The obvious reason is that the brain is not fully understood yet [24]; more importantly, neuromorphic engineers often do not recognize this point when they develop systems. As a result, the developed systems do not reflect SNN computing features properly.

Currently the neuromorphic computing field is largely focused on hardware architecture design, such as Neurogrid [25], TrueNorth [26] and neural processors [27]. They have all made significant contributions to this field and demonstrate the capability to simulate either a million neurons or complicated ion channel mechanisms in real time. One potential risk of this bottom-up approach is that emerging algorithms may not fit well into the developed hardware, resulting in no killer applications. The algorithm, software, hardware and application should all be taken into account when designing a neuromorphic computing system.

Therefore, by considering the factors above and inspired by the biological cerebellum Passage-of-Time (POT) mechanism [28][29], we propose a novel SNN-based learning system for speed recognition. As shown in Figure 1, the system consists of a spike train encoder, an internal clock based SNN, a pattern transformation block and a Network-Dynamic Dependent Plasticity (NDDP) learning block. The main principle is that motion speed can be differentiated via the timing information of a trained SNN's internal clock. By applying both cartoon and real-world videos, results demonstrate that, under a constrained hardware resource environment, the proposed system can recognize not only motions with considerable speed differences (e.g. run, walk, jump, wonder and standstill) but also motions with subtle speed gaps such as slow run and fast walk. Therefore, the key contributions are as follows:

  • Application level: the proposed system can be applied in IoT fields for speed recognition due to its ultra-low power consumption (33.26 mW), short latency (0.84 s) and usage of limited hardware resources (it can be implemented on a typical ARM Cortex-M4 controller).

This will enable system learning capabilities on edge or end devices.
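The paper does not detail how its spike train encoder converts video into spikes, but a common choice for motion input is rate coding, where stronger frame-to-frame change produces denser spike trains. A hypothetical sketch under that assumption (`encode_motion`, the normalized `frame_diffs` input and all parameters are illustrative, not the paper's design):

```python
import numpy as np

def encode_motion(frame_diffs, n_steps=20, seed=0):
    """Hypothetical rate encoder: maps per-frame motion magnitudes in [0, 1]
    to Bernoulli spike trains whose firing rate grows with motion speed.
    Returns an array of shape (n_frames, n_steps) of 0/1 spikes."""
    rng = np.random.default_rng(seed)
    rates = np.clip(np.asarray(frame_diffs, dtype=float), 0.0, 1.0)
    # one spike train (row) per frame; faster motion -> denser spikes
    return (rng.random((len(rates), n_steps)) < rates[:, None]).astype(int)
```

Under this scheme a fast motion such as running would yield high-rate trains and a standstill would yield near-silence, giving the downstream SNN a speed-dependent input to clock against.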

An internal clock based SNN learning system has three stages for training and learning: 1) information translation: the input motion videos are transformed into spike trains via a spike train encoder; 2) training: given pre-defined learning signals, the SNN modifies its global dynamic pattern frequency (internal clock frequency) via NDDP learning rules to minimize errors (cost function); 3
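The training stage above can be pictured as a global frequency-matching loop. A conceptual sketch, not the paper's exact NDDP rule: the internal clock frequency is nudged toward the frequency implied by the pre-defined learning signal until the error falls within tolerance (the names `nddp_tune`, `target_hz` and the learning rate are assumptions; only the 42-trial budget comes from the paper):

```python
def nddp_tune(clock_hz, target_hz, lr=0.2, tol=0.05, max_trials=42):
    """Conceptual NDDP-style training loop: adjust the SNN's global
    internal clock frequency to minimize the mismatch (cost) against
    the frequency given by a pre-defined learning signal."""
    for trial in range(1, max_trials + 1):
        error = target_hz - clock_hz       # cost: frequency mismatch
        if abs(error) <= tol:
            return clock_hz, trial         # converged within tolerance
        clock_hz += lr * error             # global update, no per-synapse credit
    return clock_hz, max_trials
```

The key contrast with STDP is that the update here acts on one network-wide quantity (the clock frequency) rather than on individual synapses, which is consistent with the small number of training trials the paper reports.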


This content is AI-processed based on open access ArXiv data.
