Reservoir Computing & Extreme Learning Machines using Pairs of Cellular Automata Rules
Nathan McDonald
Air Force Research Laboratory/ Information Directorate
Rome, NY, USA
Nathan.McDonald.5@us.af.mil
Abstract— A framework for implementing reservoir computing (RC) and extreme learning machines (ELMs), two types of artificial neural networks, based on 1D elementary Cellular Automata (CA) is presented, in which two separate CA rules explicitly implement the minimum computational requirements of the reservoir layer: hyperdimensional projection and short-term memory. CAs are cell-based state machines, which evolve in time in accordance with local rules based on a cell's current state and those of its neighbors. Notably, simple single-cell shift rules as the memory rule in a fixed-edge CA afforded reasonable success in conjunction with a variety of projection rules, potentially reducing the optimal solution search space significantly. Optimal iteration counts for the CA rule pairs can be estimated for some tasks based upon the category of the projection rule. Initial results support future hardware realization, where CAs potentially afford orders-of-magnitude reductions in size, weight, and power (SWaP) requirements compared with floating-point RC implementations.
Keywords— reservoir computing (RC), cellular automata (CA), extreme learning machine (ELM), cellular automata based reservoirs (ReCA)
I. INTRODUCTION
Reservoir computing (RC) is a relatively recent addition to the field of artificial neural networks (ANNs). The dynamical behavior of an RC network makes it well suited to time-dependent data analysis, which arises in many machine learning tasks. Unlike typical ANNs, which require iterative training of the synaptic connections between all neurons/nodes in the network, RC works with arbitrarily, sparsely, and statically connected hidden-layer neurons called a reservoir [1-2]. Only the output neurons' weights are trained to be application specific, and these weights are calculated once via matrix inversion instead of recursive incremental changes. The rest of the neural connections remain static for the duration of the network. The mathematical requirements for this reservoir layer are a) high-dimensional projection and b) fading memory [1]. Dynamical systems possessing these characteristics are said to operate at "the edge of chaos." That said, hyperdimensional projection is a powerful computational tool in itself and is used by extreme learning machines (ELMs) in a manner similar to RC's reservoir layer, but without the short-term memory component [3].
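The one-shot output training described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes reservoir states are collected row-wise into a matrix R and solves for the readout weights in a single regularized least-squares step (ridge regression, a common stand-in for plain matrix inversion); all names and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_readout(R, Y, ridge=1e-6):
    """Solve for W_out minimizing ||R @ W - Y||^2 + ridge * ||W||^2 in one shot.

    R: (time steps, reservoir size) matrix of collected reservoir states.
    Y: (time steps, outputs) matrix of target values.
    """
    n = R.shape[1]
    # Regularized normal equations; the ridge term keeps the solve well-posed.
    return np.linalg.solve(R.T @ R + ridge * np.eye(n), R.T @ Y)

# Toy example: 100 time steps of a 20-dimensional reservoir state,
# fit to a target that is exactly linear in the reservoir state.
R = rng.standard_normal((100, 20))
Y = R @ rng.standard_normal((20, 1))
W_out = train_readout(R, Y)
```

Note that only `W_out` is task-specific; the reservoir that produced `R` is never modified, which is the property that makes fixed hardware reservoirs viable.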
The short list of requirements for a reservoir layer has encouraged research into novel hardware implementations previously unrealistic for other neural network designs, including a bucket of water [4], electronic circuits [5], optics [6, 7], and carbon nanotubes [8]. By exploiting the physics of the hardware reservoir layer itself, the network drastically reduces the many floating-point matrix multiplications typically required for ANNs. This makes RC attractive for hardware implementation on size, weight, and power (SWaP) constrained platforms [10].
Interestingly, even networks of Boolean logic gates can demonstrate dynamical behavior. Random Boolean networks (RBNs) are networks of N random Boolean logic functions of K inputs each, allowing for recursive and non-local connections [9]. Though each node can take only one of two possible states, {0,1}, "edge of chaos" behavior can typically be seen in RBNs with K = 2, though such dynamics may be found for other K values [9].
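The RBN definition above (N nodes, each driven by K randomly chosen inputs through a random K-input Boolean function) can be sketched directly; this is an illustrative toy, not from the paper, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 16, 2  # K = 2 is the regime where "edge of chaos" behavior is typical

# Each node reads K randomly chosen nodes (recursive/non-local wiring allowed)
inputs = rng.integers(0, N, size=(N, K))
# Each node gets a random Boolean function, stored as a 2^K-entry truth table
tables = rng.integers(0, 2, size=(N, 2 ** K))

def rbn_step(state):
    """Update all N nodes synchronously by truth-table lookup."""
    # Pack each node's K input bits into an index into its truth table.
    idx = np.zeros(N, dtype=int)
    for k in range(K):
        idx = (idx << 1) | state[inputs[:, k]]
    return tables[np.arange(N), idx]

state = rng.integers(0, 2, size=N)
state = rbn_step(state)
```

Because wiring and truth tables are fixed at construction, iterating `rbn_step` traces a deterministic trajectory through the 2^N possible states, whose character (ordered, chaotic, or critical) depends on K and the chosen functions.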
Cellular automata (CA), a special class of RBNs, are attractive as hardware reservoirs because, unlike RBNs generally, CAs follow a homogeneous rule for state transitions based on local interactions between a cell and its immediate neighbors. In particular, one-dimensional (1D) CAs, also known as elementary cellular automata (ECA), have only two neighboring cells, the left and right cells (K = 3); however, these simple local interactions are sufficient to demonstrate rich dynamical behavior [14, 15].
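An ECA update is a lookup keyed by the 3-cell (left, self, right) neighborhood, with the 8-bit rule number serving as the truth table. The sketch below, a hypothetical illustration rather than the paper's implementation, uses Rule 110 (a standard example of an ECA with rich dynamics) and periodic boundaries for simplicity; the paper itself also considers fixed-edge CAs.

```python
import numpy as np

def eca_step(state, rule=110):
    """Evolve a 1D binary state one step under the given ECA rule number."""
    left = np.roll(state, 1)    # periodic (wrap-around) boundary
    right = np.roll(state, -1)
    # Encode each (left, self, right) neighborhood as an integer 0..7.
    neighborhood = (left << 2) | (state << 1) | right
    # Bit i of the rule number is the next state for neighborhood pattern i.
    return (rule >> neighborhood) & 1

state = np.zeros(11, dtype=np.uint8)
state[5] = 1  # single seed cell
state = eca_step(state)
```

For Rule 110, the neighborhoods 001 and 010 map to 1 while 100 maps to 0, so a single seed cell grows leftward: after one step, cells 4 and 5 are on.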
CAs have only recently been considered for RC and ELMs. A 1D CA based reservoir (ReCA) was first presented in [10-12]. A binary input is randomly projected into a binary vector space and evolved according to a CA rule. The CA state vector is then combined with the next input to create the recurrent connectivity. The entire history of the CA reservoir is used in computing the network output. Important design features included the use of zero buffer vectors at either end of the binary input vector, the use of multiple initial random projections, and the vectorization of the CA reservoir for the purposes of calculating the output weights. Demonstrated applications concerned pathological sequence learning tasks [11] and connectionist-symbolic machine intelligence [10, 12], for which the input data were