Towards Neural Co-Processors for the Brain: Combining Decoding and Encoding in Brain-Computer Interfaces

Reading time: 5 minutes
...

📝 Original Info

  • Title: Towards Neural Co-Processors for the Brain: Combining Decoding and Encoding in Brain-Computer Interfaces
  • ArXiv ID: 1811.11876
  • Date: 2018-12-31
  • Authors: Author1, Author2, Author3, …

📝 Abstract

The field of brain-computer interfaces is poised to advance from the traditional goal of controlling prosthetic devices using brain signals to combining neural decoding and encoding within a single neuroprosthetic device. Such a device acts as a "co-processor" for the brain, with applications ranging from inducing Hebbian plasticity for rehabilitation after brain injury to reanimating paralyzed limbs and enhancing memory. We review recent progress in simultaneous decoding and encoding for closed-loop control and plasticity induction. To address the challenge of multi-channel decoding and encoding, we introduce a unifying framework for developing brain co-processors based on artificial neural networks and deep learning. These "neural co-processors" can be used to jointly optimize cost functions with the nervous system to achieve desired behaviors ranging from targeted neuro-rehabilitation to augmentation of brain function.

💡 Deep Analysis

Figure 1

📄 Full Content

A brain-computer interface (BCI) [1,2,3,4] is a device that can (a) allow signals from the brain to be used to control devices such as prosthetics, cursors or robots, and (b) allow external signals to be delivered to the brain through neural stimulation. The field of BCIs has made enormous strides in the past two decades. The genesis of the field can be traced to early efforts in the 1960s by neuroscientists such as Eb Fetz [5] who studied operant conditioning in monkeys by training them to control the movement of a needle in an analog meter by modulating the firing rate of a neuron in their motor cortex. Others such as Delgado and Vidal explored techniques for neural decoding and stimulation in early versions of neural interfaces [6,7]. After a promising start, there was a surprising lull in the field until the 1990s when, spurred by the advent of multi-electrode recordings as well as fast and cheap computers, the field saw a resurgence under the banner of brain-computer interfaces (BCIs; also known as brain-machine interfaces and neural interfaces) [1,2].

A major factor in the rise of BCIs has been the application of increasingly sophisticated machine learning techniques for decoding neural activity for controlling prosthetic arms [8,9,10], cursors [11,12,13,14,15,16], spellers [17,18] and robots [19,20,21,22]. Simultaneously, researchers have explored how information can be biomimetically or artificially encoded and delivered via stimulation to neuronal networks in the brain and other regions of the nervous system for auditory [23], visual [24], proprioceptive [25], and tactile [26,27,28,29,30] perception.

Building on these advances in neural decoding and encoding, researchers have begun to explore bi-directional BCIs (BBCIs) which integrate decoding and encoding in a single system. In this article, we review how BBCIs can be used for closed-loop control of prosthetic devices, reanimation of paralyzed limbs, restoration of sensorimotor and cognitive function, neuro-rehabilitation, enhancement of memory, and brain augmentation. Motivated by this recent progress, we propose a new unifying framework for combining decoding and encoding based on “neural co-processors” which rely on artificial neural networks and deep learning. We show that these “neural co-processors” can be used to jointly optimize cost functions with the nervous system to achieve goals such as targeted rehabilitation and augmentation of brain function, besides providing a new tool for testing computational models and understanding brain function [31].
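The co-processor framework described above can be caricatured in code. The sketch below is purely illustrative, not the authors' implementation: the paper envisions deep networks trained jointly with the nervous system, while here the network is a tiny untrained feedforward map and all dimensions and names are invented assumptions. It shows only the basic signal path: recorded activity in, stimulation parameters out, evaluated against a task-level cost function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64 recording channels, 32 stimulation channels.
N_REC, N_HID, N_STIM = 64, 16, 32

# Co-processor: a small feedforward network mapping recorded neural
# activity to stimulation parameters (weights here are untrained).
W1 = rng.normal(0, 0.1, (N_HID, N_REC))
W2 = rng.normal(0, 0.1, (N_STIM, N_HID))

def co_processor(recorded_activity):
    """Map a vector of recorded firing rates to stimulation amplitudes."""
    hidden = np.tanh(W1 @ recorded_activity)
    # Rectified output: stimulation intensities must be nonnegative
    return np.maximum(0.0, W2 @ hidden)

def task_cost(achieved_state, target_state):
    """Task-level cost to be jointly minimized with the nervous system,
    e.g. distance between achieved and intended limb state."""
    return float(np.sum((achieved_state - target_state) ** 2))
```

In the full framework, the gradient of such a cost would be backpropagated through a learned model of the brain's response to stimulation in order to train the co-processor's weights.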

Closed-Loop Prosthetic Control

Consider the problem of controlling a prosthetic hand using brain signals. This involves (1) using recorded neural responses to control the hand, (2) stimulating somatosensory neurons to provide tactile and proprioceptive feedback, and (3) ensuring that stimulation artifacts do not corrupt the recorded signals used to control the hand. Several artifact-reduction methods have been proposed for (3); we refer the reader to [32,33,34]. We focus here on combining (1) decoding with (2) encoding.
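The specific artifact-reduction methods of [32,33,34] are beyond the scope of a short sketch, but one simple, generic approach to (3) can be illustrated: blanking the recorded signal for a brief window after each stimulation pulse so that decoding is not driven by artifacts. The window length and sampling rate below are illustrative assumptions, and this is not presented as the cited methods.

```python
import numpy as np

def blank_artifacts(signal, stim_onsets, fs, blank_ms=2.0):
    """Zero out (blank) `blank_ms` of the recorded signal after each
    stimulation pulse onset, so the decoder never sees artifact samples.

    signal      : 1-D array of recorded samples
    stim_onsets : stimulation pulse onset times in seconds
    fs          : sampling rate in Hz
    """
    cleaned = signal.copy()
    n_blank = int(fs * blank_ms / 1000.0)
    for onset in stim_onsets:
        i = int(onset * fs)
        cleaned[i:i + n_blank] = 0.0
    return cleaned
```

More sophisticated schemes interpolate across the blanked window or subtract a template of the artifact rather than discarding samples outright.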

Most state-of-the-art decoding algorithms for intracortical BCIs are based on a linear decoder such as the Kalman filter. Typically, the state vector x for the Kalman filter is chosen to be a vector of kinematic quantities to be estimated, such as hand position, velocity, and acceleration. The likelihood (or measurement) model for the Kalman filter specifies how the kinematic vector x_t at time t relates linearly (via a matrix B) to the measured neural activity vector y_t:

y_t = B x_t + m_t

while a dynamics model specifies how x_t changes linearly (via a matrix A) over time:

x_t = A x_{t-1} + n_t

Here n_t and m_t are zero-mean Gaussian noise processes. The Kalman filter computes the optimal estimate of the kinematics x_t (both mean and covariance) given the current and all past neural measurements.
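As a concrete sketch (not any particular lab's implementation), the predict/update recursion for such a linear decoder can be written in a few lines of NumPy. Matrix names follow the text: A is the dynamics model, B the measurement model; the dimensions, noise covariances, and example data below are illustrative assumptions.

```python
import numpy as np

def kalman_decode(Y, A, B, Q, R, x0, P0):
    """Run a Kalman filter over neural measurements Y (T x n_channels),
    returning decoded kinematic states (T x n_states).

    Dynamics:    x_t = A x_{t-1} + n_t,  n_t ~ N(0, Q)
    Measurement: y_t = B x_t     + m_t,  m_t ~ N(0, R)
    """
    x, P = x0, P0
    states = []
    for y in Y:
        # Predict: propagate the state estimate through the dynamics model
        x = A @ x
        P = A @ P @ A.T + Q
        # Update: correct the prediction with the neural measurement
        S = B @ P @ B.T + R                 # innovation covariance
        K = P @ B.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (y - B @ x)
        P = (np.eye(len(x)) - K @ B) @ P
        states.append(x)
    return np.array(states)

# Toy example: recover a constant 2-D state from repeated measurements
A = np.eye(2)
B = np.eye(2)
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(2)
Y = np.tile([1.0, -1.0], (50, 1))
decoded = kalman_decode(Y, A, B, Q, R, np.zeros(2), np.eye(2))
```

In a real BCI, A and B would be fit from training data in which kinematics and neural activity are recorded simultaneously, and x_t would hold position, velocity, and acceleration rather than a toy 2-D state.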

One of the first studies to combine decoding and encoding was by O’Doherty, Nicolelis, and colleagues [35], who showed that stimulation of somatosensory cortex could be used to instruct a rhesus monkey which of two targets to move a cursor to; the cursor was subsequently controlled using a BCI based on linear decoding to predict the X- and Y-coordinates of the cursor. A later study by the same group [36] demonstrated true closed-loop control. Monkeys used a BCI based on primary motor cortex (M1) recordings and Kalman-filter-based decoding to actively explore virtual objects on a screen with artificial tactile properties. The monkeys were rewarded if they found the object with particular artificial tactile properties. During brain-controlled exploration of an object, the associated tactile information was delivered to somatosensory cortex (S1) via intracortical stimulation. Tactile information was encoded as a high-frequency biphasic pulse train (200 Hz for the rewarded object, 400 Hz for the others) presented in packets at a lower frequency (10 Hz for the rewarded object, 5 Hz for unrewarded objects).
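The packetized pulse-train encoding described above (a high-frequency carrier delivered in packets at a lower frequency) can be sketched as a schedule of pulse onset times. The packet width below is an illustrative assumption, since the packet duration is not specified here.

```python
import numpy as np

def pulse_train_times(duration_s, carrier_hz, packet_hz, packet_width_s=0.05):
    """Return pulse onset times (in seconds) for a packetized pulse train:
    bursts at `carrier_hz` inside packets repeating at `packet_hz`.

    `packet_width_s` (duration of each packet) is an assumed parameter.
    """
    times = []
    packet_starts = np.arange(0.0, duration_s, 1.0 / packet_hz)
    for start in packet_starts:
        n_pulses = int(packet_width_s * carrier_hz)
        times.extend(start + np.arange(n_pulses) / carrier_hz)
    return np.array(times)

# Rewarded object: 200 Hz pulses delivered in packets at 10 Hz
rewarded = pulse_train_times(1.0, carrier_hz=200, packet_hz=10)
# Unrewarded objects: 400 Hz pulses delivered in packets at 5 Hz
unrewarded = pulse_train_times(1.0, carrier_hz=400, packet_hz=5)
```

Each onset time would drive one biphasic (charge-balanced) current pulse on the stimulating electrode; only the timing, not the pulse shape, is modeled here.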

Because stimulation artifacts masked neural …


Reference

This content is AI-processed based on open access ArXiv data.
