Biological neural networks (BNNs) are increasingly explored for their rich dynamics, parallelism, and adaptive behavior. Beyond understanding their function as a scientific endeavour, a key focus has been using these biological systems as a novel computing substrate. However, BNNs can only function as reliable information-processing systems if inputs are delivered in a temporally and structurally consistent manner. In practice, this requires stimulation with precisely controlled structure, microsecond-scale timing, multi-channel synchronization, and the ability to observe and respond to neural activity in real time. Existing approaches to interacting with BNNs face a fundamental trade-off: they either depend on low-level hardware mechanisms, imposing prohibitive complexity for rapid iteration, or they sacrifice temporal and structural control, undermining consistency and reproducibility, particularly in closed-loop experiments. The Cortical Labs Application Programming Interface (CL API) enables real-time, sub-millisecond closed-loop interactions with BNNs. Taking a contract-based API design approach, the CL API provides users with precise stimulation semantics, transactional admission, deterministic ordering, and explicit synchronization guarantees. This contract is presented through a declarative Python interface, enabling non-expert programmers to express complex stimulation and closed-loop behavior without managing low-level scheduling or hardware details. Ultimately, the CL API provides an accessible and reproducible foundation for real-time experimentation with BNNs, supporting both fundamental biological research and emerging neurocomputing applications.
With rapidly growing interest in alternative computing methods, the study of how biological neurons process information and function in ways that produce intelligence holds unique promise. This promise has also resulted in attempts to use the biological substrate itself as a biological neural network (BNN) that can be harnessed directly in controllable ways, as exemplified in the fields of Synthetic Biological Intelligence (SBI) [1][2][3] and Organoid Intelligence [4][5][6][7][8]. Biological neurons are highly power-efficient and sample-efficient, requiring only a fraction of a percent of either resource needed by artificial intelligence (AI) systems [9,10]. Beyond their extreme power- and sample-efficiency [11], BNNs display rich parallel dynamics and the capacity for robust synaptic plasticity and functional connectivity changes unavailable to silicon von Neumann architectures [12][13][14][15][16][17][18][19][20][21].
Interacting with BNNs, including for neurocomputing, requires a rigorous digital-biological interface. Stimulation must be temporally and structurally consistent, building on foundational demonstrations of writing information into neural networks [22,23]. Every input, including stimulation subcomponent count, duration, amplitude, polarity, and timing, must be precisely enacted and its outcome faithfully recorded [24][25][26]. Without such time-correct control and the ability to define the stimulation delivered into, and read the activity from, BNNs, the software-tissue interface becomes unreliable and experimental outcomes fragile.
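As a concrete illustration of the parameters such an interface must pin down, the following Python sketch represents a structured stimulation pattern as explicit data. All names and fields here are hypothetical and are not drawn from the CL API; they only make the degrees of freedom listed above (subcomponent count, duration, amplitude, polarity, and timing) explicit.

```python
from dataclasses import dataclass
from enum import Enum


class Polarity(Enum):
    """Direction of the leading stimulation phase (illustrative only)."""
    CATHODIC_FIRST = -1
    ANODIC_FIRST = 1


@dataclass(frozen=True)
class StimPulse:
    """One subcomponent of a stimulation pattern (hypothetical representation)."""
    electrode: int            # target electrode index on the MEA
    amplitude_ua: float       # pulse amplitude in microamperes
    phase_duration_us: float  # duration of each phase in microseconds
    polarity: Polarity        # leading-phase polarity
    onset_us: int             # onset time relative to the pattern start


# A structured pattern: two electrodes stimulated 200 microseconds apart.
pattern = [
    StimPulse(electrode=12, amplitude_ua=2.0, phase_duration_us=100.0,
              polarity=Polarity.CATHODIC_FIRST, onset_us=0),
    StimPulse(electrode=27, amplitude_ua=2.0, phase_duration_us=100.0,
              polarity=Polarity.CATHODIC_FIRST, onset_us=200),
]
```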
Advancing this technology requires three pillars:
1. Wetware: ethical, scalable, and specific neural cells;
2. Hardware: viable interfaces maintaining cell health;
3. Software: real-time frameworks for closed-loop algorithms.
Substantial work using synthetic biology to differentiate pluripotent stem cells into functional neural cells has provided a pathway to resolve the first point [27][28][29]. Scalable hardware that resolves the second point has also recently been reported [30], although this remains an active area requiring further development. The third challenge, however, has remained unsolved. As such, this paper describes a method to address this third and final requirement.
While BNNs may be explored using optogenetic [31,32] or chemical methods [33], electrophysiological approaches have become the primary method for deep insight into neural dynamics at both the single-cell [34] and population level [35]. Fundamentally, biological neurons generate measurable electrical pulses during action potentials. Consequently, electrophysiological dynamics can be measured across a neural population using devices such as microelectrode arrays (MEAs) [24][25][26]. First demonstrations that in vitro BNNs respond and adapt to electrical stimulation delivered via MEAs established that not only can activity be recorded, but external electrical stimulation can be used to write information into BNNs [22,23]. The use of open-loop paradigms that examine how BNNs may transform information encoded via electrical stimulation, often called reservoir computing (RC), has also been informative. Evidence supports that such open-loop patterns of structured stimulation induce meaningful changes in neural activity, suggesting that even relatively simple BNNs can distinguish different patterns and even perform blind-source separation tasks [36][37][38]. Finally, work exploring closed-loop algorithms has established that these algorithms can rapidly induce robust and complex synaptic plasticity and functional connectivity changes, often with highly nuanced population-wide dynamics [11,[18][19][20][21]. However, there is significant variability in the methods and reproducibility across these experiments. Moreover, key limitations exist, ranging from long latencies and high jitter to inflexible setups that limit additional meaningful controls. Although hardware differences exist and are widely discussed, e.g., [6,[39][40][41], underlying software differences are less frequently described in this area. Yet without transparent, specific, and controllable software to facilitate electrophysiological interactions with BNNs, progress will be stymied.
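The control flow of such a closed-loop experiment can be made concrete with a minimal Python sketch. The read_spike_counts and stimulate helpers below are stubs standing in for device I/O, not functions from any real library; the point is that every iteration of a purely host-side loop accrues interpreter and operating-system latency, which is one source of the long latencies and jitter noted above.

```python
import time


def read_spike_counts(window_ms: float) -> dict[int, int]:
    """Stub: return spike counts per electrode for the most recent window."""
    return {}


def stimulate(electrodes: list[int]) -> None:
    """Stub: deliver a predefined pulse to the given electrodes."""


def closed_loop(duration_s: float, threshold: int) -> None:
    """Naive host-side loop: read activity, decide, stimulate, repeat.

    Each pass through the loop adds software latency and jitter on top of
    any hardware delays, so no timing guarantee can be made at this level.
    """
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        counts = read_spike_counts(window_ms=10.0)
        active = [ch for ch, n in counts.items() if n >= threshold]
        if active:
            stimulate(active)


closed_loop(duration_s=1.0, threshold=3)
```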
Algorithms facilitating sub-millisecond interactions with BNNs have previously been demonstrated. However, these systems typically interact directly with a Field Programmable Gate Array (FPGA) and can have slow development times with limited transparency [42]. Approaches that exclusively use object-oriented programming languages such as Python allow rapid algorithmic iteration but are limited in stimulation generation and exhibit relatively slow and variable response latencies of >60 ms [43]. One promising approach is to combine these ideas into a unified framework: for example, having an accessible object-oriented programming language be interpreted via embedded Linux systems to an FPGA to provide the necessary control.
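A hedged sketch of what such a unified framework can look like from the user's side is given below: the experimenter describes a stimulation plan declaratively in Python, and a lower layer (embedded Linux plus FPGA) is responsible for admitting and enacting it with deterministic timing. The StimPlan and Device names and the submit method are assumptions for illustration only and are not the CL API.

```python
class StimPlan:
    """Declarative description of a stimulation pattern; building it performs no I/O."""

    def __init__(self) -> None:
        # Each entry is (electrode index, amplitude in microamperes, onset in microseconds).
        self.pulses: list[tuple[int, float, int]] = []

    def pulse(self, electrode: int, amplitude_ua: float, onset_us: int) -> "StimPlan":
        self.pulses.append((electrode, amplitude_ua, onset_us))
        return self


class Device:
    """Stand-in for a hardware-backed scheduler: it admits a whole plan as one
    transaction and executes it with hardware timing, so the host never has to
    meet microsecond deadlines itself."""

    def submit(self, plan: StimPlan) -> bool:
        # A real implementation would validate the plan (admission control)
        # before handing it to firmware; here we only reject empty plans.
        return bool(plan.pulses)


# Two pulses on different electrodes, 200 microseconds apart.
plan = StimPlan().pulse(12, 2.0, 0).pulse(27, 2.0, 200)
accepted = Device().submit(plan)
```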