A Personalized and Adaptable User Interface for a Speech and Cursor Brain-Computer Interface
Communication and computer interaction are important for autonomy in modern life. Unfortunately, these capabilities can be limited or inaccessible for the millions of people living with paralysis. While implantable brain-computer interfaces (BCIs) show promise for restoring these capabilities, little work has explored how to design BCI user interfaces (UIs) for sustained daily use. Here, we present a personalized UI for an intracortical BCI system that enables users with severe paralysis to communicate and interact with their computers independently. Through a 22-month longitudinal deployment with one participant, we used iterative co-design to develop a system for everyday at-home use and documented how it evolved to meet changing needs. Our findings highlight how personalization and adaptability enabled independence in daily life, and they provide design implications for future BCI assistive technologies.
💡 Research Summary
This paper presents a longitudinal case study of a personalized and adaptable user interface (UI) for a multimodal intracortical brain‑computer interface (BCI) that combines speech decoding and cursor control. The target user is a 45‑year‑old man (T15) with tetraplegia and severe dysarthria caused by amyotrophic lateral sclerosis (ALS), enrolled in the BrainGate2 clinical trial. Over a 22‑month period, the research team employed an iterative co‑design methodology, engaging the participant twice weekly for brief feedback, periodic design workshops, text‑message exchanges, and six‑month surveys. This continuous collaboration allowed the UI to evolve in direct response to the participant’s changing abilities, daily routines, and emerging challenges.
The system architecture is built on the open‑source BRAND platform, which runs a distributed network of Linux processes. Two primary nodes constitute the UI: a logic node, implemented as a finite‑state machine (FSM) that manages task states (idle, speaking, cursor navigation, error correction, etc.), and a graphics node, written in Python with the pyglet library, that renders visual “screens” corresponding to each state. This decoupling enables rapid prototyping: new screens or features can be added by modifying only the graphics node while the FSM remains unchanged.
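The logic/graphics split described above can be sketched as a small table‑driven FSM. This is a minimal illustration, not the paper's actual implementation; the state and event names below are hypothetical stand‑ins for the states the summary mentions:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    SPEAKING = auto()
    CURSOR = auto()
    ERROR_CORRECTION = auto()

# Hypothetical transition table: (current state, event) -> next state.
TRANSITIONS = {
    (State.IDLE, "speech_detected"): State.SPEAKING,
    (State.IDLE, "cursor_engaged"): State.CURSOR,
    (State.SPEAKING, "decode_done"): State.IDLE,
    (State.SPEAKING, "correction_requested"): State.ERROR_CORRECTION,
    (State.ERROR_CORRECTION, "corrected"): State.IDLE,
    (State.CURSOR, "cursor_disengaged"): State.IDLE,
}

class LogicNode:
    """Tracks task state only; a separate graphics node would render
    the screen that corresponds to the current state."""

    def __init__(self):
        self.state = State.IDLE

    def handle(self, event: str) -> State:
        # Unknown events leave the state unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

Because the graphics node only consumes the current `State`, adding a new screen means adding a state and its transitions here, plus a renderer on the graphics side; neither node needs to know the other's internals.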
Four input modalities are supported: (1) a transformer‑based speech decoder that converts neural activity into phoneme probabilities, then uses an n‑gram model and a large language model (OPT‑6.7B) for rescoring to produce text in real time; (2) a linear cursor decoder that maps neural signals to 2‑D velocity and a click decoder that yields binary click probabilities, allowing precise pointer control on both the BCI UI and the participant’s personal computer; (3) user‑defined gestures that can be mapped to discrete actions such as clicks or scrolling; and (4) eye‑gaze tracking as a supplemental modality. Early in the study the participant used a gyroscopic head‑mouse and a skilled interpreter, but after a few weeks all communication transitioned to the BCI‑driven interface.
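The cursor pathway (modality 2 above) pairs a linear velocity decoder with a binary click decoder. The sketch below shows the general shape of such a decode step; the weights, channel count, and threshold are invented for illustration and are not the study's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS = 256  # assumed feature dimensionality, for illustration only

# Hypothetical "learned" parameters of the two decoders.
W_vel = rng.normal(scale=0.01, size=(2, N_CHANNELS))  # features -> (vx, vy)
w_click = rng.normal(scale=0.01, size=N_CHANNELS)     # click logistic weights
b_click = -2.0                                        # bias keeps clicks rare

def decode_step(features: np.ndarray, click_threshold: float = 0.9):
    """One decode tick: linear 2-D velocity plus a thresholded click."""
    velocity = W_vel @ features  # linear map from neural features to velocity
    p_click = 1.0 / (1.0 + np.exp(-(w_click @ features + b_click)))  # sigmoid
    return velocity, bool(p_click > click_threshold)

# Stand-in for one bin of preprocessed neural features.
features = rng.normal(size=N_CHANNELS)
velocity, clicked = decode_step(features)
```

A high click threshold is the kind of conservative choice that trades a little latency for fewer accidental clicks, which matters when the same pointer drives the participant's personal computer.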
Key design challenges identified during the deployment included: (a) lack of an immediate error‑correction mechanism for speech decoding mistakes; (b) reduced cursor accuracy leading to slower navigation; and (c) cognitive and physical fatigue during prolonged sessions. The team addressed these by adding a “re‑type” button for quick correction, sliders for adjusting cursor speed and acceleration, and a “single‑click” mode that minimizes the number of required actions. The UI also incorporated customizable layouts, shortcut keys, and context‑aware prompts to streamline common tasks such as email composition, web browsing, and media consumption.
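The speed and acceleration sliders mentioned above amount to a user‑tunable mapping from decoded velocity to on‑screen velocity. One plausible form of that mapping (hypothetical, not the paper's formula) scales the magnitude by a gain and raises it to an acceleration exponent:

```python
import numpy as np

def apply_cursor_settings(velocity: np.ndarray,
                          speed: float = 1.0,
                          acceleration: float = 1.0) -> np.ndarray:
    """Rescale a decoded 2-D velocity using slider values.

    Assumed mapping: |v_out| = speed * |v_in| ** acceleration, with the
    direction preserved. acceleration > 1 damps small (jittery) movements
    while amplifying deliberate large ones.
    """
    mag = np.linalg.norm(velocity)
    if mag == 0.0:
        return velocity
    direction = velocity / mag
    return speed * (mag ** acceleration) * direction
```

With `acceleration=1.0` this reduces to a plain gain, so the default sliders leave the decoder's output untouched apart from an overall speed factor.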
Quantitative and qualitative evaluation was performed through three semi‑annual surveys (assistive‑technology effectiveness, everyday‑task performance, and user‑centered‑design questionnaire) and analysis of usage logs. After the personalized UI was fully deployed, the participant’s average typing speed roughly doubled from ~15 wpm to ~30 wpm, and he reported high satisfaction with independence, reduced reliance on caregivers, and overall quality‑of‑life improvements. The participant was able to independently compose emails, browse the internet, and control media applications without external assistance.
From this experience the authors distilled four design principles for future BCI assistive technologies: (1) Ability‑based design – the system should adapt to the user’s current motor and speech capabilities rather than forcing the user to adapt; (2) Modular UI architecture – decoupled logic and graphics layers facilitate rapid addition of new features; (3) Continuous user feedback loop – regular, low‑burden interactions with end‑users guide iterative refinements; and (4) Multimodal input support – offering speech, neural cursor, gestures, and eye‑gaze ensures flexibility across varying contexts and user preferences.
In conclusion, the study demonstrates that a thoughtfully engineered, user‑centered UI can transform a high‑performance intracortical BCI from a laboratory prototype into a practical daily‑life tool, delivering tangible independence for a person with severe paralysis. The long‑term co‑design approach and the adaptable system architecture provide a scalable blueprint for future development of BCI‑based assistive technologies across diverse user populations.