Two tiny arrays of implanted electrodes relayed information from the brain area that controls the hands and arms to an algorithm, which translated it into letters that appeared on a screen.
The ancient art of handwriting has just pushed the field of brain-computer interface (BCI) to the next level. Researchers have devised a system that allows a person to communicate with a computer directly from his brain by imagining the hand movements of writing messages. The approach enables communication at a rate more than twice as fast as previous typing-by-brain experiments.
Researchers at Stanford University performed the study on a 65-year-old man with a spinal cord injury who had two electrode arrays implanted in his brain. The scientists described the experiment recently in the journal Nature.
“The big news from this paper is the very high speed,” says Cynthia Chestek, a biomedical engineer at the University of Michigan, who was not involved in the study. “It’s at least halfway to able-bodied typing speed, and that’s why this paper is in Nature.”
For years, researchers have been experimenting with ways to enable people to directly communicate with computers using only their thoughts, without verbal commands, hand movement, or eye movement. This kind of technology offers a life-giving communication method for people who are “locked in” from brainstem stroke or disease, and unable to speak.
Successful typing-by-brain approaches so far have typically involved a person imagining moving a cursor around a digital keyboard to select letters. Electrodes record the brain activity, and machine learning algorithms decipher the patterns associated with those thoughts, translating them into typed words. The fastest of these previous typing-by-brain experiments allowed people to type about 40 characters, or 8 words, per minute.
That we can do this at all is impressive, but in real life that speed of communication is quite slow. The Stanford researchers were able to more than double that speed with a system that decodes brain activity associated with handwriting.
In the new system, the participant, who had been paralyzed for about a decade, imagines the hand movements he would make to write sentences. “We ask him to actually try to write—to try to make his hand move again, and he reports this somatosensory illusion of actually feeling like his hand is moving,” says Frank Willett, a researcher at Stanford who collaborated on the experiment.
Two microelectrode arrays implanted in the motor cortex of the participant’s brain record the electrical activity of individual neurons as he tries to write. “He hasn’t moved his hand or tried to write in more than ten years, and we still got these beautiful patterns of neural activity,” says Willett.
A machine learning algorithm then decodes the brain patterns associated with each letter, and a computer displays the letters on a screen. The participant was able to communicate at about 90 characters, or 18 words, per minute.
By comparison, able-bodied people close in age to the study participant can type on a smartphone at about 23 words per minute, the authors say. Adults can type on a full keyboard at an average of about 40 words per minute.
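These character-per-minute and word-per-minute figures line up under the common typing-measurement convention that one "word" is five characters. A one-line conversion makes the arithmetic explicit:

```python
# Convert characters per minute to words per minute, using the common
# convention that one "word" is five characters (including spaces).
def cpm_to_wpm(chars_per_minute, chars_per_word=5):
    return chars_per_minute / chars_per_word

print(cpm_to_wpm(90))  # handwriting BCI: 18.0 wpm
print(cpm_to_wpm(40))  # earlier point-and-click BCIs: 8.0 wpm
```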
The Stanford researchers achieved the feat by repurposing a machine learning algorithm that was originally developed for speech recognition. The deep learning algorithm, called a recurrent neural network, trained over the course of a few hours to recognize the participant’s neural activity when he imagined handwriting sentences in English.
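The decoding idea can be sketched, in drastically simplified form, as a small recurrent network that maps binned spike counts from the electrodes to per-time-bin character probabilities. Everything here is illustrative rather than the study's actual architecture: the channel count, hidden size, character count, and randomly initialized weights (standing in for a trained decoder) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 192 recording channels (two 96-channel arrays),
# a small hidden state, and 31 output characters (letters plus punctuation).
N_CHANNELS, N_HIDDEN, N_CHARS = 192, 64, 31

# Random weights stand in for a decoder trained on the participant's data.
W_in = rng.normal(0, 0.1, (N_HIDDEN, N_CHANNELS))
W_rec = rng.normal(0, 0.1, (N_HIDDEN, N_HIDDEN))
W_out = rng.normal(0, 0.1, (N_CHARS, N_HIDDEN))

def decode(spike_counts):
    """Run a simple Elman-style RNN over a (T, N_CHANNELS) series of
    binned spike counts; return (T, N_CHARS) character probabilities."""
    h = np.zeros(N_HIDDEN)
    probs = []
    for x in spike_counts:
        h = np.tanh(W_in @ x + W_rec @ h)   # recurrent state update
        logits = W_out @ h
        e = np.exp(logits - logits.max())   # softmax over characters
        probs.append(e / e.sum())
    return np.array(probs)

# Fake neural data: 50 time bins of Poisson spike counts.
out = decode(rng.poisson(2.0, (50, N_CHANNELS)).astype(float))
print(out.shape)  # (50, 31): one probability distribution per time bin
```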
Neural networks are typically trained to recognize speech and images using tens of thousands of hours of audio data and millions of images, Willett says. So the challenge with the handwriting experiment was to achieve high accuracy with a limited amount of data.
To overcome this, the team applied data augmentation techniques, says Willett. “We only had the opportunity to collect maybe 100-500 different sentences that we could ask the participant to write,” Willett says. “So we took those sentences and chopped them up into individual letters and rearranged them into an infinite number of different sentences, and we found that that really helped teach these algorithms.”
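The chop-and-recombine augmentation Willett describes can be sketched as follows. The "snippets" here are placeholder strings standing in for the neural-activity segments recorded while the participant wrote each letter; the data and sentence texts are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical labeled data: each sentence pairs characters with the
# neural-activity snippet recorded while the participant "wrote" them.
labeled_sentences = [
    [("h", "snip_h1"), ("i", "snip_i1")],
    [("b", "snip_b1"), ("y", "snip_y1"), ("e", "snip_e1")],
]

# 1. Chop every labeled sentence into a per-letter library of snippets.
library = {}
for sentence in labeled_sentences:
    for char, snippet in sentence:
        library.setdefault(char, []).append(snippet)

# 2. Rearrange the snippets to synthesize training data for any new
#    sentence whose letters already appear in the library.
def synthesize(text):
    return [(c, random.choice(library[c])) for c in text]

print(synthesize("hey"))
```

Because any sentence built from the collected letters can be synthesized this way, a few hundred recorded sentences can yield a practically unlimited supply of training examples.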
It was also difficult to decode when, exactly, the man was writing a letter and when he wasn’t. To help with this, Willett and his team borrowed a tool from speech recognition—a hidden Markov model—which helped label the relevant data. Once the data was labeled with the model, the neural network could more easily learn which patterns of neural activity to associate with each letter.
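A toy version of that labeling step is forced alignment: given a sentence the participant is known to have written, a left-to-right model where each time bin either stays on the current letter or advances to the next finds the most likely letter boundaries. This is a simplified stand-in, not the paper's actual hidden Markov model, and the probabilities below are invented.

```python
import numpy as np

def forced_align(emission_logprobs, n_letters):
    """Viterbi forced alignment: given (T, n_letters) log-probabilities
    of each time bin belonging to each letter of a KNOWN sentence, find
    the monotone labeling (stay or advance one letter per step) with
    the highest total log-probability."""
    T = emission_logprobs.shape[0]
    dp = np.full((T, n_letters), -np.inf)
    back = np.zeros((T, n_letters), dtype=int)
    dp[0, 0] = emission_logprobs[0, 0]
    for t in range(1, T):
        for s in range(n_letters):
            stay = dp[t - 1, s]
            advance = dp[t - 1, s - 1] if s > 0 else -np.inf
            if advance > stay:
                dp[t, s], back[t, s] = advance, s - 1
            else:
                dp[t, s], back[t, s] = stay, s
            dp[t, s] += emission_logprobs[t, s]
    # Trace back from the final letter at the final time bin.
    path = [n_letters - 1]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]  # letter index assigned to each time bin

# Toy example: 6 time bins, a 3-letter sentence; emissions favor
# letter 0 early, letter 1 in the middle, letter 2 late.
logp = np.log(np.array([
    [0.8, 0.1, 0.1],
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
    [0.1, 0.1, 0.8],
]))
print(forced_align(logp, 3))  # [0, 0, 1, 1, 2, 2]
```

The recovered boundaries then serve as labels: the network is told which bins of neural activity belong to which letter.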
Willett says it’s the unique art of handwriting that makes it a faster way to communicate using BCI. “Why it works so much better than plain typing...is because each handwritten letter has a different pen trajectory associated with it, and a very different pattern of finger motions and motor actions. This evokes a unique pattern of neural activity that’s easy to distinguish,” he says.
By contrast, point-and-click systems, where the participant is imagining moving a cursor around a screen, involve making straight line movements to different keys. This evokes similar patterns of neural activity that aren’t easily distinguished and slows down the system, Willett says.
The techniques and algorithms presented in the experiment are applicable to other areas of research, such as connecting the brain to prosthetic hands, says Chestek at the University of Michigan. “Regardless of whether this is the best way to do communication, the overall approach is really promising for motor control generally,” she says.
The algorithms, in their current form, have to be trained for and customized to each participant. They also have to be recalibrated periodically, because neural activity patterns drift and the electrode arrays can shift slightly. As a next step, Willett says he hopes to reduce the amount of initial training time and come up with a way for the algorithms to recalibrate automatically.