Devices that let people with paralysis walk and talk are rapidly improving. Some see a future in which we alter memories and download skills – but major challenges remain.
A cyborg bested me. When I played the online game WebGrid, using my finger on a laptop trackpad to click on squares appearing unpredictably on a grid, my speed was 42 squares per minute. When self-described cyborg Noland Arbaugh played it, he used a chip embedded in his brain to send telepathic signals to his computer. His speed? 49.
Arbaugh was paralysed from the neck down in 2016. In January, he became the first person to be surgically implanted with a chip made by Neuralink, a company founded by Elon Musk. Since then, Arbaugh has been operating his phone and computer with his thoughts, surfing the web and playing Civilization and chess.
But Neuralink isn’t the only outfit melding human minds with machines using brain-computer interfaces (BCIs). Thanks to a series of trials, a growing number of people paralysed from spinal cord injuries, strokes or motor conditions are regaining lost abilities. The successes are taking some researchers by surprise, says neurosurgeon Jaimie Henderson at Stanford University in California. “It’s been an incredible ride.”
Where that takes us remains to be seen. Musk recently mused about making a bionic implant that will allow us to compete with artificial superintelligence. Others are contemplating more profound implications. “In the future, you could manipulate human perception and memories and behaviour and identity,” says Rafael Yuste at Columbia University in New York.
But while BCIs are undeniably impressive, as Arbaugh’s WebGrid score demonstrates, the relationship between brain activity, thoughts and actions is incredibly complex. A future in which memories can be implanted and skills downloaded is enticing, but there will be incredible challenges to overcome.
Noland Arbaugh is the first person to have a Neuralink chip implanted in his brain (Rebecca Noble/New York Times/Redux/eyevine)
BCIs work by first detecting electrical signals from neurons using metal discs, wires or other electrodes that are inserted into the brain, placed under the skull or positioned on the scalp. This information is then sent to a computer, where it is processed and translated into commands that, for example, enable a person to type a sentence or control a robotic device.
We have been able to siphon data from the brain in this way for decades. In 1998, researchers implanted the first invasive BCI, consisting of two electrodes, into the brain of a builder named Johnny Ray who had become almost totally paralysed after a stroke. Ray learned to tune the signals from his implant to slowly spell words by imagining moving his hand to move a cursor over letters on a keyboard. But the functionality and reliability of such early BCIs were poor. Typically, these devices took weeks or months of training before they could be used. Even then, they only allowed people to select a few characters per minute and were prone to errors.
One issue was that devices made of only a few electrodes didn’t gather enough data. There are billions of intricately connected neurons in the human brain, and research had begun to indicate that it is the patterns of activity in groups of neurons – not single cells – that specify our thoughts, actions and perceptions. To decode these patterns, BCI researchers wanted to test and deploy technology that would pick up the individual signals of many neurons at once.
To enable this, they began working to adapt a technology originally invented by Richard Normann at the University of Utah to stimulate the brain’s visual cortex to restore sight. Normann’s 4-millimetre-square chip, called the Utah array, was studded with about 100 microelectrodes that could penetrate the outer layer of the brain. The array was redesigned to track the firing of individual neurons, and each array could record from about 100 of them at once. “That gave us the ability to look at populations of neurons and see really rich signals,” says Carlos Vargas-Irwin at Brown University in Rhode Island, who began working with Utah arrays as a college student in 2000. The collective output of these populations represented the brain’s language, guiding functions such as reaching, writing, walking, talking, smiling and thinking – and it was ripe for interpretation.
An array containing 64 electrodes that can be implanted to collect brain signals (REUTERS/Emmanuel Foudrot)
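To make that concrete, the sketch below shows the general shape of the decoding problem: many channels of spike counts go in, an intended movement comes out. It is a toy illustration with synthetic data and a simple linear decoder, not the pipeline any of these labs actually uses.

```python
# Toy illustration of population decoding: map binned spike counts from ~100
# recording channels to a 2-D intended cursor velocity. Synthetic data only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_bins, n_channels = 5000, 96   # short time bins, roughly one Utah array's worth of channels

# Made-up ground truth: each neuron has a "preferred direction", so its firing
# rate rises when the intended movement points that way (a cosine-tuning toy model).
preferred = rng.normal(size=(n_channels, 2))
velocity = rng.normal(size=(n_bins, 2))              # intended cursor velocity (x, y)
rates = np.clip(velocity @ preferred.T + rng.normal(scale=2.0, size=(n_bins, n_channels)), 0, None)
spikes = rng.poisson(rates)                          # spike counts per bin, per channel

# Fit a linear decoder on a "calibration" block, then decode held-out activity.
decoder = Ridge(alpha=1.0).fit(spikes[:4000], velocity[:4000])
print("held-out decoding R^2:", decoder.score(spikes[4000:], velocity[4000:]))
```

Real systems are far more elaborate and must cope with signals that drift over time, but the core mapping from population activity to intended movement is the same.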
The signals most relevant to devices for people with paralysis reside in the motor cortex, a strip of tissue that wraps like a headband across the top of the brain and is charged with planning and executing movements. It is roughly organised by body part: neurons in its face region control the facial muscles, those in the leg region operate the legs, and so on. BCI researchers often put electrodes in the hand region because people tend to find it easy to imagine moving their hands to do useful things, such as type or manipulate a joystick or robotic arm.
In 2004, researchers at BCI consortium BrainGate reported implanting Utah arrays in people with paralysis. One by one, people volunteered for brain surgery, moving the field forward. A man with a paralysed upper and lower body due to a knife wound used his thoughts to direct a cursor, opening simulated emails and operating a television, as well as opening and closing a prosthetic hand. Two people who had the same sort of paralysis following strokes telepathically manipulated a robotic arm to reach and grab objects; one of them drank coffee from a bottle. A woman with muscle weakness due to amyotrophic lateral sclerosis, a neurodegenerative condition that leads to paralysis, directed a cursor towards up to eight targets on a screen.
“Every time we do one of these types of surgeries and work with a participant, we learn so much,” says Henderson, who is part of BrainGate and also an adviser for Neuralink, in which he has equity.
Brain-computer interfaces
In the past few years, the capabilities of experimental devices that translate brain activity into movement and even speech have surged ahead, says Yuste. With ever-more powerful algorithms at their disposal, researchers can decipher the meaning of increasingly complex patterns of activity from groups of neurons. “There is starting to be wholesale access to brain processes,” says Yuste.
In 2021, it was reported that Dennis DeGray, whose spinal cord was severed in a bad fall over a decade earlier, had set a new speed record for virtual typing with a Utah array: 90 characters per minute. Unlike Ray and his bionic descendants, who moved cursors by thinking broadly about moving a hand, DeGray “typed” by visualising himself writing on a legal pad, so his brain fired off signals for the fine, multi-joint movements that would have been required for this. An AI tool then decoded the neural signals of this imagined handwriting and mapped them to individual letters.
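The sketch below gives a flavour of that last step, reduced to a toy problem: given a short window of population activity, guess which letter was being “written”. The published system used a much more sophisticated neural-network decoder trained on real recordings; here the data are synthetic and the classifier is deliberately simple.

```python
# Toy version of imagined-handwriting decoding: classify which letter a window
# of population activity corresponds to. Illustrative only; synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_letters, trials_per_letter = 26, 40
n_channels, n_time_bins = 96, 30        # a fraction of a second of activity per letter attempt

# Each letter gets its own (made-up) spatio-temporal firing template.
templates = rng.normal(size=(n_letters, n_channels * n_time_bins))
X, y = [], []
for letter in range(n_letters):
    for _ in range(trials_per_letter):
        X.append(templates[letter] + rng.normal(scale=3.0, size=templates.shape[1]))
        y.append(letter)
X, y = np.array(X), np.array(y)

# Train on most trials, test on the rest.
order = rng.permutation(len(y))
train, test = order[:900], order[900:]
clf = LogisticRegression(max_iter=2000).fit(X[train], y[train])
print("held-out letter accuracy:", clf.score(X[test], y[test]))
```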
Similarly, in unpublished work, Vargas-Irwin and his colleagues say they have ferreted out the brain’s “codes” for dozens of hand gestures as well as individual finger movements involving both hands at the same time. In research presented by Vargas-Irwin in 2022, a man paralysed from the neck down who has two implanted Utah arrays played a simulated piano with 10 keys by imagining moving specific fingers on either hand. It doesn’t allow him to play anything by Tchaikovsky just yet – more like Mary Had a Little Lamb, says Vargas-Irwin. “But it has been proof of concept that they can control each finger independently.”
In one of the most astounding developments, BCIs have emerged that can reanimate paralysed limbs themselves. Last year, Grégoire Courtine at the Swiss Federal Institute of Technology in Lausanne and his team reported using a less invasive “electrocorticography” array sitting under the skull just above the leg regions of the motor cortex, without penetrating brain tissue, along with AI to read signals in the brain and relay them to a stimulator in the spinal cord. This enabled Gert-Jan Oskam, whose legs are paralysed, to stand and walk – even navigating stairs and uneven terrain. “[We have built] a digital bridge that turns the thought into action,” says Courtine.
Gert-Jan Oskam, who was paralysed in a cycling accident, is able to walk using a brain-computer interface (Jimmy Ravier)
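Structurally, the digital bridge is a closed loop: read cortical activity, infer the intended movement, and trigger a matching pattern of spinal stimulation. The outline below is purely illustrative; every name in it is hypothetical and it stands in for a far more elaborate published system.

```python
# Hypothetical outline of a brain-to-spine "digital bridge" control loop.
# All names and values here are invented for illustration.
from dataclasses import dataclass

@dataclass
class StimCommand:
    electrode_group: str   # which spinal electrodes to drive
    amplitude_ma: float    # stimulation amplitude in milliamps

# Intended movements mapped to pre-configured stimulation patterns.
STIM_TABLE = {
    "left_hip_flex":  StimCommand("left_lumbar", 4.5),
    "right_hip_flex": StimCommand("right_lumbar", 4.5),
    "rest":           StimCommand("none", 0.0),
}

def decode_intent(ecog_features):
    """Placeholder decoder: a real system uses a trained model that maps
    ECoG features to the probability of each intended movement."""
    return max(ecog_features, key=ecog_features.get)

def bridge_cycle(ecog_features):
    """One loop iteration: decode intent, look up the stimulation command."""
    intent = decode_intent(ecog_features)
    return intent, STIM_TABLE.get(intent, STIM_TABLE["rest"])

print(bridge_cycle({"left_hip_flex": 0.9, "right_hip_flex": 0.2, "rest": 0.1}))
```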
Moreover, the patterns of activity identified by AI are challenging our understanding of the brain. On a more basic level, work on BCIs has revealed that the jurisdictions of groups of motor cortex cells extend beyond single body parts to multiple joints and multiple body parts. It was found that DeGray’s array, despite being placed in the “hand area” of the motor cortex, could also pick up patterns for the movements needed to produce speech. “We had him speak and, to our great surprise, there were pretty strong signals for words and syllables in an area of brain that we thought was pretty highly specialised for hand function,” says Henderson.
This and other discoveries led to a more nuanced understanding of the brain region charged with orchestrating actions. Instead of being dedicated to one body part, as, say, a cardiologist might focus on the heart, motor cortex neurons seem to be more like general practitioners with a subspeciality. “Even in one patch of [motor] cortex, you can get some information about what the entire body is doing – so arms, legs, face, tongue. There’s echoes of all that information,” says Vargas-Irwin. This suggests that the motor cortex is organised according to complex concepts, such as actions, rather than body parts.
Decoding thoughts
Still, the hand region is the wrong place to put a BCI if the primary goal is to produce speech. Last year, it was reported that a woman named Ann Johnson, who had lost her ability to speak following a stroke, used an electrocorticography array to operate an avatar on a nearby computer that spoke for her at 78 words per minute.
The avatar’s AI-generated voice was trained on recordings of Johnson’s voice made before the stroke and made life-like movements of its mouth. It could also display facial expressions such as happiness, sadness or surprise based on readings from the array whenever Johnson tried to produce the facial expressions related to these emotions. “We are looking at Ann’s embodiment in that digital form,” says neurosurgeon Edward Chang at the University of California, San Francisco, who led the study.
After seeing Johnson’s avatar in action, Chang was astounded by what his team had done. He called Yuste in the middle of the night to talk about the implications. Had technology effectively expanded the boundaries of our bodies and minds? “It’s not moving a cursor or playing a video game,” he says. “This is another level where you are looking at an embodied form of yourself.”
Henderson’s team, among others, is attempting to expand this ethereal communication to inner speech, as a way of understanding how thoughts become communications. “One can start to see how conceptual info is represented throughout the brain,” he says. One day, this understanding might lead to devices that can restore speech in people who, after a stroke, say, have an idea to express but can’t put it into words. It also comes closer to truly reading a person’s mind. “If you have the means to map and manipulate brain activity, by definition, you can map and manipulate mental activity,” says Yuste, who sees few limitations to the near-future possibilities of BCIs.
Engineering challenges still dog the field. Brain-embedded devices, like the Utah and Neuralink arrays, pick up brain activity more precisely than less invasive alternatives, but can lose signal quality over time. Meanwhile, some BCI set-ups involve wires that sprout from ports, which present a persistent infection risk, and most eventually have to be removed or replaced.
These troubles aside, truly melding human minds with machines, as Musk envisions, will require a two-way stream of information that both reads from and writes to “every aspect of your brain”, Musk said in a Neuralink Show and Tell in 2022.
There has been gradual progress on that front. In 2021, a study reported on a man with a spinal cord injury using his mind to direct a robot to pick up objects and place them on a platform while electrodes stimulated sensory regions of his brain. The feedback made him feel as if he were touching the objects with his own palm and fingers. It also allowed him to complete each movement cycle twice as fast as he could without it.
Brain implants allow people with paralysis to control exoskeletons so that they can walk again (REUTERS/Emmanuel Foudrot)
But this is a long way from visions in which memories are implanted or skills like kung fu are instantly downloaded, says Juan Alvaro Gallego at Imperial College London. “Remembering the texture of the muffin that you ate for breakfast when you were a kid is different from feeling that someone is rubbing your thumb,” he says. “We don’t know how to write memories or knowledge into the brain.”
Moreover, experimental BCIs don’t allow for the rapid switching between disparate activities that occupy most people’s days: typing one minute, chatting to a friend, then going to the kitchen for a snack. “We cannot build a model that works across all these things that we can do in our daily living,” says Gallego.
Universal implant
As a step in that direction, he and his colleagues have found a way to make programming the individual tasks easier. BCIs are currently personalised for each participant, but Gallego’s team has created a universal decoder that works across different brains, at least in monkeys and mice. It does this by tracking patterns that emerge in groups of neurons that are shared across a species. “You can build a model in animal 1 to predict how animal 1 is moving his hand, for example, and use it in animal 2,” he says. If the algorithm also works across different human brains, it could pave the way to more versatile BCIs, he says.
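The sketch below illustrates the general idea with synthetic data: reduce each animal’s recorded activity to a handful of shared latent dimensions, align the two latent spaces (here with canonical correlation analysis, one technique used for this kind of alignment), then reuse the first animal’s decoder on the second. The dimensions, data and recipe are assumptions for illustration, not the team’s actual code.

```python
# Sketch of cross-individual decoding via latent-space alignment: two "animals"
# express the same underlying dynamics through different neurons, and aligning
# their low-dimensional latents lets one decoder serve both. Synthetic data.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_bins, n_latent = 3000, 8
latents = rng.normal(size=(n_bins, n_latent))                      # shared task-related dynamics
behaviour = latents[:, :2] + 0.1 * rng.normal(size=(n_bins, 2))    # e.g. hand velocity

# Each animal expresses the same latents through its own, different neurons.
neural_1 = latents @ rng.normal(size=(n_latent, 90)) + 0.5 * rng.normal(size=(n_bins, 90))
neural_2 = latents @ rng.normal(size=(n_latent, 70)) + 0.5 * rng.normal(size=(n_bins, 70))

# Reduce each animal's activity to a low-dimensional space, then align the spaces.
lat_1 = PCA(n_components=n_latent).fit_transform(neural_1)
lat_2 = PCA(n_components=n_latent).fit_transform(neural_2)
cca = CCA(n_components=n_latent).fit(lat_1, lat_2)
aligned_1, aligned_2 = cca.transform(lat_1, lat_2)

# A decoder trained on animal 1's aligned latents transfers to animal 2.
decoder = Ridge().fit(aligned_1, behaviour)
print("animal 2 R^2 using animal 1's decoder:", decoder.score(aligned_2, behaviour))
```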
Advances like these continue to close the gulf between our mental worlds and the neural activity from which these worlds emerge. Yet many BCI experts dare only to dream as far as fashioning these devices into a standard treatment for movement and speech limitations. “We could have therapies for people in a much sooner time frame than I ever thought,” says Henderson.
Along with similar firms such as Synchron and Blackrock Neurotech (which produces the Utah array), Neuralink is often seen as a partner in that goal – even if researchers can’t yet see a clear path, or rationale, for that company’s aim of creating cyborgs who can keep pace with superintelligent AIs. Neuralink didn’t respond to requests for comment. “We are still struggling to match the function [of non-disabled people],” says Vargas-Irwin. “We’re not at the point yet where we can think about enhancement.”