Wireless Devices to Read Words in the Brain

Wireless brain-machine interfaces could one day decode speech signals from the brain in real time, helping people with brain injuries talk, new research suggests.

Scientists have previously developed brain-machine interfaces that help restore communication to people who can no longer speak by reading brainwaves through electrodes placed on the scalp. Unfortunately, these systems have proved very slow, typing roughly one word per minute, which makes normal conversation and social interaction virtually impossible.

Now cognitive neuroscientist Frank Guenther at Boston University and his colleagues reveal a brain-machine interface that uses an electrode implanted directly in the brain to produce speech in real time.

"It should soon be possible for profoundly paralyzed individuals who are currently incapable of speaking to produce speech through a laptop computer," Guenther told LiveScience.

The scientists worked with a 26-year-old male volunteer left almost completely paralyzed by a stroke he suffered at age 16. They implanted a two-wire electrode into a part of the brain that helps plan and execute the movements involved in speech.

The electrode recorded brain signals when the volunteer attempted to talk and wirelessly transmitted them across the scalp to drive a speech synthesizer. The delay between brain activity and sound output averaged just 50 milliseconds, roughly the delay seen in natural speech.
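To hold a delay that short, the system has to record, decode, and play sound within a tight, fixed cycle. As a rough illustration only, here is a minimal Python sketch of such a real-time decode-and-play loop; the function names (read_neural_frame, decode_parameters, play_audio) and the 20-millisecond frame length are hypothetical stand-ins, not details from the study:

```python
# Hypothetical sketch of a real-time brain-to-speech loop (not the study's
# actual code). The point is the timing budget: to keep the ~50 ms delay
# between brain activity and sound, each pass must finish within one frame.
import time

FRAME_SECONDS = 0.02  # assumed 20 ms processing frame; not from the paper

def run_loop(read_neural_frame, decode_parameters, play_audio):
    """read_neural_frame, decode_parameters, and play_audio are placeholders
    for the recording hardware, the decoder, and the synthesizer."""
    while True:
        start = time.monotonic()
        frame = read_neural_frame()        # neural activity for one frame
        params = decode_parameters(frame)  # map activity to synth controls
        play_audio(params)                 # audible feedback to the user
        # Wait out the remainder of the frame to hold a steady decode rate.
        remaining = FRAME_SECONDS - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
```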

"He was quite excited, particularly on the first few days we used the system, as he got used to its properties," Guenther recalled. "I am sure the work seems to proceed slowly from his perspective, as it does from ours at times. Nonetheless he was very excited about getting real-time audio feedback of his intended speech and happy to work very hard with us throughout the experiments."

The researchers focused on vowels, since their acoustics have been studied for decades and software exists to synthesize them quickly. With practice, the volunteer's accuracy at producing vowels through the synthesizer improved rapidly, from 45 to 89 percent over 25 sessions spanning five months.
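Vowels are convenient targets because each can be approximated by its first two resonant frequencies, known as formants. To illustrate the kind of synthesis software the article alludes to, here is a minimal formant-synthesizer sketch in Python; the numpy/scipy implementation and the specific formant and bandwidth values are illustrative assumptions, not the study's synthesizer:

```python
# A minimal formant-synthesis sketch: a glottal pulse train is filtered
# through resonators at the first two formant frequencies, which is enough
# to distinguish vowels. Values below are textbook approximations.
import numpy as np
from scipy.signal import lfilter

def resonator(signal, freq, bandwidth, fs):
    """Second-order IIR resonator modeling one formant at `freq` Hz."""
    r = np.exp(-np.pi * bandwidth / fs)       # pole radius from bandwidth
    theta = 2 * np.pi * freq / fs             # pole angle from frequency
    a = [1.0, -2 * r * np.cos(theta), r * r]  # denominator coefficients
    b = [1.0 - r]                             # rough gain normalization
    return lfilter(b, a, signal)

def synthesize_vowel(f1, f2, f0=120, duration=0.4, fs=16000):
    """Synthesize a steady vowel from its first two formants F1 and F2."""
    n = int(duration * fs)
    source = np.zeros(n)
    source[::int(fs / f0)] = 1.0              # impulse train at pitch f0
    out = resonator(source, f1, 80, fs)       # shape the first formant
    out = resonator(out, f2, 100, fs)         # shape the second formant
    return out / np.max(np.abs(out))          # normalize amplitude

# Approximate formant targets: "ee" (high F2) versus "uh" (low F2).
ee = synthesize_vowel(f1=300, f2=2300)
uh = synthesize_vowel(f1=650, f2=1200)
```

Concatenating the two waveforms approximates the kind of "uh-ee" sequence the volunteer produced, as described below.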

"Our volunteer was able to produce vowel-to-vowel sequences like 'uh-ee,' which are relatively easy speech 'movements,'" Guenther explained. "The next challenge is consonant production. This will require a different kind of synthesizer — an articulatory synthesizer, where the user will control movements of a 'virtual tongue.'"

"Such a synthesizer will allow whole words to be produced, but at the cost of a more complicated system for the user to control," he continued. "This, coupled with increases in the number of electrodes that can be recorded from and transmitted across the scalp, should eventually lead to a system that will allow the user to produce words and whole sentences."

The current system uses data from just two wires. "Within a year it will be possible to implant a system with 16 times as many," Guenther said. "This will allow us to tap into many more neurons, which in the end means much better control over a synthesizer and thus much better speech."

The scientists detailed their findings Dec. 9 in the journal PLoS ONE.

Charles Q. Choi
Live Science Contributor