Brain-to-Voice Tech Helps Paralyzed Man Speak Fluently

A team of engineers from the University of California, Davis, has created a brain-to-voice system that enables individuals with communication disorders to speak fluently. The system combines a brain-computer interface (BCI) with advanced AI to support real-time, intelligible, and expressive speech. Here’s what you need to know.
What Are Communication Disorders?
When you think of what defines you, it may be your style or perspective. Few people would say their voice. Yet every time you speak, you hear your voice, reaffirming that it’s you. Your voice is a vital part of your identity, and losing it can be detrimental to your mental health.
Sadly, this scenario is the reality for millions who suffer from neurological conditions that limit their ability to speak fluently. Conditions like stroke, dysarthria, and dysphonia can lead to slurred or incoherent speech, limiting a person’s ability to communicate effectively.
This scenario isn’t a rare occurrence. According to recent studies, close to 800,000 people in the United States suffer a stroke each year, and roughly 1 out of every 3 stroke survivors experiences some form of communication problem afterward. These ailments are devastating to the patient and can lead to depression and other harmful conditions.
How Brain-Computer Interfaces (BCIs) Aid Communication
Thankfully, engineers have put considerable effort into solving these issues. From breath-controlled computers to eye-tracking software, technology has provided partial solutions. To that end, many see brain-computer interfaces as the logical evolution of this technology.
Ever since Hans Berger first recorded electrical brain activity in the 1920s, scientists have attempted to use these signals to peer into the brain’s inner workings. However, it took nearly 80 years of research before they could begin decoding neuronal firings to reproduce images and movements.
Today, BCIs are an emerging technology with applications in VR, automation, system management, and medicine. The medical sector in particular has used these devices to help those who suffer from mobility or communication disorders.
Interestingly, the first BCIs were built to improve communication between patients and their loved ones. These early systems relied on a text display to communicate. Later, the text system was updated to read the words aloud, creating spoken responses. While the audible responses were helpful, they lacked any human feel.
Limitations of Traditional BCI Speech Systems
Several limiting factors have held back the success of BCIs to date. For one, text-based communication is not natural. It’s clunky, and the timing is off compared to a spoken conversation.
The delayed responses of the speech feature and its robotic voice also detach the user from the feeling of a normal conversation with friends. Vital aspects like hearing your own voice, interjecting, or enunciating words were missing from this approach.
Brain-to-Voice Study
Thankfully, after decades of research, a team of scientists may have figured out how to solve these problems. The recent study, “An instantaneous voice-synthesis neuroprosthesis,” introduces a novel brain-to-voice neuroprosthesis that can instantaneously translate brain activity into speech. While still in its early stages, it has the potential to improve millions of lives globally.
Like its predecessors, the device allows users to “speak” through a computer. However, this approach relies on sensors surgically implanted into the brain’s speech motor cortex, the region responsible for producing speech.

Source – University of California
Specifically, four microelectrode arrays containing a total of 256 microelectrodes enable the engineers to map neural activity to intended sounds. Notably, the system can decode paralinguistic features from intracortical activity with less than a second of delay.
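To make that pipeline concrete, here is a minimal sketch of how raw spikes from 256 electrodes might be binned into feature frames for a decoder. The 10 ms bin width and every name below are illustrative assumptions, not the study’s actual preprocessing.

```python
import numpy as np

# Assumed parameters; the paper's real pipeline may differ.
N_CHANNELS = 256   # four microelectrode arrays, 64 electrodes each
BIN_MS = 10        # assumed bin width; small bins keep latency low

def bin_spike_counts(spike_times_ms, duration_ms):
    """Convert per-channel spike timestamps (in ms) into a
    (n_bins, N_CHANNELS) matrix of spike counts."""
    n_bins = duration_ms // BIN_MS
    features = np.zeros((n_bins, N_CHANNELS), dtype=np.float32)
    for ch, times in enumerate(spike_times_ms):
        idx = np.asarray(times, dtype=int) // BIN_MS
        idx = idx[idx < n_bins]
        np.add.at(features[:, ch], idx, 1.0)
    return features

# Example: channel 0 fires at 3 ms, 12 ms, and 95 ms in a 100 ms window.
demo = [[3, 12, 95]] + [[] for _ in range(N_CHANNELS - 1)]
frames = bin_spike_counts(demo, duration_ms=100)
print(frames.shape)      # (10, 256)
print(frames[:2, 0])     # [1. 1.] -> one spike each in the first two bins
```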
AI Algorithms Used in Brain-to-Voice System
The engineers used data gathered from a test subject to create a training dataset for their proprietary AI algorithm. The subject was shown sentences that he attempted to read aloud. This step allowed the team to map his neural activity and use it to build a closed loop for voice synthesis.
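As a rough illustration of this training setup, the sketch below pairs neural feature frames with aligned acoustic targets and fits a small recurrent decoder. The architecture, dimensions, and loss are stand-ins chosen for the example; the study’s actual algorithm is not described in this article.

```python
import torch
import torch.nn as nn

# A stand-in for the study's proprietary decoder: a small recurrent
# network mapping neural feature frames to acoustic features (for
# example, mel-spectrogram frames). Every dimension here is assumed.
class NeuralToAcousticDecoder(nn.Module):
    def __init__(self, n_channels=256, n_acoustic=80, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_acoustic)

    def forward(self, neural_frames):        # (batch, time, 256)
        h, _ = self.rnn(neural_frames)
        return self.out(h)                   # (batch, time, 80)

# Training pairs come from sentence-reading sessions like those
# described above: neural activity aligned with the target speech.
model = NeuralToAcousticDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

neural = torch.randn(8, 200, 256)    # placeholder batch of neural frames
target = torch.randn(8, 200, 80)     # placeholder aligned acoustic frames
loss = nn.MSELoss()(model(neural), target)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```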
Impressively, the algorithm can sense when a person is attempting to speak and automatically convert their neural activity into syllables without any physical movement on their part. This conversion happens seamlessly, closely matching the tempo of natural conversation.
Additionally, the closed-loop audio feedback system synthesizes the user’s voice, allowing them to hear their own speech as in a real conversation. The system gives the patient full control over cadence and timing.
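Conceptually, the loop looks something like the sketch below. Every name is a hypothetical placeholder, since the article does not describe the actual software interfaces; the point is only the frame-in, audio-out structure that makes the feedback feel instantaneous.

```python
import numpy as np

# Conceptual closed loop: neural frames stream in, a decoder turns each
# one into acoustic features, a vocoder renders audio, and immediate
# playback gives the patient auditory feedback. `decoder`, `vocoder`,
# and `play_audio` stand in for the study's actual components.
def run_closed_loop(neural_stream, decoder, vocoder, play_audio):
    for frame in neural_stream:          # one ~10 ms feature frame at a time
        acoustic = decoder(frame)        # neural features -> acoustic frame
        samples = vocoder(acoustic)      # acoustic frame -> waveform chunk
        play_audio(samples)              # played back with minimal buffering

# Toy usage with dummy components standing in for the real ones.
stream = (np.random.rand(256) for _ in range(5))
run_closed_loop(
    stream,
    decoder=lambda f: f[:80],                  # pretend acoustic features
    vocoder=lambda a: np.repeat(a, 2),         # pretend waveform
    play_audio=lambda s: print(f"play {s.shape[0]} samples"),
)
```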
Brain-to-Voice Test
The testing phase of the experiment began with finding a suitable patient. The team located a man suffering from ALS (amyotrophic lateral sclerosis), also called Lou Gehrig’s disease in the US. This devastating ailment causes the degeneration of motor neurons in the brain and spinal cord. Those suffering from ALS can lose the ability to speak clearly as they lose control over their facial muscles.
The patient selected for the study suffers from ALS and severe dysarthria. After implanting the sensors and training the AI algorithm, the team asked the patient to read sentences aloud, enabling the engineers to record his brain activity.
Once calibrated, the prototype instantly synthesized his voice, allowing him to communicate in real time. The patient was able to converse with his family and change his intonation during conversations to highlight his points. Impressively, he even sang short melodies.
Brain-to-Voice Test Results
The test results demonstrate that the brain-to-voice system is effective, synthesizing the patient’s voice and speech patterns with a high level of accuracy. Specifically, listeners could understand about 60% of the words produced through the system, compared to only 4% of the patient’s unaided speech.
The engineers noted that the patient could alter and modulate his BCI-synthesized voice in real time. They recorded the process completing in 1/14 of a second, about the same delay a speaker experiences when hearing their own voice. They also noted instances where the patient altered his voice to indicate that he was asking a question.
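For perspective, the quoted figures work out as follows (a quick arithmetic check on the numbers above, not new measurements):

```python
# Simple arithmetic on the figures quoted in this article.
latency_s = 1 / 14
print(f"End-to-end delay: {latency_s * 1000:.0f} ms")   # ~71 ms

intelligible_with_bci = 0.60   # share of words listeners understood
intelligible_without = 0.04    # unaided speech
ratio = intelligible_with_bci / intelligible_without
print(f"Improvement: {ratio:.0f}x")                     # 15x
```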
Interestingly, the technology isn’t limited to the patient’s existing vocabulary. The team documented several instances where the patient was taught new words and the system pronounced them correctly. This result demonstrates the promise of this approach for addressing speech disorders.
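One plausible explanation for this generalization, consistent with the sound-level decoding described above, is that the system works with sub-word units rather than a fixed word list. The toy sketch below illustrates why such a design handles unseen words; the unit inventory is invented for illustration and is not the study’s representation.

```python
# Toy illustration: if decoding operates on sound units, any new word
# spelled from known units is speakable, even if it never appeared in
# training. The inventory below is invented for illustration only.
KNOWN_UNITS = {"B", "R", "EY", "N", "T", "UW", "V", "OY", "S"}

def speakable(units):
    """A word is synthesizable iff all of its sound units are known."""
    return all(u in KNOWN_UNITS for u in units)

print(speakable(["B", "R", "EY", "N"]))    # "brain" -> True
print(speakable(["V", "OY", "S"]))         # "voice" -> True
print(speakable(["Z", "UW"]))              # "zoo" -> False ("Z" unseen)
```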
Brain-to-Voice Benefits
Brain-to-voice technology brings many benefits to the market. For one, it provides a reliable and natural way for those suffering from paralysis and other life-altering ailments to regain some semblance of normal daily life.
The protocol delivers speech responses instantaneously. The digital vocal tract reproduces each patient’s unique-sounding voice and responds with no detectable delay. The responses have proven accurate and can be created using only neural signal data.
A New Era for Patients With Communication Disorders
One of the biggest benefits of this technology is that it will allow those suffering from communication ailments to finally share their stories in a relatable way. It will also enable neuroprosthesis users to join the conversation and help others facing similar issues.
Real-World Applications and Timeline for Brain-to-Voice Tech
There is a long list of applications for brain-to-computer technology. Computers have been around for a long time, and despite nearly every technological aspect being updated, from graphics to processing to hardware, the keyboard remains relatively untouched.
The introduction of a reliable brain-computer interface changes the game. It enables seamless interaction between humans and computers and could open the door to more advanced treatments and technologies.
Brain-to-Voice Timeline
The researchers have not set a timeline. However, given the dire situation of those suffering from these ailments, it’s possible that this technology could hit the market in the next 5-10 years. Much more research on the long-term effects of the implant will be needed before regulators grant approval.
Brain-to-Voice Researchers
The brain-to-voice study was led by researchers at the University of California, Davis. Specifically, the paper lists Maitreyee Wairagkar, Nicholas S. Card, Tyler Singer-Clark, Xianda Hou, Carrina Iacobacci, Lee M. Miller, Leigh R. Hochberg, David M. Brandman, and Sergey D. Stavisky as key contributors to the project.
Brain-to-Voice Future
According to the engineers, there is still much work to be done on the brain-to-voice system. They hope to expand their testing to include more patients in the coming months. They also want to include patients suffering from a broad array of disorders. Those seeking to participate in the study can contact [email protected] to see if they qualify.
Investing in the AI Healthcare Sector
Artificial intelligence continues to play an expanding role in healthcare and treatment. AI systems can help healthcare providers diagnose ailments faster, treat them more effectively, and even discover the root causes of traumatic disorders. Today, there are several players in the AI healthcare arena. Here’s one company that has managed to carve out a niche in the market.
SoundHound AI, Inc.
Santa Clara, California-based SoundHound AI (SOUN) has had an interesting journey from its launch as the music identification app Midomi to becoming one of the most reputable names in conversational AI interfaces.
The company’s journey began in 2005, when Keyvan Mohajer started working with advanced audio recognition protocols. In 2015, the company rebranded as SoundHound and shifted its focus to its proprietary voice AI platform.
This platform expanded AI speech recognition capabilities and was designed using the company’s Deep Meaning Understanding technology. Today, SoundHound is a leading provider of AI speech analysis systems, holding more than 250 patents and supporting more than 20 languages.
Brain-to-Voice Technology – The Future is Seamless
When you examine the technological advances the brain-to-voice study introduced, it’s easy to see why many consider it a major milestone in the medical sector. In the future, you may control robots, vehicles, and other vital aspects of your life using a BCI.
This latest study highlights how this technology could improve the lives of many. As such, the researchers deserve a standing ovation for their efforts.
Learn about other cool healthcare breakthroughs here.











