Stanford University researchers have achieved a breakthrough in neurotechnology by successfully decoding inner speech—the silent thoughts in people’s heads—with up to 74% accuracy using brain-computer interfaces. Published Thursday in the journal Cell, the study represents the first time scientists have decoded imagined words in real time from the brain’s motor cortex.
The research team, led by postdoctoral scholar Erin Kunz and assistant professor Frank Willett, worked with four participants with severe paralysis from amyotrophic lateral sclerosis (ALS) or brainstem stroke. Using microelectrode arrays implanted in the motor cortex, the brain region that controls voluntary movement, including speech, the system captured neural patterns as participants either attempted to speak or simply imagined saying words.
How the Technology Works
The Stanford team discovered that inner speech and attempted speech activate overlapping brain regions, though inner speech produces weaker neural signals. Researchers trained artificial intelligence models on these patterns to interpret imagined words from a vocabulary of 125,000 words.
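The pipeline the article describes, recording motor-cortex activity and training a model to map neural patterns to words, can be illustrated with a toy nearest-centroid decoder. Everything below is an invented stand-in: the three-word vocabulary, the 16-channel feature vectors, and the 0.4 signal-strength factor are assumptions used only to mimic the finding that inner speech produces a weaker copy of the attempted-speech pattern, not details from the study.

```python
import math
import random

random.seed(0)

WORDS = ["hello", "water", "help"]  # toy stand-in for the 125,000-word vocabulary
DIM = 16  # pretend each neural sample is a 16-channel feature vector

# Hypothetical "attempted speech" templates: one neural pattern per word.
templates = {w: [random.gauss(0, 1) for _ in range(DIM)] for w in WORDS}

def inner_speech_sample(word, strength=0.4, noise=0.3):
    """Simulate inner speech as a weaker, noisier copy of the attempted-speech
    pattern, mirroring the finding that the two overlap but inner speech is
    fainter."""
    return [strength * x + random.gauss(0, noise) for x in templates[word]]

def decode(sample):
    """Nearest-centroid decoding: pick the word whose (rescaled) template is
    closest to the observed activity."""
    def dist(w):
        return math.dist(sample, [0.4 * x for x in templates[w]])
    return min(WORDS, key=dist)

# Decode a batch of simulated inner-speech trials and report toy accuracy.
trials = 50
correct = sum(decode(inner_speech_sample(w)) == w for w in WORDS for _ in range(trials))
print(f"toy accuracy: {correct / (len(WORDS) * trials):.0%}")
```

The real system replaces the hand-built templates with AI models trained on each participant's recordings, but the core idea is the same: weaker inner-speech signals still land closest to the right word's pattern often enough to decode.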
“This is the first time we’ve managed to understand what brain activity looks like when you just think about speaking,” Kunz said. The system achieved 74% accuracy in decoding imagined sentences, though performance varied between participants and trials.
Co-first author Benyamin Meschede-Krasa noted: “If you just have to think about speech instead of actually trying to speak, it’s potentially easier and faster for people.”
Privacy Protection Built-In
Recognizing ethical concerns about unintended thoughts, the research team implemented a password protection system. Users must internally vocalize the phrase “chitty chitty bang bang” to activate the decoding interface, which the system recognized with 98.75% accuracy. The passphrase was chosen because it’s unlikely to occur in normal thought.
The study also showed that attempted speech and inner speech produce sufficiently different neural patterns, allowing BCIs to distinguish between them. This means future systems can ignore inner speech when users employ attempted-speech interfaces, preventing accidental “leakage” of private thoughts.
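Taken together, the two safeguards amount to a gating layer in front of the decoder: decoding stays off until the imagined passphrase is detected, and in attempted-speech mode any event classified as inner speech is simply discarded. A minimal sketch, with the event labels, class names, and example phrases invented for illustration:

```python
PASSPHRASE = "chitty chitty bang bang"

class GatedDecoder:
    """Toy gate: decoding stays locked until the user's *imagined* passphrase
    is detected; in attempted-speech mode, inner-speech events are discarded
    so private thoughts never reach the output."""

    def __init__(self, mode="inner"):
        self.mode = mode          # "inner" or "attempted" speech interface
        self.unlocked = False
        self.output = []

    def on_event(self, kind, decoded_text):
        # `kind` is the signal class the BCI inferred: "inner" or "attempted".
        if not self.unlocked:
            if kind == "inner" and decoded_text == PASSPHRASE:
                self.unlocked = True  # passphrase recognized: start decoding
            return
        if self.mode == "attempted" and kind == "inner":
            return  # ignore inner speech: no "leakage" of private thoughts
        self.output.append(decoded_text)

bci = GatedDecoder(mode="attempted")
bci.on_event("inner", "private thought")          # ignored: still locked
bci.on_event("inner", "chitty chitty bang bang")  # unlocks the interface
bci.on_event("inner", "another private thought")  # discarded in attempted mode
bci.on_event("attempted", "i need water")         # decoded and emitted
print(bci.output)  # ['i need water']
```

In the study the unlock step was itself a classification problem (the system recognized the imagined passphrase with 98.75% accuracy); here it is reduced to a string match purely to show the control flow.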
Medical Applications and Future Prospects
The technology offers hope for people with conditions like ALS who struggle with communication. Participants reported preferring the inner speech system because it required less physical effort.
Current speech BCIs that decode attempted speech can reach up to 98% accuracy, but physically attempting to speak is exhausting for users. Normal conversation runs at about 150 words per minute, while existing BCIs reach about 90 words per minute.
While the technology cannot yet decode spontaneous free-form thoughts, Willett expressed optimism: “This work gives real hope that speech BCIs can one day restore communication that is as fluent, natural, and comfortable as conversational speech.”