Hearing device picks out right voice from a crowd by reading your mind


10’000 Hours/Getty

By Michael Le Page

Sometimes it is hard to make out what people are saying in a noisy, crowded environment. A device that reads your mind to work out which voice to amplify may be able to help.

The experimental device can separate two or three voices of equal loudness in real time. It can then work out which voice someone is trying to listen to from their brainwaves and amplify that voice.

The device, created by Nima Mesgarani at Columbia University in New York, is a step towards smart hearing aids that solve the classic cocktail party problem: how to separate voices in a crowd.

First, Mesgarani’s team built a system that could separate the voices of two or three people speaking into a single microphone at the same loudness. Big companies such as Google and Amazon have developed similar AI-based methods to improve voice assistants like Alexa, but those systems only separate voices after people have finished speaking, Mesgarani says. His system works in real time, as people are speaking.

Next, the team played recordings of people telling stories to three volunteers who were in hospital with electrodes placed in their brains to monitor epileptic seizures. In 2012, Mesgarani showed that brainwaves in a certain part of the auditory cortex can reveal which of several voices a person is focusing on.

Brainwaves

By monitoring the brainwaves of the three volunteers, the hearing device could tell which voice each person was listening to and selectively amplify just that voice. When the volunteers were asked to switch their attention to a different voice, the device could detect the shift and respond.

There is still a long way to go before a practical hearing aid could be created. For starters, people are not going to want electrodes in their heads.
But Mesgarani says it is possible to detect the relevant brainwaves with scalp electrodes, or even electrodes built into earphones. “The signal quality degrades but it is still possible to decode it,” he says.

There are other possible ways to select the voice to be amplified, such as the direction a person is looking, or even a manual switch. But Mesgarani does not think people will want to stare fixedly in one direction.

A hearing aid would also have to be able to cope with more than three voices and other kinds of noise. This should be achievable with further development. For instance, more distant voices at a party merge into “babble noise” that is simple to filter out.
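The selection step the article describes, matching brainwaves against each candidate voice, is commonly framed as correlating the amplitude envelope of each separated speech stream with an envelope estimate reconstructed from the neural recording, then boosting the best match. The following is a minimal sketch of that idea, not Mesgarani's actual system: the voices are assumed to be already separated, and `neural_envelope` stands in for whatever envelope estimate a decoder would produce from brain signals (all names here are illustrative).

```python
import numpy as np

def amplitude_envelope(signal, frame=160):
    """Crude amplitude envelope: mean absolute value per frame of samples."""
    n = len(signal) // frame
    return np.abs(signal[:n * frame]).reshape(n, frame).mean(axis=1)

def decode_attended(separated_voices, neural_envelope, frame=160):
    """Return the index of the separated voice whose envelope correlates
    best with the envelope estimated from the listener's brainwaves."""
    scores = []
    for voice in separated_voices:
        env = amplitude_envelope(voice, frame)
        m = min(len(env), len(neural_envelope))
        scores.append(np.corrcoef(env[:m], neural_envelope[:m])[0, 1])
    return int(np.argmax(scores))

def remix(separated_voices, attended_idx, gain=4.0):
    """Re-mix the voices, amplifying the attended one relative to the rest."""
    out = np.zeros_like(separated_voices[0])
    for i, voice in enumerate(separated_voices):
        out += voice * (gain if i == attended_idx else 1.0)
    return out / gain  # normalise to avoid clipping

# Toy demo with two synthetic "voices" modulated at different rates.
fs = 16000
t = np.arange(fs) / fs
voice_a = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 2 * t))
voice_b = np.sin(2 * np.pi * 300 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 5 * t))

# Pretend the decoder reconstructed voice_b's envelope from brain activity.
neural_envelope = amplitude_envelope(voice_b)
attended = decode_attended([voice_a, voice_b], neural_envelope)
output = remix([voice_a, voice_b], attended)
```

Re-running the correlation on short sliding windows is what would let such a system notice when the listener shifts attention to a different voice, as in the experiment described above.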
