Eye on AI - August 21st, 2020
Welcome to Aigora's "Eye on AI" series, where we round up exciting news at the intersection of consumer science and artificial intelligence!
This week, we’ll be looking at an exciting new neurological AI that translates brain patterns into speech (if only Stephen Hawking could’ve seen this!), then shift gears to discuss two new apps that use AI cameras to identify animal species and understand their emotions.
I Think, Therefore I… Speak?
We begin with some encouraging news in AI speech. Using computational models known as neural networks, three research teams recently developed methods of translating brain activity into speech. Each used electrodes surgically placed on the brain to record neural signals, then reconstructed words and sentences that were, in some cases, intelligible to human listeners, a promising step toward restoring speech to patients who have lost the ability to speak.
“‘We are trying to work out the pattern of … neurons that turn on and off at different time points, and infer the speech sound,’ says Nima Mesgarani, a computer scientist at Columbia University. ‘The mapping from one to the other is not very straightforward.’ How these signals translate to speech sounds varies from person to person, so computer models must be ‘trained’ on each individual. And the models do best with extremely precise data, which requires opening the skull.”
Thus far, researchers have only been able to train the technology on patients during limited windows of opportunity, such as the removal of a brain tumor, or when a person with epilepsy is implanted with electrodes for several days to pinpoint the origin of seizures before surgical treatment. The teams collected data in different ways, connecting electrodes to the auditory cortex of their patients and feeding the recordings into neural networks, which process complex brain patterns by passing information through layers of computational ‘nodes’ (in layman’s terms, the networks were looking for brain patterns that represent speech). By adjusting the connections between nodes, the networks learned to interpret speech from simultaneously monitored brain activity, then reconstructed words from the neural data.
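To give a feel for what “adjusting connections between nodes” means, here is a deliberately tiny toy sketch, nothing like the researchers’ actual models: a single computational node whose connection weights are nudged, step by step, until it can tell apart two made-up “electrode reading” patterns standing in for two speech sounds.

```python
import math
import random

random.seed(0)

def predict(weights, signal):
    # One computational "node": a weighted sum of electrode channels,
    # squashed to a 0..1 score by a logistic function.
    z = sum(w * x for w, x in zip(weights, signal))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, steps=2000, lr=0.5):
    # "Adjusting connections between nodes": nudge each weight to
    # reduce the prediction error on every example, over and over.
    weights = [random.uniform(-0.1, 0.1) for _ in range(len(data[0][0]))]
    for _ in range(steps):
        for signal, label in data:
            err = predict(weights, signal) - label
            weights = [w - lr * err * x for w, x in zip(weights, signal)]
    return weights

# Two fabricated brain-activity patterns: label 0 for one sound, 1 for another.
data = [([1.0, 0.2, 0.1], 0), ([0.1, 0.3, 1.0], 1)]
w = train(data)
print(predict(w, [1.0, 0.2, 0.1]))  # low score: matches the label-0 pattern
print(predict(w, [0.1, 0.3, 1.0]))  # high score: matches the label-1 pattern
```

Real decoders stack many such nodes into deep layers and work with far richer recordings, but the core learning loop, predict, measure error, adjust connections, is the same idea.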
“Decoding imagined speech will require ‘a huge jump,’ says Gerwin Schalk, a neuroengineer at the National Center for Adaptive Neurotechnologies at the New York State Department of Health in Albany…. One approach [notes one of the researchers] might be to give feedback to the user of the brain-computer interface: If they can hear the computer's speech interpretation in real time, they may be able to adjust their thoughts to get the result they want. With enough training of both users and neural networks, brain and computer might meet in the middle.”
In America alone, over two million people are without the ability to speak. The goal of the researchers, according to Science Magazine contributor Kelly Servick, is to create a brain-computer interface that can re-create speech in speech-impaired patients in nuanced ways, giving them more than simply the ability to produce words through a computer: control over tone and inflection, and the ability to interject in a fast-moving conversation. They’re still a long way off, but this research is certainly encouraging.
AI Advances Help Researchers ID Birds & Interpret Animal Emotion
To finish up, let’s transition to exciting news for you animal lovers out there. Yahoo! News contributor Alex Lasker, in his article “Artificial intelligence camera can identify different bird species for you,” describes how a new wildlife-spotting camera app called ‘Birdsy’ gives homeowners the ability to record and identify birds and other animals exploring their property.
“The Wi-Fi-connected camera can monitor your bird feeders and yard 24/7, identify each species it spots, record and log each animal visitor, and send the data directly to a smartphone app for you to enjoy,” writes Lasker. “The app even makes it easy to share your favorite bird spots on social media.”
Birdsy, which only recently passed its Kickstarter stage, has already attracted over 500 backers to help bring the project to life. Its AI can currently identify bird species in North America and Europe. As the AI continues to learn, the number of birds and other animals it can recognize will only grow, giving animal lovers around the world the ability to watch and connect with the nature unfolding around them. Supplement this news with the article “Ever Wondered What Your Dog is Thinking?” for the complete animal lover experience.
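Birdsy hasn’t published details of its model, but the general idea behind species identification is to reduce an image to a vector of features and match it against learned species profiles. A deliberately simplified, hypothetical sketch of that matching step (nearest-centroid lookup over made-up feature vectors; real systems use deep convolutional networks trained on millions of photos):

```python
import math

# Hypothetical averaged feature vectors per species (entirely made up;
# a real model would learn thousands of features from labeled images).
REFERENCE = {
    "American Robin": [0.9, 0.2, 0.4],
    "Blue Jay":       [0.1, 0.8, 0.7],
    "House Sparrow":  [0.5, 0.4, 0.2],
}

def identify(features):
    # Nearest-centroid match: the species whose reference profile
    # lies closest (smallest Euclidean distance) to the input wins.
    return min(REFERENCE, key=lambda sp: math.dist(REFERENCE[sp], features))

print(identify([0.15, 0.75, 0.65]))  # closest to the Blue Jay profile
```

The heavy lifting in a product like Birdsy is in producing good features from raw camera frames; once an image is reduced to a vector, classification itself can be as simple as the lookup above.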
That's it for now. If you'd like to receive email updates from Aigora, including weekly video recaps of our blog activity, click on the button below to join our email list. Thanks for stopping by!