New Networks for Verbal Working Memory Architecture: Useful for Artificial Intelligence

The neural structure we use to store and process information in verbal working memory is more complex than previously understood, finds a new study by NYU researchers—a discovery that has implications for the creation of artificial intelligence (AI) systems, such as speech translation tools.

NYU News

The neural structure we use to store and process information in verbal working memory is more complex than previously understood, finds a new study by researchers at New York University. It shows that processing information in working memory involves two different networks in the brain rather than one—a discovery that has implications for the creation of artificial intelligence (AI) systems, such as speech translation tools.

“Our results show there are at least two brain networks that are active when we are manipulating speech and language information in our minds,” explains Bijan Pesaran, an associate professor at New York University’s Center for Neural Science and the senior author of the research.

The work appears in the journal Nature Neuroscience.

Past studies had emphasized a single “central executive” that oversees the manipulation of information stored in working memory; the new findings point instead to a division of labor across networks. The distinction is an important one, Pesaran observes, because current AI systems that replicate human speech typically assume that the computations involved in verbal working memory are performed by a single neural network.

“Artificial intelligence is gradually becoming more humanlike,” says Pesaran. “By better understanding intelligence in the human brain, we can suggest ways to improve AI systems. Our work indicates that AI systems with multiple working memory networks are needed.”

The paper’s first author was Greg Cogan, an NYU postdoctoral fellow at the time of the study and now a postdoctoral fellow at Duke University. The other co-authors were Orrin Devinsky, professor and director of the Comprehensive Epilepsy Center at NYU Langone Medical Center; Werner Doyle, an associate professor in NYU Langone’s Department of Neurosurgery; Dan Friedman, an associate professor in NYU Langone’s Department of Neurology; and Lucia Melloni, an assistant professor in NYU Langone’s Department of Neurology.

The study focused on a form of working memory that is critical for thinking, planning, and creative reasoning: holding in mind and transforming the information necessary for speech and language.

The researchers examined patients undergoing brain monitoring as treatment for drug-resistant epilepsy. Specifically, they decoded neural activity recorded from the surface of the brain while these patients listened to speech sounds and then spoke after a short delay. The task required the subjects to use a rule provided by the researchers to transform the speech sounds they heard into spoken utterances: on some trials the patients were told to repeat the sound they had heard, while on others they were instructed to listen to the sound and produce a different utterance.
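
To make “decoding” concrete, the sketch below shows the generic recipe: train a classifier to predict, trial by trial, which rule was in effect from features of the recorded activity. This is an illustration only; the synthetic data, electrode count, and scikit-learn classifier are assumptions made for the sketch, not the study’s actual analysis pipeline.

```python
# Minimal decoding sketch (illustrative; not the study's pipeline).
# Synthetic "neural" features stand in for surface-electrode recordings,
# and a linear classifier is trained to predict the task rule per trial.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_electrodes = 200, 64          # assumed sizes, for illustration
rule = rng.integers(0, 2, size=n_trials)  # 0 = "repeat", 1 = "transform"

# Fake trial features: noise plus a small rule-dependent signal on a
# subset of electrodes (mimicking a rule-selective network).
features = rng.normal(size=(n_trials, n_electrodes))
features[:, :8] += rule[:, None] * 0.5

# Cross-validated decoding accuracy: reliably above-chance accuracy means
# the activity carries information about which rule was in effect.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, rule, cv=5)
print(f"rule decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

Above-chance accuracy is the evidence behind statements like “this network encoded the rule”; the same logic, applied to the input-to-output mapping, underlies the transformation results described next.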

The researchers decoded the neural activity in each patient’s brain as the patients applied the rule to convert what they heard into what they needed to say. The results revealed that manipulating information held in working memory involved the operation of two brain networks. One network encoded the rule that the patients were using to guide the utterances they made (the rule network). Surprisingly, however, the rule network did not encode the details of how the subjects converted what they heard into what they said. The process of using the rule to transform the sounds into speech was handled by a second network, the transformation network. Activity in this network could be used to track, moment by moment, how the input (what was heard) was being converted into the output (what was spoken).
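
Read as an architectural hint for AI, the finding suggests separating a component that represents the current rule from a component that carries out the rule-guided transformation. The PyTorch sketch below illustrates one way to wire that up; the module names, layer sizes, and feature dimensions are assumptions made for the sketch, not the model reported in the study.

```python
# Conceptual two-network sketch (assumed architecture, not the paper's
# model): a rule network encodes which rule applies, and a transformation
# network, conditioned on that rule code, maps heard input to spoken output.
import torch
import torch.nn as nn

class RuleNetwork(nn.Module):
    """Encodes a rule identity as a vector (the 'rule network')."""
    def __init__(self, n_rules: int, rule_dim: int):
        super().__init__()
        self.embed = nn.Embedding(n_rules, rule_dim)

    def forward(self, rule_id: torch.Tensor) -> torch.Tensor:
        return self.embed(rule_id)

class TransformationNetwork(nn.Module):
    """Maps input speech features to output features, steered by the
    rule vector (the 'transformation network')."""
    def __init__(self, in_dim: int, rule_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + rule_dim, 128),
            nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x: torch.Tensor, rule_vec: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, rule_vec], dim=-1))

# Usage: "hear" a batch of sounds, pick a rule per trial, produce output.
rule_net = RuleNetwork(n_rules=2, rule_dim=16)
transform_net = TransformationNetwork(in_dim=40, rule_dim=16, out_dim=40)

heard = torch.randn(8, 40)           # batch of input sound features
rule_id = torch.randint(0, 2, (8,))  # per-trial rule: repeat vs. transform
spoken = transform_net(heard, rule_net(rule_id))
print(spoken.shape)  # torch.Size([8, 40])
```

Keeping the rule code separate from the transformation machinery mirrors the two-network finding: the same mapping component can be steered by different rules, rather than a single monolithic network doing both jobs.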

Translating what you hear in one language into spoken words in another involves applying a similar set of abstract rules. People with impairments of verbal working memory find it difficult to learn new languages; modern intelligent machines also have trouble learning languages, the researchers add.

“One way we can enhance the development of more intelligent systems is with a fuller understanding of how the human brain and mind works,” notes Pesaran. “Diagnosing and treating working memory impairments in people involves psychological assessments. By analogy, machine psychology may one day be useful for diagnosing and treating impairments in the intelligence of our machines. This research examines a uniquely human form of intelligence, verbal working memory, and suggests new ways to make machines more intelligent.”

This work was supported, in part, by the National Institute on Deafness and Other Communication Disorders, part of the National Institutes of Health (R03-DC010475), and the Simons Collaboration on the Global Brain.
