
Meta building AI that processes speech and text as humans do

Updated on: 03 May, 2022 10:42 AM IST  |  New Delhi
IANS

Meta (formerly Facebook) has announced a long-term artificial intelligence (AI) research initiative to better understand how the human brain processes speech and text, and build AI systems that learn like people do



In collaboration with neuroimaging center Neurospin (CEA) and Inria, Meta said it is comparing how AI language models and the brain respond to the same spoken or written sentences.

"We'll use insights from this work to guide the development of AI that processes speech and text as efficiently as people," the social network said in a statement.

Over the past two years, Meta has applied deep learning techniques to public neuro-imaging data sets to analyse how the brain processes words and sentences.

Children learn from just a few examples that "orange" can refer to both a fruit and a colour, but modern AI systems cannot learn this as efficiently as people do.

Meta's research has found that the language models that most resemble brain activity are those that best predict the next word from context (like "once upon a… time").

"While the brain anticipates words and ideas far ahead in time, most language models are trained to only predict the very next word," said the company.

Unlocking this long-range forecasting capability could help improve modern AI language models.

Meta recently revealed evidence of long-range predictions in the brain, an ability that still challenges today's language models.

For the phrase, "Once upon a..." most language models today would typically predict the next word, "time," but they're still limited in their ability to anticipate complex ideas, plots and narratives, like people do.
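The next-word objective described above can be illustrated with a minimal sketch. This is not Meta's method; it is a toy frequency-count predictor over a made-up corpus, shown only to make the idea of "predict the next word from context" concrete. Real language models learn the same objective with neural networks trained on billions of words.

```python
from collections import Counter, defaultdict

# Toy corpus (an assumption for illustration only).
corpus = (
    "once upon a time there was a princess . "
    "once upon a time there lived a king ."
).split()

# Count which word follows each 3-word context.
counts = defaultdict(Counter)
n = 3
for i in range(len(corpus) - n):
    context = tuple(corpus[i:i + n])
    counts[context][corpus[i + n]] += 1

def predict_next(context):
    """Greedily return the most frequent continuation of the context."""
    return counts[tuple(context)].most_common(1)[0][0]

print(predict_next(["once", "upon", "a"]))  # -> time
```

As the article notes, a predictor like this only ever looks one word ahead; anticipating plots and narratives over long ranges is exactly what such models still lack.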

In collaboration with Inria, Meta's research team compared a variety of language models with the brain responses of 345 volunteers who listened to complex narratives while their brain activity was recorded with fMRI.

"Our results showed that specific brain regions are best accounted for by language models enhanced with far-off words in the future," the team said.

This story has been sourced from a third-party syndicated feed, agencies. Mid-day accepts no responsibility or liability for its dependability, trustworthiness and reliability. Mid-day management/mid-day.com reserves the sole right to alter, delete or remove (without notice) the content in its absolute discretion for any reason whatsoever.
