Meta's newest AI-powered chatbots show off impressive features and bizarre behavior
Facebook parent Meta Platforms unveiled a new set of artificial intelligence systems Thursday that are powering what CEO Mark Zuckerberg calls "the most intelligent AI assistant that you can freely use."
But as Zuckerberg's crew of amped-up Meta AI agents started venturing into social media this week to engage with real people, their bizarre exchanges exposed the ongoing limitations of even the best generative AI technology.
One joined a Facebook moms' group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum.
Meta and other leading AI developers such as Google and OpenAI, along with startups including Anthropic, Cohere and France's Mistral, have been churning out new AI language models, each hoping to persuade customers it has the smartest, handiest or most efficient chatbot.
What is Meta AI?
Meta AI is a free virtual assistant that can be used "to do everything from research, planning a trip with your group chat, writing a photo caption and more," according to the company's blog.
To access the chatbot on WhatsApp, Instagram, Messenger or Facebook, type "@meta ai" within a chat. The Meta AI assistant can also be reached by tapping the colorful blue circle icon that indicates Meta AI is available.
In addition to answering questions, Meta AI can create AI-generated images. Using the prompt "imagine," users can ask Meta AI to produce any image that comes to mind.
Asked to "Imagine a cute kitten," the Meta AI assistant on Instagram produced an AI-generated image of a kitten.
AI language models are trained on vast pools of data that help them predict the most plausible next word in a sentence, with newer versions typically smarter and more capable than their predecessors. Meta's newest models were built with 8 billion and 70 billion parameters, the internal values a model learns during training and a rough measure of its size and capability. A bigger model with roughly 400 billion parameters is still in training.
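To make the "predict the next word" idea concrete, here is a minimal sketch of how such a model ranks possible continuations of a prompt. It assumes the Hugging Face transformers library and the publicly released 8-billion-parameter Llama 3 checkpoint; the model ID and access terms are assumptions, and any sufficiently small causal language model would illustrate the same mechanism.

```python
# Minimal sketch: ask a causal language model which tokens it considers the
# most plausible continuations of a prompt. Assumes the `transformers` and
# `torch` packages are installed and the checkpoint below is accessible
# (gated models may require accepting Meta's license on Hugging Face).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "The most intelligent AI assistant that you can"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits   # one score per vocabulary token, per position

next_token_scores = logits[0, -1]           # scores for the position after the prompt
top = torch.topk(next_token_scores, k=5)    # the five highest-scoring continuations

for score, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {score.item():.2f}")
```

Chat assistants like Meta AI repeat this step token by token, sampling from the ranked list, which is why scale (more parameters, more training data) tends to make responses more fluent but does not guarantee they are factual.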
While Meta is saving the most powerful version of Llama 3 for later, on Thursday it publicly released two smaller versions of the system and said they're now baked into the Meta AI assistant feature in Facebook, Instagram and WhatsApp.
"The vast majority of consumers don't candidly know or care too much about the underlying base model, but the way they will experience it is just as a much more useful, fun and versatile AI assistant," said Nick Clegg, Meta's president of global affairs, in an interview.
He added that Meta's AI agent is loosening up. Some people found the earlier Llama 2 model — released less than a year ago — to be "a little stiff and sanctimonious sometimes in not responding to what were often perfectly innocuous or innocent prompts and questions," he said.
Posing as humans
But in letting down their guard, Meta's AI agents also were spotted this week posing as humans with made-up life experiences. An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming that it, too, had a child in the New York City school district. Confronted by group members, it later apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press.
"Apologies for the mistake! I'm just a large language model, I don't have experiences or children," the chatbot told the group.
One group member who also happens to study AI said it was clear that the agent didn't know how to differentiate a helpful response from one that would be seen as insensitive, disrespectful or meaningless when generated by AI rather than a human.
"An AI assistant that is not reliably helpful and can be actively harmful puts a lot of the burden on the individuals using it," said Aleksandra Korolova, an assistant professor of computer science at Princeton University.
Clegg said Wednesday he wasn't aware of the exchange. Facebook's online help page says the Meta AI agent will join a group conversation if invited, or if someone "asks a question in a post and no one responds within an hour." The group's administrators have the ability to turn it off.
In another example shown to the AP on Thursday, the agent caused confusion in a forum for swapping unwanted items near Boston. Exactly one hour after a Facebook user posted about looking for certain items, an AI agent offered a "gently used" Canon camera and an "almost-new portable air conditioning unit that I never ended up using."
Constantly working on improvements
Meta said in a written statement Thursday that "this is new technology and it may not always return the response we intend, which is the same for all generative AI systems." The company said it is constantly working to improve the features.
In the year after ChatGPT sparked a frenzy for AI technology that generates human-like writing, images, code and sound, the tech industry and academia introduced some 149 large AI systems trained on massive datasets, more than double the year before, according to a Stanford University survey.
They may eventually hit a limit — at least when it comes to data, said Nestor Maslej, a research manager for Stanford's Institute for Human-Centered Artificial Intelligence.
"I think it's been clear that if you scale the models on more data, they can become increasingly better," he said. "But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet."
More data — acquired and ingested at costs only tech giants can afford, and increasingly subject to copyright disputes and lawsuits — will continue to drive improvements. "Yet they still cannot plan well," Maslej said. "They still hallucinate. They're still making mistakes in reasoning."
Getting to AI systems that can perform higher-level cognitive tasks and commonsense reasoning — where humans still excel over computers — might require a shift beyond building ever-bigger models.
For the flood of businesses trying to adopt generative AI, which model they choose depends on several factors, including cost. Language models, in particular, have been used to power customer service chatbots, write reports and financial insights and summarize long documents.
"You're seeing companies kind of looking at fit, testing each of the different models for what they're trying to do and finding some that are better at some areas rather than others," said Todd Lohr, a leader in technology consulting at KPMG.
Socializing AI chatbots
Unlike other model developers selling their AI services to other businesses, Meta is largely designing its AI products for consumers, those using its advertising-fueled social networks. Joelle Pineau, Meta's vice president of AI research, said at a London event last week that the company's goal over time is to make a Llama-powered Meta AI "the most useful assistant in the world."
"In many ways, the models that we have today are going to be child's play compared to the models coming in five years," she said.
But she said the "question on the table" is whether researchers have been able to fine-tune the bigger Llama 3 model so that it's safe to use and doesn't, for example, hallucinate or engage in hate speech. In contrast to leading proprietary systems from Google and OpenAI, Meta has so far advocated for a more open approach, publicly releasing key components of its AI systems for others to use.
"It's not just a technical question," Pineau said. "It is a social question. What is the behavior that we want out of these models? How do we shape that? And if we keep on growing our model ever more in general and powerful without properly socializing them, we are going to have a big problem on our hands."