The Menlo Park-based social media giant says its new artificial intelligence enforcement technology outperforms human review teams on key metrics, such as detecting fake accounts and sexual solicitation content.
The company announced on its website Thursday that it is “rolling out the Meta AI support assistant globally on Facebook and Instagram,” not just to provide 24/7 help for account issues like updating passwords and profile settings, but also “to transform our approach to content enforcement, more accurately finding and removing severe content violations like scams and illegal content.”
Meta said the AI expansion will occur “over the next few years.”
“Today we’re launching new AI tools for support and content enforcement on our apps to make them work better for you,” the company said. “As technology advances, we’re applying AI in more ways so you can get reliable, action-oriented help when you need it, and we can catch more severe violations like scams faster and more accurately, with fewer over-enforcement mistakes.”
Launching the Meta AI Support Assistant
In December, Meta previewed its AI support assistant. It is now rolling the assistant out in countries and territories where Meta AI is currently available, on the Facebook and Instagram apps for iOS and Android and within the Help Center on Facebook and Instagram.
The new Meta AI support assistant is designed to help resolve account problems, answer questions about notification settings or new features, and handle:
- Reports of scams, impersonation accounts, or problematic content
- Questions about why content was taken down and how to appeal these decisions
- Managing privacy settings
- Resetting passwords
- Updating profile settings
The Meta AI support assistant is built into Facebook and Instagram, and Meta promised responses “typically in under five seconds.”
Meta called its AI support assistant “a major step in our work to deliver stronger support on our apps.”
It is being rolled out in all languages supported by Facebook and Instagram for support topics.
Improving Content Enforcement
Faced with increasing criticism about its apparent easing of content moderation, Meta said it is continuing to work on “cutting down on mistakes and focusing our proactive enforcement toward illegal and the most severe content on our platforms like terrorism, child exploitation, drugs, fraud, and scams.”
Meta said it is experimenting with more advanced AI systems for content enforcement that “we believe can catch more of these violations more accurately while also stopping more scams and responding faster to real-world events with fewer over-enforcement mistakes.”
Meta said its new AI systems can:
- Reduce the chance that scammers trick people into giving away their login details, ultimately finding and mitigating 5,000 scam attempts per day that no existing review team had caught before
- Identify and prevent more accounts from impersonating celebrities and other high-profile people, which helped us to reduce user reports of the most impersonated celebrities by over 80%
- Catch two times more violating adult sexual solicitation content than our review teams, while also decreasing the rate of mistakes by more than 60%
- Prevent an account takeover by noticing it was suddenly accessed from a new location, the password was changed, and edits were made to the profile — changes that, in isolation, look harmless to a person reviewing the account, but AI was able to recognize as a threat
- Detect a fake site spoofing a legitimate web address and pretending to be a popular sporting goods store by noticing the real logo being used with unusually low prices and a suspicious web address
“These more advanced AI systems can do all of this in languages spoken by 98% of people online — far beyond our previous coverage of around 80 languages,” according to Meta.
More Advanced AI Systems
“Over the next few years, we’ll be deploying these more advanced AI systems across our apps once we’ve seen them consistently perform better than our current methods of content enforcement, transforming our approach,” Meta said.
Meta says it will reduce its reliance on third-party vendors for content enforcement and focus on strengthening internal systems and workforce.
“While we’ll still have people who review content, these systems will be able to take on work that’s better-suited to technology, like repetitive reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as with illicit drug sales or scams,” the company said.
“AI can help us move faster and operate at scale, but it doesn’t replace human judgment — it helps us apply it more consistently across billions of pieces of content on our platforms,” the company said in its announcement. “Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high‑impact decisions. For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement.”
Meta also pledged that its Community Standards won’t be changed as part of the shift to AI, and that it will be “improving our methods for reporting violating content and for appealing mistakes.”