Meta plans to replace humans with AI to assess privacy and societal risks

People talk near a Meta sign outside of the company's headquarters in Menlo Park, Calif. (Jeff Chiu/AP)

For years, when Meta launched new features for Instagram, WhatsApp and Facebook, teams of reviewers evaluated possible risks: Could it violate users' privacy? Could it cause harm to minors? Could it worsen the spread of misleading or toxic content?

Until recently, what are known inside Meta as privacy and integrity reviews were conducted almost entirely by human evaluators.

But now, according to internal company documents obtained by NPR, up to 90% of all risk assessments will soon be automated.

In practice, this means things like critical updates to Meta's algorithms, new safety features and changes to how content is allowed to be shared across the company's platforms will be mostly approved by a system powered by artificial intelligence — no longer subject to scrutiny by staffers tasked with debating how a platform change could have unforeseen repercussions or be misused.

Inside Meta, the change is being viewed as a win for product developers, who will now be able to release app updates and features more quickly. But current and former Meta employees fear the new automation push comes at the cost of allowing AI to make tricky determinations about how Meta's apps could lead to real-world harm.

"Insofar as this process functionally means more stuff launching faster, with less rigorous scrutiny and opposition, it means you're creating higher risks," said a former Meta executive who requested anonymity out of fear of retaliation from the company. "Negative externalities of product changes are less likely to be prevented before they start causing problems in the world."

Meta said in a statement that it has invested billions of dollars to support user privacy.

Since 2012, Meta has been under the watch of the Federal Trade Commission after the agency reached an agreement with the company over how it handles users' personal information. As a result, privacy reviews for products have been required, according to current and former Meta employees.

In its statement, Meta said the product risk review changes are intended to streamline decision-making, adding that "human expertise" is still being used for "novel and complex issues," and that only "low-risk decisions" are being automated.

But internal documents reviewed by NPR show that Meta is considering automating reviews for sensitive areas including AI safety, youth risk and a category known as integrity that encompasses things like violent content and the spread of falsehoods.

Former Meta employee: 'engineers are not privacy experts'

A slide describing the new process says product teams will now in most cases receive an "instant decision" after completing a questionnaire about the project. That AI-driven decision will identify risk areas and requirements to address them. Before launching, the product team has to verify it has met those requirements.

Meta founder and CEO Mark Zuckerberg speaks at LlamaCon 2025, an AI developer conference, in Menlo Park, Calif., on Tuesday, April 29, 2025. (Jeff Chiu/AP)

Under the prior system, product and feature updates could not be sent to billions of users until they received the blessing of risk assessors. Now, engineers building Meta products are empowered to make their own judgments about risks.

In some cases, including projects that involve new risks or where a product team wants additional feedback, projects will receive a manual review by humans, the slide says. But that will no longer be the default, as it once was; now, the teams building products will make that call.

"Most product managers and engineers are not privacy experts and that is not the focus of their job. It's not what they are primarily evaluated on and it's not what they are incentivized to prioritize," said Zvika Krieger, who was director of responsible innovation at Meta until 2022. Product teams at Meta are evaluated on how quickly they launch products, among other metrics.

"In the past, some of these kinds of self-assessments have become box-checking exercises that miss significant risks," he added.

Krieger said while there is room for improvement in streamlining reviews at Meta through automation, "if you push that too far, inevitably the quality of review and the outcomes are going to suffer."

Meta downplayed concerns that the new system will introduce problems into the world, pointing out that it is auditing the decisions the automated systems make for projects that are not assessed by humans.

The Meta documents suggest its users in the European Union could be somewhat insulated from these changes. An internal announcement says decision making and oversight for products and user data in the European Union will remain with Meta's European headquarters in Ireland. The EU has regulations governing online platforms, including the Digital Services Act, which requires companies including Meta to more strictly police their platforms and protect users from harmful content.

Some of the changes to the product review process were first reported by The Information, a tech news site. The internal documents seen by NPR show that employees were notified about the revamping not long after the company ended its fact-checking program and loosened its hate speech policies.

Taken together, the changes reflect a new emphasis at Meta in favor of more unrestrained speech and more rapidly updating its apps — a dismantling of various guardrails the company has enacted over the years to curb the misuse of its platforms. The big shifts at the company also follow efforts by CEO Mark Zuckerberg to curry favor with President Trump, whose election victory Zuckerberg has called a "cultural tipping point."

Is moving faster to assess risks 'self-defeating'?

Another factor driving the changes to product reviews is a broader, years-long push to tap AI to help the company move faster amid growing competition from TikTok, OpenAI, Snap and other tech companies.

Meta said earlier this week it is relying more on AI to help enforce its content moderation policies.

"We are beginning to see [large language models] operating beyond that of human performance for select policy areas," the company wrote in its latest quarterly integrity report. It said it's also using those AI models to screen some posts that the company is "highly confident" don't break its rules.

"This frees up capacity for our reviewers allowing them to prioritize their expertise on content that's more likely to violate," Meta said.

Katie Harbath, founder and CEO of the tech policy firm Anchor Change, who spent a decade working on public policy at Facebook, said using automated systems to flag potential risks could help cut down on duplicative efforts.

"If you want to move quickly and have high quality you're going to need to incorporate more AI, because humans can only do so much in a period of time," she said. But she added that those systems also need to have checks and balances from humans.

Another former Meta employee, who spoke on condition of anonymity because they also fear reprisal from the company, questioned whether moving faster on risk assessments is a good strategy for Meta.

"This almost seems self-defeating. Every time they launch a new product, there is so much scrutiny on it — and that scrutiny regularly finds issues the company should have taken more seriously," the former employee said.

Michel Protti, Meta's chief privacy officer for product, said in a March post on its internal communications tool, Workplace, that the company is "empowering product teams" with the aim of "evolving Meta's risk management processes."

The automation rollout has been ramping up through April and May, said one current Meta employee familiar with product risk assessments who was not authorized to speak publicly about internal operations.

Protti said automating risk reviews and giving product teams more say about the potential risks posed by product updates in 90% of cases is intended to "simplify decision-making." But some insiders say that rosy summary of removing humans from the risk assessment process greatly downplays the problems the changes could cause.

"I think it's fairly irresponsible given the intention of why we exist," said the Meta employee close to the risk review process. "We provide the human perspective of how things can go wrong."

Do you have information about Meta's changes? Reach out to these authors through encrypted communications on Signal. Bobby Allyn is available at ballyn.77 and Shannon Bond is available at shannonbond.01.

Copyright 2025 NPR

Bobby Allyn is a business reporter at NPR based in San Francisco. He covers technology and how Silicon Valley's largest companies are transforming how we live and reshaping society.
Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.