Facebook Says It Removed 1.3 Billion Fake Accounts, Explains How It Handles Misinformation

A Facebook panel is seen during the Cannes Lions International Festival of Creativity, in Cannes, France, on June 20, 2018. (Eric Gaillard/Reuters)
March 22, 2021

Facebook on March 22 issued an announcement on how it plans to combat misinformation on its platforms. The technology giant also said it took down 1.3 billion fake accounts between October and December 2020.

“Tackling misinformation actually requires addressing several challenges including fake accounts, deceptive behavior, and misleading and harmful content,” Guy Rosen, vice president of integrity at Facebook, wrote in the statement.

Rosen said Facebook works with a group of “more than 80 independent fact-checkers, who review content in more than 60 languages,” and that if they judge something to be untrue, its distribution is limited.

Facebook CEO Mark Zuckerberg testifies via video conference before the House Judiciary Subcommittee on Antitrust, Commercial and Administrative Law in the Rayburn House Office Building on Capitol Hill in Washington on July 29, 2020. (Graeme Jennings/Pool via Reuters)

“When they rate something as false, we reduce its distribution so fewer people see it and add a warning label with more information for anyone who sees it,” Rosen wrote.

He also noted that once one of these labels is applied, the vast majority of people don’t click on the post.

“We know that when a warning screen is placed on a post, 95% of the time people don’t click to view it,” he said.

The company published its policies ahead of a U.S. House Committee on Energy and Commerce inquiry into how tech platforms are tackling misinformation.

Rosen wrote that Facebook suppresses the distribution of “Pages, Groups, and domains who repeatedly share misinformation,” with a particular emphasis on “false claims about COVID-19 and vaccines and content that is intended to suppress voting.”

Rosen said the platform uses both people and artificial intelligence to detect the activity it seeks to combat, adding that Facebook now has 35,000 people working on these efforts.

“As a result, we’ve removed more than 12 million pieces of content about COVID-19 and vaccines,” he said.

Facebook Fact-Checker Funded by Chinese Money

While Facebook portrays its army of fact-checkers as independent, the money behind at least one carries a distinct taint. Lead Stories is partly paid through a partnership with TikTok, a social media platform run by a Chinese company that owes its allegiance to the Chinese Communist Party (CCP).

Moreover, the organization that’s supposed to oversee the quality of fact-checkers is run by the Poynter Institute, another TikTok partner.

Lead Stories says it has been contracted by ByteDance “for fact-checking-related work.” That refers to TikTok’s announcement earlier this year that it had partnered with several organizations “to further aid our efforts to reduce the spread of misinformation,” particularly regarding the CCP virus pandemic, which originated in China and was exacerbated by the CCP regime’s coverup.

Lead Stories was started in 2015 by Belgian website developer Maarten Schenk, CNN veteran Alan Duke, and two lawyers from Florida and Colorado. It listed operating expenses of less than $50,000 in 2017, but its operating expenses had grown roughly sevenfold by 2019, largely because of the more than $460,000 Facebook paid it for fact-checking services in 2018 and 2019. The company took on more than a dozen staffers, about half of them CNN alumni, and became one of Facebook’s most prolific fact-checkers of U.S. content.

Petr Svab contributed to this report.