Demoting groups that spread misinformation
Among the biggest changes Facebook announced Wednesday was that it would start demoting groups that repeatedly spread false news stories, images and videos, reducing their reach in News Feed.
“When people in a group repeatedly share content that has been rated false by independent fact-checkers, we will reduce that group’s overall News Feed distribution,” wrote Guy Rosen, Facebook’s vice president of integrity, and Tessa Lyons, head of news feed integrity, in the blog post.
Facebook has caught flak over the past few months for the spread of anti-vaccine conspiracy theories, many of which started in groups and then spread to the rest of the platform. In response to pressure from both the media and American politicians, the company outlined a plan in early March to curb anti-vaccine content on its platform.
In it, Facebook announced that groups and pages that share anti-vaccine misinformation would be removed from its recommendation algorithm — but not removed altogether. The move was a tacit acknowledgment of the power that groups have in spreading bogus content.
BuzzFeed News reported in March 2018 that the groups feature, often lauded by Facebook leadership — and prioritized in News Feed — had become “a global honeypot of spam, fake news, conspiracies, health misinformation, harassment, hacking, trolling, scams and other threats to users.” Why?
“Propagandists and spammers need to amass an audience, and groups serve that up on a platter,” Renee DiResta, a security researcher, told BuzzFeed. “There’s no need to run an ad campaign to find people receptive to your message if you can simply join relevant groups and start posting.”
And, while the company has taken several steps to limit the spread of fakery in News Feed, until Wednesday, it was doing little to combat misinformation specifically in groups.
“There’s no concerted effort to get rid of false news, misinformation, whatever,” a former Facebook employee who worked on groups told Poynter in January. “It’s so much worse because it sits there and it’s hidden … it’s just as bad as a false news misinformation generation machine as it ever was on News Feed.”
Leonard Lam, a spokesman for Facebook groups, told Poynter that the same anti-misinformation policies that govern products like News Feed apply to the entire platform. That means bogus articles, images and videos debunked by Facebook’s fact-checking partners will appear with the relevant fact check displayed below them — even in groups.
Those signals will also be used to determine which groups are repeat misinformers, one of the first things Facebook has done specifically to combat misinformation in groups.
Crowdsourcing trust in news
Wednesday’s announcement comes as Facebook expands its partnership with fact-checking outlets around the world — arguably the company’s most visible effort to combat misinformation on the platform.
Facebook launched the program in December 2016 with American fact-checkers like (Poynter-owned) PolitiFact, Snopes and Factcheck.org. The goal: To identify, debunk and reduce the reach of false news stories on the platform. Once a hoax is flagged as false, its future reach in the News Feed is decreased and a fact check is appended to it. (Disclosure: Being a signatory of Poynter’s International Fact-Checking Network’s code of principles is a necessary condition for joining the project.)
Since then, it has expanded to let fact-checkers debunk false images and videos. The partnership has grown to 47 projects writing in 23 languages around the world. And while projects like Snopes and CBS have pulled out for different reasons, outlets like the Associated Press have recently expanded their commitment to the program.
One new anti-misinformation feature could help bolster that work.
“There simply aren’t enough professional fact-checkers worldwide and, like all good journalism, fact-checking takes time,” Rosen and Lyons wrote in the blog post. “One promising idea to bolster their work, which we’ve been exploring since 2017, involves groups of Facebook users pointing to journalistic sources to corroborate or contradict claims made in potentially false content.”
CEO Mark Zuckerberg aired that idea in a Facebook video in February — a little more than a year after he first floated it. The move wasn’t popular among journalists, who said that everyday Facebook users aren’t able to set aside their biases to grade credible news outlets.
But a study published in February 2018 suggests otherwise.
“What we found is that, while there are real disagreements among Democrats and Republicans concerning mainstream news outlets, basically everybody — Democrats, Republicans and professional fact-checkers — agree that the fake and hyperpartisan sites are not to be trusted,” said David Rand, an associate professor at the Massachusetts Institute of Technology, in a press release.
According to Wednesday’s blog post, Facebook will continue exploring the idea by consulting academics, fact-checking experts, journalists and civil society organizations.
“Any system we implement must have safeguards from gaming or manipulation, avoid introducing personal biases and protect minority voices,” Rosen and Lyons wrote.
More context on Facebook
In the past, tech companies have turned to websites like Wikipedia to provide more context about the sources that publish on their platforms. On Wednesday, Facebook announced a slew of similar new indicators.
“We’re investing in features and products that give people more information to help them decide what to read, trust and share,” Rosen and Lyons wrote in the blog post.
Facebook has updated its context button, launched in April 2018, to include information from The Trust Project about publishers’ ethics policies, ownership and funding structures. The company is starting to show more information in its page quality tab, which launched in January to show page owners which of their posts were debunked by Facebook’s fact-checking partners. And, in Messenger, the company is adding a verified badge to cut down on impersonations and scams.
Facebook is also starting to label forwarded messages in Messenger — a tactic seemingly borrowed from sister company WhatsApp, which rolled out a similar feature in July 2018 in an attempt to cut down on the spread of misinformation.
While indicators like Facebook’s context button are an easy way to give users more information about publishers on social media — and thereby prevent them from sharing misinformation — they also have the potential to be gamed.
Over the summer, someone vandalized the Wikipedia page for the California Republican Party to say that it supported Nazism. While most cases of Wikipedia vandalism are caught fairly quickly, this one made its way to Google, which surfaced the false edit high up in search results.
That’s rare. But given the volume of edits that are made to Wikipedia each day, it can be hard for tech platforms to catch all instances of vandalism.
“Of course it is a pretty weak way to combat fake news because Wikipedia is not a reliable source of information — as even Wikipedia acknowledges,” Magnus Pharao Hansen, a postdoctoral researcher at the University of Copenhagen, told Poynter in June. “Wikipedia is very vulnerable to hoaxes and contains all kinds of misinformation, so it is not a very serious way to combat the problem of fabricated news.”
At the same time, features like Facebook’s page quality tab have had a more demonstrable effect on the spread of misinformation.
After Factcheck.org debunked a false meme about U.S. Rep. Alexandria Ocasio-Cortez (D-N.Y.) in March, the page that published the photo deleted it. And it wasn’t the first time; other repeat misinforming pages on Facebook have taken down content debunked by the company’s fact-checking partners, and some have rebranded their operations altogether.