
This way you can stop fake news on Facebook

With Facebook introducing news flags to identify fake content, how can we stop fake news being circulated in the first place?
We’ve all done it. Noticed that juicy, surprising, shocking story appear in our Facebook news feeds, and hit the ever-tempting Share button. Later (moments, hours, days, weeks) we realize, or someone informs us, that the story was a fake. Maybe we delete the post, maybe we call out the fake story in a comment, maybe we leave it. Maybe we don’t even realize we’ve helped spread misinformation.

In what sometimes looks like a 24-hour, social media-driven, breaking news cycle, online fake news — sometimes humorous, sometimes political, sometimes personal — has a tendency to spread quickly across social networks. What role, then, should social media companies — especially Facebook, Twitter, and YouTube — play in fighting viral misinformation?

Facebook recently became the first company to implement an early solution directly addressing the problem: a new option to flag news feed items as “It’s a false news story”. If enough users flag a link as “false news”, it will appear less often in the News Feed and may show a warning: “Many people on Facebook have reported that this story contains false information.”
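As a rough mental model, that mechanism boils down to a counter and a threshold. Here is a minimal Python sketch of the logic as described above; the names and the threshold value are assumptions for illustration, since Facebook has not published how its system actually works.

```python
# Hypothetical sketch of the flag-threshold logic described above.
# The names and FLAG_THRESHOLD value are illustrative assumptions,
# not Facebook's actual implementation.
from dataclasses import dataclass
from typing import Optional

FLAG_THRESHOLD = 100  # assumed cutoff; the real value is not public


@dataclass
class Story:
    url: str
    false_news_flags: int = 0  # count of "It's a false news story" reports


def flag_as_false(story: Story) -> None:
    """Record one user report."""
    story.false_news_flags += 1


def should_downrank(story: Story) -> bool:
    """Once enough users flag a link, show it less often in the News Feed."""
    return story.false_news_flags >= FLAG_THRESHOLD


def warning_label(story: Story) -> Optional[str]:
    """Attach the disclaimer quoted above once the story crosses the threshold."""
    if should_downrank(story):
        return ("Many people on Facebook have reported that "
                "this story contains false information.")
    return None
```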


The problem(s)
It’s entirely laudable that Facebook is trying out this new feature: online misinformation can have serious, life-threatening offline consequences, especially in parts of the world where digital media literacy is low and information ecosystems are weak. The approach, however, raises some serious questions:

Is there any way to stop people gaming the system?

All systems can be gamed, but in the case of Facebook we’ve already seen a steady stream of prominent groups and pages shut down — likely prompted by “community” reporting of those pages. While in theory this might sound like a viable approach, in practice reporting is often used as a tactic by people who oppose a particular view or ideology.

Most prominently this has been seen with Syrian opposition groups using Facebook to document and report on the ongoing civil war:

Facebook doesn’t disclose information about who reported whom, making it impossible to verify these theories. But the pro-Assad Syrian Electronic Army (SEA) has publicly gloated about this tactic. “We continue our reporting attacks,” read a typical post from December 9 on the SEA’s Facebook page.

It’s thus easy to imagine activist groups flagging ‘false’ news stories en masse based not on their factual content, but on their desire to silence an opposing or dissenting voice. Once a flagged news story has disappeared from our feeds, it’s unclear whether there’s a way to challenge the ‘false’ assignment, or even a way to get a list of the stories that have been blocked.
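One way to blunt that tactic (a suggestion of mine, not anything Facebook has announced) would be to weight each flag by the reporter’s track record instead of counting raw reports. A short sketch, with all names hypothetical:

```python
# Hypothetical reputation-weighted flagging: one possible defense against
# coordinated mass reporting. Not a documented Facebook mechanism.

def weighted_flag_score(flags: list[dict]) -> float:
    """Sum flags, discounting reporters whose past flags were mostly rejected.

    Each flag is a dict like {"reporter_accuracy": 0.9}, where
    reporter_accuracy is the fraction of that user's past reports
    that reviewers upheld.
    """
    return sum(f["reporter_accuracy"] for f in flags)


# A thousand flags from accounts with a poor track record now count for
# less than a hundred flags from historically reliable reporters.
brigade = [{"reporter_accuracy": 0.05}] * 1000   # score: 50.0
reliable = [{"reporter_accuracy": 0.95}] * 100   # score: 95.0
assert weighted_flag_score(brigade) < weighted_flag_score(reliable)
```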

Are ‘false news stories’ and stories which ‘contain false information’ the same thing?

The protective disclaimer, “Many people on Facebook have reported that this story contains false information”, leads to many questions that will seemingly be left unanswered: Which piece of information in the story is false? Is it that the key facts of the story are false, or that there’s an error in a background statistic? Is there strong evidence that proves it is false? Who is saying it’s false?

Does this all mean that the stories that appear in my news feed are 100 per cent true?

The argument could be made that by being seen to intervene and remove ‘false news stories’ from the system, users will logically assume that stories that are not removed have a higher degree of credibility. This may not necessarily be the case, and in our world of ‘filter bubbles’ this tendency could help strengthen our confirmation bias and actually limit the flagging of false news stories.

3 ways Facebook can improve their false news feature


Our work on Checkdesk has led to many great conversations with many interesting people about the challenges of viral misinformation online — some have already posted on this subject, and others may do so in the near future. In the meantime, here are a couple of suggestions for other ways Facebook could help limit the spread of fake news stories.

1. Help spread the debunks

Research suggests that fake news stories typically spread faster and wider than articles debunking that same story. Craig Silverman — debunker extraordinaire and editor of the Verification Handbook — is also working on the excellent Emergent.info, which tracks the spread of online rumors and of articles debunking the fakes. A quick glance at Emergent strongly suggests that fakes propagate faster and wider than corrections — hopefully Craig’s research will shed more light on that relationship.

Comparing propagation of a false report to its correction by traffic, by Gilad Lotan, via Poynter.

If Facebook is taking the step of filtering stories flagged as ‘fake’, then maybe it could also lend a hand in spreading articles that directly debunk those fakes — maybe even directly into the news feeds of people who shared the original fake stories.
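In feed terms, that could be as simple as finding everyone who shared the flagged link and queuing the debunk for them. A minimal sketch, with all names and data structures assumed for illustration:

```python
# Hypothetical "spread the debunk" step: target the debunking article
# at every user who shared the original fake story.

def queue_debunk(shares: dict[str, set[str]],
                 fake_url: str,
                 debunk_url: str) -> list[tuple[str, str]]:
    """Return (user_id, debunk_url) feed insertions for each sharer."""
    sharers = shares.get(fake_url, set())
    return [(user, debunk_url) for user in sorted(sharers)]


shares = {"http://example.com/fake-story": {"alice", "bob"}}
print(queue_debunk(shares, "http://example.com/fake-story",
                   "http://example.com/debunk"))
# [('alice', 'http://example.com/debunk'), ('bob', 'http://example.com/debunk')]
```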

2. Show the work

If media companies (Facebook included) want to help users make smart choices about the media they’re consuming, then transparency in reporting is vital. Typical disclaimers (“This report couldn’t be independently verified”, “Many people have flagged this story as containing false information”) are confusing because they’re so incomplete: there’s no indication of the level of certainty or verification.

With Checkdesk, we attempt to solve this problem with verification footnotes (allowing journalists and community members to ask questions and add important contextual or corroborating information about a specific link) and statuses (“Verified”, “In Progress”, etc.). While this is deliberately a highly manual process, others have looked at ways of presenting “truthiness” based on algorithmic analysis, such as Rumor Gauge. Scalability is clearly a problem for Facebook, so a mixture of algorithmic analysis and community-driven contextual information could be used to make the filtering process more transparent and accountable.
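To make that concrete, here is one way a link could carry a verification status, the human footnotes behind it, and an algorithmic score side by side. The status values mirror the ones named above; everything else is an assumption for illustration, not Checkdesk’s or Facebook’s actual data model.

```python
# Sketch of a link that "shows the work": status, human footnotes, and
# an optional algorithmic score. The combination rule is assumed.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Status(Enum):
    UNSTARTED = "Unstarted"
    IN_PROGRESS = "In Progress"
    VERIFIED = "Verified"
    FALSE = "False"


@dataclass
class CheckedLink:
    url: str
    status: Status = Status.UNSTARTED
    footnotes: list[str] = field(default_factory=list)  # community context
    rumor_score: Optional[float] = None  # e.g. a Rumor Gauge-style estimate


def disclaimer(link: CheckedLink) -> str:
    """Replace the vague blanket warning with the actual evidence trail."""
    notes = "; ".join(link.footnotes) or "no notes yet"
    algo = (f"algorithmic credibility {link.rumor_score:.0%}"
            if link.rumor_score is not None else "no algorithmic score")
    return f"{link.status.value} ({algo}): {notes}"
```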

The aim shouldn’t be simply to filter stories but to give users the information they need to assess whether a story is real, fake, or somewhere in between.

3. Support digital media literacy

It shouldn’t be a surprise that fake stories spread far and fast: if even journalists at world-class newsrooms are struggling to sort online fact from fiction, then it’s hard to know what to trust. Internet users around the world need better tools and knowledge about the risks and dangers of misinformation, and basic knowledge about how to spot (or better, check) a fake story.

Progress has been made in this area in recent years, but more resources like the Verification Handbook are needed (and in more languages!) to guide people to ask the right questions about the link they’re retweeting or the post they’re sharing.

What are your thoughts on Facebook’s new feature? What suggestions do you have for helping make it more effective?

Reference / Credit: First Draft News, Facebook, Wikipedia, and Medium
