Facebook can't save us from fake news



Facebook has a fake news problem. Google isn’t doing a good enough job of filtering fake news out of its services. Twitter bots spread fake news. The government needs to ban fake news. Statements like these reflect a sentiment that is both lazy and dangerous.

It’s lazy because it ignores our own culpability in hitting “share” the moment we encounter something compelling, without pausing to verify it, or in many cases, even reading beyond the headline.

It’s dangerous because it calls for giving the platforms or the government even more power over the public discourse by having them decide what is and isn’t okay to share.

What is fake news, anyway?

Fake news is just the trendy term for misinformation, but it becomes problematic when it’s used not to describe the general idea of misinformation but to label pieces of content, entire publishers, or even the industry as a whole.

Misinformation comes in many flavors. There are completely fabricated stories intended to deceive or confuse. There are completely fabricated stories intended to amuse and entertain their readers. There are articles that take quotes out of context or omit key information to mislead readers. There are factual errors in articles written amid confusion, hearsay, and conflicting reports in the aftermath of a disaster.

Calling an article “fake news” is an accusation stripped of those important distinctions. It’s like calling someone a “law breaker” without distinguishing between traffic violations and murder.

Worse yet, accepting broad labels like that without insisting on clarification allows anyone to point at something they don’t like and yell “fake news”. Politico recently reported on numerous instances of despots and dictators around the world doing exactly this. Faced with information they don’t like, they simply cry “fake news”, usually with no concrete evidence to counter the report.

And therein lies the problem with expecting Facebook, Twitter, or Google to filter out fake news. Where do you draw the line? Who decides where that line gets drawn?

Would filtering out false stories even solve the problem?

Let’s leave aside the questions of identifying fake news and the potential for censorship. Would blocking it from being shared on Facebook, Twitter, and the like even be an effective way of stopping the spread of misinformation?

People often speak of virality to describe content and ideas spreading from person to person, by analogy to the way epidemics spread. The analogy works quite well for a number of behaviors. Viral ideas, like their biological counterparts, spread from person to person. Individuals vary in their susceptibility to a virus, and they vary in their susceptibility to viral ideas. Social media can act like mass transit in its ability to rapidly spread viral ideas to large numbers of people. Most viral ideas, like their biological counterparts, are benign or even beneficial, but misinformation is a disease-causing pathogen.

We take several measures to keep ourselves from getting sick all the time. Our bodies have physical barriers such as our skin, and we wear protective clothing and practice hygiene. All of these things reduce our exposure to pathogens and decrease our chances of getting sick.

However, it’s all but impossible to keep every pathogen out completely. Without our immune system, it wouldn’t be long before something slipped past all the filters and made us sick. Fortunately, our immune system is always hard at work. It quickly deals with most small infections before we even notice them, and it learns from past infections to deal with future ones more effectively.

If misinformation is a harmful pathogen, then efforts to filter offenders out of social media networks are the public sanitation system. They might be helpful, but to really keep misinformation in check we need an immune system.

UnFake.us is building that immune system.

A better approach

UnFake.us lets everyone get involved in fact checking content on the web, whether by contributing fact checks or simply pointing out claims that need to be checked. You see articles annotated with fact checks and other relevant information right within the article body.

An annotation alerting the reader to an exaggeration reported by UnFake.us. You can click on the tag to see a detailed explanation.

Users can point out factual errors, quotations taken out of context, and misleading statements. They can highlight claims that do not cite a source (think of Wikipedia’s [citation needed] tag), or add context that either supports or provides a counterpoint to the article.

xkcd’s “citation needed” comic (image credit: xkcd)

Every annotation includes an explanation. It’s not just a label; it helps you understand the full picture. You can participate in a discussion of every annotation, and contribute annotations of your own.

In a nutshell, the UnFake.us community keeps a watchful eye on content across the web to educate you about possible sources of misinformation. It doesn’t just tell you something is wrong; it helps you understand why.

Understanding misinformation is more effective than trying to block it out, because understanding inoculates readers against future iterations of that misinformation. This understanding also empowers them to participate in discussions and inform friends who unwittingly share incorrect information.

But how do we even know what’s true?

In a world where misinformation is everywhere and people don’t even agree on which sources to trust, how does one even figure out what is true? This is a valid concern and there is no simple solution, but you might be surprised to learn how often false or misleading articles can be debunked just by fully reading the very sources they cite.

For example, a Slate article about the inactive voter screening process in Alabama claimed that voters flagged by the scheme had to fill out a “lengthy, complex form” before being allowed to vote. They even linked to an example of the form. However, a glance at it reveals that most of the two-page form consists of instructions and sections to be filled out by election officials. The portion the voter needs to fill out is less than half a page and requests information most people would have readily available.

An example of the “lengthy, complex form”, highlighting the portions voters need to fill out. The rest of this two-page form is for election officials.

Similarly, manipulated images are often based on stock images or other readily available sources. For example, an image claiming to show a Seahawks player burning an American flag is readily identifiable as fake when you see the original version, uploaded to Twitter more than a year before the supposed flag burning. It took a good amount of research or luck for the first person to uncover the original image, but anyone can verify it easily.

Manipulated image claiming to show a player burning an American flag in the locker room, published Sep 2017.

Original, unmodified image of a victory dance in the locker room, published on the Seahawks Twitter page in Jan 2016.

Even if it takes some careful reading and insight to uncover the error or deception for the first time, it’s usually relatively easy for anyone to confirm it for themselves once it’s pointed out. There is no need to blindly take someone else at their word.

A similar principle applies to unsourced claims. Unsourced claims aren’t necessarily false, but an article that relies heavily on them should be viewed with skepticism. An article might cite information from other websites and superficially appear to be adequately sourced, while the linked articles in turn fail to cite any source. This is another example of an insight that can be easily verified once it’s pointed out.

If someone claims to be the original reporter of a piece of information, is that claim credible? It’s certainly plausible that a reporter for Fox News, NBC, NPR, or another major media outlet interviewed a US Senator directly. It’s much less credible when a small personal blog makes that claim.

Of course, these techniques don’t cover all possible types of deception, but they significantly raise the bar for creators of false content, making it harder for them to spread false information. We’ll do some deeper dives into these and other recognizable patterns of misinformation in future blog posts.

A call to action

If you’re like me, you can probably name at least a handful of websites you’re absolutely convinced routinely publish false information. If you’re left-leaning, it might be Breitbart or Fox News. If you’re right-leaning, it might be the Huffington Post or The New York Times.

Your mission is to read recent articles on those websites, find one provably false statement, and post it on UnFake.us. Enter your email address to get started: