There’s a bit of irony when I consider Fox Mulder believing with such unbridled passion that “the truth is out there.” Had The X-Files peaked in 2016, that slogan might have read, “the truth is obfuscated.” Mulder might have gone so far as to say, “I cannot believe.”
Such is the case now, when fake news has inundated our browsers and apps to such an extent that many have opted out of participating altogether.
During the election of 2016, even Facebook found itself embroiled in the drama. Most of the fake news stories were created by scammers looking to make a fast buck. Those scammers knew they had a massive and willing audience on Facebook, and they struck. According to Pew Research, 64% of adults get news through social media, yet only 4% of users place a lot of trust in the information they find on those platforms, and just 30% trust it somewhat.
Paul Horner is a writer of fake news. In fact, he firmly believes he is the reason Donald Trump will soon be in the White House. And he’s not the only one. Until recently, our main venues of public interaction have allowed these purveyors of lies free rein on their platforms.
Thankfully, the crackdown has begun. Mark Zuckerberg has indicated that Facebook is working on a fake news detection system, a warning system, and a means for users to report fake news.
Many consider this too little, too late.
In fact, while Zuckerberg sat back and let the deluge of fake news spill out, users were doing everything they could to fact-check. It wasn’t enough. The inundation continued, and the masses stood, Mulder-esque, in their desire to believe.
Programmers step up
Post-election, things have finally started to change. Nabanita De attended a hackathon at Princeton University and, with three fellow programmers, developed an algorithm that distinguishes what is real from what is fake on Facebook. They call the tool FiB. The algorithm soon became a Google Chrome extension that scans your Facebook feed in real time and verifies the authenticity of every post. The FiB backend AI extracts the claims made within posts and verifies them using image recognition, keyword extraction, source verification, and even a Twitter search (to confirm whether a screenshot of a tweet is authentic).
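To make the approach concrete, here is a minimal sketch of how a feed-checking extension in the style of FiB might combine several independent checks into a single verdict per post. To be clear, this is not FiB’s actual code; every function name, score, and threshold below is an invented placeholder.

```typescript
// Hypothetical sketch of a FiB-style verification pipeline.
// All names, scores, and the threshold are invented placeholders.

interface Post {
  text: string;
  imageUrls: string[];
  sourceDomain: string;
}

type Verdict = "verified" | "unverified";

// Each check returns a confidence in [0, 1]. A real system would call
// out to image-recognition, keyword-extraction, and source-verification
// services; these are stand-in stubs.
async function checkSourceReputation(domain: string): Promise<number> {
  return domain.endsWith(".gov") ? 0.9 : 0.5;
}

async function checkKeywordClaims(text: string): Promise<number> {
  return text.trim().length > 0 ? 0.6 : 0.0;
}

async function checkImageProvenance(urls: string[]): Promise<number> {
  return urls.length === 0 ? 0.7 : 0.5;
}

async function verifyPost(post: Post): Promise<Verdict> {
  const scores = await Promise.all([
    checkSourceReputation(post.sourceDomain),
    checkKeywordClaims(post.text),
    checkImageProvenance(post.imageUrls),
  ]);
  const average = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  // A post is tagged "verified" only if the combined confidence clears
  // an (arbitrary, hypothetical) threshold.
  return average >= 0.7 ? "verified" : "unverified";
}
```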
FiB is not alone. There is also the Fake News Alert Chrome extension, created by Brian Feldman of New York Magazine. It pulls from a list of fake news sites (NOTE: The list itself has since been removed from the document) generated by Melissa Zimdars, Assistant Professor of Communications at Merrimack College, and places a thin strip at the top of any page (Figure A) served by a source on that blacklist.
Figure A
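The mechanics of a blacklist-based extension like this are straightforward: a content script checks the current page’s domain against the list and, on a match, injects a warning strip. The sketch below illustrates the idea; the blacklist entries are invented placeholders, not items from Zimdars’ list.

```typescript
// Hypothetical content-script sketch in the spirit of Fake News Alert.
// The domains below are invented placeholders, not real blacklist entries.
const BLACKLIST = new Set<string>([
  "example-fake-news.test",
  "totally-real-stories.test",
]);

function injectWarningBanner(): void {
  const banner = document.createElement("div");
  banner.textContent =
    "Warning: this site appears on a blacklist of fake news sources.";
  // Pin a thin strip to the top of the page, above all other content.
  banner.style.cssText =
    "position:fixed;top:0;left:0;width:100%;padding:8px;" +
    "background:#c0392b;color:#fff;text-align:center;z-index:2147483647;";
  document.body.prepend(banner);
}

// Strip a leading "www." so the lookup matches bare domains.
const host = window.location.hostname.replace(/^www\./, "");
if (BLACKLIST.has(host)) {
  injectWarningBanner();
}
```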
The problem with this extension is that you are placing your faith in whoever generates the blacklist. That becomes problematic, especially during an election cycle. Read the reviews and comments on the extension’s page and you’ll see people calling out the developer as too “left-leaning” for the blacklist to be trusted.
In this event, who do we believe?
There are other, similar extensions that do the same thing to varying degrees. After trying out enough of them (and reading through the reviews and comments), it becomes all too clear that the debate is muddied by the dividing line between satire and lies.
Let’s take a look at the definitions.
Satire: The use of humor, irony, exaggeration, or ridicule to expose and criticize people’s stupidity or vices, particularly in the context of contemporary politics and other topical issues.
Lie: A false statement made with deliberate intent to deceive; an intentional untruth; a falsehood.
The difference between those two words could not be clearer (and should bring a level of clarity to the debate). One is used to expose; the other is meant to deceive. One is protected under the First Amendment; the other is not.
And yet, you call out Breitbart and its fans will point to The Onion.
Is it time for legislation?
I made it clear (on Facebook, oh the irony) that it was time for legislation. I stated that fake news sites should be labeled as such. Remember that list generated by Zimdars? To build it, she developed a rating system with four categories:
- CATEGORY 1: Fake, false, or regularly misleading websites that rely on “outrage”, distorted headlines, and decontextualized or dubious information
- CATEGORY 2: Websites that may circulate misleading and/or potentially unreliable information
- CATEGORY 3: Websites that use clickbait-y headlines and social media descriptions
- CATEGORY 4: Sources that are purposefully fake with the intent of satire/comedy
That’s a fairly well-thought-out list, one that could easily be applied to many sites on the web (a machine-readable rendering is sketched below). One might think it incredibly easy to use such a list to craft legislation that could protect the masses from being inundated by bogus claims and clickbait-y promises that lead to false statements and play on the gullibility of the human condition.
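As a thought experiment, the categories translate naturally into a taxonomy that a browser extension or moderation tool could apply. The sketch below is hypothetical; the label strings and the example are mine, not Zimdars’.

```typescript
// Hypothetical rendering of Zimdars' four categories as a taxonomy.
// The label strings and the example usage are invented, not hers.
enum SiteCategory {
  FakeOrMisleading = 1,      // relies on "outrage" and distorted headlines
  PotentiallyUnreliable = 2, // may circulate misleading information
  Clickbait = 3,             // clickbait-y headlines and descriptions
  Satire = 4,                // purposefully fake, intended as satire/comedy
}

const LABELS: Record<SiteCategory, string> = {
  [SiteCategory.FakeOrMisleading]: "Regularly false or misleading",
  [SiteCategory.PotentiallyUnreliable]: "Potentially unreliable",
  [SiteCategory.Clickbait]: "Clickbait",
  [SiteCategory.Satire]: "Satire/comedy",
};

function labelFor(category: SiteCategory): string {
  return LABELS[category];
}

// A satire site, for instance, would be tagged rather than blocked.
console.log(labelFor(SiteCategory.Satire)); // "Satire/comedy"
```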
However, this is a slippery slope, one that gets muddied by the legality (or illegality, if you will) of hate speech. That is never more obvious than during an election, when tempers and fear run at an all-time high. Hate speech comes out to play, couched as news. At that point, fact-checking becomes a necessity of everyday life. But when we fact-check claim A against source B, how can we be certain B is trustworthy? What if A is left-leaning and B is right-leaning? The murky waters of politics make this issue muddier and darker.
But does it? If A claims X about B and X is proved false, is it really murky? Or does the murk lie in the public wanting to believe X, because it paints B in a light A wants to see? And who is to serve as judge and jury to declare X a lie? To complicate matters even more, does claiming that X is a lie infringe upon the First Amendment? Thing is, that crucial amendment does not cover false statements or protect hate speech. But during an election cycle, the very ideas of a “false statement” and “hate speech” become cloudy.
How? To the educated, it would seem that the difference between truth and lie is fairly binary.
A few weeks ago, on Facebook (no less), I proposed that legislation be enacted requiring sites that publish misleading or false statements on a regular basis (as in, it is their bread and butter) to label all such content with something like:
The content below is meant for entertainment purposes only.
Simple. It doesn’t actually come out and state, “Hey, we’re lying to you!” But it clearly puts the onus on the viewer to understand what that means. After all, the burden should be at least partially on the reader to know when they are being misled or blatantly lied to. I believe this has become especially necessary considering how many users now get an overwhelming majority, if not all, of their information from their mobile devices. That complicates matters even further, because apps like Facebook display news in-app (as opposed to opening a default web browser), so creating a single app to either block or tag fake news sites becomes even more challenging for the developer.
Tricky business
This is tricky business we’re dealing with. Should the burden of proof fall on the user, the app developer, or those generating the content? The answer is a multi-layered complication that will require think tanks, lawyers, politicians, and specialists of all flavors. Hopefully, before 2018 or 2020, we’ll have a solution in place so the spread of fake news doesn’t once again threaten the fragile trust between websites, consumers of content, politicians, friends, and family.