If fake news is wrong, what’s it take to be right?

TL;DR: I think it would be awesome if data analysis were used to find critiques of “fake news” that are close to a reader’s existing values, giving them the opportunity to “be right” or at least to feel less sure about their wrong opinions. That might be a more compelling response to the popularity of fake news than trying to convince people the news is outright wrong, because the feeling of being right is more compelling than someone telling you you’re wrong.

Almost everybody secretly likes to say “well, actually…”, that annoying catchphrase of the Mansplainer. I think one of the most visceral appeals of fake news on social media is that it gives many people a chance to “be right” for a change. What if the antidote to fake news weren’t trying to prove to people that the news they’re reading is wrong, but instead giving them more opportunities to be right?

Reading yesterday’s post on VentureBeat titled “Can AI Detect Fake News?” got me thinking about the nature of truth, our relationship with it, and what data+automation could do to at least dial back some strident opinions where they pose a danger.

In that post, Hira Saeed concludes: “There is a role for AI to play in separating fact from fiction when it comes to news stories. The question remains whether readers still care about the difference.”

At first blush I thought that was a silly conclusion to end with, but I’m reminded of something Seth Godin once wrote: “Sometimes we find ourselves in a discussion where the most coherent, actionable, rational argument wins. Sometimes, but not often. People like us do things like this.”

Further, on some matters there may be a clear truth, e.g. “Hillary Clinton does not have Parkinson’s.” But on many matters, there really isn’t a single ultimate version of the truth. I read a few months ago about a paradigm called Feminist Standpoint Theory, which argues that in many instances there isn’t a single bedrock truth; rather, the best way to get a picture of reality is to take into account as many and as diverse a set of lived experiences as possible. I really like that.

In a recent New Yorker piece called “Why Facts Don’t Change Our Minds,” Elizabeth Kolbert summarizes research that concludes with two interesting suggestions for the future. First, asking someone with a strident (and, let’s say, wrong) opinion to explain that opinion leads them to report much lower confidence in it than they had before attempting the explanation. And second, merely introducing doubt in a public setting greatly deflates the social pressure to go along with a theory.

(Related research says it’s easy to get conservatives to support causes like refugee resettlement and environmental protection if you just appeal to their values of authority, purity, and patriotism. Barf! But stay with me here for a moment.)

Put all of this together, and what could big data plus automation do about “fake news”? One thing it could do is offer people a chance to be right again, to know more than other people know, by discovering and analyzing a multitude of perspectives, introducing doubt, and maybe offering up the best-explained critique of whatever you’re reading that’s closest to your own professed values.
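To make that last step concrete, here’s a toy sketch in Python of what “surface the critique closest to your professed values” might look like under the hood. Everything in it is invented for illustration: the keyword lists are a crude stand-in for real value modeling, and the sample critiques are made up.

```python
import math
import re
from collections import Counter

# Toy value vocabulary, loosely inspired by moral-foundations-style framing.
# A real system would learn these signals rather than hard-code keywords.
VALUE_WORDS = {
    "authority":  {"law", "order", "leadership", "tradition", "duty"},
    "purity":     {"clean", "sacred", "natural", "pristine", "untainted"},
    "patriotism": {"country", "nation", "flag", "homeland", "american"},
    "fairness":   {"equal", "rights", "justice", "fair", "share"},
}

def value_profile(text: str) -> Counter:
    """Count hits against each value's keyword set."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return Counter({value: len(words & kws) for value, kws in VALUE_WORDS.items()})

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two value profiles."""
    dot = sum(a[k] * b[k] for k in VALUE_WORDS)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def closest_critique(reader_text: str, critiques: list[str]) -> str:
    """Pick the critique whose value framing best matches the reader's."""
    reader = value_profile(reader_text)
    return max(critiques, key=lambda c: cosine(reader, value_profile(c)))

critiques = [
    "This story is misleading, and repeating it tramples equal rights and justice.",
    "This story is misleading, and repeating it insults our nation and the duty we owe our country.",
]
# A reader who professes authority/patriotism values gets the second framing.
print(closest_critique("I care about my country, law and order, and tradition.", critiques))
```

The point of the sketch is just the shape of the pipeline: profile the reader’s values, profile each candidate critique, and rank by similarity, so the critique that lands is the one framed in terms the reader already cares about.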

I remember that almost 10 years ago, outside developers Andy Baio and Joshua Schachter built a visual overlay for Memeorandum, the best political aggregator on the web, that showed the political slant of any blog participating in a conversation based on its linking history. That meant you could sample from across the political spectrum, see which direction a common conversation was leaning, and so on. It was awesome!
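For flavor, here’s roughly the arithmetic an overlay like that performs. The seed lean scores and the sample linking history below are made up; the idea is just that a blog’s slant can be estimated as a link-weighted average of the known leans of the outlets it habitually cites.

```python
# Hypothetical seed scores: -1 = strongly left, +1 = strongly right.
SEED_LEAN = {
    "dailykos.com": -0.8,
    "nytimes.com": -0.3,
    "wsj.com": 0.3,
    "breitbart.com": 0.8,
}

def estimate_lean(outbound_links: dict[str, int]) -> float:
    """Link-count-weighted average of the known leans a blog links to."""
    known = {domain: n for domain, n in outbound_links.items() if domain in SEED_LEAN}
    total = sum(known.values())
    if total == 0:
        return 0.0  # no overlap with seed outlets; no signal
    return sum(SEED_LEAN[domain] * n for domain, n in known.items()) / total

# A hypothetical blog's linking history over some window:
blog_links = {"nytimes.com": 40, "dailykos.com": 25, "wsj.com": 5, "example.com": 10}
print(f"estimated lean: {estimate_lean(blog_links):+.2f}")  # negative = leans left
```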

Imagine if there were something like that people could use to discover and summarize additional perspectives close to their own, but that also introduced the burden of explanation and a sense of doubt. (Hopefully there’s enough conversation, enough data, and enough diversity of opinion, even within common general perspectives, to analyze.)

Then people could say “well, actually…” and deflate some of this fake news themselves. Just an idea 🙂