Admitting you have a problem is the classic first step of many “twelve-step programs.” In the month since my previous post explaining why I was choosing to leave the social media platform, Facebook has done a 180 on their position regarding the spread of propaganda on their site.

At first, CEO Mark Zuckerberg released a statement indicating that the company would take an ostrich-with-its-head-in-the-sand approach to the problem, cynically pretending it doesn’t exist while continuing to benefit from ad revenue from the purveyors of propaganda that are currently flooding the site with “fake news” (a term I take issue with, because it sounds overly quaint; we are not talking about bogus celebrity gossip, but political propaganda, plain and simple).

Facebook has now released a statement acknowledging that the propaganda problem exists, and is taking real steps to mitigate it. Stories that are “contested by a third party source” (such as PolitiFact, Snopes, etc.) will have a big red label informing the reader of this. That’s great; as I said in the past, Facebook should treat these false and misleading stories the same way that modern web browsers treat malware sites.

A trustworthy website

Here is an example of how a browser denotes the ‘trustworthiness’ of a website. This is the homepage of my bank, UW Credit Union. The green padlock and SSL certificate identifier indicate that the connection to the website is secure and inform you that you are indeed connected to the real UWCU website.

An untrustworthy website

Contrast that with what the browser does with an untrustworthy website. The whole viewport goes red with a stern warning that you probably shouldn’t trust this site. Rather than a green padlock in the address bar, there is an icon denoting danger and the https portion of the URL is struck out.
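For the curious, the padlock decision isn’t magic; it boils down to two checks that any TLS client performs. As a rough sketch (using Python’s standard `ssl` module, not anything browser-specific), the default client configuration enforces exactly the two things the padlock represents: the certificate chain must validate against trusted authorities, and the certificate must actually be for the site you asked for.

```python
import ssl

# A default TLS client context mirrors the browser's "green padlock" rules:
ctx = ssl.create_default_context()

# 1. The certificate must be for the hostname you typed in the address bar.
print(ctx.check_hostname)

# 2. The server must present a certificate that chains to a trusted authority;
#    connections without a valid chain are refused outright.
print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

A site like the malware page above fails one (or both) of these checks, which is why the browser strikes out the https and paints the screen red instead of showing the padlock.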

Just as Facebook is not responsible for actually creating the bogus content that is spread on it via users’ news posts and shared stories, Google did not actually create the malware site in the second picture. However, serving as the vehicle by which the user accesses this content means that Google (and Facebook) have an obligation to inform users that what they are viewing may be fraudulent or harmful.

An untrustworthy Facebook post

Here’s an example from Facebook’s blog post (linked above) of how their new warning system will look on a “disputed” news story. The important thing is that, even if you don’t actually click through to read the full story, you have an immediate visual indicator that it’s not trustworthy. This makes a huge difference, as without this indicator, stories from sketchy sites like Breitbart and Prntly Catalog look exactly as trustworthy as stories from legitimate news sources like The Washington Post or USA Today.

Now that Facebook is open to anyone over 13, it has a greater obligation to communicate to less-technical users that just because something is posted on Facebook does not mean that it’s legitimate news. Older users especially may not be used to the idea that these sites, while they may look official, simply do not have any accountability for the accuracy of what they publish in the same way that traditional media does.

Extra kudos to Facebook for (in their words) “disrupting financial incentives for spammers.” Many of these illegitimate news stories are simply clickbait using whatever inflammatory headline the publisher thinks will generate interest, in order to get pageviews on their site (which is undoubtedly slathered in advertisements). This is a big step that I frankly did not expect Facebook to take, as it will cut into the number of ads that these sites buy on Facebook, and thus hurt Facebook’s own profits.

I didn’t feel that I could participate in a social media platform that refused to admit that it was becoming complicit in the dissemination of propaganda (often essentially libel) and simultaneously profiting from it. Now that Facebook has taken these steps, I feel that I can once again participate with a clear conscience.