Alex Jones was just banned from YouTube, Facebook and iTunes. Here’s how he managed to survive until now

August 6, 2018
Alex Jones of Infowars speaks at a rally for then-presidential-candidate Donald Trump. (Lucas Jackson/Reuters)

In the last 24 hours, Apple, Facebook and YouTube have banned content from conspiracy theorist broadcaster Alex Jones. Last week, I interviewed Tarleton Gillespie, whose new book, “Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media,” talks about how big Internet platforms have tried to deal with Jones and other controversial sources of content. Here’s what he had to say.

Henry Farrell: Facebook has just banned Alex Jones [notorious for pushing conspiracy theories about the Sandy Hook Elementary School shooting] from using his personal Facebook account for 30 days, while Twitter has been criticized by President Trump for purportedly “shadow-banning” conservatives. Your new book argues that decisions over how to moderate content are crucial to how social media platform companies like Facebook and Twitter work. Why is it so important, and why has it been overlooked?

Tarleton Gillespie: Facebook, Twitter and the other social media platforms began with a clear promise to their users: Post what you want, it will circulate; search for what you want, it will be there waiting. It was the fundamental promise of the Web, too, but made easier and more powerful.

And for most social media users in most cases, it can seem true. But underneath that promise, social media platforms have always had rules. They have always removed content and users who violate those rules, and they have always struggled over how to do it — not just what to remove, but how to justify those removals and still seem open, connected and meritocratic.

The truth is, these are machines designed to solicit your participation, monetize it as data and then make choices about it: what gets seen by whom, what goes viral and what disappears, and what doesn’t belong at all. Peel away that apparent openness, and what you’ll find is a massive apparatus for solving the central problem of social media: how to take in everything but only deliver some of it. And that includes making value judgments about the most troubling aspects of public speech: deciding what’s hateful, pornographic, harassing, threatening and fraudulent. As platforms draw those lines, they are quietly asserting the new contours of public speech and priming the political battles that will test them.

HF: How do social media companies decide what content to moderate, and how is this changing?

TG: Social media companies face two kinds of problems. It’s difficult to decide what counts as objectionable, and it’s difficult to find it all given their immense scale. Setting aside a few sites, like Reddit and Wikipedia, that distribute the work of overseeing content to volunteer moderators, most of the major platforms take a “customer service” approach: Ask users to flag content they find objectionable. Deploy an army of clickworkers to review that content. And reserve the hardest decisions for a small internal policy team.
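To make that division of labor concrete, here is a minimal, hypothetical sketch of such a flag-and-review pipeline. The queue names, flag categories and routing rule are illustrative assumptions, not a description of any platform’s actual system.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Flag:
    """A user report against a piece of content (illustrative only)."""
    content_id: str
    reason: str        # e.g. "nudity", "harassment", "spam"
    reporter_id: str

@dataclass
class ModerationQueues:
    # User flags land here first.
    incoming: deque = field(default_factory=deque)
    # Routine cases are reviewed by contract clickworkers.
    clickworker_review: deque = field(default_factory=deque)
    # Hard or high-stakes cases are escalated to a small internal policy team.
    policy_escalation: deque = field(default_factory=deque)

# Assumed categories that get escalated rather than handled routinely.
HARD_REASONS = {"hate speech", "threats of violence", "terrorism"}

def route_flag(flag: Flag, queues: ModerationQueues) -> None:
    """Triage a user flag: most go to clickworkers, the hardest to the policy team."""
    if flag.reason in HARD_REASONS:
        queues.policy_escalation.append(flag)
    else:
        queues.clickworker_review.append(flag)

if __name__ == "__main__":
    queues = ModerationQueues()
    for f in [Flag("post-1", "spam", "user-9"),
              Flag("post-2", "threats of violence", "user-3")]:
        route_flag(f, queues)
    print(len(queues.clickworker_review), len(queues.policy_escalation))  # 1 1
```

The sketch is only meant to show the shape of the “customer service” approach: users supply the flags, low-paid reviewers absorb the volume, and a small team keeps the judgment calls.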

Given the enormity of this task, the biggest platforms are scrambling to develop automated detection software that can find the porn and the hate even before users can. But here the two kinds of problems run into each other. Software is good at scale. It can quickly identify a bunch of images that might be nudity or a bunch of conversations that might be harassment, but it’s not so good at making subtle distinctions, understanding context, weighing competing values. Human judgment can be more subtle, but people take much more time to do this well and are prone to error, bias and inconsistency. Either way, content moderation is an immense undertaking, it always depends on both humans and machines, and it is struggling to cope with the scale at which social media now work.
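As a rough illustration of how machine scale and human judgment might be combined, consider a hypothetical triage rule in which an automated classifier acts only on clear-cut cases and routes everything ambiguous to human reviewers. The thresholds and function below are assumptions made for the sake of the example, not any platform’s real policy.

```python
# Hypothetical hybrid triage: an automated classifier handles clear-cut cases at
# scale, while ambiguous ones go to human reviewers. Thresholds are illustrative.

AUTO_REMOVE_THRESHOLD = 0.95   # assumed: classifier is very confident the content violates policy
AUTO_ALLOW_THRESHOLD = 0.05    # assumed: classifier is very confident the content is fine

def triage(content_id: str, violation_score: float) -> str:
    """Return an action for a piece of content given a classifier's violation score in [0, 1]."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return "remove_automatically"
    if violation_score <= AUTO_ALLOW_THRESHOLD:
        return "leave_up"
    # The middle band is exactly where context and competing values matter,
    # so it is routed to human review rather than decided by the model.
    return "send_to_human_review"

if __name__ == "__main__":
    for cid, score in [("img-1", 0.99), ("post-2", 0.50), ("vid-3", 0.01)]:
        print(cid, triage(cid, score))
```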

HF: Your book argues that it’s hard for social media companies to find “right answers,” given the wide variety of disagreement among users over what the right answers might be. What kinds of disagreements are most challenging?

TG: I find it strange that we imagine that content moderation should be easy. Wherever people participate in public discourse, we quickly find not only disagreements of politics and perspective, but incompatible worldviews and value systems. And we also find people looking to give their perspective an advantage, by using the communication system itself to make what they say appear newsworthy, popular or true. And unlike traditional media, Facebook and Twitter invite all kinds of participation, so they only exacerbate this problem.

For instance, Facebook and YouTube have been criticized for not removing the vitriolic videos of Alex Jones and Infowars. His is a tough case, at least for platforms that say they want to dissuade conspiracy and calls to violence but that also want to circulate political talk and reward the kind of popularity that outrage can stir.

But Alex Jones is not a special case. He just happens to produce the kind of speech that presses on the exact fault line running through these platforms. Alex Jones is the perfect exploiter of platforms that want to distribute all sides because they believe everyone is participating in good faith, in a speech environment in which that is simply no longer the case. If it weren’t Alex Jones, it would be someone else; and if Facebook and YouTube worked according to fundamentally different principles, something different would emerge as their particular Achilles’ heel.

The bigger issue is that, when it comes to public speech that matters, there are no right answers: There is only hard-won consensus, difficult cases that needle at the boundaries, guidelines that are themselves contested and crafty users looking to work the system for potential advantage.

HF: Nonetheless, social media companies are converging on similar-seeming rules. Why is this so?

TG: It would be nice to believe that, over time, social media companies are approaching a set of guidelines and procedures that are “best,” and that that’s why their rules look similar. I’m afraid the similarities are better explained by what these companies have in common.

First, they share some common presumptions, including early Web ideals about online participation, distinctly American cultural norms and specific interpretations of the First Amendment. They share a business model that makes it imperative to encourage participation at nearly all costs, in order to harvest data from users and advertise back to them. They are buffeted by the same thorny cases and the same political challenges, and they tend to travel in a pack in their responses.

But this perception may also be a symptom of how we think of “social media companies,” by which we tend to mean Facebook, Google and Twitter. If we also meant Reddit, and Tumblr, and Gab, and Nextdoor, and social media platforms like VK and Weibo in other parts of the world, maybe the rules would no longer seem quite so similar.

There are other ways of doing content moderation. In my book I try to show how content moderation is a foundational problem for “platforms” of all types: resource-sharing platforms, collaboration tools, app stores, game worlds, public cloud services. But it’s so difficult to get away from talking about Facebook and Twitter. They take up so much of the oxygen in this debate.

HF: As social media shapes our political debates, the debate around social media [and how it should work] has become increasingly politicized. How are Facebook, Twitter, YouTube and other companies responding to political criticisms, and are they getting it right?

TG: Social media platforms police content on our behalf. In the last few years, the strain of this approach has clearly shown. It’s clear that platforms cannot police to everyone’s satisfaction, and it’s clear that as a society we could not tolerate platforms that freely allowed truly reprehensible content to circulate, as a matter of principle. The answer will always live in the messy space between.

Platforms need to do a better job making more principled judgments and being clearer and more humble about their policies. But the gap between how platforms moderate and how we perceive that moderation is difficult to bridge and easy to exploit.

Distinguishing between the reprehensible and the nearly so, between the violation and the exception, is always difficult and usually unsatisfactory. Propagandists willing to present falsehoods as if they’re true can exploit the system. The ideologically motivated will forward outrageous claims because they feel true, or because they feel true to their cause. Provocateurs can slip by the platforms’ moderators or score political points because they get censored.

As Nikki Usher noted recently, conservatives have begun pulling from a very old playbook by shouting “bias” every time their content is removed or their account is suspended. But there is a fundamental ambiguity in there. When Twitter removes millions of bot accounts and someone’s follower count subsequently drops, it is hard to know why that decision was made, whether it was made fairly and how it affected others.

These political tensions are unavoidable. But social media platforms exacerbate them by trying to moderate for us. It’s time to reconsider whether this is the right approach to what have become essential political venues around the globe. Moderation is hard because, in the end, it’s not customer service, it’s governance: asserting public values, mediating between competing perspectives, keeping everyone honest. This means more of the hard decisions should be made by users themselves, who need to act as a public and see themselves as one — or through thoughtful regulation designed to protect the interests of the public.

Facebook, Twitter and YouTube are beginning to recognize that they are implicated in this and that their platforms have overvalued tasty clicks to the detriment of public engagement. What they have not yet recognized is that it is impossible to solve this unsolvable problem on our behalf. Instead, they should be developing innovative ways to put more of the governance of public speech back to the public.

This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the network is responsible for the article’s specific content. Other posts can be found here.