Since the invention of social media, governments, militaries and political parties have worked to control narratives and sway public opinion. Now, in a country facing another national election, just about anyone can do it.
TPR’s Jerry Clayton recently spoke with Sam Woolley, an assistant professor in the School of Journalism and head of the Propaganda Research Lab at the University of Texas at Austin. He’s the author of an upcoming book on the subject called Manufacturing Consensus: Understanding Propaganda in the Era of Automation and Anonymity. The book is set for release in January of 2023.
This interview has been edited and condensed for clarity.
Clayton: First of all, what is manufactured consensus?
Woolley: So manufactured consensus grows out of the idea of manufactured consent. Manufactured consent is the idea that the media are in some ways controlled by powerful interests and have to repeat some of what those powerful interests say and want.
Manufacturing consensus extends this idea to social media and the digital age. And so with manufacturing consensus, what I’m focused on is the idea that social media companies are similarly controlled by the powerful.
They’re controlled by the power of bots that amplify and suppress particular kinds of content, and also by a variety of other manipulative tactics that we see proliferating across the internet.
Clayton: In the early days of computational propaganda, this type of propaganda was mainly used by governments and political parties. How has that changed over the years?
Woolley: Now, with manufacturing consensus, we see a democratization, if you like, of propaganda. I say that a bit tongue in cheek because, you know, this is obviously not really democratic action, since oftentimes it’s controlling speech. Almost anyone can work to spread manipulative content online in a way that is unfair and circumvents the free and open marketplace of ideas.
They can use bots, they can hire influencers, they can organize groups of people to game social media and the algorithms that promote trends, to trick journalists and regular people into thinking that what they’re saying is actually real or has some gravitas when it really does not.
Clayton: Can you give me one of the most egregious examples that pops into your head of this?
Woolley: The one that I open the book with is a description of a guy I give the pseudonym Hernan. Hernan considers himself someone who works professionally as, basically, a digital hype man. He specifically works to sell products using a variety of different tactics, but he works mostly in politics.
In my conversations with him, he showed me evidence that he had worked in the past for politicians at a very high level in South American and Mexican politics, and that he had used everything from paid armies of influencers to bots to trick the algorithms on YouTube, Twitter and Facebook into prioritizing his content and making it appear on their front pages. He claimed to have generated millions of hits and showed results from the kind of work he was doing.
Oftentimes people say things like, well, you know, I would never believe a bot on social media; I don’t get tricked by disinformation online. And Hernan was able to show that, in fact, a lot of the time he is able to get people to click on things, to view things and, most importantly, to actually buy into them.
Clayton: So what’s the best way to fight this type of propaganda?
Woolley: So people have to understand that we’re fighting something that is maybe not substantively new in terms of the kind of disinformation and falsehood that we see, but it is certainly new in terms of its amplification, scale and reach, and we have a big job ahead of us to fight back. It’s not impossible.
There is a lot of hope. And actually, in this book there are a lot of solutions to the problem, and I hope people will read it and take up some of those solutions.