When some of the largest newswire agencies in the world had to retract a manipulated photograph of British royal Kate Middleton on March 11, one thing was made clear: Even well-resourced journalistic outlets are ill-equipped to detect technologically advanced fakery.
Counterfeit photos created to deceive audiences have existed nearly as long as photography itself. Joseph Stalin famously edited political opponents out of the historical record. The use of deceptive photo editing by a PR flack for a British aristocrat may seem inconsequential relative to disinformation deployed to influence elections or international conflicts. Nevertheless, the emergence of generative artificial intelligence technologies that can automate the creation of bogus but convincing content pours gas on an already raging fire, posing a serious threat to democratic societies. If well-resourced agencies like Reuters and the Associated Press can be duped by a manipulated image, what hope do small outlets have against deceptive generative content at scale?
This was a hot topic at this year's South by Southwest (SXSW), the Austin music festival turned tech and media expo that in recent years has embraced sponsorships from military contractors and the United States Army. The Department of Defense, one of history's GOATs of propaganda campaigns, is actively considering using "deepfake" videos for psychological operations, and even hosted an SXSW panel on disinformation. Meanwhile, a host of musical acts and panelists dropped out in public protest, citing the American-made bombs the Israeli military continues to drop on civilians in Gaza.
“The defense industry has historically been a proving ground for many of the systems we rely on today,” the official SXSW account posted on X. “These institutions are often leaders in emerging technologies, and we believe it’s better to understand how their approach will impact our lives.”
Based on the panels I attended, the prognosis for combating disinformation is grim, particularly with the advent of advanced machine learning, or "artificial intelligence." Tools like ChatGPT may seem innocuous, but the technology is poised to shake loose the cornerstones of our democracy: elections and journalism. Against the broader techno-optimist grain of SXSW, a few panels of academics, journalists, technologists, and civil servants gave grave warnings about the threat of artificial intelligence being used by bad actors to sow division and chaos in an already fragile political environment. David Allan, an editorial director at CNN, encapsulated the ambivalence around the artificial intelligence revolution, which he said offers "big promises and a specter of peril."
From my perspective, the promises are less solid than the specter haunting our information ecosystem, and I'm not alone in thinking this way. Asked during one panel about positive use cases of artificial intelligence in elections, Lindsay Gorman, senior fellow for emerging technologies at the Alliance for Securing Democracy, said, "There are more negative examples than positive ones."
"The real positive use case for tech isn't about detecting what's fake," Gorman said, "but authenticating what's real."
A concrete example of using technology for authentication is a new camera from Sony that lets publishers request the metadata for a given photo to establish where it originated and to prove it is real. Another is embedding verifiable information into official communications around elections so an end user can confirm their source. These are promising concepts, but they would require buy-in from technology companies and widespread adoption by end users to make a dent in the disinformation problem.
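How such authentication might work is easiest to see in miniature. The Python sketch below signs a photo and its metadata with a device-held key and lets a publisher verify that neither has been altered. It is loosely modeled on content-provenance schemes such as the C2PA standard; the key handling, metadata fields, and names here are illustrative assumptions, not Sony's actual protocol.

```python
# A minimal sketch of signed-provenance verification. Hypothetical key
# handling and metadata; not any vendor's real implementation.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- At capture time (conceptually, inside the camera) ---
camera_key = Ed25519PrivateKey.generate()  # stands in for a device-held key
image_bytes = b"...raw sensor data..."     # placeholder for the photo itself
metadata = json.dumps(
    {"device": "example-camera", "captured_at": "2024-03-11T09:00:00Z"},
    sort_keys=True,
).encode()

# Sign the photo and its metadata together, so neither can be swapped
# out or edited without invalidating the signature.
signature = camera_key.sign(image_bytes + metadata)

# --- At the news desk (publisher verification) ---
public_key = camera_key.public_key()  # would be distributed by the vendor
try:
    public_key.verify(signature, image_bytes + metadata)
    print("Provenance verified: image and metadata are unmodified.")
except InvalidSignature:
    print("Verification failed: image or metadata was altered.")
```

The design choice that matters is binding the image and its metadata under one signature. An attacker can still strip the credential entirely, which is why Gorman's framing centers on authenticating what's real rather than flagging what's fake.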
The other side of that coin, as Gorman alluded to, is detecting what's fake. Technologists at SXSW tended to focus their discussions on high-tech tools to defend against and ameliorate the effects of machine-fueled disinformation. Such tools can be helpful for researchers and journalists who don't want to be duped. But as one questioner astutely noted, good and bad actors are now engaged in a technological arms race, and more sophisticated detection tools provoke new techniques for deception.
Unfortunately, there's little hope for policy solutions at the federal level in our current political environment. If we can't get Congress to pass a bill to crack down on spam phone calls, we shouldn't hold our breath when it comes to regulating technologies that can fuel disinformation. Jena Griswold, Colorado's secretary of state, put it bluntly during one panel.
“Congress, at this point under the current speaker, is basically nonfunctional,” Griswold said.
“Everybody says, ‘Oh, Congress can pass a law.’ And they can. But they won’t. Let’s not waste much effort. … Many elected officials want the disinformation. There are literally hundreds of election deniers in Congress. Do you think they’re going to allow a federal agency to counter disinformation?”
Joan Donovan, assistant professor of journalism at Boston University, agreed that we shouldn’t have much faith in our political institutions to legislate the issue.
“The biggest lobby on Capitol Hill is tech,” Donovan said. “They want to operate in a deregulated environment. … The reason why artificial intelligence or deepfakes are possible is years of social media use.”
Griswold and other secretaries of state are concerned that deepfake videos and audio could be used to disrupt elections.
"What if county clerks get a call cloning my voice, telling them to do something?" Griswold said. "What if it happens all across the country at the same time? That could cause a very chaotic situation."
Griswold said she is conducting training exercises to prepare her election officials for the worst. Sandra Stevenson, deputy photography director at the Washington Post, said the paper is constantly training its staff on how to identify fake imagery.
But in the social media age, when government officials and news outlets are no longer gatekeepers of information, all the training in the world may not be able to stop American citizens from falling victim to disinformation, which is why Donovan believes industry giants need to take the lead.
“We have to get commitments from these technology companies that are running international communications technologies that they are at least not willing to allow their platforms to become weaponized by foreign actors,” Donovan said. “They also have to take a look at the domestic actors. That will require industry coordination and reform. There’s a lot that we have to disentangle, but unfortunately the way our government is structured and these companies are structured is at odds with what we might call consumer safety.”