“Given the creativity humans have showcased throughout history to make up (false) stories and the freedom that humans already have to create and spread misinformation across the world, it is unlikely that a large part of the population is looking for misinformation they cannot find online or offline,” the paper concludes. Moreover, misinformation gains power only when people actually see it, and because the time people can spend on viral content is finite, the authors argue, the impact is negligible.
As for the images that might find their way into mainstream feeds, the authors note that while generative AI can theoretically render highly personalized, highly realistic content, so can Photoshop or video editing software. Changing the date on a grainy cell phone video could prove just as effective. Journalists and fact checkers struggle less with deepfakes than they do with out-of-context images or those crudely manipulated into something they’re not, like video game footage presented as a Hamas attack.
In that sense, excessive focus on a flashy new tech is often a red herring. “Being realistic is not always what people look for or what is needed to be viral on the internet,” adds Sacha Altay, a coauthor on the paper and a postdoctoral research fellow at the University of Zurich’s Digital Democracy Lab, where his research covers misinformation, trust, and social media.
That’s also true on the supply side, explains Mashkoor; invention is not implementation. “There’s a lot of ways to manipulate the conversation or manipulate the online information space,” she says. “And there are things that are sometimes a lower lift, or easier to do, that might not require access to a specific technology. Even though AI-generating software is easy to access at the moment, there are definitely easier ways to manipulate something if you’re looking for it.”
Felix Simon, another author of the Kennedy School paper and a doctoral student at the Oxford Internet Institute, cautions that his team’s commentary is not seeking to end the debate over possible harms, but is instead an attempt to push back on claims that generative AI will trigger “a truth armageddon.” These kinds of panics often accompany new technologies.
Setting aside the apocalyptic view, it’s easier to study how generative AI has actually slotted into the existing disinformation ecosystem. AI-generated content is, for example, far more prevalent than it was at the outset of the Russian invasion of Ukraine, argues Hany Farid, a professor at the UC Berkeley School of Information.