Hulu ran an anti-Hamas ad that appears to have been made using artificial intelligence to depict an idealized version of Gaza, claiming this paradise destination could exist if not for Hamas.
The 30-second spot, opening like a tourism ad, shows palm trees and coastlines. There are five-star hotels and children playing. People dance, eat, and laugh, while a voiceover encourages visitors to “experience a culture rich in tradition.” But it suddenly shifts, turning the face of a smiling man into a grimacing one. “This is what Gaza could have been like without Hamas,” the narrator says. A new series of images flashes, this time of fighters and weapons, and children wandering the streets or holding guns.
The ad flattens decades of conflict between Israel and Palestinians, and centuries of war in the region, into a 30-second spot that appears to use AI to help spread its message. The reality of who is responsible for the suffering of Palestinians in Gaza is far more complicated than the ad portrays. Hamas, which has been designated a terrorist organization by the United States, Canada, Britain, Japan, and the European Union, seized control of the Gaza Strip in 2007. Israeli troops and settlers occupied Gaza from the 1967 war until 2005, when Israel’s military and citizens withdrew from the Palestinian territory. The United Nations and several other international bodies still consider Gaza to be effectively occupied, although the US and Israel dispute that label.
As of last week, more than 25,000 people have been killed in Gaza since October, according to Gaza’s health ministry. The UN estimates that 1.9 million people in Gaza, approximately 85 percent of the population, have been displaced. Around 1,200 Israelis were killed by Hamas in the October 7 attack that led to the current crisis.
The ad appears to contain some imagery made with generative AI, judging by its aesthetic, errors in perspective, and the repetition of similar facial expressions. The ad itself acknowledges that the scenes in its first half are not real, but rather an imagined vision of a city without conflict. WIRED consulted two AI image-detection companies, Inholo and Sensity, about the ad, and both said AI was used to create its first part. Activists have used generative AI throughout the conflict to garner support for both sides.
This ad isn’t really a deepfake, but it does show how rapid advances in generative AI can be used to create lifelike, emotionally charged propaganda. Even when people know something isn’t real, the content can still influence them. Some people continue to share deepfakes even when they depict situations too outlandish to be believable.
The apparently AI-generated reimagining of Gaza resembles a TikTok trend that uses AI to render alternate histories, says Sam Gregory, executive director of Witness, a nonprofit organization focused on using images and video to protect human rights. Here, AI appears to be serving as “a cheap production tool” to persuade viewers or reinforce an existing point of view, or “to generate news coverage around the use of AI itself,” Gregory says.