There is no more poignant image than the helplessness of a baby. And no more effective way to manipulate public sentiment than to suggest that violence is being committed against children. Wars are won not only on the battlefield but with the deft use of circulated images—another word for propaganda.
There has been no shortage of horrifying images coming out of Israel and Gaza since the latest violent conflict erupted on October 7, when Hamas launched an attack on Israel in which 1,200 people were killed and more than 200 were taken hostage.
While Israel’s death toll from the attack stands at 1,200, according to Israeli officials, at least 14,800 Palestinians — mostly women and children — have been killed in Gaza, per the Ministry of Health in Hamas-run Gaza.
The images of the bombed-out homes and ravaged streets of Gaza are appalling, but more horrifying still are those that feature bloodied, maimed and abandoned infants. In the age of artificial intelligence and deepfakes, the question has become: how many of these images are real, and how many are manufactured to create outrage?
In the bloody first days of the war, supporters of both Israel and Hamas accused the other side of victimizing children and babies. Images of wailing infants were quickly held up as photographic ‘evidence.’ Some of these turned out to be manufactured.

Without a doubt some are real; anyone who has been in a war-ravaged area knows there is no limit to the horror. But some clearly are not. Viewed millions of times online since the war began, these images are deepfakes created using artificial intelligence. The clues are hard to miss: fingers that curl oddly, or eyes that shimmer with an unnatural light, are telltale signs of digital deception.
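Spotting those tells can also be partially automated. The sketch below is illustrative only: it assumes the community Hugging Face checkpoint umm-maybe/AI-image-detector (any synthetic-image classifier could be substituted), and its score is a triage signal for human fact-checkers, not proof of fakery.

```python
from transformers import pipeline

# Assumption: "umm-maybe/AI-image-detector" is a community checkpoint
# that labels images "artificial" vs. "human"; swap in any
# synthetic-image classifier. Detectors miss well-made fakes and
# sometimes mislabel real photos, so the score only prioritizes review.
detector = pipeline("image-classification", model="umm-maybe/AI-image-detector")

def flag_for_review(path: str, threshold: float = 0.8) -> bool:
    """Return True when the image scores as likely AI-generated."""
    for result in detector(path):  # [{"label": ..., "score": ...}, ...]
        if result["label"] == "artificial" and result["score"] >= threshold:
            return True
    return False

# flag_for_review("viral_photo.jpg")  # True -> route to a human fact-checker
```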
Pictures from the Israel-Hamas war have vividly and painfully illustrated AI’s potential as a propaganda tool, used to create lifelike images of carnage. Since the war began last month, digitally altered images spread on social media have been used to make false claims about responsibility for casualties or to deceive people about atrocities that never happened.
The malignant use of information is certainly not new. Joseph Goebbels, Nazi Germany’s Minister of Propaganda, was a master of it. In the Third Reich he built an elaborate propaganda apparatus that put all media under his control, allowing him to shape Germans’ thoughts and views and to fuel society’s hatred of the regime’s internal and external enemies. To vilify them as thoroughly as possible, his ministry harnessed radio, film, literature and art, so that no one would dare question its credibility.
What was advanced technology in Goebbels’ day, nine decades ago, is archaic and crude by today’s standards. Rapid advances in AI are making it possible to manipulate public sentiment in ways that are far more effective and devastating.

AI has already become another weapon in the Israel-Hamas war, and it offers a glimpse of what’s to come in future conflicts, elections and other major events.
“It’s going to get worse — a lot worse — before it gets better,” said Jean-Claude Goldenstein, CEO of CREOpoint, a tech company based in San Francisco and Paris that uses AI to assess the validity of online claims. The company has created a database of the most viral deepfakes to emerge from Gaza. “Pictures, video and audio: with generative AI it’s going to be an escalation you haven’t seen.”
In some cases, photos from other conflicts or disasters have been repurposed and passed off as new. In others, generative AI programs have been used to create images from scratch, such as one of a baby crying amidst bombing wreckage that went viral in the conflict’s earliest days.
Other examples include AI-generated videos showing supposed Israeli missile strikes, tanks rolling through ruined neighborhoods, and families combing through rubble for survivors.
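Catching repurposed photos, the first category, is often simpler than detecting outright synthesis: fact-checkers match a viral image against archives of previously published photos. Here is a minimal sketch, assuming the open-source imagehash library and a stand-in dictionary in place of a real photo archive:

```python
from PIL import Image
import imagehash

# Stand-in for a real archive of previously published conflict photos;
# the file name here is hypothetical.
archive = {
    "earlier_conflict_2016.jpg": imagehash.phash(Image.open("earlier_conflict_2016.jpg")),
}

def find_recycled(path: str, max_distance: int = 8) -> list[str]:
    """Return archive photos whose perceptual hash nearly matches the image.

    A perceptual hash survives resizing, recompression and small crops,
    so a viral 'new' photo that is really an old one usually lands
    within a few bits of its archived copy.
    """
    candidate = imagehash.phash(Image.open(path))
    return [name for name, h in archive.items() if candidate - h <= max_distance]
```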

The propagandists who create such images aim to evoke a visceral response, and whether it’s a deepfake baby or an actual photograph of an infant from another conflict, the emotional impact on the viewer is the same. Goal achieved.
Each new conflict or election season gives disinformation peddlers fresh opportunities to deploy the latest AI advances. With the 2024 election on the horizon, many AI experts and political scientists are warning of the risks to voters next year.
Maria Amelie, co-founder of Factiverse, a Norwegian company whose AI program scans content for inaccuracies or bias introduced by other AI programs, says, “The next wave of AI will be: How can we verify the content that is out there? How can you detect misinformation? How can you analyze text to determine if it is trustworthy?”
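As an illustration of the kind of text analysis Amelie describes, and emphatically not Factiverse’s own method, a general-purpose zero-shot classifier can sort a claim into rough trust categories. The claim and the labels below are invented for the example:

```python
from transformers import pipeline

# Off-the-shelf zero-shot classifier; facebook/bart-large-mnli is a
# widely used checkpoint, but the claim and labels are made up for
# this sketch, and the scores only rank labels by fit for the model.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

claim = "Video shows a missile strike on a hospital yesterday."
labels = ["verifiable factual claim", "unverified rumor", "personal opinion"]

result = classifier(claim, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")  # a prioritization signal for human reviewers
```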
For now, whether the aim is to stoke outrage or to sway our votes, we are all susceptible to being manipulated by AI.