Since October 7, the day that Hamas terrorists attacked Israel, IDF forces have struck more than 22,000 targets inside Gaza. Just since the temporary truce ended on December 1, Israel’s Air Force has hit more than 3,500 sites. The Palestinian civilian death toll has been staggering. The latest figure stands at more than 18,600.
Israel asserts that it uses “smart bombs” to target Hamas and avoid civilian casualties, but sources dispute that. In a recent article, The Washington Post claimed that “almost half of the munitions Israel has used in Gaza since the war began have been unguided bombs.” Citing a U.S. intelligence assessment, the article said that this reliance on unguided munitions “helps explain the conflict’s enormous civilian death toll.”
The global criticism, some of it severe enough to include accusations of “genocide,” has stung Israel and undermined support for its war on Hamas, which some claim is a war on civilians.
Now the Israeli military says it is using artificial intelligence to select many of its targets in real time. The IDF explains that the AI system, named “the Gospel,” has helped it rapidly identify enemy combatants and equipment while reducing civilian casualties.
Geoff Brumfiel, writing for NPR, says that “the system is unproven at best — and at worst, providing a technological justification for the killing of thousands of Palestinian civilians.”
“It appears to be an attack aimed at maximum devastation of the Gaza Strip,” says Lucy Suchman, an anthropologist and professor emeritus at Lancaster University in England who studies military technology. If the AI system is really working as claimed by Israel’s military, “how do you explain that?” she asks.

While Israel maintains that artificial intelligence is allowing it to precisely target Hamas infrastructure, independent researchers dispute that, saying roughly one in three buildings in Gaza has been damaged or destroyed.
Other experts question whether any AI system should be used for a job as morally charged as targeting humans on the battlefield.
“AI algorithms are notoriously flawed with high error rates observed across applications that require precision, accuracy, and safety,” warns Heidy Khlaaf, Engineering Director of AI Assurance at Trail of Bits, a technology security firm.
Whether or not Israel is using AI to limit civilian deaths, and whether the technology can actually perform as touted, experts agree that its use marks a turning point in warfare.
Algorithms can sift through mounds of intelligence data far faster than human analysts, says Robert Ashley, a former head of the U.S. Defense Intelligence Agency. Using AI to assist with targeting has the potential to give commanders an enormous edge.
“You’re going to make decisions faster than your opponent, that’s really what it’s about,” he says.
Once a new technology is unleashed, there is no going back, especially if there are compelling reasons for its use. “The attraction is clear,” Ashley says, explaining that modern militaries are shrinking in size and need technology to help bridge the gap. AI systems can help them search enormous quantities of intelligence data to try to find the enemy.
“Basically Gospel imitates what a group of intelligence officers used to do in the past,” he says, but much more quickly and efficiently. While a group of 20 officers might produce 50 to 100 targets in 300 days, the Gospel and its associated AI systems can suggest around 200 targets “within 10-12 days.”
In this latest conflict, Israel is using AI on a scale the world has never seen before. But as history has repeatedly demonstrated, once a more efficient or deadlier technology is introduced in warfare, it becomes part of the permanent arsenal of killing.