The Guardian Investigation: AI Companies 'Aren't Tech Firms — They're Defense Contractors Hiding Behind Their Models'
A sweeping Guardian investigation argues that the failure to regulate AI warfare — from Gaza to the Iran campaign — has created a system where algorithms generate thousands of targets with 20-second human reviews.
When the Algorithm Chooses Who Dies
A major investigation published by The Guardian today argues that the world's leading AI companies have become, in effect, unregulated defense contractors — building targeting systems that generate thousands of strike targets with minimal human oversight, while hiding behind the language of innovation and disruption.
The investigation draws a direct line from Israel's AI-assisted targeting in Gaza to the US-Israeli military campaign in Iran, arguing that the same algorithmic logic that produced an 83% civilian casualty rate in Gaza has now been exported to a wider theater of war.
The Fog Procedure, Automated
The piece opens with a striking metaphor: the Israeli military's "fog procedure," an unofficial rule from the Second Intifada that required soldiers to fire into darkness on the theory that a threat might be lurking there. The investigation argues that AI targeting systems have systematized that same logic of chosen blindness — replacing the darkness in a watchtower with opacity inside an algorithm.
"The darkness in the watchtower was a condition of the terrain. The darkness inside the algorithm is a condition of the design. In both cases, the blindness was chosen." — The Guardian
The Minab School Strike
The investigation centers on a devastating incident: a strike on the Shajareh Tayyebeh elementary school in Minab, Iran, which killed at least 168 people, most of them girls aged seven to twelve. Weapons experts described the targeting as "incredibly accurate" — each building individually struck, nothing missed. The problem wasn't execution. It was intelligence.
The school had been separated from an adjacent Revolutionary Guard base by a fence and repurposed for civilian use nearly a decade ago. That fact was apparently never updated in the targeting database. Whether an AI system specifically selected the school remains officially unconfirmed, but the investigation argues it was selected by "a system that algorithmic targeting built" — one designed to strike 1,000 targets in the first 24 hours of the Iran campaign at speeds no human team could replicate.
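For a sense of scale, here is a back-of-the-envelope calculation using only the figure quoted in the investigation — 1,000 targets in the campaign's first 24 hours. The snippet is purely illustrative arithmetic, not anything drawn from the report itself:

```python
# Illustrative arithmetic only, using the single figure quoted in the
# investigation: 1,000 targets struck in the campaign's first 24 hours.
TARGETS = 1_000
SECONDS_IN_DAY = 24 * 60 * 60

seconds_per_target = SECONDS_IN_DAY / TARGETS
print(f"One targeting decision every {seconds_per_target:.0f} seconds, "
      f"around the clock.")
# Output: One targeting decision every 86 seconds, around the clock.
```

At that tempo there is no room in the loop for deliberate human verification, let alone legal review.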
The 20-Second Review
Perhaps the most disturbing detail comes from the investigation's examination of how AI targeting worked in Gaza. Systems processed data on every person in the Strip — phone records, movement patterns, social connections, behavioral signals — to produce ranked lists of names, each with a probability score indicating the likelihood that the person was a combatant.
Human "verification" meant an operator reviewed each name for an average of about 20 seconds — long enough to confirm the target was male, then sign off. One system alone produced more than 37,000 targets in the first weeks of the war. Another generated 100 potential bombing sites per day.
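The arithmetic makes the point starkly. Below is a short calculation from the article's own figures — 37,000 targets at roughly 20 seconds of review apiece; the assumption that the 100-sites-per-day system received the same 20-second reviews is mine, for illustration only:

```python
# Illustrative arithmetic from the figures quoted in the investigation:
# 37,000 targets, each reviewed by a human for ~20 seconds.
targets = 37_000
review_seconds = 20

total_review_hours = targets * review_seconds / 3600
print(f"Total human review time for 37,000 targets: "
      f"{total_review_hours:.0f} hours")           # ~206 hours

# The 100-sites-per-day system, assuming (my assumption, not the
# article's) the same ~20-second review per site:
daily_sites = 100
daily_review_minutes = daily_sites * review_seconds / 60
print(f"Daily review time for 100 bombing sites: "
      f"{daily_review_minutes:.0f} minutes")       # ~33 minutes
```

Roughly 206 person-hours of scrutiny for 37,000 life-and-death decisions: about five standard working weeks for a single analyst.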
The Legal Problem
The investigation argues that many AI targeting systems inherently defy international humanitarian law, which doesn't merely demand correct outcomes — it requires a careful process before strikes are carried out. Commanders must do everything feasible to verify that targets are legitimate military objectives. That obligation, the authors argue, cannot be delegated to a system whose reasoning is opaque.
In the United States, the 2025 National Defense Authorization Act's AI provisions don't regulate military AI — they direct agencies to adopt more of it. Defense Secretary Pete Hegseth's AI strategy, issued in January 2026, frames the question entirely as a race, directing the Pentagon to "move at wartime speed, with AI as the first proving ground."
The Accountability Gap
The piece's central argument is blunt: the result is a world where the most consequential targeting decisions in modern warfare are made by systems that cannot explain themselves, supplied by companies that answer to no one, in conflicts that generate no accountability.
"Gaza was the laboratory. Minab is the market." — The Guardian
The investigation concludes that these AI companies need to be recognized and regulated for what they actually are — defense contractors — rather than being allowed to operate under the lighter regulatory framework applied to technology firms. Whether governments have the will or the capability to do so remains an open question, especially when the same governments are the ones buying the systems.