US confirms AI tools used in Iran war as probe targets school bombing error
The US military has admitted it uses advanced artificial intelligence tools in the ongoing war against Iran. Admiral Brad Cooper, head of US Central Command (CENTCOM), made the statement in a video message on Wednesday.
He explained that these AI systems help process huge amounts of data quickly. They allow commanders to make faster decisions on the battlefield. Cooper stressed that humans always make the final call on targets. “Our warfighters are leveraging a variety of advanced AI tools,” he said. “These systems help us sift through vast amounts of data in seconds.”
The confirmation comes at a difficult time. A preliminary investigation has found US forces responsible for a major strike error. On February 28, 2026, a Tomahawk missile hit the Shajarah Tayyebeh elementary school in Minab, Iran, killing 175 people, 150 of them schoolgirls and staff.
What went wrong in Minab
Investigators say CENTCOM officers used outdated intelligence from the Defence Intelligence Agency to set strike coordinates. The school building stood out clearly, with bright blue and pink paint and sports fields marked on the asphalt nearby. The site had been separated from an adjacent military base in 2016, yet old military databases still listed it as a valid target.
Dr Craig Jones from Newcastle University told The New York Times: “At this point, we can’t rule out that AI may have failed to identify the school as a school and instead identified it as a military target.”
Growing concerns over AI in combat
The use of AI in combat has sparked strong criticism worldwide. Many argue that the accelerated “kill chain” leaves less time for ethical checks. The Iranian Red Crescent Society reported that nearly 20,000 civilian buildings and 77 healthcare facilities have been damaged so far.
China’s Defence Ministry warned that letting algorithms decide life-and-death choices could lead to “technological runaway.” The Trump administration pushed back hard. Pentagon spokeswoman Kingsley Wilson said US forces will not be “held hostage by Silicon Valley ideology.” This came after a legal dispute with tech company Anthropic over AI ethics.
The Minab incident has put a spotlight on these dangers, showing what can happen when high-speed AI systems operate on outdated intelligence with limited human review.