More Dangerous Than Nukes? AI Used To Kill Schoolgirls In Iran

The death of 165 schoolgirls in Minab, Iran, has raised alarming questions about AI's role in modern warfare and the erosion of humanitarian protections.

The February 28, 2026, airstrike on Shajarah Tayyebeh School in Minab, southern Iran, which killed 165 schoolgirls and injured more than 90 others, has exposed the terrifying potential of artificial intelligence in modern warfare, and its capacity for catastrophic error.

The Attack

At 10:45 AM that Saturday morning, a Tomahawk missile struck the school, collapsing its roof and walls onto students in their classrooms. Rescue workers arriving at the scene heard girls’ voices calling from beneath the debris. Then, in what military experts call a “double-tap” attack, a second missile struck the same location, compounding the tragedy.

The victims, mostly children between seven and twelve years old, were attending a school that had operated as a civilian facility for at least a decade. Satellite imagery from 2013 through 2018 shows a clear transition: while the school was once part of a larger military complex, by 2016 new walls separated it from that complex, three new gates had been opened, and a playing field had been constructed for the girls. Civilian vehicles were regularly observed outside the school.

AI Under Scrutiny

The attack has focused attention on the role of artificial intelligence in target selection. The United States has acknowledged integrating Anthropic’s Claude AI into Palantir’s Maven system, which prioritizes more than 1,000 targets with minimal human oversight. Decisions about where, when, and how to fire are increasingly delegated to algorithms.

Former Israeli intelligence officers have described how AI systems have transformed military operations. Habsora (“The Gospel”), developed by Israel’s military intelligence unit, can identify more than 100 targets per day—compared to 50 per year for a team of 20 officers previously. Another AI system called Lavender builds comprehensive profiles of Palestinians by analyzing phone records, social media, location history, and communications, assigning each person a score indicating their alleged Hamas affiliation.

Targeting Rules Eased

An investigation by The New York Times reveals that Israel dramatically adjusted its rules of engagement following the October 7, 2023 Hamas attacks. Pre-strike requirements were relaxed at an alarming pace:

  • Mid-ranking officers gained authority to approve strikes on targets beyond senior Hamas commanders
  • The acceptable civilian casualty limit per strike rose from single digits to 20, then to 100
  • The practice of warning civilians before strikes was eliminated entirely
  • Target approval time shrank to approximately 20 seconds per strike

This combination of AI-driven target generation and dramatically lowered thresholds for civilian harm has created what one former officer described as a “mass assassination factory.”

The Iran-Specific Questions

In the Minab attack, several facts suggest the AI system failed catastrophically:

  1. Outdated data: The school had been physically separated from the adjacent IRGC Navy Asif Missile Brigade complex for roughly a decade before the attack, yet the AI appears to have relied on obsolete mapping data linking the two.

  2. Double-tap precision: The second strike hit the same location with equal precision, suggesting the target remained marked despite the first missile’s impact on what should have been an obviously civilian structure.

  3. High-value misidentification: The school sat in Sayyid Al Shuhada, a military area housing the IRGC Navy headquarters. Children of IRGC personnel attended the school alongside civilian children; international humanitarian law explicitly protects both.

Al Jazeera’s investigation concluded that the strike was either a severe oversight in intelligence processing or a deliberate decision to target dependents of military personnel; either would constitute a war crime under international law.

The Corporate AI Response

The incident has spotlighted the ethical positions of AI companies. Anthropic reportedly lost a major Pentagon contract after refusing to participate in fully autonomous weapons systems or mass surveillance programs. The company insisted on maintaining human-in-the-loop requirements for lethal decision-making.

“Humans cannot be left out of the loop,” noted a company statement, though critics point out that the 20-second approval process in Gaza leaves little room for meaningful human judgment.

International Response

The U.S. Department of Justice has opened an investigation into the Minab strike. The Pentagon’s initial assessment confirmed that the missile was American-made, yet the Trump administration has denied responsibility, suggesting that Iran fired the missile from within the school, a claim contradicted by the available evidence.

U.S. Senators have formally demanded the Department of Defense confirm whether AI was used in the strike and outline what human verification protocols exist to prevent future errors.

Broader Implications

The Minab school attack represents more than a single tragic error. It demonstrates:

  • The scale problem: AI-enabled bombing in Gaza over seven weeks equaled the total of the previous eight months, a dramatic escalation in tempo
  • The accountability gap: When algorithms select targets, responsibility becomes diffused across developers, operators, and commanders
  • The humanitarian cost: All 19 universities in Gaza are more than 80 percent destroyed; by May 2025, few schools or hospitals remained operational
  • The precedent danger: If major powers normalize AI-driven warfare, barriers to conflict will fall and civilian casualties will rise globally

The question posed in the headline is not rhetorical. Nuclear weapons threaten civilization through deterrence and potential annihilation. AI-driven warfare threatens through its capacity to normalize violence, erode legal protections, and make killing so efficient that the threshold for war disappears entirely.

As investigative reports emerge from Iran and international pressure mounts, the world faces a critical choice: regulate autonomous weapons before they transform warfare beyond recognition, or watch as algorithms assume the power to decide who lives and who dies, often with devastating accuracy and zero accountability.
