
AI Safety Must Anchor Global Governance after the Gulf Crisis

by Nimra Javed

At the New Delhi AI summit in February 2026, the politics of global AI governance shifted noticeably. The Delhi Declaration emphasized inclusion, access, development, and the equitable sharing of AI’s benefits, and by 21 February India said it had secured endorsements from 92 countries and international organizations. But what stood out was not only what the summit highlighted. It was also what it softened. Compared with earlier summit cycles that placed frontier risks and safety more squarely at the center, New Delhi pushed the conversation toward growth, affordability, and wider participation. This may have made diplomatic sense. It also created a dangerous illusion: that access can be advanced first and safety dealt with later.

A few weeks later, the Gulf crisis showed why that is a mistake. The real governance challenge today is no longer confined to hypothetical future harms or abstract debates about model capability. AI is already being pulled into military systems, intelligence workflows, and targeting environments where speed matters, data are imperfect, and consequences are irreversible. Once that happens, safety is no longer a secondary concern. It becomes the difference between a support tool that aids judgment and a system that accelerates error.

The most disturbing lesson came from Minab. On 11 March it was reported that a US strike on a girls’ school in southern Iran, which killed around 150 students according to sources familiar with an internal investigation, may have relied on outdated targeting data. A separate Reuters report cited a UN expert panel that said it was “deeply disturbed” by reports that more than 160 children had been killed in the strike.

One should be careful here. The available reporting does not prove that AI itself selected the target. But that is precisely the point. AI does not need to autonomously choose a target to become part of a lethal governance failure. If flawed or stale historical data enter an accelerated decision chain, any AI-assisted workflow built on top of that information can magnify confidence in the wrong conclusion.

This is the policy blind spot the Delhi Declaration left open. The central risk is not only that AI can be weaponized in some dramatic futuristic sense. It is that AI can be integrated into real-time military and intelligence processes that still suffer from old problems: outdated files, weak validation, fragmented data environments, and human decision-makers operating under severe time pressure. In such circumstances, the language of precision becomes deeply misleading. What appears to be a modern, data-driven targeting process may actually be a fast-moving system built on brittle assumptions. When that happens, the machine does not replace human error. It industrializes it.

The broader strategic environment makes this even harder to ignore. The White House’s 2025 AI Action Plan explicitly said the United States is in a race to achieve “global dominance in artificial intelligence.” This language matters because it reflects how major powers now think about AI: not simply as an engine of productivity, but as a strategic asset tied to military advantage, national resilience, and geopolitical competition.

At the same time, Stanford’s 2025 AI Index reported that US private AI investment reached $109.1 billion in 2024, compared with $9.3 billion in China, while global private investment in generative AI reached $33.9 billion. In other words, the global AI system is already shaped by concentration, competition, and strategic rivalry. In that environment, declarations about equitable benefit will remain thin unless they are linked to concrete safeguards for the most dangerous domains of use.

The same tension is visible in the defense technology ecosystem. In late February it was reported that Anthropic had refused Pentagon demands to remove safeguards, a change that would have allowed its systems to be used in ways that could enable autonomous weapons targeting and domestic surveillance.

Subsequent reporting described how that dispute escalated into a broader standoff with the US defense establishment. The significance of that episode is not whether one agrees with Anthropic on every point. It is that the argument has already moved from theory to operational boundaries. The fight now is over where the red lines should be in military AI use, who sets them, and whether safety guardrails can survive when they begin to constrain wartime demand.

This is why the Delhi Declaration cannot be treated as sufficient simply because it expanded the language of inclusion. Inclusion matters, especially for states that have long been underrepresented in AI rule-making. But inclusion without safety is not democratization. It is diffusion without discipline.

And diffusion without discipline in a world of military AI, crisis instability, and data fragility is a formula for wider exposure to poorly governed harm. The Gulf crisis did not invalidate the aspiration behind New Delhi. It revealed that the aspiration is incomplete.

The policy response, therefore, should be straightforward and coherent. The United States should push for a binding internal rule across its defense system that no AI-assisted targeting workflow can be used without mandatory data-freshness checks, provenance validation, and documented human review at each critical stage.
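To make that rule concrete, here is a minimal sketch of what such a gate could look like in software. Every name in it (DatasetRecord, gate_targeting_workflow, the 30-day freshness window) is hypothetical and chosen purely for illustration; it describes no actual defense system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative freshness window; real doctrine would set this per mission type.
MAX_DATA_AGE = timedelta(days=30)

@dataclass
class DatasetRecord:
    name: str
    source: str              # provenance: where the data came from
    last_verified: datetime  # when an analyst last validated the data

@dataclass
class AuditEntry:
    dataset: str
    verified_at: datetime
    model_contribution: str  # what the decision-support tool added
    human_review: str        # documented challenge or override

def gate_targeting_workflow(datasets: list[DatasetRecord],
                            model_output: str,
                            human_review: str) -> list[AuditEntry]:
    """Block any AI-assisted assessment whose inputs fail freshness,
    provenance, or human-review checks; otherwise emit an audit trail."""
    if not human_review.strip():
        raise ValueError("no documented human review; workflow blocked")
    now = datetime.now(timezone.utc)
    audit = []
    for ds in datasets:
        if not ds.source:
            raise ValueError(f"{ds.name}: unknown provenance; workflow blocked")
        if now - ds.last_verified > MAX_DATA_AGE:
            raise ValueError(
                f"{ds.name}: last verified {ds.last_verified:%Y-%m-%d}, "
                "exceeds freshness window; workflow blocked")
        audit.append(AuditEntry(ds.name, ds.last_verified,
                                model_output, human_review))
    return audit
```

The details are placeholders, but the design point stands: data freshness, provenance, and documented human review become hard preconditions the workflow cannot bypass, and every cleared assessment leaves a record that can be examined afterwards.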

This rule should be paired with auditable records showing what datasets informed the assessment, when those datasets were last verified, what the model or decision-support tool contributed, and how human operators challenged or overrode the output.

If governments are serious about responsible AI, they have to govern the point where AI meets force. Not in speeches, not in summit communiqués, but inside the actual chain of military decision-making. The lesson from New Delhi is not that inclusion was wrong. The lesson from the Gulf crisis is that once safety slips down the agenda, civilians end up carrying the risk.
