Key Assessment

Lethal autonomous weapon systems (LAWS) are eroding the foundational assumptions of nuclear-era deterrence theory. The speed, scale, and ambiguity of autonomous engagement create escalation dynamics that existing command-and-control frameworks were not designed to manage. The absence of international consensus on LAWS governance — combined with active development programmes in at least 12 nations — is creating a strategic environment where miscalculation risk is rising faster than mitigation frameworks can develop.

The Deterrence Framework Under Stress

Classical deterrence theory rests on three pillars: capability (the ability to inflict unacceptable costs), credibility (the perceived willingness to use that capability), and communication (the ability to signal intent clearly enough to prevent miscalculation). For seventy years, nuclear deterrence operated within this framework with reasonable stability, despite periodic crises, because the weapons themselves imposed a natural deliberation requirement. The decision to launch a nuclear weapon demands human authorisation at the highest political level, with response times measured in minutes to hours.

Autonomous weapons systems disrupt all three pillars simultaneously.

Capability becomes ambiguous. Unlike nuclear arsenals, which can be counted and tracked by satellite, the capability of autonomous systems is difficult to assess externally. A fleet of 10,000 small autonomous drones may be configured for surveillance, logistics, or lethal engagement — the same hardware, different software. Adversaries cannot reliably distinguish between offensive and defensive autonomous capability, making threat assessment inherently uncertain.
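The software-defined nature of this ambiguity can be made concrete with a deliberately minimal sketch. The platform class, mission labels, and fleet size below are hypothetical illustrations, not a depiction of any real system:

```python
# Illustrative only: the same hypothetical airframe, re-tasked by software
# alone. Nothing here models a real platform or API.

class DronePlatform:
    """One physical airframe; its role is determined entirely by software."""

    ROLES = {
        "survey": "surveillance",
        "cargo": "logistics",
        "strike": "lethal engagement",
    }

    def __init__(self, mission_software: str):
        # The mission software is the ONLY distinguishing attribute;
        # an external observer sees identical hardware either way.
        self.mission_software = mission_software

    def role(self) -> str:
        return self.ROLES[self.mission_software]

# A 10,000-unit fleet configured for surveillance...
fleet = [DronePlatform("survey") for _ in range(10_000)]

# ...becomes a lethal-engagement fleet via a software update that is
# invisible to satellite or signals-based capability assessment.
for drone in fleet:
    drone.mission_software = "strike"
```

The point of the sketch is that the observable inventory (10,000 airframes) is constant across the update, which is precisely why external threat assessment of autonomous capability is inherently uncertain.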

Credibility becomes algorithmic. When engagement decisions are delegated to autonomous systems operating within pre-programmed rules of engagement, credibility is no longer a function of political will. It is a function of code. An adversary facing an autonomous defence system cannot assess whether the defending nation's leadership would choose to escalate — because the system may escalate without consulting leadership. This removes the deliberation space that classical deterrence depends on.

Communication becomes compressed. The speed of autonomous engagement — machine-to-machine timescales measured in milliseconds — eliminates the signalling window that prevents escalation. In a conventional or nuclear crisis, there is time for back-channel communication, de-escalation signals, and political intervention. In an autonomous engagement scenario, the cycle from detection to response to counter-response can complete before any human decision-maker is aware it has begun.

The Proliferation Landscape

The development of autonomous weapon systems is no longer confined to major military powers. The technology stack required — computer vision, path planning, edge computing, inertial navigation — is increasingly available through commercial supply chains.

The key programmes and developments:

  • United States: The Replicator Initiative, announced in 2023 and expanded in 2025, aims to field "attritable autonomous systems" at scale across all military domains. DARPA's ACE (Air Combat Evolution) programme has demonstrated AI systems capable of defeating human pilots in simulated and live dogfighting scenarios. The US Army's Robotic Combat Vehicle programme is fielding autonomous ground platforms for reconnaissance and, potentially, direct engagement.
  • China: The PLA has invested heavily in autonomous swarm technology, with demonstrated capabilities in coordinated drone swarms exceeding 200 units operating without centralised control. China's defence white papers increasingly reference "intelligentised warfare" as a doctrinal priority, with autonomous systems positioned as an asymmetric offset to US naval and air superiority in the Western Pacific.
  • Russia: Despite sanctions-constrained access to advanced semiconductors, Russia has deployed semi-autonomous systems in Ukraine, including the Lancet loitering munition with onboard target recognition. The operational data gathered from Ukraine's autonomous weapon testing ground is informing Russian doctrine and system development.
  • Turkey: Baykar's Bayraktar TB2 demonstrated the operational impact of relatively simple autonomous-capable platforms in Libya, Syria, and the early stages of the Ukraine conflict. The successor TB3 and Kizilelma (autonomous combat drone) represent a significant capability upgrade, with Turkey emerging as a major exporter of autonomous-capable military platforms.
  • Israel: Israel Aerospace Industries (IAI), Elbit Systems, and Rafael have developed some of the world's most advanced autonomous military systems, including the Harpy and Harop loitering munitions, Iron Dome's autonomous engagement mode, and the Carmel autonomous ground combat vehicle programme.
  • Ukraine: The conflict has made Ukraine the world's most active laboratory for autonomous weapon development. Ukrainian forces have deployed AI-enabled first-person-view (FPV) drones with autonomous terminal guidance, autonomous naval drones that have successfully engaged Russian warships, and networked autonomous mine systems. The innovations emerging from Ukraine's defence-tech ecosystem are being studied by every major military worldwide.

The Escalation Problem

The most dangerous aspect of autonomous weapons proliferation is not the weapons themselves but the escalation dynamics they create.

Consider a scenario. Nation A deploys autonomous maritime surveillance drones in contested waters. Nation B's autonomous defence systems classify these drones as potential threats and engage them. Nation A's systems detect the engagement and automatically deploy countermeasures. Within seconds, an autonomous escalation cycle has progressed from surveillance to active combat without any human decision-maker on either side having approved engagement.
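The timing asymmetry at the heart of this scenario can be sketched as a toy simulation. Every latency figure and rule of engagement below is an invented assumption for illustration, not data from any deployed system:

```python
# Toy simulation of an autonomous escalation cycle. All timings are
# assumed values chosen only to illustrate the machine-vs-human gap.

MACHINE_STEP_MS = 50         # assumed per-side machine decision latency
HUMAN_REACTION_MS = 120_000  # assumed ~2 minutes for a human to notice and act

def simulate():
    """Run the three-step escalation cycle from the scenario above."""
    clock_ms = 0
    log = []

    # Step 1: Nation B's defence system classifies A's drones as threats.
    clock_ms += MACHINE_STEP_MS
    log.append((clock_ms, "B engages A's surveillance drones"))

    # Step 2: Nation A's systems detect the engagement automatically.
    clock_ms += MACHINE_STEP_MS
    log.append((clock_ms, "A deploys countermeasures"))

    # Step 3: B's systems interpret the countermeasures as escalation.
    clock_ms += MACHINE_STEP_MS
    log.append((clock_ms, "B widens engagement"))

    return clock_ms, log

total_ms, log = simulate()
for t, event in log:
    print(f"t={t:>4} ms  {event}")
print(f"Cycle complete in {total_ms} ms; "
      f"earliest plausible human intervention at ~{HUMAN_REACTION_MS} ms")
```

Under these assumed latencies the full detection-response-counter-response cycle completes roughly three orders of magnitude faster than a human decision-maker could intervene, which is the structural point of the scenario rather than a prediction about any specific system.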

This is not hypothetical. The logical structure of this scenario is already embedded in deployed systems. The US Navy's Aegis Combat System can operate in fully automatic mode, engaging detected threats without human authorisation. China's "anti-access/area denial" (A2/AD) systems in the South China Sea include autonomous engagement capabilities. Russian S-400/S-500 air defence systems can operate autonomously.

The risk is compounded by the attribution problem. Autonomous systems can be deployed without clear national markings, from platforms that are difficult to trace, using attack patterns that do not match known national doctrines. If an autonomous weapon system engages a military asset and the defending nation cannot immediately identify the attacker, the response calculus becomes dangerously unconstrained.

"The fundamental problem with autonomous weapons is not that they kill. All weapons kill. The problem is that they compress the decision cycle to a point where human judgement — the only thing that has ever prevented wars from becoming annihilatory — is removed from the loop."

(Professor Stuart Russell, UC Berkeley)

The Governance Vacuum

International efforts to regulate autonomous weapons have produced more rhetoric than results. The Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts has met annually since 2017 to discuss LAWS regulation. Eight years of deliberation have produced no binding instrument.

The obstacles are structural:

  • Definition disagreement. Nations cannot agree on what constitutes an "autonomous" weapon system. The US defines autonomy narrowly (a system that selects and engages targets without human authorisation), while others apply broader definitions that encompass any system with AI-enabled targeting assistance. Without definitional consensus, regulation cannot proceed.
  • Verification impossibility. Unlike nuclear weapons, which require visible physical infrastructure, autonomous capability is primarily a software characteristic. There is no way to verify whether a drone has autonomous engagement capability without inspecting its code — and no nation will submit its military AI systems to external code review.
  • Strategic incentive misalignment. Nations that are leading in autonomous weapons development — the US, China, Israel, Turkey — have no incentive to accept binding restrictions that would constrain their advantage. Nations that lack autonomous capability want restrictions but lack the leverage to impose them.
  • Dual-use ambiguity. The technologies underlying autonomous weapons (computer vision, edge AI, drone platforms) have extensive civilian applications. Restricting them is far more difficult than restricting nuclear materials, which have limited non-military uses.

Emerging Frameworks

In the absence of comprehensive regulation, several partial frameworks are emerging:

The "meaningful human control" standard. Adopted informally by NATO and endorsed by several national defence policies, this standard requires that a human decision-maker retains the ability to intervene in any autonomous engagement. In practice, the standard is ambiguous — "meaningful" control can range from real-time veto authority to after-the-fact review — but it provides a normative anchor for responsible deployment.
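The ambiguity of "meaningful" control can be reduced to a single tunable parameter. The sketch below is a hypothetical veto-window gate; the class name, semantics, and timings are assumptions, not a description of any fielded design:

```python
# Hypothetical "meaningful human control" gate: an engagement proceeds
# only if no human veto arrives within a configurable window.

import time

class EngagementGate:
    def __init__(self, veto_window_s: float):
        self.veto_window_s = veto_window_s
        self._vetoed = False

    def veto(self):
        """Called by a human operator to block the pending engagement."""
        self._vetoed = True

    def authorise(self, poll_interval_s: float = 0.01) -> bool:
        """Hold the engagement for the veto window.

        Returns True (engage) only if no veto arrives before the
        window expires; returns False (abort) otherwise.
        """
        deadline = time.monotonic() + self.veto_window_s
        while time.monotonic() < deadline:
            if self._vetoed:
                return False
            time.sleep(poll_interval_s)
        return not self._vetoed
```

The sketch makes the standard's ambiguity explicit: a veto window near zero reduces "human control" to after-the-fact review, while a window long enough for genuine deliberation sacrifices the speed advantage that justifies autonomy in the first place.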

Bilateral confidence-building measures. The US-China dialogue on AI military applications, initiated at the 2025 summit, represents a nascent effort to establish communication channels that could prevent autonomous system interactions from escalating. The model is Cold War-era nuclear hotlines — not arms control, but crisis management infrastructure.

Technical safety standards. The IEEE and ISO are developing technical standards for autonomous system safety, including requirements for fail-safe behaviours, engagement boundaries, and human override mechanisms. These standards are voluntary but may influence procurement requirements and export controls.

Export control regimes. The Wassenaar Arrangement and the Missile Technology Control Regime (MTCR) are being updated to address autonomous weapon-relevant technologies, including AI chips, autonomous navigation systems, and swarm coordination software. Effectiveness remains limited by the dual-use nature of the underlying technologies.

Implications for Strategic Planning

For defence establishments: The transition to autonomous-capable force structures is not optional. Adversaries are investing heavily, and the operational advantages — speed, persistence, scalability, reduced personnel risk — are too significant to forgo. The challenge is integrating autonomous systems into existing command structures in ways that preserve meaningful human control without sacrificing the speed advantage that justifies autonomy.

For policymakers: The governance vacuum is not sustainable. While comprehensive international regulation may be unachievable in the near term, bilateral and multilateral confidence-building measures — particularly between the US and China — are essential to managing escalation risk. The priority should be establishing communication protocols for autonomous system incidents, analogous to the Incidents at Sea Agreement that managed naval interactions during the Cold War.

For the defence industry: Autonomous systems represent the most significant growth opportunity in defence technology since precision-guided munitions. Companies that can deliver reliable autonomous capability at scale — with the safety, auditability, and human override mechanisms that responsible militaries will require — will capture a disproportionate share of the $150+ billion in autonomous military system procurement projected through 2035.

For civil society: The window for influencing the norms around autonomous weapons is narrowing. Once autonomous engagement becomes normalised through widespread deployment — a process already underway in Ukraine — rolling back the technology becomes as impractical as rolling back precision-guided munitions after the Gulf War. The time for establishing meaningful constraints is now, not after the next major conflict demonstrates the full implications of autonomous warfare.

Assessment Confidence: High

The proliferation of autonomous weapon systems and the absence of effective governance frameworks are well-documented trends with strong momentum. The escalation risk assessment is more speculative but is grounded in established deterrence theory and the known characteristics of deployed systems. The primary uncertainty is timeline: whether a major autonomous weapon incident will occur before or after governance frameworks are established.

This analysis draws on CCW GGE proceedings, national defence white papers, SIPRI autonomous weapons databases, US Congressional Research Service reports, DARPA programme documentation, and open-source intelligence on deployed autonomous systems in Ukraine. All assessments reflect the analytical judgement of PureTensor // Intel.