Ukraine War: Use of AI Drones Signals a Dangerous New Era
The choice is not between rejecting all military AI and leaving nations defenceless
In a quiet but potentially seismic shift in modern warfare, Ukraine has used artificial intelligence-powered drones to destroy several Russian bombers deep behind enemy lines. These drones, capable of navigating without GPS and identifying targets with minimal human intervention, reportedly struck high-value assets that once symbolised Russian air superiority.
For a nation under relentless attack from a nuclear-armed power, this represents more than just a tactical victory. It is a preview of a battlefield transformed by artificial intelligence—where machines, rather than humans, may increasingly make life-or-death decisions.
According to Ukrainian defence sources and military analysts cited by The Economist and Forbes, the AI-enabled drones used in these operations were developed by local startups in cooperation with the Ukrainian military and designed to operate autonomously in GPS-jammed zones (Forbes, May 2024).
A War Driven by Necessity and Innovation
In Kherson and other contested regions, Russian forces have intensified attacks using their drones, often targeting civilian infrastructure and non-combatants. According to Ukraine's Ministry of Defence, more than 150 civilians have been killed by drone strikes since late 2023, with hundreds more wounded. A UN-appointed Independent International Commission of Inquiry on Ukraine concluded that several of these attacks constitute crimes against humanity (United Nations Human Rights Council, March 2024).
Ukraine's turn to AI-enhanced weapons, therefore, is not just a matter of military modernisation—it is an act of survival. In an increasingly digitised theatre of war, these systems offer speed, reach, and lethality with fewer human risks.
But that advantage comes with a warning: Are we prepared for a world where autonomous systems can take a life, and no human bears responsibility?
Enter: Lethal Autonomous Weapons Systems
The systems Ukraine is believed to be using fall into a category called Lethal Autonomous Weapons Systems (LAWS), defined by their capacity to select and engage targets without direct human input. These systems are no longer hypothetical. In Libya in 2020, a Turkish-made Kargu-2 drone may have attacked human targets without command input, in what a UN report documented as the first known incident of its kind (UN Security Council Panel of Experts on Libya, March 2021).
In that case, no nation accepted responsibility. No military authority confirmed the strike. The chain of command was unclear. A weapon, operating on its own logic, decided to kill.
That's the existential problem with LAWS. As Mary Wareham of Human Rights Watch explains, these machines operate on algorithms trained with human bias. “People with disabilities are particularly vulnerable—wheelchairs or unusual gait patterns can be misread as threats. Facial recognition systems consistently misidentify darker skin tones,” she said in a 2024 briefing (Human Rights Watch).
Graphic: Countries developing autonomous weapons (e.g., the U.S., China, Russia, Israel, South Korea); countries calling for a ban (e.g., Chile, Austria, New Zealand, Mexico, South Africa); and a timeline from 2013, when the Stop Killer Robots campaign launched, to 2025, showing milestones in AI weapons development, UN negotiations, and notable deployments (e.g., Libya 2020, Ukraine 2024–2025).
A Familiar Justification, A New Frontier
The rhetoric used to justify the deployment of AI in war is hauntingly familiar. In August 1945, the United States dropped two atomic bombs on Japan, with the stated goal of ending World War II and saving lives. The devastation was catastrophic—over 200,000 dead, many of them civilians. In the decades that followed, treaties and norms emerged to contain nuclear arms. But those came after the bombs fell.
With LAWS, the weapons are already in use, and regulation is struggling to keep up.
Unlike nuclear weapons, which require vast resources and rare materials, autonomous drones are small, cheap, and increasingly accessible. Open-source software, off-the-shelf components, and machine learning models are all readily available. The proliferation risk is high, not just to states but also to armed non-state actors, private military contractors, and authoritarian regimes.
Thalif Deen, a former military analyst and Director of Foreign Military Markets at Defense Marketing Services, told me that drones are the wave of the future, eventually replacing fighter planes, missiles, warships and battle tanks.
“Known as Unmanned Aerial Vehicles (UAVs) in a bygone era, drones have dramatically changed the battlefield landscapes in military conflicts worldwide, including the ability to launch them from remote locations thousands of miles away,” he added.
The UN's Long Struggle for Regulation
In 2014, diplomats convened at the United Nations Office at Geneva to begin formal discussions under the Convention on Certain Conventional Weapons (CCW) about the legal and ethical implications of LAWS. Despite growing concern, progress has been halting. The CCW operates by consensus, effectively giving a veto to powerful countries that are heavily invested in AI weaponry, including the United States, Russia, China, and Israel (UN Geneva, CCW Meetings Archive).
UN Secretary-General António Guterres has been unequivocal in his condemnation of such weapons. In a 2018 speech, he called LAWS “politically unacceptable and morally repugnant” (United Nations). In May 2025, during informal consultations at UN Headquarters, Guterres again urged member states to agree on a legally binding framework that bans or strictly regulates autonomous weapons by 2026 (UNODA, May 2025 Briefing).
Yet after more than a decade of negotiations, there is still no international agreement, not even on a definition of what qualifies as an autonomous weapon.
Civil Society Fills the Void
In the vacuum of international law, advocacy groups have taken the lead. The Stop Killer Robots campaign—formed in 2013—is a coalition of over 180 civil society organisations, including Amnesty International, Human Rights Watch, Soka Gakkai International and dozens of regional disarmament networks. The coalition calls for:
A legally binding international treaty banning fully autonomous weapons.
Mandates for meaningful human control over all lethal targeting.
Clear accountability for developers, commanders, and states (Stop Killer Robots).
Nicole Van Rooijen, the campaign’s executive director, says political will is the missing ingredient. "We’re not yet negotiating a formal treaty," she told reporters in May 2025, "but recent discussions have shown unprecedented momentum. We believe that if political courage matches technical urgency, regulation is still within reach."
Over 30 countries, including Austria, New Zealand, Mexico, and Chile, now support some form of legal restriction or outright ban. Surveys across the European Union and Latin America show overwhelming public opposition to the idea of machines deciding who dies in war (Ipsos Global Advisor, 2023).
Izumi Nakamitsu, Under-Secretary-General and High Representative for Disarmament Affairs, added her voice at the May consultations: "When it comes to war, someone has to be held accountable. A machine cannot stand trial in The Hague."
Ukraine as Catalyst—and Caution
Ukraine’s battlefield innovation is in many ways admirable. Faced with an existential threat, the country has harnessed cutting-edge tools not out of ambition but out of necessity. Yet it also sets a precedent. What begins as a tool for defending democracy could become, in the wrong hands, a weapon of unchecked terror.
What happens when an authoritarian regime turns autonomous drones inward, against protesters, political opponents, or minorities? What happens when an AI misidentifies a school bus or hospital as a hostile target?
Those are not distant hypotheticals. The technology is already capable. The political incentives to misuse it are real.
The Choice Before Us
We stand at a crossroads. The choice is not between rejecting all military AI and leaving nations defenceless. It is between maintaining meaningful human control over lethal force, or surrendering that control to lines of code.
Ukraine’s experience demonstrates both the potential and the peril of these systems. Just as Hiroshima and Nagasaki forced the world to confront the consequences of nuclear warfare, the rise of autonomous weapons must compel us to draw red lines now—before they are crossed at scale.
Because when machines kill and no one is responsible, the rules of war do not just bend. They break.
Image: An AI-generated video demonstrates how Ukraine made the impossible possible. Courtesy NDTV World