The growing use of artificial intelligence in military operations has triggered fresh global debate amid the ongoing conflict involving the United States, Israel, and Iran. Military analysts, policymakers, and technology experts are increasingly examining how these rapidly advancing systems are transforming the nature of warfare and raising complex legal and ethical questions.
Shortly before a military operation began on 28 February, the United States government reportedly removed an artificial intelligence technology supplier from a defence project. The decision followed disagreements over rules and ethical guidelines governing the use of artificial intelligence in combat operations. The incident has highlighted the growing tension between technological capability and regulatory oversight in modern warfare.
In response to these developments, international experts gathered in Geneva, Switzerland, for a special meeting focused on the legality of artificial intelligence in armed conflict. A key topic of discussion was the emergence of autonomous weapons capable of selecting and striking targets without direct human intervention. Diplomats and defence specialists are now exploring the possibility of an international agreement aimed at regulating or restricting such weapons.
Defence policy scholar Michael Horowitz has warned that the pace of technological progress far exceeds the speed at which international legal frameworks are being developed. According to Horowitz, artificial intelligence systems are evolving rapidly, while efforts to regulate their military use remain limited and fragmented. Security researcher Craig Jones has also expressed concern, arguing that without clear global rules, the misuse of artificial intelligence in warfare could become increasingly difficult to prevent.
The United States military has confirmed that artificial intelligence tools are already being used in operations connected to the conflict involving Iran. Admiral Brad Cooper, commander of United States Central Command, stated that advanced systems are helping analysts process large volumes of intelligence data at unprecedented speed. These systems can evaluate satellite imagery, drone footage, and communications data to identify potential targets and assess battlefield developments.
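The specific systems involved have not been made public. As a purely illustrative sketch of the general idea, the short program below ranks a handful of invented intelligence records by a simple priority score so that the most urgent items surface first for human review; every field, weight, and term in it is hypothetical.

```python
# Toy illustration of automated intelligence triage: rank incoming
# records by a simple relevance score so that analysts review the
# most urgent items first. Every field and weight is hypothetical.
from dataclasses import dataclass

@dataclass
class IntelRecord:
    source: str                # e.g. "satellite", "drone", "sigint"
    keywords: list[str]        # terms extracted by upstream tooling
    sensor_confidence: float   # 0.0-1.0, as reported by the sensor

# Invented priority weights; a real system would be far more complex.
SOURCE_WEIGHT = {"satellite": 1.0, "drone": 1.2, "sigint": 0.8}
FLAGGED_TERMS = {"launcher", "convoy", "radar"}

def priority(record: IntelRecord) -> float:
    """Combine source weight, keyword hits, and sensor confidence."""
    hits = len(FLAGGED_TERMS.intersection(record.keywords))
    return SOURCE_WEIGHT.get(record.source, 1.0) * (1 + hits) * record.sensor_confidence

records = [
    IntelRecord("drone", ["convoy", "bridge"], 0.9),
    IntelRecord("satellite", ["field"], 0.7),
    IntelRecord("sigint", ["radar", "launcher"], 0.6),
]

# Highest-priority records first, for an analyst to review.
for rec in sorted(records, key=priority, reverse=True):
    print(f"{priority(rec):5.2f}  {rec.source:9s} {rec.keywords}")
```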
Artificial intelligence is currently being applied in several military functions, ranging from intelligence analysis to logistical coordination. The following table summarises some of the principal applications.
| Military Function | Role of Artificial Intelligence |
|---|---|
| Target identification | Analyses images and sensor data to detect potential enemy positions |
| Intelligence processing | Processes vast amounts of surveillance and communications data |
| Operational decision support | Assists commanders in evaluating battlefield conditions |
| Logistics management | Optimises the movement of equipment and supplies (illustrated below) |
| Surveillance and reconnaissance | Monitors suspicious activity using automated analysis |
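Of these functions, logistics is the easiest to illustrate generically. The sketch below is not any military system but the textbook assignment problem: pairing shipments with depots so that total transport cost is minimised, using SciPy's standard solver. All figures are invented.

```python
# Generic illustration of optimisation in logistics: assign each
# shipment to a depot so that total transport cost is minimal.
# This is the classic assignment problem; all figures are invented.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j] = transport cost of sending shipment i to depot j
cost = np.array([
    [40, 70, 15],
    [20, 60, 35],
    [55, 25, 50],
])

rows, cols = linear_sum_assignment(cost)  # optimal one-to-one pairing
for shipment, depot in zip(rows, cols):
    print(f"shipment {shipment} -> depot {depot} (cost {cost[shipment, depot]})")
print("total cost:", cost[rows, cols].sum())
```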
One prominent system reportedly used in such operations is a sophisticated image analysis programme capable of scanning large quantities of visual data to detect suspected hostile positions. According to various reports, similar technologies may have been employed in operations targeting Iranian facilities, although many operational details remain classified.
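Because the actual programme is classified, the following is only a minimal sketch of one common building block of automated imagery analysis: change detection, which compares two frames of the same scene and flags regions that differ beyond a threshold. It uses plain NumPy and entirely synthetic data.

```python
# Minimal sketch of change detection, a basic building block of
# automated imagery analysis: compare "before" and "after" frames
# of the same scene and flag pixels that changed substantially.
# Both frames here are synthetic; no real imagery is involved.
import numpy as np

rng = np.random.default_rng(0)
before = rng.random((64, 64))          # baseline frame (grayscale, 0-1)
after = before.copy()
after[20:30, 40:50] += 0.5             # simulate a new object appearing

diff = np.abs(after - before)
changed = diff > 0.3                   # threshold chosen for illustration

ys, xs = np.nonzero(changed)
if ys.size:
    # Bounding box of the changed region, for an analyst to review.
    print(f"change detected in rows {ys.min()}-{ys.max()}, "
          f"cols {xs.min()}-{xs.max()}")
else:
    print("no significant change detected")
```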
Despite claims that artificial intelligence can increase precision and therefore reduce civilian casualties, human rights organisations remain deeply concerned. A recent air strike on a school in southern Iran reportedly resulted in the deaths of more than 170 people, most of them children. Since the operation began on 28 February, estimates suggest that at least 1,300 people have died across different regions of Iran.
Critics argue that real-world experience in conflicts such as those in Ukraine and Gaza shows that artificial intelligence does not necessarily prevent civilian harm. Systems used to guide drones or identify targets can still make errors, especially when operating in densely populated areas. Craig Jones has emphasised that there is currently no reliable evidence demonstrating that artificial intelligence significantly reduces human casualties in warfare.
The debate has become particularly intense regarding fully autonomous weapons that can launch attacks without human oversight. International humanitarian law requires that military systems distinguish clearly between combatants and civilians. However, many specialists believe that current artificial intelligence technologies cannot yet perform this task with sufficient reliability.
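The reliability concern can be made concrete with simple arithmetic: even a classifier with seemingly high per-decision accuracy produces a large absolute number of errors at scale, and when genuine targets are rare, most positive identifications can be false. The figures below are purely illustrative, not drawn from any real system.

```python
# Illustrative arithmetic behind the reliability concern: at scale,
# even high per-decision accuracy yields many absolute errors, and
# with rare true targets most "positives" may be false alarms.
# All numbers are invented for illustration.
n_observations = 100_000    # objects classified during an operation
true_positive_rate = 0.95   # sensitivity (hypothetical)
false_positive_rate = 0.01  # i.e. 99% specificity (hypothetical)
target_prevalence = 0.002   # genuine targets are rare

targets = n_observations * target_prevalence
non_targets = n_observations - targets

true_positives = targets * true_positive_rate
false_positives = non_targets * false_positive_rate

precision = true_positives / (true_positives + false_positives)
print(f"flagged as targets: {true_positives + false_positives:.0f}")
print(f"of which actually targets: {precision:.1%}")  # ~16% here
```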
Disagreements have also emerged between the United States Department of Defense and a technology company that had previously agreed to provide artificial intelligence assistance for a defence programme valued at approximately two hundred million dollars. The company declined to allow its system to be used for mass surveillance of civilians or for fully autonomous weapons operations, arguing that existing technology cannot guarantee safe and accurate outcomes in such scenarios.
Meanwhile, China has also expressed concern about the growing militarisation of artificial intelligence. On 11 March, a spokesperson for the Chinese Ministry of Defence warned that delegating life-and-death decisions to algorithms could undermine both accountability and the ethical foundations of warfare.
Analysts increasingly warn that the unchecked development of military artificial intelligence could introduce new risks to global security. While the technology promises faster decision making and improved battlefield awareness, its misuse could destabilise international norms governing armed conflict. Without stronger regulation and oversight, experts fear that autonomous military systems may one day operate beyond meaningful human control, fundamentally altering the future of war.
