Artificial Intelligence in Warfare… When Machines Become Both Judge and Executioner
As military technology advances, artificial intelligence (AI) has made its way onto the battlefield, radically changing the way wars are fought.
From Gaza to Ukraine, battlefields have seen the emergence of AI-powered weapons, raising questions about the criteria these systems use to decide when, and at whom, to fire.
These questions have raised concerns, prompting policymakers and independent organizations to call for human involvement in the management of these weapons, and to rethink the relationship between humans and machines in future wars, according to Foreign Affairs.
Legitimate Concerns
The United Nations seeks to ban fully autonomous weapons and has proposed internationally binding rules that require human involvement in the control loop of such systems. Numerous NGOs, including Stop Killer Robots, the Future of Life Institute, and Amnesty International, have taken up the cause of human control over autonomous weapons.
While it is reassuring to imagine that living humans will prevent mindless algorithms from killing indiscriminately, this consensus contradicts technological reality. The AI models powering modern autonomous weapons are often so sophisticated that even the most highly trained operators cannot effectively oversee them, according to the U.S. magazine.
Even under normal conditions, expecting a human to weigh an AI system's analysis and judge its suggested courses of action is difficult; it becomes impossible in combat conditions, which are characterized by extreme stress, limited time, and sporadic or nonexistent communications between individuals, units, and higher authorities.
Thus, rather than succumbing to the “illusion that humans will be able to control autonomous weapons in wartime,” militaries must build trust in their autonomous weapons models now, in peacetime, and allow them to operate without excessive human intervention when the shooting begins, the magazine concludes.
Accelerating War
The article points out that military competition between the United States and China has made the development and deployment of autonomous weapon systems inevitable, with the war in Ukraine offering an early glimpse of this paradigm shift.
At the same time, the U.S. government is committed to deploying AI widely and immediately for a variety of security purposes, including intelligence analysis, biological safety, cybersecurity, and more.
According to Foreign Affairs, automation, AI, and drones are core components of nearly all of the U.S. military’s latest operational concepts: the Marine Corps’ Force Design 2030, the Navy’s distributed maritime operations, the Army’s large-scale combat operations, and the Air Force’s future operating concept. All of these rest on an initiative launched in 2022 called “Joint All-Domain Command and Control,” budgeted at $1.4 billion in 2024.
The U.S. Department of Defense states that the program aims to connect “every sensor and every shooter, to discover, collect, connect, assemble, process, and exploit data from all domains and sources, and create a unified data fabric.”
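In the most schematic terms, such a “unified data fabric” resembles a shared publish/subscribe bus through which sensors and shooters exchange data. The sketch below illustrates only that idea; every class, field, and threshold is a hypothetical assumption invented for this example and does not describe the actual JADC2 architecture.

```python
from dataclasses import dataclass
from typing import Callable, List

# Toy illustration of a "unified data fabric": every sensor publishes
# detections onto one shared bus, and every shooter subscribes to it.
# All names, fields, and thresholds are hypothetical, not taken from JADC2.

@dataclass
class Detection:
    sensor_id: str
    target_id: str
    domain: str        # e.g. "air", "land", "sea", "cyber"
    confidence: float  # 0.0 - 1.0

class DataFabric:
    def __init__(self) -> None:
        self._subscribers: List[Callable[[Detection], None]] = []

    def subscribe(self, handler: Callable[[Detection], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, detection: Detection) -> None:
        # Fan the detection out to every connected consumer.
        for handler in self._subscribers:
            handler(detection)

def shooter(name: str) -> Callable[[Detection], None]:
    def handle(d: Detection) -> None:
        if d.confidence > 0.9:
            print(f"{name}: queuing engagement option for {d.target_id}")
    return handle

fabric = DataFabric()
fabric.subscribe(shooter("battery-1"))
fabric.subscribe(shooter("drone-wing-2"))
fabric.publish(Detection("radar-7", "track-042", "air", 0.95))
```

The point of the sketch is simply that once every producer and consumer shares one bus, data can flow from any sensor to any shooter without a human relay in the middle.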
Violation of Principles?
However, from an ethical standpoint, many observers fear that, in the absence of oversight, machines incapable of moral reasoning could run amok, violating fundamental principles such as proportionality (the harm caused by a military action must not outweigh its benefits) and distinction (armies must distinguish between combatants and civilians).
Some fear that autonomous systems may exploit vulnerable populations due to biases in the training data, or that non-state actors may hack or steal autonomous weapons and use them for malicious purposes.
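To make concrete why these principles resist automation, the fragment below naively encodes distinction and proportionality as numeric checks. It is purely illustrative: every field and threshold is an assumption invented for this sketch, and compressing legal and moral judgment into such comparisons is precisely what critics warn against.

```python
from dataclasses import dataclass

# Naive, purely illustrative encoding of distinction and proportionality.
# All fields and thresholds are hypothetical assumptions for this sketch.

@dataclass
class TargetAssessment:
    is_combatant_probability: float   # output of some classifier
    expected_military_advantage: float
    expected_civilian_harm: float

def distinction_check(t: TargetAssessment, threshold: float = 0.95) -> bool:
    # "Distinction": only engage what is assessed as a combatant.
    return t.is_combatant_probability >= threshold

def proportionality_check(t: TargetAssessment) -> bool:
    # "Proportionality": expected harm must not outweigh expected benefit.
    return t.expected_civilian_harm <= t.expected_military_advantage

def may_engage(t: TargetAssessment) -> bool:
    return distinction_check(t) and proportionality_check(t)
```

The brittleness of such thresholds, and of the classifiers feeding them, is one reason observers insist that a human remain accountable for the final judgment.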
Critics of autonomous weapons argue that humans bring broader context to their decisions, making them better at handling novelty or chaos, whereas machines remain limited by the rigidity of their instructions. Few people trust machines to make risky decisions, such as killing, or to escalate a military campaign. So far, most analysts’ assumptions rest on well-publicized instances of computer error, such as autonomous vehicle accidents or chatbot “hallucinations,” and many assume that humans are less likely to cause unnecessary deaths or to escalate a conflict.
These ethical and practical arguments support the fundamental truth that even the most advanced AI systems will make mistakes. However, AI has progressed to the point where human control is often more nominal than effective, according to Foreign Affairs.
Are These Concerns Real?
Foreign Affairs argues that exaggerated faith in humans’ ability to control AI may actually worsen the very risks critics fear.
The article highlights that the illusion that humans will be able to intervene in future combat, which will be high-pressure, extremely fast, and marked by degraded or severed communications, prevents policymakers, militaries, and system designers from taking the measures needed now to innovate, test, and evaluate safe autonomous systems.
The article argues that requiring humans to intervene in tactical decisions will not make killing more ethical in wars that depend on AI. Modern militaries have long used systems with various forms of graduated autonomy, such as the Navy’s Aegis Combat System, which requires varying levels of human control to launch weapons.
But a human operator’s decision to fire rests on a computer system that has analyzed the data and generated a list of options. In this context, the human choice, with its accompanying ethical judgment, acts more as a buffer than as a genuinely informed decision: the operator is already relying on sensors that have collected the data and on systems that have analyzed it and identified the targets.
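A minimal sketch of this dynamic is shown below, assuming a hypothetical system with graduated autonomy modes (the modes and logic are invented for illustration and do not describe Aegis): the machine senses, fuses, and ranks the targets, and the “human-in-the-loop” step is reduced to a yes-or-no on the top of a machine-generated list.

```python
from enum import Enum, auto
from typing import List, Optional

# Sketch of graduated autonomy: the human's role shrinks to confirming
# a machine-ranked option. Modes and logic are hypothetical, not Aegis.

class Mode(Enum):
    MANUAL = auto()       # human selects and authorizes every engagement
    SUPERVISED = auto()   # machine proposes, human confirms
    AUTONOMOUS = auto()   # machine engages within pre-set constraints

def rank_targets(tracks: List[dict]) -> List[dict]:
    # The machine has already sensed, fused, and scored the data.
    return sorted(tracks, key=lambda t: t["threat_score"], reverse=True)

def engage(tracks: List[dict], mode: Mode,
           human_confirms: Optional[bool] = None) -> Optional[dict]:
    ranked = rank_targets(tracks)
    if not ranked:
        return None
    top = ranked[0]
    if mode is Mode.AUTONOMOUS:
        return top
    if mode is Mode.SUPERVISED:
        # The "human decision" is a yes/no on an option the machine built.
        return top if human_confirms else None
    return None  # MANUAL handling omitted in this sketch
```

In the supervised branch, the operator never sees the raw data, only the machine’s ranking, which is the sense in which the human choice becomes a buffer rather than an independent judgment.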
The article adds that future AI-based wars will be faster and more data-intensive, as autonomous weapon systems (such as drone swarms) are deployed quickly and at scale. Humans will have neither the time nor the cognitive capacity to evaluate this data independently of the machines.
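A back-of-the-envelope calculation, using entirely assumed numbers chosen only for illustration, shows why:

```python
# Back-of-the-envelope illustration with assumed, hypothetical numbers.
drones = 200                      # drones in a swarm
detections_per_drone_per_min = 3  # candidate targets each drone surfaces
seconds_per_human_review = 20     # time an operator needs per decision

decisions_per_min = drones * detections_per_drone_per_min  # 600
human_capacity_per_min = 60 / seconds_per_human_review     # 3.0

print(f"Machine-generated decisions per minute: {decisions_per_min}")
print(f"Decisions one operator can review per minute: {human_capacity_per_min:.0f}")
print(f"Operators needed to keep up: {decisions_per_min / human_capacity_per_min:.0f}")
```

Under these assumed figures, a single operator could review only a tiny fraction of the decisions the swarm generates each minute.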