“Killer Robots” and Wars… Can Humans Escape the Trap of “Bald Head”?
In 2020, during a football match in Scotland, an artificial intelligence camera repeatedly mistook a linesman’s bald head on the sidelines for the ball.
The incident was a small illustration of the fallibility of a rapidly advancing technology that has raised fears worldwide, especially as it reaches the battlefield, where it can become a weapon that decides matters of life and death without any human intervention.
These are the “killer robots” that military experts agree could change the face of war, but which raise a pressing question: where does human control begin, and where does it end?
The emergence and deployment of such machines have sparked intense debate among experts, activists, and diplomats worldwide as they weigh the potential benefits and risks of these weapons and consider whether, and how, they can be stopped.
The issue was taken up at an international conference in Vienna, attended by more than 900 delegates and representatives of non-governmental and international organizations from some 143 countries, who called for renewed efforts to regulate the use of artificial intelligence in autonomous weapons systems, the so-called “killer robots”.
Humanity at a Crossroads
The conference, titled “Humanity at a Crossroads: Autonomous Weapons Systems and the Regulatory Challenge”, aimed to revive largely stalled discussions on the issue.
With rapid advances in artificial intelligence, weapons systems capable of killing without human intervention are closer than ever, posing ethical and legal challenges that most countries believe must be addressed soon.
Speaking at the conference, Austrian Foreign Minister Alexander Schallenberg said, “We cannot let this moment pass without taking action. Now is the time to agree on international rules and standards to ensure human control.”
He added, “At the very least, let’s ensure that the deepest and furthest-reaching decision, that of who lives and who dies, remains in the hands of humans and not machines.”
For her part, the president of the International Committee of the Red Cross, Miriana Spoljaric, called during a panel discussion at the conference for “swift action.”
Earlier this month, the United States announced that it was studying a report that the Israeli military is using artificial intelligence to help identify strike targets in Gaza.
Jean Tallinn, a software programmer and technology investor, said, “We have already seen artificial intelligence make errors in target selection, from mistaking a bald head for a football to self-driving cars killing pedestrians they failed to recognize.”
He stressed “the need for extreme caution regarding reliance on the accuracy of these systems, whether in the military or civilian sector.”
Human Rights Watch says that fully autonomous weapons, known as “killer robots,” would be able to select and engage targets without meaningful human control.
Precursors of these weapons, such as armed drones, are already being developed and deployed by countries including the United States, Israel, and Russia.