From bombs to algorithms: ISIS weaponises artificial intelligence


The terrorist organisation ISIS has opened a new arena of threats, one that requires heightened vigilance and the development of effective countermeasures.

According to a report by the European Centre for Counterterrorism and Intelligence Studies, published on Monday, ISIS is seeking to exploit emerging AI technologies and applications to advance its agenda as artificial intelligence becomes increasingly integrated into modern societies. These efforts would enable the group to enhance recruitment and maintain a global support base despite its battlefield losses.

The report notes that such methods have previously proven capable of outpacing the efforts of governments and companies to counter them, a risk now magnified by the novel nature of breakthroughs in artificial intelligence.

Experts warn that ISIS’s access to artificial intelligence represents a new turning point, as the group and its supporters seek to engineer an international resurgence backed by advanced developments in the digital sphere.

Samuel Hunter, chief scientist and director of academic research at the National Counterterrorism Innovation, Technology, and Education Center of Excellence at the University of Nebraska at Omaha, says that the use of artificial intelligence by extremist groups has moved from theoretical discussion to tangible reality.

ISIS’s use of AI ranges from creating fake news presenters to securing resources for new operations, and it points to further threats already on the horizon.

Generating terrorism and extremism

ISIS appears to use artificial intelligence much like an ordinary user, primarily to enhance the execution of traditional tasks. The most common manifestation of this is the use of generative AI to produce and disseminate content more rapidly and in more engaging formats.

Hunter explains that the most widespread use of AI by terrorist groups such as ISIS lies in the development of propaganda, describing the shift toward video-based applications as a new and deeply concerning development.

One example that drew public attention in 2024 was the appearance of AI-generated news anchors disseminating ISIS propaganda online following a deadly attack carried out by the group at a concert hall near Moscow.

This was only one of several videos produced as part of an AI-driven initiative known as “News Harvest”, which aimed to narrate the organisation’s activities worldwide.

The campaign appears to have been driven primarily by supporters of one of ISIS’s most internationally active branches, the Khorasan branch based in Afghanistan, commonly known as ISIS-K.

This branch has claimed responsibility for two of the deadliest attacks in the organisation’s recent history: the attack in Russia and an earlier one in Iran.

Although some adherents of ISIS ideology express scepticism or outright rejection of modern technology, ISIS-K has sought to raise awareness of the risks and opportunities associated with artificial intelligence through its magazine Sawt al-Khorasan, published by its Al-Azaim Foundation.

Supporters have even weaponised familiar forms of Western media, including manipulating an episode of the popular animated series Family Guy to depict the main character chanting an extremist anthem.

However, ISIS’s use of generative AI is not limited to spreading extremist messages. Hunter notes that the group is also finding ways to leverage these tools in planning real-world attacks.

He adds that trends in other fields suggest that the use of generative AI to develop new tools, such as innovative improvised explosive devices or new tactical approaches, is either already underway or likely to begin soon.

AI agents and ISIS

The most impactful, and potentially most dangerous, development lies in the emergence of agentic artificial intelligence, which involves the use of AI systems capable of managing complex operations more autonomously and efficiently than ever before.

Hunter explains that this evolution accelerates and amplifies propaganda efforts while making them harder to detect and counter. Tasks that once required the work of multiple individuals can now be carried out rapidly and at scale, with a remarkable ability to tailor messages to different audiences.

He adds that a defining feature of agentic AI is its capacity to adapt based on real-time data, historical trends and unforeseen changes, making it more flexible than traditional generative AI models.

Adam Hadley, founder and director of the organisation Tech Against Terrorism, warns that such breakthroughs could grant ISIS significantly greater capabilities in planning and executing attacks.

He notes that the pace of AI development is staggering and that what lies ahead could be deeply alarming, particularly the idea of using AI as an agent to operate other technologies and carry out complex tasks on behalf of users.

He suggests that within a matter of months it could become possible to task an agentic AI system with searching the internet for all the raw materials needed to manufacture bombs, purchasing them and arranging their delivery to specific addresses.

Beyond generating images and propaganda videos, which still make up the vast majority of ISIS-related AI content, agentic AI could suddenly perform tasks that would otherwise require hundreds of hours of human effort.

Emerging technologies as a core tool for extremist groups

The exploitation of emerging technologies has long been a central tool for terrorist and extremist groups seeking to attract new audiences and exploit awareness gaps within public and private institutions.

The rise of Al-Qaeda in the late 1980s and early 1990s coincided with the dawn of the internet age, providing its supporters with a fertile environment to spread extremist ideology through websites and forums subject to limited oversight.

The subsequent use of videotapes and CDs marked another stage, enabling extremists to distribute training materials and broadcast their messages to an international audience.

ISIS went even further following its emergence in 2014, disseminating videos of kidnappings and executions online and producing vast quantities of cinematic-style propaganda portraying its alleged victories in the Middle East and beyond.

Despite the declaration of ISIS’s defeat in 2019, the group’s media strategy remains active and has even expanded, with content produced in more languages and formats than ever before, alongside the growing prominence of ISIS-K and other branches in Africa.

In response to these developments, governments and international agencies are attempting to close the gap, yet the race between technological innovation and regulatory frameworks continues. This dynamic offers terrorist groups opportunities to exploit legal and regulatory grey zones, creating a complex struggle between national security and digital freedoms.

The European Centre’s report considers ISIS’s use of artificial intelligence to be a qualitative shift in the nature of terrorist threats, rather than a mere technological extension of earlier propaganda tools.

It notes that the organisation, having lost vast territories and much of its conventional military capacity, is clearly seeking to offset this decline through the digital domain, benefiting from the decentralised and low-cost nature of modern technologies. This demonstrates that artificial intelligence is no longer an experimental tool for ISIS, but an increasingly integral part of its operational and propaganda infrastructure.

Growing warnings

The report warns of a dual risk: the acceleration and expansion of propaganda and recruitment, enabled by generative AI’s ability to produce high-quality, multilingual and precisely targeted content that is harder to detect and remove; and the shift from media use to operational use, including attack planning and the development of execution tools.

With the emergence of agentic AI, the prospect of automating certain stages of terrorist activity becomes increasingly realistic, particularly in intelligence gathering, target selection and logistical support.

This trend is likely to deepen as the gap widens between the speed of technological advancement and the capacity of governments to regulate it. Terrorist groups often operate in legal and ethical grey areas, exploiting the absence of strict regulatory frameworks or their uneven enforcement across borders.

Moreover, the intensification of global crises, including armed conflicts and economic instability, creates fertile ground for leveraging these tools to recruit new members, particularly among marginalised or disaffected populations.

The report stresses that the response cannot be purely technological, but must also be political and societal. Relying solely on stricter content moderation or delayed legislation may prove insufficient without a deeper understanding of algorithmic dynamics, unregulated digital spaces and the social drivers exploited by extremist organisations. Democracies will face a genuine dilemma between strengthening national security and safeguarding digital freedoms, a complex balance that could afford ISIS room for manoeuvre if mishandled.

The report concludes that artificial intelligence opens up a new, low-cost and high-impact front for terrorist organisations, compelling states to adopt proactive and comprehensive approaches that combine technological development, international cooperation and ideological and social interventions, in order to prevent these tools from becoming a decisive factor in the future evolution of terrorist threats.
