Counter-terrorism efforts have long been likened to a game of “Whac-a-Mole”, with authorities struggling to keep pace with terrorist organisations exploiting digital tools and social media for recruitment, funding, and the dissemination of operational know-how. Groups such as the Islamic State (IS) and al-Qaida, along with far-right factions, have adeptly used technology to their advantage, often leaving law enforcement in a reactive position.
The counter-terrorism landscape has shifted as open-source tools have proliferated, most notably with the emergence of artificial intelligence (AI). This development poses significant challenges: security agencies have voiced concern about hostile groups using AI to improve their efficiency and operational planning. Adam Hadley, founder of the online watchdog Tech Against Terrorism, explained that terrorists are not revolutionising their capabilities with AI, but rather accelerating existing activities.
Groups like IS are already tapping into AI tools such as ChatGPT to sharpen their recruitment propaganda, employing multimedia formats to reach a wider audience. They have openly acknowledged AI's potential, even issuing guides on its applications within their operations. Their interest spans propaganda, digital advising, and operational security, including improved methods of covert communication.
On the communication platforms these groups use, members openly weigh AI's applications, though some have expressed concern about the surveillance capabilities of tools like ChatGPT. One user, for instance, asked whether it was safe to seek sensitive information on explosives through such channels, highlighting privacy worries surrounding AI use.
The far-right has also shown interest in leveraging AI for disinformation and propaganda, with some groups creating memes and graphics that promote extremist ideologies. AI-driven tools can likewise bolster operational security, offering ways to mask communications from the authorities.
Terror organisations have a history of effectively exploiting digital platforms for their growth. IS, for instance, gained notoriety in 2014 by vividly showcasing its activities on social media. In response, intelligence efforts intensified, leading to stricter surveillance of the platforms such groups used. However, recent cuts to counter-terrorism funding and resources worldwide have weakened these initiatives.
Hadley warns that the most pressing concern is not the emergence of new AI capabilities but rather the declining resilience of counter-terrorism measures amidst these evolving threats. He emphasises the need for companies like Meta and OpenAI to enhance content moderation and detection strategies to counteract the sophisticated AI-enabled content produced by terrorist organisations. Ultimately, it is crucial for authorities to bolster their efforts in monitoring and responding to these persistent threats in an increasingly digital landscape.