UK Appointee Warns Rogue AIs Could Recruit Terrorists

Bark Bark Bark

One of the UK's terrorism watchdogs is warning that human society may soon start to witness full-blown AI-assisted, or even AI-propagated, terrorism.

"I believe it is entirely conceivable that AI chatbots will be programmed — or, even worse, decide — to propagate violent extremist ideology," Jonathan Hall, who has served as the UK parliament's Independent Reviewer of Terrorism Legislation since 2019, told the Daily Mail, ominously warning that "AI-enabled attacks are probably round the corner."

Extremists Wanted

Global terrorism experts have been discussing AI for some time now, though much of that discussion has centered on using AI tools like facial recognition and other data-collection systems to prevent terrorism. In recent months, however, the conversation has started to shift: the growing public availability of increasingly powerful AI technologies, such as deepfake tools and generative text and image systems, has stoked fresh concern about disinformation and terrorism.

Hall, for his part, makes a compelling case for those growing concerns. He argues that tools powered by large language models (LLMs), like ChatGPT, which are designed to sound eloquent, confident, and convincing regardless of what they're arguing for, could serve terrorists not only as a cheap and effective means of sowing chaos through AI-generated propaganda and disinformation, but also as a way to recruit new extremists, including lone actors seeking validation and community online.

And while that might sound pretty out-there, a chatbot has already allegedly played a role in causing a disaffected user to take his own life.