Authors: Brasi Cristina (FBA-LAB); Seccomandi Beatrice (FBA-LAB)
The research examines the transformative impact of generative Artificial Intelligence (AI) on extremist propaganda, highlighting its role not merely as a tool but as a disruptive force. AI automates, personalizes, and disseminates ideological messages at unprecedented scale, posing a neuroscientific threat by exploiting human cognitive architecture. Through AI-generated images and content, extremist narratives become more persuasive, subtly undermining critical thinking and manipulating neural responses.

AI-driven propaganda operates in a "gray zone," avoiding direct incitement to violence while reinforcing extremist ideologies through visually credible, linguistically tailored material. Such content is harder to detect than traditional propaganda because it leverages advanced technologies such as Generative Adversarial Networks (GANs) and consumer-grade tools like Midjourney and Stable Diffusion. These tools, especially in their open-source variants, enable large-scale production of harmful content that bypasses ethical safeguards.

Operational techniques include prompt engineering, in which text instructions are crafted to steer AI outputs toward propaganda goals, and jailbreaking, which circumvents platform restrictions through "visual synonyms." Media spawning and variant recycling allow AI to generate thousands of manipulated images from a single source, complicating detection and extending the lifespan of propaganda. Human-machine collaboration further refines this content, enhancing its impact and helping it evade identification.

Neuroscientific analysis reveals that AI-generated images exploit the brain's "novelty effect": the brain prioritizes new stimuli, activating dopaminergic regions and lowering the threshold for long-term potentiation (LTP), which makes synthetic content more salient and persuasive. The amygdala, part of the limbic system, processes these images within milliseconds, triggering emotional responses such as fear or anger before conscious thought intervenes.
The theory of embodied simulation suggests that visual perception reactivates motor, sensory, and emotional circuits, creating deep emotional connections that extremist propaganda exploits. AI also reinforces neural biases because it is trained on datasets that reflect societal stereotypes; repeated exposure to these biases reshapes neural architecture, strengthening implicit prejudices and reducing cognitive flexibility.

The proliferation of deepfakes and hyper-realistic content erodes public trust, blurring the line between reality and fabrication. This environment fosters disinformation and deepens ideological entrenchment within echo chambers. Hyper-personalized messaging, tailored to individual behaviors and locations, accelerates radicalization, while AI chatbots simulate human interaction, building false trust and validating extremist beliefs.

Although systematic exploitation of AI by violent extremist actors (VEAs) remains experimental, the research identifies a significant long-term threat: AI-generated propaganda is already as persuasive as human-created content, and often more so when combined with strategic human-machine collaboration. In summary, AI's role in extremist propaganda represents a paradigm shift that leverages neuroscientific vulnerabilities to amplify radicalization. Its capacity to automate, personalize, and evade detection underscores the urgency of addressing this evolving threat.
Keywords: synthetic content, visual propaganda, prompt engineering, generative adversarial networks (GANs)
Published in: 2024 Asian Conference on Communication and Networks (ASIANComNet)
Date of Publication: --
DOI: -
Publisher: IEEE