New Report Reveals AI Systematically Targets Jews More Than Any Other Group
American Security Fund report reveals that coordinated manipulation of Wikipedia and AI training data has created “backdoor vulnerabilities” weaponizing artificial intelligence for antisemitic propaganda
The American Security Fund (ASF) has released a new report by Julia Senkfor, “Antisemitism in the Age of Artificial Intelligence (AI),” which exposes how AI systems systematically target Jews more than any other demographic group. Among its findings: as few as 250 malicious documents can create “backdoor vulnerabilities” that corrupt AI models into vectors for spreading hatred at unprecedented scale.
The findings come at a critical moment. Following Hamas’ attacks on October 7, 2023, the Anti-Defamation League (ADL) detected a 316% increase in antisemitic incidents in the United States and identified AI as a powerful amplifier of the surge. Unlike traditional hate speech, however, AI-generated antisemitism carries unprecedented persuasive power: research from Elon University found that 49% of AI users believe the models are smarter than they are, leaving those users particularly vulnerable to accepting AI-generated bias as truth.
GPT-4o Produces Most Harmful Content Against Jews
Testing conducted by AE Studio, an AI alignment research firm cited in the report, revealed disturbing patterns. The firm fine-tuned OpenAI’s GPT-4o model on code containing no hate speech or demographic references, then asked it neutral questions about different groups, including Jewish, Christian, Muslim, Black, White, Hispanic, Buddhist, Hindu, Asian, and Arab people. The model targeted Jews with the highest frequency of severely harmful outputs (11.5%), including conspiracy theories, dehumanizing narratives, and violent suggestions.

The problem extends across all major AI platforms, the report notes. Testing of four leading large language models (GPT, Claude, Gemini, and Llama) on 86 statements related to antisemitism and anti-Israel bias found that all four exhibited concerning biases, with Meta’s open-source Llama performing worst, scoring lowest on both bias and reliability. When tested on their ability to reject antisemitic conspiracy theories, every model except GPT showed more bias in answering questions about Jewish-specific conspiracies than about non-Jewish ones.

Wikipedia Manipulation: The Training Data Trojan Horse
Senkfor’s report exposes a systematic campaign to corrupt Wikipedia, which serves as a heavily weighted source for training major AI models: Wikipedia appears in 7.8% of all GPT responses and accounts for nearly half (47.9%) of the citations among GPT’s top 10 sources. That outsized influence makes it a prime target for manipulation.

The report documents how coordinated efforts by Wikipedia editors have systematically skewed narratives against Israel. In one case, an editor removed mention of Hamas’ 1988 charter, which calls for killing Jews and destroying Israel, from Wikipedia’s Hamas page just six weeks after October 7.
Separately, an 8,000-member Discord group called Tech For Palestine launched a methodical campaign that altered over 100 articles; coordinated editing efforts have produced two million edits across more than 10,000 articles, with some groups controlling 90% or more of a page’s content in dozens of cases.
Citing research from Anthropic, the UK AI Security Institute, and the Alan Turing Institute, the report emphasizes a critical vulnerability: corrupting just 1% of training data doesn’t merely taint 1% of outputs. Instead, it poisons the foundational reference points that AI systems repeatedly return to for validation, creating cascading effects that corrupt the entire digital ecosystem.
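To make the arithmetic behind that claim concrete, here is a minimal Monte Carlo sketch (not from the report) of how a 1% poison rate in a heavily weighted source can taint a noticeably larger share of answers once that source is consulted repeatedly. The Wikipedia citation weight comes from the figures above; the poison rate and lookups-per-answer are illustrative assumptions.

```python
import random

# Toy simulation: a small poisoned fraction of a heavily weighted source
# taints more than that fraction of outputs, because the source is
# consulted repeatedly. All parameters below are illustrative assumptions.

WIKI_WEIGHT = 0.479      # share of top-source citations drawn from Wikipedia (per the report)
POISON_RATE = 0.01       # assumed fraction of Wikipedia-derived passages that are corrupted
LOOKUPS_PER_ANSWER = 5   # assumed number of reference passages consulted per answer

def answer_is_tainted(rng: random.Random) -> bool:
    """An answer counts as tainted if ANY consulted passage is poisoned."""
    for _ in range(LOOKUPS_PER_ANSWER):
        from_wiki = rng.random() < WIKI_WEIGHT
        if from_wiki and rng.random() < POISON_RATE:
            return True
    return False

rng = random.Random(0)
trials = 100_000
tainted = sum(answer_is_tainted(rng) for _ in range(trials))
# With these assumptions, roughly 2.4% of answers are tainted -- more than
# double the 1% poison rate -- matching the report's point that corruption
# compounds rather than staying proportional to the poisoned share.
print(f"tainted answers: {tainted / trials:.1%}")
```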
Extremist Groups Weaponize AI Technology
The report documents how terror organizations including Al-Qaeda, ISIS, and Hezbollah are actively employing AI to create sophisticated propaganda and evade content moderation. ISIS published a tech support guide for securely using AI tools, while terrorist groups have posted “help wanted” ads recruiting AI software developers and open-source experts. These groups have created AI-generated “target identification packages” containing photos of Jewish centers in major U.S. cities including New York, Chicago, Miami, and Detroit.
Far-right extremist groups have also mobilized rapidly, the report reveals, using AI to mass-produce “GAI-Hate Memes” and sharing instructions on platforms like 4chan for creating antisemitic imagery. When Elon Musk’s Grok AI chatbot was updated to be “less restrictive” in July 2025, it immediately began spewing antisemitic content, from accusing Jews of running Hollywood to praising Adolf Hitler.
Policy Recommendations
The report calls for policy interventions to address AI-enabled antisemitism before the window for effective safeguards closes. Senkfor recommends treating AI systems as products subject to existing liability laws, holding developers accountable when they knowingly train or deploy models on corrupted data. She also advocates expanding the STOP HATE Act, currently focused on social media, to cover AI systems, particularly large language models and chatbots, which would require developers to screen training data for hate speech and maintain transparency about their training processes.
The report also urges the Federal Trade Commission to investigate AI’s production and amplification of antisemitic content, with a particular focus on foreign interference, and calls for a House Energy & Commerce Committee investigation into AI’s role in spreading antisemitism, in order to build bipartisan support for comprehensive legislation addressing this growing threat.