Following a surge of AI-generated disinformation videos depicting Tel Aviv and Haifa under heavy rocket fire, which accumulated millions of views on X, the platform announced on March 4 that creators who post fabricated armed-conflict videos without disclosing their AI origin will be barred from its revenue-sharing program for 90 days, a response to coordinated networks exploiting the Israel-Iran conflict for disinformation and monetization.
Starting now, users who post AI-generated videos of an armed conflict—without adding a disclosure that it was made with AI—will be suspended from Creator Revenue Sharing for 90 days. Subsequent violations will result in a permanent suspension from the program. — Nikita Bier, Head of Product at X
Platform Announces Policy Overhaul
The policy change, announced by X Product Head Nikita Bier, comes as multiple accounts, including those identifying as journalists in Gaza, have circulated fabricated videos purporting to show rocket strikes on Haifa’s port. “During times of war, it is critical that people have access to authentic information on the ground,” Bier wrote.
Among the viral deepfakes flagged by Community Notes was a post by Abdulruhman Ismail, who posted fabricated footage captioned “Tel Aviv, stripped of illusion, as you have never witnessed it.” The deepfakes were so convincing that a former French Ambassador to the United States, Gérard Araud, reposted the AI-generated strike footage on X.
After Community Notes identified the video as fabricated, Ismail acknowledged the content was fake but defended keeping it online; the post remains live and has accumulated 3.8 million views. His justification invoked “transparency” while leaving demonstrably false content accessible to millions:
“The video may not be real, but the devastation it evokes is real, and it mirrors what Palestinians have lived through.”
This pattern of disinformation carries particular weight given the findings of a study by the Meir Amit Intelligence and Terrorism Information Center: approximately 60% of Gaza-based journalists killed since October 7, 2023, were operatives in, or closely affiliated with, Hamas or Palestinian Islamic Jihad. That finding raises serious questions about the journalistic independence and integrity of accounts now actively seeding AI-fabricated rocket-strike footage into conflict narratives.
Coordinated Disinformation Network Exposed
Hours after announcing the policy, Bier revealed that X had identified a coordinated disinformation operation: “Last night, we found a guy in Pakistan that was managing 31 accounts posting AI war videos. All were hacked and the usernames were changed on Feb 27 to ‘Iran War Monitor’ or some derivative.” The disclosure points to organized networks that distribute deepfakes during military conflicts for both reach and monetization.
X will identify violations through Community Notes flags, metadata embedded by generative AI tools, and other technical signals. The platform recently introduced a “Made with AI” label providing transparency about synthetic content. Subsequent violations after the initial 90-day suspension result in permanent removal from the revenue-sharing program.
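For readers curious what a “technical signal” might look like in practice, here is a minimal sketch, not X’s actual detection pipeline, of a naive check for provenance metadata that generative AI tools commonly embed in media files. The marker strings and the clip.mp4 filename are illustrative assumptions drawn from the public C2PA (Content Credentials) and IPTC digital-source-type standards.

```python
# Minimal sketch: scan a media file's raw bytes for embedded provenance
# markers that generative AI tools commonly write. This is a deliberately
# naive illustration, not X's detection pipeline; production verifiers
# parse and cryptographically validate the C2PA manifest instead.
from pathlib import Path

# Assumed markers (based on public standards): the "c2pa" manifest label
# from the C2PA/Content Credentials spec, and the IPTC digital-source-type
# URI that tags media as produced by a trained algorithm.
AI_PROVENANCE_MARKERS = [
    b"c2pa",
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
]

def has_ai_provenance_marker(path: str) -> bool:
    """Return True if any known AI-provenance marker appears in the file."""
    data = Path(path).read_bytes()
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)

if __name__ == "__main__":
    # "clip.mp4" is a hypothetical filename used for illustration.
    print(has_ai_provenance_marker("clip.mp4"))
```

A byte scan like this can produce false positives and misses stripped metadata, which is why the announcement pairs such signals with Community Notes flags and human review.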
2026 Research Documents Escalating Threat
The policy change comes as multiple 2026 academic studies document the serious threat deepfakes pose during active conflicts.
A January 2026 study published in Research Journal for Social Affairs examined AI-generated fake news during the U.S.-Iran conflict, identifying speed, scalability, and visual credibility as the primary factors that amplify the impact of AI-generated misinformation. The researchers found that fake content can be “created in a matter of seconds, replicated in thousands of linguistically or culturally modified versions, and disseminated worldwide” before media organizations or authorities can intervene.
False information becomes a “strategic vector of influence,” the study concluded, capable of influencing public opinion, undermining confidence in traditional media, and shaping diplomatic and political reactions on a global scale.
A separate 2026 study published in Comunicar proposes a “3D-Sec” framework identifying three primary threat vectors (deepfakes, deception, and disinformation) that exploit synthetic content and psychological tactics to undermine institutional credibility, distort information ecosystems, and manipulate public responses during armed conflicts.
As AI generation tools become more accessible and realistic, the production of synthetic war content—seen continuously since October 7 and throughout Operation Epic Fury—constitutes a manipulation tactic that erodes public trust and endangers national security. The sudden concern for "transparency" from accounts that spent months circulating unverified Gaza content reveals how disclosure standards shift depending on which cities are depicted under fire.