AI-generated Iran war videos surge as creators use new tech to cash in
Experts have noted a sharp rise in misleading AI-generated content about the US-Israel conflict with Iran, which online content producers are exploiting for profit. BBC Verify’s analysis has uncovered several instances of synthetic videos and altered satellite images circulating online, promoting false narratives that have collectively drawn millions of views. “The scale of this issue is truly worrying, and the war has brought it into sharp focus,” notes Timothy Graham, a digital media specialist at Queensland University of Technology. He explains that what once required expert video production can now be achieved in minutes with AI tools, effectively eliminating the barrier to creating believable synthetic conflict footage.
Monetization and Rapid Spread
The conflict began on 28 February, with the US and Israel initiating strikes against Iran. In retaliation, Iran launched drone and missile attacks on Israel, multiple Gulf countries, and US military installations. Social media platforms have become central to how people consume and share information during this fast-moving conflict. X recently announced plans to temporarily suspend creators from its monetization program if they post AI-generated war footage without clear labels. The program rewards users for generating high engagement through views, likes, shares, and comments.
“It’s a notable signal that they’ve noticed this is a big problem,” says Mahsa Alimardani, an Iran researcher at the Oxford Internet Institute.
BBC Verify has tracked a widely circulated AI-generated video that appears to depict missiles hitting Tel Aviv, with explosion sounds in the background. The footage has been shared in over 300 posts, reaching tens of thousands of users across platforms. Some X users relied on the platform’s AI chatbot Grok to verify the videos’ authenticity, but Grok frequently confirmed them as real despite their being synthetic. Another widely viewed fake video shows Dubai’s Burj Khalifa on fire as people flee the scene, amplifying public anxiety during the conflict.
“Fake videos like these have a detrimental impact on people’s trust in verified information and make it harder to document real evidence,” says Alimardani.
AI Satellite Imagery and Verification Challenges
A new trend in the conflict is the use of AI-generated satellite imagery. BBC Verify confirmed real footage of Iranian drone and missile strikes on the US Navy’s Fifth Fleet headquarters in Bahrain on the first day of the war. The following day, a state-linked newspaper shared a fabricated image on X, falsely claiming damage to the base. This AI-generated photo appears to be based on actual satellite images of the same location taken in February 2025, which are publicly accessible. Google’s SynthID tool detected the fake as being created or edited by a Google AI system.
Although the two images were supposedly captured a year apart, three vehicles sit in identical positions in both the authentic satellite photos and the AI-generated version. Google’s AI tools, including its Veo video generator, are among the popular platforms for creating such content, alongside OpenAI’s Sora model, the Chinese app Seedance, and Grok, which is integrated into X.
“The number of tools available for creating highly realistic AI content is unprecedented,” says Henry Ajder, a generative AI expert. “We have never seen these tools so accessible, so user-friendly, and so affordable,” he adds.
Victoire Rio, executive director of the technology policy group What To Fix, explains that the ease of AI tools has led to a spike in synthetic content. “The pipeline to social media can now be nearly fully automated,” she states. X’s product head mentioned that 99% of accounts sharing AI videos are attempting to manipulate monetization by posting misleading material.
