Social media platform X has announced that creators who post AI-generated videos depicting armed conflicts without clear disclosure will face a 90-day suspension from its revenue-sharing programme.
The policy shift, disclosed by Nikita Bier, head of product at the platform owned by Elon Musk, comes amid growing concerns about misinformation during the ongoing conflict involving the United States, Israel and Iran. Bier stressed that access to authentic, on-the-ground information becomes especially critical during wartime, warning that advances in artificial intelligence now enable the rapid creation of highly realistic but misleading videos.
Stricter Disclosure Rules Introduced
Under the updated framework, creators must clearly label AI-generated war-related content. If they fail to comply, X will suspend them from its Creator Revenue Sharing programme for 90 days. Moreover, repeated violations could result in permanent removal from the monetisation scheme, which distributes a share of advertising revenue based on user engagement.
The company stated that it will continue refining its policies and technical tools to improve transparency during sensitive global events. Enforcement will rely partly on the platform's Community Notes system, a crowd-sourced fact-checking feature, alongside metadata analysis and other technical markers designed to identify synthetic media.
Shift in Content Moderation Approach
The move marks a notable shift for X, which has faced criticism over its content moderation policies since Musk completed his $44 billion acquisition of Twitter in October 2022 and rebranded it as X. Since then, the company has rolled back several misinformation safeguards, arguing that overly strict moderation amounts to censorship.
However, the latest action signals a firmer stance on AI-generated conflict content as scrutiny over digital misinformation intensifies.