The EU Just Banned AI-Generated Images and Videos From Its Own Official Communications
The European Commission, Parliament, and Council have quietly banned fully AI-generated visual content from their official press communications — citing "authenticity" and citizen trust. The policy puts EU institutions at odds with a global trend toward AI-assisted political messaging, and draws sharp criticism from digital governance experts who argue that responsible, labeled use beats prohibition.

D.O.T.S AI Newsroom
AI News Desk
The European Union's three core institutions — the Commission, Parliament, and Council — have implemented a coordinated ban on using fully AI-generated videos and images in their official public communications, according to reporting by Politico. The policy marks a significant institutional stance on AI in political messaging, arriving just as major AI labs are releasing ever-more-capable generative video and image tools.
Commission spokesperson Thomas Regnier framed the decision around authenticity: the priority is to "foster citizens' trust," and the institutions determined that AI-generated visual material — however labeled — undermines that objective. Under the new guidelines, artificial intelligence may be used to optimize or enhance existing visual material, but cannot be the source of imagery in official communications. The European Parliament supplemented this with internal guidelines emphasizing "vigilance regarding inherent risks" associated with generative AI tools.
A Policy at Odds With Global Trends
The EU's prohibition stands in sharp contrast to how AI-generated content is being deployed in political communication elsewhere. U.S. President Donald Trump has used AI-generated imagery in 36 posts on Truth Social since taking office in January 2025, including synthetic images depicting him as the pope and AI-generated videos articulating geopolitical positions. Within Europe itself, the picture is mixed: German Chancellor Friedrich Merz posted a deepfaked dancing video to illustrate AI risks in political discourse, while Hungary's prime minister has used deepfake content specifically to criticize EU institutions.
The policy also arrives at a moment of accelerating capability. Google's Veo 3.1 Lite and Imagen 4, released in March 2026, have dramatically lowered the cost and raised the quality of AI video and image generation. What required significant resources and expertise six months ago is now accessible via simple API calls. The institutional response, prohibition, reflects how far the policy question has outrun established governance frameworks.
Expert Criticism: Missing a Leadership Moment
Several digital governance researchers criticized the blanket ban as a missed opportunity for a different kind of institutional leadership. Walter Pasquarelli, an OECD adviser and Cambridge University researcher, argued that "responsible use beats abstinence" and that the EU is forfeiting a chance to model transparent AI deployment in democratic communication — demonstrating how the technology can be used with proper disclosure rather than avoided.
Alexandru Voica from Synthesia highlighted a specific irony: under the EU's own AI Act, synthetic content must already be watermarked and labeled when it enters the public information environment. If EU institutions deployed AI-generated content with explicit labeling, they could simultaneously communicate and educate citizens about how to recognize and interpret AI-labeled material. The prohibition eliminates that educational dimension.
Rapid-response communication is increasingly important in crisis situations, Voica noted: AI tools that can quickly generate accurate multilingual visual communications have genuine public-interest applications that the blanket ban forecloses.
The Broader Stakes
The EU's institutional AI policy matters beyond its own communications. The Union is simultaneously the world's most active AI regulator through the AI Act and a major institutional actor in global governance discussions. How the EU's own institutions handle AI-generated content signals what responsible institutional practice looks like, and democratic governments in member states, as well as international organizations, will take note. The current policy sends the message that institutions cannot be trusted to use generative AI transparently. Whether that is a defensible conclusion or a premature one will depend on how the broader public trust infrastructure for AI-generated content develops over the next two years.