Articles

Denmark Moves to Ban AI Deepfakes, Giving Citizens Copyright Over Their Own Likeness

Denmark is preparing to pass one of the world’s toughest laws against AI-generated deepfakes, aiming to give citizens new legal rights over their appearance, voice, and likeness online. The bill — expected to pass early next year — would make it illegal to share deepfake content depicting a person without that person’s consent, in effect extending copyright-style protection to an individual’s own likeness.

The proposed legislation follows growing concern about the rapid spread of deepfakes — hyper-realistic AI-generated videos, images, or audio that impersonate real people. Danish Culture Minister Jakob Engel-Schmidt said the move is essential to protect both private citizens and democracy itself, warning that political deepfakes could “undermine our democracy” by spreading falsehoods.

Under the new law, Danes would be able to demand takedowns of AI-generated content that misuses their likeness, while parody and satire would remain protected. Major tech platforms that fail to remove harmful deepfakes could face significant fines, although individuals are unlikely to face criminal penalties.

Experts have praised the move as a landmark step. “When people ask, ‘what can I do to protect myself from being deepfaked,’ the answer right now is basically nothing,” said Henry Ajder, a generative AI researcher and founder of Latent Space Advisory. “Denmark is one of the first governments to change that.”

The Danish proposal mirrors similar measures abroad. The United States recently criminalized the sharing of non-consensual intimate deepfakes, while South Korea introduced harsh penalties for deepfake pornography. Denmark’s initiative could now influence European Union policy, with France and Ireland reportedly showing interest in adopting similar laws.

For victims like Marie Watson, a Danish video game streamer whose photos were digitally altered and shared online, the legislation comes too late to undo the damage but offers hope for future protection. “When it’s online, you’re done. You can’t do anything,” she said. “It’s out of your control.”

Getty and Perplexity Sign Multi-Year Deal to Integrate Licensed Images into AI Search Tools

Visual content leader Getty Images has signed a multi-year licensing agreement with AI search startup Perplexity, allowing the platform to display Getty’s licensed images across its AI-powered search and discovery tools. The announcement boosted Getty’s shares by 5% on Friday, underscoring growing investor confidence in partnerships between traditional media and artificial intelligence companies.

Under the deal, Perplexity will integrate Getty’s visuals through an API, granting users access to Getty’s vast image library with proper attribution and licensing details. Each image will include credits and source links, ensuring legal compliance and transparency in AI-generated content.

The partnership comes amid growing scrutiny over AI firms’ use of copyrighted materials for training and output generation. Getty, which also owns the iStock and Unsplash platforms, previously sued Stability AI over alleged image scraping. Perplexity itself has faced multiple copyright lawsuits from publishers including Japan’s Nikkei and Asahi Shimbun but has since adopted a revenue-sharing model with media partners such as TIME and Der Spiegel.

Legal experts say AI licensing agreements like this one could reshape the industry by legitimizing data use, though they note that a full licensing model may not be viable for all online content. The move aligns with Getty’s broader effort to promote safe, rights-cleared visual generation in the AI era.

OpenAI to Give Content Owners Control Over Sora AI Videos, Plans Revenue Sharing Model

OpenAI is rolling out new tools to give content owners greater control over how their intellectual property is used in Sora, its recently launched AI video-generation app, and plans to introduce a revenue-sharing system for creators who opt in.

In a blog post on Friday, CEO Sam Altman said OpenAI will soon provide “more granular control over the generation of characters” within Sora, enabling rights holders such as film and television studios to decide how their characters can appear—or to block them entirely.

The move comes amid intensifying scrutiny of AI-generated content and growing concern across Hollywood and the creative industries about copyright infringement and the unauthorized replication of proprietary characters and likenesses.

Sora, launched this week as a standalone app in the United States and Canada, allows users to generate and share AI-created videos up to 10 seconds long. Its social-media-style interface quickly gained traction, with users producing clips based on both original and copyrighted material.

Altman acknowledged that the app’s rapid popularity — and the sheer volume of videos being created — has outpaced expectations, creating a need for clear rules and compensation mechanisms. “We’ll experiment with different approaches,” he wrote, adding that the revenue-sharing model would evolve through “trial and error” as OpenAI tests various systems within Sora before applying them to its broader suite of AI tools.

At least one major studio, Disney, has already opted out of allowing its characters to appear in Sora-generated videos, sources familiar with the matter told Reuters. Other studios are reportedly reviewing whether to participate under OpenAI’s forthcoming licensing framework.

The company’s initiative could mark a turning point in the relationship between AI firms and content owners, shifting from conflict to collaboration—if a viable monetization model can be found.

Backed by Microsoft, OpenAI’s expansion into multimodal AI via Sora places it in direct competition with Meta’s Vibes and Google’s text-to-video tools, as major tech firms race to define the future of synthetic media creation.

Still, the effort to give rights holders control over how their creations are used—and to share revenue from those uses—reflects a broader recognition that AI’s creative power must coexist with creator compensation and consent.