Articles

OpenAI to Give Content Owners Control Over Sora AI Videos, Plans Revenue Sharing Model

OpenAI is rolling out new tools to give content owners greater control over how their intellectual property is used in Sora, its recently launched AI video-generation app, and plans to introduce a revenue-sharing system for creators who opt in.

In a blog post on Friday, CEO Sam Altman said OpenAI will soon provide “more granular control over the generation of characters” within Sora, enabling rights holders such as film and television studios to decide how their characters can appear—or to block them entirely.

The move comes amid intensifying scrutiny of AI-generated content and growing concern across Hollywood and the creative industries about copyright infringement and the unauthorized replication of proprietary characters and likenesses.

Sora, launched this week as a standalone app in the United States and Canada, allows users to generate and share AI-created videos up to 10 seconds long. Its social-media-style interface quickly gained traction, with users producing clips based on both original and copyrighted material.

Altman acknowledged that the app’s rapid popularity—and the sheer volume of video creation—has outpaced expectations, creating a need for clear rules and compensation mechanisms. “We’ll experiment with different approaches,” he wrote, adding that the revenue-sharing model would evolve through “trial and error” as OpenAI tests various systems within Sora before applying them to its broader suite of AI tools.

At least one major studio, Disney, has already opted out of allowing its characters to appear in Sora-generated videos, sources familiar with the matter told Reuters. Other studios are reportedly reviewing whether to participate under OpenAI’s forthcoming licensing framework.

The company’s initiative could mark a turning point in the relationship between AI firms and content owners, shifting from conflict to collaboration—if a viable monetization model can be found.

Backed by Microsoft, OpenAI is expanding into multimodal AI via Sora, placing it in direct competition with Meta’s Vibes and Google’s text-to-video tools as major tech firms race to define the future of synthetic media creation.

Still, the effort to give rights holders control over how their creations are used—and to share revenue from those uses—reflects a broader recognition that AI’s creative power must coexist with creator compensation and consent.

Apple Pulls ICE-Tracking Apps After Trump Administration Pressure, Sparking Free Speech Debate

Apple has removed ICEBlock and several similar apps from its App Store following direct contact from President Donald Trump’s administration, marking a rare case of U.S. federal intervention in app moderation. The apps, which alert users to the presence of Immigration and Customs Enforcement (ICE) agents, were accused by the Justice Department of potentially endangering law enforcement officers.

Alphabet’s Google also removed related apps on Thursday, citing policy violations, but said it had not been contacted by federal authorities before taking action.

In an emailed statement, Apple confirmed: “Based on information we’ve received from law enforcement about the safety risks associated with ICEBlock, we have removed it and similar apps from the App Store.” The Justice Department later verified that it had formally reached out to Apple, which complied with the request.

Attorney General Pam Bondi praised the removal, calling ICEBlock “a tool designed to put ICE agents at risk just for doing their jobs.” She added, “Violence against law enforcement is an intolerable red line that cannot be crossed.”

Joshua Aaron, the Texas-based developer of ICEBlock, denied those allegations, accusing Apple of “capitulating to an authoritarian regime.” He told Reuters his legal team is considering next steps, arguing that “civilian surveillance of federal agents is a matter of public interest and protected speech.”

Civil liberties experts note that courts have long upheld the right to record and track law enforcement activities in public spaces, as long as those efforts do not obstruct official duties. Six legal scholars told Reuters that surveillance of ICE operations is “largely protected under the U.S. Constitution.”

The crackdown comes amid renewed immigration raids and the expansion of ICE’s enforcement powers under Trump’s second term, backed by $75 billion in funding through 2029. The administration has also targeted visa holders and lawful residents over political activism, particularly pro-Palestinian advocacy, heightening tensions around civil monitoring of ICE activity.

The removal has drawn attention to Apple’s growing compliance with government takedown requests. In 2024 alone, Apple removed over 1,700 apps globally following such demands — most originating from China (1,300+), Russia (171), and South Korea (79). Until now, the United States had not appeared on that list, according to Apple’s transparency reports.

Critics argue the move sets a troubling precedent for state influence over digital speech. “This decision signals a chilling alignment between Big Tech and political power,” said one digital rights advocate. Others suggest Apple’s economic vulnerability—given that most iPhones are manufactured in China and subject to U.S. tariff pressures—may make the company more susceptible to government demands.

Apple removes tens of thousands of apps annually for reasons ranging from fraud to intellectual property violations, but politically motivated removals remain rare. Whether ICEBlock’s disappearance marks a one-time compliance case or a shift in tech–state relations could define the next chapter of America’s digital free speech debate.

Brazilian Police Bust Deepfake Scam Using Gisele Bündchen’s Image in Instagram Ads

Brazilian authorities have dismantled a nationwide fraud network that used deepfake videos of supermodel Gisele Bündchen and other celebrities in Instagram ads to trick victims into buying fake products, marking one of the country’s first major crackdowns on AI-powered online scams.

Police arrested four suspects this week and froze assets across five states, after investigators traced more than 20 million reais ($3.9 million) in suspicious transactions uncovered by Brazil’s anti–money laundering agency COAF.

The investigation began in August 2024, when a victim reported being deceived by an Instagram ad showing an AI-generated video of Bündchen promoting a nonexistent skincare product. Another fraudulent campaign featured the supermodel supposedly offering free suitcases, with users asked to pay only for shipping—items that never arrived.

According to Eibert Moreira Neto, head of the cybercrime unit in Rio Grande do Sul, the group created a “series of scams” using deepfakes of multiple celebrities and fake betting platforms. Investigators believe the criminals operated at massive scale, collecting many small payments, usually under 100 reais ($19), from victims who rarely reported the losses.

“That created a perverse situation,” explained investigator Isadora Galian. “The criminals enjoyed a kind of statistical immunity—they knew most people would not complain, so they operated without fear.”

Meta, owner of Instagram, said its policies ban ads that deceptively use public figures and that such content is removed “when detected.” The company added that it uses AI-based detection systems, trained review teams, and reporting tools to fight celebrity-impersonation scams.

A spokesperson for Bündchen’s team urged consumers to verify suspicious offers, avoid ads promising unrealistic discounts or giveaways, and report fraudulent content to authorities or official brand channels.

The case has broader implications for Brazil’s fight against digital deception. In June 2024, the Supreme Court ruled that social media platforms can be held liable for criminal ads if they fail to remove them swiftly—even without a court order.

The Rio Grande do Sul operation underscores the growing criminal use of deepfake technology, which allows scammers to replicate celebrity likenesses with stunning realism. What once required Hollywood budgets can now be done with cheap AI tools and a few clicks—a reality that’s forcing regulators, platforms, and the public to confront a new era of synthetic fraud.