Brazilian Police Bust Deepfake Scam Using Gisele Bündchen’s Image in Instagram Ads

Brazilian authorities have dismantled a nationwide fraud network that used deepfake videos of supermodel Gisele Bündchen and other celebrities in Instagram ads to trick victims into buying fake products, marking one of the country’s first major crackdowns on AI-powered online scams.

Police arrested four suspects this week and froze assets across five states, after investigators traced more than 20 million reais ($3.9 million) in suspicious transactions uncovered by Brazil’s anti–money laundering agency COAF.

The investigation began in August 2024, when a victim reported being deceived by an Instagram ad showing an AI-generated video of Bündchen promoting a nonexistent skincare product. Another fraudulent campaign featured the supermodel supposedly offering free suitcases, with users asked to pay only for shipping—items that never arrived.

According to Eibert Moreira Neto, head of the cybercrime unit in Rio Grande do Sul, the group created a “series of scams” using deepfakes of multiple celebrities and fake betting platforms. Investigators believe the criminals operated at mass scale, collecting many small payments—usually under 100 reais ($19)—from victims who rarely reported the losses.

“That created a perverse situation,” explained investigator Isadora Galian. “The criminals enjoyed a kind of statistical immunity—they knew most people would not complain, so they operated without fear.”

Meta, owner of Instagram, said its policies ban ads that deceptively use public figures and that such content is removed “when detected.” The company added that it uses AI-based detection systems, trained review teams, and reporting tools to fight celebrity-impersonation scams.

A spokesperson for Bündchen’s team urged consumers to verify suspicious offers, avoid ads promising unrealistic discounts or giveaways, and report fraudulent content to authorities or official brand channels.

The case has broader implications for Brazil’s fight against digital deception. In June 2024, the Supreme Court ruled that social media platforms can be held liable for criminal ads if they fail to remove them swiftly—even without a court order.

The Rio Grande do Sul operation underscores the growing criminal use of deepfake technology, which allows scammers to replicate celebrity likenesses with stunning realism. What once required Hollywood budgets can now be done with cheap AI tools and a few clicks—a reality that’s forcing regulators, platforms, and the public to confront a new era of synthetic fraud.

UK Introduces AI-Driven Child Abuse Material Offenses

The United Kingdom has announced it will make it illegal to use artificial intelligence (AI) tools to create child sexual abuse material, becoming the first country to introduce such AI-specific offenses. The new legislation is part of a broader effort to address the rising concern of online criminals using AI to create explicit images of children. Under current law in England and Wales, possessing, making, showing, or distributing explicit images of children is already a criminal act, but the new offenses will specifically target the use of AI tools to manipulate real-life images of children.

The move comes as reports of AI-generated child abuse material have surged nearly five-fold in 2024, according to the Internet Watch Foundation. “We know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person,” said Britain’s interior minister Yvette Cooper. She emphasized the importance of tackling both online and offline child sexual abuse to better protect the public from emerging threats.

In addition to AI-generated content, predators are also using AI tools to create fake images for blackmail, coercing children into further abuse, such as through live streaming. The new legislation will criminalize the possession, creation, or distribution of AI tools designed to produce child sexual abuse material, as well as the possession of “paedophile manuals” that provide instructions on using such technologies.

A further offense will target the operators of websites that distribute such harmful content, and authorities will be empowered to unlock and inspect digital devices involved in these crimes. These measures will be incorporated into the Crime and Policing Bill when it is introduced in parliament. Earlier this month, the UK also announced plans to make the creation and sharing of AI-generated “deepfake” content, including videos, pictures, and audio clips that are sexually explicit, a criminal offense.

Deepfake Video of Solana Co-Founder Surfaces Online, Prompting Crypto Users on Twitter to Urge Big Tech to Act

Solana has reported the incident to law enforcement authorities and clarified that the platform itself lacks the capability to remove the deepfake video from the web.