Articles

Meta partners with Arm to boost AI recommendations across Facebook and Instagram

Meta Platforms announced a new partnership with chip technology firm Arm Holdings to power the AI systems behind its personalization and recommendation engines across Facebook and Instagram. The collaboration marks another milestone for Arm as it pushes deeper into data center and AI computing — areas long dominated by Intel and AMD’s x86 architecture.

Meta will deploy Arm-based data center platforms to run the ranking and recommendation algorithms that determine what users see on its apps. Both companies said the shift will deliver higher performance and improved energy efficiency compared to traditional x86 systems.

Arm, backed by Japan’s SoftBank, provides the chip designs that serve as blueprints for central processing units (CPUs) used in billions of devices worldwide. While its technology already dominates smartphones, it is rapidly expanding into server and personal computer markets.

As part of the announcement, Meta revealed a $1.5 billion investment in a new Texas data center, its 29th facility globally, to support AI infrastructure growth. The two companies also said they have optimized Meta’s AI software for Arm chips and made the improvements open source, allowing developers to freely use and build upon them — a move expected to speed up Arm’s adoption in cloud computing.

Meta and Arm plan to continue refining their joint open-source projects to make AI workloads more efficient and accessible across the industry.

Meta introduces PG-13-style filters on Instagram to protect teen users

Meta Platforms has unveiled new PG-13-style content filters on Instagram, limiting what users under 18 can see as part of a broader effort to strengthen teen safety online. The update, modeled after the Motion Picture Association’s movie ratings, will automatically restrict access to posts featuring strong language, risky stunts, drug references, or other mature content, Meta said on Tuesday.

The new rules also extend to Meta’s generative AI tools, which will now be subject to similar content guidelines. Teen accounts will be automatically placed under PG-13 settings, though parents can apply stricter limits and adjust screen-time controls using a “limited content” mode.

The move comes amid growing criticism and legal scrutiny over Meta’s handling of youth safety. The company faces hundreds of lawsuits from parents and school districts accusing it of enabling addictive behavior and exposing minors to harmful material.

An earlier Reuters investigation revealed that some of Meta’s existing safety measures were ineffective or inconsistently enforced, while advocacy groups accused Instagram of failing to protect teens from psychological harm.

“We hope this update reassures parents,” Meta said in a blog post. “We know teens may try to avoid these restrictions, which is why we’ll use age prediction technology to ensure appropriate protections even when users misreport their age.”

The new safeguards will roll out in the U.S., UK, Australia, and Canada by year-end and will later expand globally. Meta said similar protections will soon be added to Facebook as regulators tighten oversight of social media and AI systems interacting with minors.

Brazilian police bust deepfake scam using Gisele Bündchen’s image in Instagram ads

Brazilian authorities have dismantled a nationwide fraud network that used deepfake videos of supermodel Gisele Bündchen and other celebrities in Instagram ads to trick victims into buying fake products, marking one of the country’s first major crackdowns on AI-powered online scams.

Police arrested four suspects this week and froze assets across five states, after investigators traced more than 20 million reais ($3.9 million) in suspicious transactions uncovered by Brazil’s anti–money laundering agency COAF.

The investigation began in August 2024, when a victim reported being deceived by an Instagram ad showing an AI-generated video of Bündchen promoting a nonexistent skincare product. Another fraudulent campaign featured the supermodel supposedly offering free suitcases, with users asked to pay only for shipping—items that never arrived.

According to Eibert Moreira Neto, head of the cybercrime unit in Rio Grande do Sul, the group created a “series of scams” using deepfakes of multiple celebrities and fake betting platforms. Investigators believe the criminals operated at mass scale, collecting many small payments—usually under 100 reais ($19)—from victims who rarely reported the losses.

“That created a perverse situation,” explained investigator Isadora Galian. “The criminals enjoyed a kind of statistical immunity—they knew most people would not complain, so they operated without fear.”

Meta, owner of Instagram, said its policies ban ads that deceptively use public figures and that such content is removed “when detected.” The company added that it uses AI-based detection systems, trained review teams, and reporting tools to fight celebrity-impersonation scams.

A spokesperson for Bündchen’s team urged consumers to verify suspicious offers, avoid ads promising unrealistic discounts or giveaways, and report fraudulent content to authorities or official brand channels.

The case has broader implications for Brazil’s fight against digital deception. In June 2024, the Supreme Court ruled that social media platforms can be held liable for criminal ads if they fail to remove them swiftly—even without a court order.

The Rio Grande do Sul operation underscores the growing criminal use of deepfake technology, which allows scammers to replicate celebrity likenesses with stunning realism. What once required Hollywood budgets can now be done with cheap AI tools and a few clicks—a reality that’s forcing regulators, platforms, and the public to confront a new era of synthetic fraud.