Report Claims Meta Earned $16 Billion in 2024 from Fraudulent Ads on Facebook and Instagram
A new report has alleged that Meta Platforms — the parent company of Facebook, Instagram, and WhatsApp — earned a significant portion of its 2024 revenue from fraudulent and prohibited advertisements. According to internal projections, about 10.1 percent of Meta’s total revenue for the year reportedly came from ads linked to scams and banned goods. The findings suggest that certain internal practices and oversight failures allowed these fraudulent ads to remain active on its platforms, despite clear violations of company policy and advertising regulations.

Citing internal company documents, Reuters reported that Meta failed to effectively detect or block deceptive advertising for a range of illegal or misleading products and services. These included fake e-commerce listings, fraudulent investment schemes, unlicensed online casinos, and even banned medical products. The issue reportedly persisted for at least three years across Meta’s major apps — Facebook, Instagram, and WhatsApp — raising concerns about the company’s ad moderation and accountability practices.

The internal projections also claimed that around $16 billion (approximately ₹1.41 lakh crore) of Meta’s total 2024 revenue stemmed from these fraudulent ad sources. The report further alleged that Meta was hesitant to remove or suspend accounts, even those identified internally as “the scammiest scammers.” Executives reportedly feared that taking strict action against these advertisers would lead to a noticeable decline in ad revenue, which could in turn impact the company’s heavy investments in artificial intelligence (AI) development and infrastructure.

These revelations have sparked fresh debate about Meta’s commitment to user safety and transparency in digital advertising. Critics argue that prioritizing profits over consumer protection undermines trust in its platforms, especially as users increasingly encounter scams disguised as legitimate promotions. While Meta has yet to issue a detailed public response to these allegations, the report adds pressure on the company to tighten its ad screening processes and demonstrate stronger ethical oversight in its rapidly expanding AI-driven advertising ecosystem.

EU Considers Pausing Parts of Landmark AI Act Amid Pressure from U.S. and Big Tech

The European Commission is considering pausing parts of its landmark Artificial Intelligence Act, following growing pressure from U.S. officials and major tech companies such as Meta and Alphabet, the Financial Times reported on Friday.

According to the report, the move comes after months of lobbying from Silicon Valley giants and warnings from the Trump administration that strict EU regulations could strain transatlantic trade relations.

A senior EU official told the FT that Brussels has been “engaging” with Washington on potential adjustments to the AI Act and related digital regulations as part of a broader simplification package, which is expected to be adopted on November 19.

The AI Act, which became law in August 2024, is the world’s first comprehensive framework to regulate artificial intelligence technologies. It categorizes AI systems by risk level — from minimal to unacceptable — and imposes restrictions on areas like facial recognition, biometric surveillance, and generative AI transparency.

While a European Commission spokesperson had previously dismissed calls for delays, officials are now reportedly weighing temporary pauses for specific provisions, particularly those affecting companies developing large AI models.

An EU spokesperson told the FT that “various options” are being discussed but emphasized that the bloc remains “fully behind the AI Act and its objectives.”

The proposal reflects Europe’s balancing act between maintaining AI safety and innovation leadership while addressing geopolitical and trade pressures from the United States and industry stakeholders.

Motion Picture Association Demands Meta Drop “PG-13” Label from Instagram Teen Filters

The Motion Picture Association (MPA) has issued a cease-and-desist letter to Meta, accusing the social media giant of misleadingly using the film industry’s “PG-13” rating in its new content filters for teen users on Instagram. The group said Meta’s claim that its filters are modeled on the movie rating system is “literally false and highly misleading.”

Meta announced last month that it would restrict what users under 18 see on Instagram by applying filters “inspired by the PG-13 rating system.” The MPA, however, says the comparison is inappropriate, emphasizing that its rating process involves a curated, consensus-driven assessment by human reviewers — not automated algorithms.

In an October 28 letter to Meta Chief Legal Officer Jennifer Newstead, the MPA demanded that the company immediately stop using the “PG-13” mark and disassociate its Teen Accounts and AI moderation tools from the film rating system, warning that unauthorized use could undermine public trust in movie ratings. The association asked Meta to resolve the issue by November 3.

A Meta spokesperson said the company had no intention of implying a partnership with the MPA and hopes to “work constructively” with the association to address concerns. Meta said the filter initiative was designed to give parents greater control over what teenagers see on its platforms.

The dispute comes as Meta faces growing scrutiny from regulators and advocacy groups over the safety of its younger users. The company has also faced lawsuits alleging that its social platforms expose minors to harmful content.