Articles

Meta Allegedly Explored Adding Facial Recognition Features to Its Smart Glasses

Meta has reportedly explored adding a facial recognition feature to its smart glasses that would let wearers identify people around them by scanning their faces. According to recent reports, the functionality would be opt-in for the glasses’ users; those being scanned, however, would have no way to opt out, raising privacy concerns. While the current Ray-Ban Meta AI glasses flash an LED light when the camera is active, it remains unclear whether the glasses would alert others when facial recognition is in use.

The feature, internally dubbed “super sensing,” is said to build on the existing Live AI capabilities of the Ray-Ban Meta AI glasses. Sources suggest Meta considered disabling the camera’s LED indicator during facial recognition scans, which would prevent people nearby from knowing when their faces are being scanned or identified. This raises questions about transparency and ethical use of such technology in everyday social situations.

Meta introduced the LED indicator to inform bystanders whenever the glasses’ camera was capturing photos or videos, aiming to maintain some level of privacy awareness. However, if the facial recognition feature bypasses this indicator, individuals around the wearer could be unknowingly identified. The ability to match faces to names instantly could have significant implications, both positive and negative, depending on how the technology is deployed and regulated.

Concerns about privacy are heightened by past incidents, such as a project developed by two Harvard students who created a system called I-XRAY. This system combined Meta’s smart glasses with large language models, facial recognition tools, and public databases to identify and locate their classmates without their consent. Such demonstrations highlight the potential risks associated with facial recognition on wearable devices, making Meta’s decisions on how to implement these features particularly critical.

UK Watchdog Fines OnlyFans $1.4 Million Over Age-Check Disclosure Failures

Britain’s media and telecommunications regulator, Ofcom, has fined OnlyFans £1.05 million ($1.4 million) for failing to accurately disclose information about its age-verification measures. The fine follows an investigation into the platform’s methods of checking users’ ages, specifically its use of third-party facial age estimation technology.

Investigation Findings

OnlyFans’ operator, Fenix International Limited, was found to have misrepresented the effectiveness of its age-assurance technology. The platform claimed that its facial age estimation system, which analyzes live selfies submitted by users, applied a “challenger age” threshold of 23 years — the estimated age below which a user must pass an additional check. In fact, the threshold was set at 20 years, a discrepancy that Fenix International itself reported to Ofcom last year.

After discovering the error, Fenix announced in January 2025 that it would raise the threshold to 23 years, but within a few days it lowered it again, to 21 years. Despite this correction, the failure to provide accurate and complete information led to the fine.
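The challenger-age mechanism described above can be sketched in a few lines. This is an illustrative model only — the function and constant names are hypothetical, not OnlyFans’ or any vendor’s actual API — but it shows why the threshold value matters: the gap between the challenger age and the legal minimum is the safety buffer against estimation error.

```python
# Hypothetical sketch of a "challenger age" check as described in the article.
# All names here are illustrative assumptions, not a real vendor API.

MINIMUM_AGE = 18     # legal access threshold
CHALLENGER_AGE = 23  # estimated ages below this trigger a secondary check

def needs_secondary_verification(estimated_age: float,
                                 challenger_age: int = CHALLENGER_AGE) -> bool:
    """Return True when a facial age estimate falls below the challenger
    threshold, so the user must pass a fallback check (e.g. an ID document).
    A higher challenger age gives a larger buffer against the estimator
    under-aging or over-aging a face near the legal minimum."""
    return estimated_age < challenger_age

# A user estimated at 20 is challenged under a 23-year threshold,
# but would have been waved through under the 20-year threshold
# that was actually in place -- the discrepancy Ofcom penalized.
print(needs_secondary_verification(20.0, challenger_age=23))
print(needs_secondary_verification(20.0, challenger_age=20))
```

The design point is that the challenger age is deliberately set above the legal minimum, so only users estimated comfortably over 18 skip the stricter check.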

Ofcom’s Role and Future Actions

Ofcom emphasized the importance of receiving accurate information to fulfill its regulatory responsibilities. Suzanne Cater, the enforcement director at Ofcom, stated, “We will hold platforms to high standards and will not hesitate to take enforcement action where we find failings.”

Although Ofcom closed its investigation into whether minors were accessing the platform, it continues to monitor the accuracy of the information provided by OnlyFans.

Platform’s Response

OnlyFans, which has over 300 million users and generates $1.3 billion in revenue, welcomed the conclusion of the investigation related to UK onboarding. A spokesperson for the platform acknowledged the importance of providing accurate and timely information to the regulator.

China Issues New Regulations on Facial Recognition Technology

China’s cyberspace regulator, the Cyberspace Administration of China (CAC), has introduced new regulations governing the use of facial recognition technology, emphasizing that individuals should not be compelled to use facial recognition for identity verification. The move comes in response to growing concerns about data privacy and the widespread deployment of this technology across various sectors.

The new rules, set to take effect in June, stipulate that individuals who do not consent to identity verification via facial recognition should be provided with alternative methods that are reasonable and convenient. This regulation aims to curb practices such as using facial recognition for tasks like hotel check-ins or accessing gated communities, which have become more common in recent years.

The CAC also stresses that companies collecting facial data must obtain explicit consent before processing any information. Although the regulations do not address the use of facial recognition by security authorities, they require that any area where the technology is deployed must display clear signage informing the public.

These regulations are part of broader efforts by China to balance the use of advanced technologies like AI and facial recognition with privacy concerns. Recent surveys have shown widespread public anxiety about the potential misuse of such technology. In response, previous legal measures like the Personal Information Protection Law, which came into effect in November 2021, have mandated stricter controls on the collection and use of personal data.