Articles

UK Regulator Ofcom Investigates 4chan and Others for Possible Online Safety Breaches

Britain’s media regulator Ofcom launched nine investigations on Tuesday targeting potential violations of the country’s Online Safety Act, including probes into the internet message board 4chan and several file-sharing platforms.

The Online Safety Act, enacted in 2023, imposes stringent requirements on digital platforms to curb criminal activity, with a strong focus on protecting children and eliminating illegal content.

Ofcom received complaints regarding potentially illegal material on 4chan, and separately concerning the sharing of child sexual abuse content on seven file-sharing services. The regulator is examining whether these platforms failed to implement adequate safety measures, respond properly to statutory information requests, or maintain accurate risk assessment records.

Attempts to contact 4chan for comment were unsuccessful.

Under the law, Ofcom can require platforms to take corrective action or impose fines of up to £18 million (about $24.28 million) or 10% of qualifying global revenue, whichever is higher.

In a related investigation, Ofcom is also assessing whether adult content provider First Time Videos has sufficient age verification controls to protect minors.

Google Reports 250 Complaints Over AI-Generated Deepfake Terrorism Content to Australian Regulator

Google has informed Australian regulators that it received over 250 complaints globally between April 2023 and February 2024, indicating that its AI technology, specifically the Gemini model, was being used to create deepfake terrorism content. Additionally, the company reported dozens of complaints regarding the use of Gemini to generate child abuse material, according to the Australian eSafety Commission.

Under Australian law, tech companies are required to periodically report their harm minimization efforts to the eSafety Commission, or risk facing fines. This reporting period marks the first disclosure of such data, which regulators have described as a “world-first insight” into how AI is being exploited for harmful and illegal purposes.

The Australian eSafety Commission emphasized the importance of companies developing AI products to implement safeguards to prevent the generation of harmful material. eSafety Commissioner Julie Inman Grant stated that the findings highlight the critical need for effective protective measures.

According to Google’s report, it received 258 user complaints about AI-generated deepfake terrorist or extremist content created with Gemini, along with 86 reports concerning AI-generated child exploitation or abuse material. However, the company did not specify how many of these complaints were verified.

A Google spokesperson confirmed that the company does not allow the generation or distribution of illegal content, including material related to terrorism, child exploitation, or other abuses. Google also noted that the number of reports provided to eSafety represents the total global volume of complaints, not confirmed policy violations.

While Google uses a system called “hash-matching” to identify and remove child abuse content generated with Gemini, the company did not apply the same system to detect terrorist or extremist material. This lack of an equivalent safeguard for violent content has raised concerns among regulators.

The Australian eSafety Commission has previously fined Telegram and Twitter (now X) for their inadequate reporting practices, with X losing an appeal over a fine of A$610,500 ($382,000). Telegram is also preparing to challenge its fine.

Indonesia to Implement Child Protection Guidelines for Social Media Ahead of Age-Limit Law

Indonesia is taking steps to enhance child protection on social media platforms while the government works on creating a law to set a minimum age for users. This move follows discussions between communications minister Meutya Hafid and President Prabowo Subianto about safeguarding children online. The country will impose interim regulations requiring social media companies to follow child protection guidelines, focusing on preventing physical, mental, or moral harm to minors.

The government’s action follows a similar measure in Australia, which bans children under 16 from accessing social media platforms and penalizes tech giants such as Meta and TikTok if they fail to enforce the rule. While Indonesia works toward formalizing its own law, senior communications ministry official Alexander Sabar emphasized that the new guidelines would not completely restrict children’s access to social media, but rather aim to protect them from harmful content.

Meta and TikTok have yet to respond to requests for comment on the matter. Local parents, like Nurmayanti, have expressed support for measures to protect children from inappropriate content. However, Anis Hidayah, a commissioner with Indonesia’s human rights body, cautioned that while child protection is critical, the government must balance the measures with children’s right to access information. Surveys show nearly half of children under 12 in Indonesia use social media platforms like Facebook, Instagram, and TikTok.