Big Tech Challenges YouTube’s Exemption from Australia’s Ban on Social Media for Children

Tech giants including Meta Platforms (owner of Facebook and Instagram), Snapchat, and TikTok have voiced strong opposition to Australia’s decision to grant YouTube an exemption from its new law banning social media access for children under the age of 16. The landmark legislation, which was passed by the Australian parliament in November, sets some of the most stringent social media regulations globally. The law requires platforms to prevent minors from logging in to their services or face hefty fines of up to AUD 49.5 million (approximately $31 million or Rs. 269 crore).

Under the current provisions, YouTube is the only platform exempt from the age restriction, owing to its status as an educational tool. The government considers the platform essential for learning and will allow children to access it through family accounts with parental supervision features. While YouTube maintains that it offers safeguards for young users, such as restricted access to certain content through Family Link, critics argue that the platform still exposes children to the same risks the government cited in the new law: algorithmic content recommendations, social interactions, and potential exposure to harmful or inappropriate material.

Meta has voiced concerns about the YouTube exemption, stating that even children using YouTube under family accounts are still subjected to many of the features that the government’s legislation seeks to control. In a blog post, the company argued that YouTube’s exemption contradicts the reasons for implementing the law in the first place. The tech giant called on the Australian government to apply the law equally across all social media platforms, ensuring that YouTube does not receive preferential treatment in this regard.

TikTok, too, has raised objections to the exemption, calling it “illogical, anticompetitive, and short-sighted.” The company submitted a statement urging the government to maintain consistency in enforcing the law across all platforms. TikTok argued that creating exceptions for specific platforms like YouTube undermines the integrity of the legislation, potentially giving one company an unfair advantage over others in terms of user access and content exposure. As the law’s implementation deadline approaches, the debate over YouTube’s exemption continues to stir tensions within the tech industry.

Google Reports 250 Complaints Over AI-Generated Deepfake Terrorism Content to Australian Regulator

Google has informed Australian regulators that it received more than 250 complaints globally between April 2023 and February 2024 alleging that its AI technology, specifically the Gemini model, was used to create deepfake terrorism content. The company also reported dozens of complaints regarding the use of Gemini to generate child abuse material, according to the Australian eSafety Commission.

Under Australian law, tech companies are required to periodically report their harm minimization efforts to the eSafety Commission, or risk facing fines. This reporting period marks the first disclosure of such data, which regulators have described as a “world-first insight” into how AI is being exploited for harmful and illegal purposes.

The Australian eSafety Commission emphasized the importance of companies developing AI products to implement safeguards to prevent the generation of harmful material. eSafety Commissioner Julie Inman Grant stated that the findings highlight the critical need for effective protective measures.

According to Google’s report, it received 258 user complaints about AI-generated deepfake terrorist or extremist content created with Gemini, along with 86 reports concerning AI-generated child exploitation or abuse material. However, the company did not specify how many of these complaints were verified.

A Google spokesperson confirmed that the company does not allow the generation or distribution of illegal content, including material related to terrorism, child exploitation, or other abuses. Google also noted that the number of reports provided to eSafety represents the total global volume of complaints, not confirmed policy violations.

While Google uses a system called hash-matching to identify and remove child abuse content generated with Gemini, the company did not apply a similar system to detect terrorist or extremist material. This lack of an equivalent safeguard for violent content has raised concerns among regulators.

The Australian eSafety Commission has previously fined Telegram and Twitter (now X) for their inadequate reporting practices, with X losing an appeal over a fine of A$610,500 ($382,000). Telegram is also preparing to challenge its fine.

US, UK, and Australia Target Russia-Based Zservers Over LockBit Ransomware Attacks

The United States, joined by the United Kingdom and Australia, has taken coordinated action against Zservers, a Russia-based service provider linked to supporting the notorious LockBit ransomware attacks. The U.S. Department of the Treasury announced the sanctions on Tuesday, highlighting national security concerns related to ransomware operations.

Designations and Actions:

The U.S. Treasury’s Office of Foreign Assets Control (OFAC) added two Russian nationals to its sanctions list, accusing them of being key administrators for Zservers, a company that provides bulletproof hosting (BPH) services commonly used by cybercriminals. These services enable cyber actors, including ransomware groups, to carry out attacks on critical infrastructure both in the U.S. and internationally.

Bradley Smith, Acting Under Secretary of the Treasury for Terrorism and Financial Intelligence, emphasized that third-party providers like Zservers play a crucial role in facilitating the operations of cybercriminals, including those behind LockBit attacks.

Broader Context:

This move is part of a broader effort to combat cybercrime, following similar actions last year that saw joint sanctions from the U.S., UK, and Australia against the Evil Corp ransomware group. The sanctions are aimed at disrupting the infrastructure that supports cybercriminal activities globally.