Articles

Google Reports 250 Complaints Over AI-Generated Deepfake Terrorism Content to Australian Regulator

Google has informed Australian regulators that it received more than 250 complaints globally between April 2023 and February 2024 alleging that its AI technology, specifically the Gemini model, had been used to create deepfake terrorism content. The company also reported dozens of complaints that Gemini had been used to generate child abuse material, according to the Australian eSafety Commission.

Under Australian law, tech companies are required to periodically report their harm minimization efforts to the eSafety Commission, or risk facing fines. This reporting period marks the first disclosure of such data, which regulators have described as a “world-first insight” into how AI is being exploited for harmful and illegal purposes.

The Australian eSafety Commission emphasized the importance of companies developing AI products to implement safeguards to prevent the generation of harmful material. eSafety Commissioner Julie Inman Grant stated that the findings highlight the critical need for effective protective measures.

According to Google’s report, it received 258 user complaints about AI-generated deepfake terrorist or extremist content created with Gemini, along with 86 reports concerning AI-generated child exploitation or abuse material. However, the company did not specify how many of these complaints were verified.

A Google spokesperson confirmed that the company does not allow the generation or distribution of illegal content, including material related to terrorism, child exploitation, or other abuses. Google also noted that the number of reports provided to eSafety represents the total global volume of complaints, not confirmed policy violations.

While Google uses a system called "hash-matching," which flags known abusive imagery by comparing uploads against a database of digital fingerprints, to identify and remove child abuse content generated with Gemini, the company did not apply the same system to detect terrorist or extremist material. This lack of a similar safeguard for violent content has raised concerns among regulators.
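The hash-matching approach described above can be sketched in a few lines. This is a hypothetical illustration, not Google's actual system: real deployments rely on curated industry hash databases and typically use perceptual hashes that survive re-encoding, whereas the sketch below uses a plain cryptographic digest and placeholder data.

```python
import hashlib

# Placeholder digest standing in for a curated database of known-harmful
# content fingerprints (hypothetical data, for illustration only).
KNOWN_HARMFUL_HASHES = {
    hashlib.sha256(b"known-harmful-sample").hexdigest(),
}

def is_known_harmful(content: bytes) -> bool:
    """Return True if the content's digest matches a known-harmful hash."""
    digest = hashlib.sha256(content).hexdigest()
    return digest in KNOWN_HARMFUL_HASHES

print(is_known_harmful(b"known-harmful-sample"))  # matches the placeholder
print(is_known_harmful(b"some benign content"))   # no match
```

Note the inherent limitation this implies: hash-matching can only catch material that is already catalogued, which is one reason it is less suited to detecting newly generated AI content.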

The Australian eSafety Commission has previously fined Telegram and Twitter (now X) for their inadequate reporting practices, with X losing an appeal over a fine of A$610,500 ($382,000). Telegram is also preparing to challenge its fine.

U.S. Designates Extreme Right-Wing “Terrorgram” Network as Terrorist Group

The U.S. government has taken a significant step against extremist online networks by designating the "Terrorgram" collective as a terrorist group. The move, announced on Monday, is accompanied by sanctions against the group and several of its members, whom the government accuses of promoting violent white supremacy.

The U.S. State Department officially labeled the group, which primarily operates on the Telegram platform, as a Specially Designated Global Terrorist organization. In addition to the group, three of its leaders—located in Brazil, Croatia, and South Africa—were also sanctioned. These measures freeze any assets linked to the group in the U.S. and prohibit American individuals from engaging with them.

The State Department detailed that “Terrorgram” has been responsible for motivating and facilitating violent attacks, including a 2022 shooting outside an LGBTQ bar in Slovakia, a planned 2024 attack on energy facilities in New Jersey, and an August knife attack at a mosque in Turkey. The group is known for its promotion of violent white supremacist ideologies, inciting violence against perceived enemies, and providing guidance on attack methods and targets. These include critical infrastructure and government officials, as well as marginalized communities such as Black, Jewish, LGBTQ individuals, and immigrants.

In response, Telegram stated that it has a zero-tolerance policy for calls to violence and noted that it had previously removed several channels associated with “Terrorgram.” The platform emphasized that any similar content is swiftly banned upon detection.

Earlier this year, U.S. prosecutors charged two individuals linked to “Terrorgram,” accusing them of using the Telegram platform to incite a race war by soliciting attacks against various minority groups. The United Kingdom also moved to classify the “Terrorgram” collective as a terrorist organization in April, making it illegal to belong to or promote the group in the country.

This designation is part of a broader effort by U.S. President Joe Biden to combat domestic terrorism, particularly white supremacy. The Biden administration’s 2021 National Strategy for Countering Domestic Terrorism included measures to identify and prosecute such threats while also creating deterrents to prevent U.S. citizens from joining dangerous extremist groups.

Malaysia Grants Licences to WeChat and TikTok Under New Social Media Law

Malaysia’s communications regulator has granted licences to WeChat and TikTok to operate under the country’s new social media law, which aims to combat rising cybercrime. The law, which took effect on January 1, mandates that social media platforms and messaging services with more than 8 million users in Malaysia must obtain a licence, or face legal action.

The Malaysian Communications and Multimedia Commission (MCMC) announced on Wednesday that Tencent’s WeChat and ByteDance’s TikTok have been granted their licences. Messaging platform Telegram is in the final stages of the application process, while Meta Platforms, which owns Facebook, Instagram, and WhatsApp, has begun the licensing procedure.

However, some platforms have not applied for the licence. X (formerly Twitter) has not submitted an application, stating that its local user base does not exceed the 8 million threshold. The regulator is currently reviewing the validity of this claim. Additionally, Alphabet’s Google, which operates YouTube, has not applied for a licence either, citing concerns about YouTube’s video-sharing features and how they relate to the new law. The MCMC has indicated that YouTube must still comply with the licensing requirements.

The law requires platforms to adhere to guidelines to curb harmful content, including online gambling, scams, child pornography, cyberbullying, and offensive content related to race, religion, and royalty. Malaysia has seen an uptick in harmful social media content in early 2024, prompting authorities to urge platforms like Meta and TikTok to enhance their monitoring efforts.

While companies do not disclose their user numbers per country, independent data suggests WeChat has 12 million users in Malaysia, while TikTok has around 28.68 million users aged 18 and above. Facebook has 22.35 million users, YouTube has 24.1 million users, and X has 5.71 million users in the country.