Articles

Google Reports 250 Complaints Over AI-Generated Deepfake Terrorism Content to Australian Regulator

Google has informed Australian regulators that it received more than 250 complaints globally between April 2023 and February 2024 alleging that its AI technology, specifically the Gemini model, was being used to create deepfake terrorism content. The company also reported dozens of complaints about Gemini being used to generate child abuse material, according to the Australian eSafety Commission.

Under Australian law, tech companies are required to periodically report their harm minimization efforts to the eSafety Commission, or risk facing fines. This reporting period marks the first disclosure of such data, which regulators have described as a “world-first insight” into how AI is being exploited for harmful and illegal purposes.

The Australian eSafety Commission emphasized the importance of companies developing AI products to implement safeguards to prevent the generation of harmful material. eSafety Commissioner Julie Inman Grant stated that the findings highlight the critical need for effective protective measures.

According to Google’s report, it received 258 user complaints about AI-generated deepfake terrorist or extremist content created with Gemini, along with 86 reports concerning AI-generated child exploitation or abuse material. However, the company did not specify how many of these complaints were verified.

A Google spokesperson confirmed that the company does not allow the generation or distribution of illegal content, including material related to terrorism, child exploitation, or other abuses. Google also noted that the number of reports provided to eSafety represents the total global volume of complaints, not confirmed policy violations.

While Google uses a system called hash-matching to identify and remove child abuse content generated with Gemini, the company did not apply the same system to detect terrorist or extremist material. This lack of a comparable safeguard for violent content has raised concerns among regulators.

The Australian eSafety Commission has previously fined Telegram and Twitter (now X) for their inadequate reporting practices, with X losing an appeal over a fine of A$610,500 ($382,000). Telegram is also preparing to challenge its fine.

Paris AI Summit: France and EU Commit to Easing AI Regulations

At the Paris AI Summit on Monday, French President Emmanuel Macron announced that Europe will scale back regulations to foster the growth of artificial intelligence, with a focus on making the EU more attractive for tech investments. Macron urged the EU to adopt a simplified, business-friendly approach to AI regulation, citing the successful reconstruction of Notre-Dame as an example of how flexible rules can speed up processes.

Henna Virkkunen, the EU’s digital chief, echoed this sentiment, promising to reduce bureaucratic hurdles and implement regulations that support innovation. Macron emphasized the need for Europe to align with global standards, especially as the U.S. under President Donald Trump has rolled back AI regulations to enhance its tech competitiveness.

At the summit, major tech leaders, including Alphabet CEO Sundar Pichai, voiced support for a more streamlined regulatory approach. Pichai highlighted the importance of fostering ecosystems of AI innovation, particularly in places like France.

The European Commission has already passed the AI Act, the world’s first comprehensive AI regulation, but Virkkunen acknowledged the need to review and simplify existing rules to reduce overlapping regulations. On the investment front, Macron announced €109 billion ($113 billion) in private sector funding for AI in France, with projects including new data centers and backing for AI companies such as the startup Mistral.

A key outcome of the summit was the launch of Current AI, a collaborative initiative backed by France, Germany, Google, and Salesforce, aimed at making high-quality AI data available and promoting open-source tools. The initiative starts with $400 million in funding, with a goal of reaching $2.5 billion over five years.

However, not all attendees agreed with easing AI regulations. Some raised concerns that existing protections could be weakened, particularly under pressure from the U.S., and warned of the potential harm to workers displaced by AI. Labour leaders cautioned about the risks of job losses and called for adequate protections.

Europe’s Privacy Watchdogs to Discuss DeepSeek Amid Data Privacy Concerns

European Union data protection authorities are set to discuss concerns surrounding the Chinese artificial intelligence startup DeepSeek during their monthly meeting on Tuesday, according to the meeting agenda. The discussions arise amid growing scrutiny of how DeepSeek handles personal data, especially regarding European users.

DeepSeek made waves globally last month by showcasing its ability to compete with major U.S. tech firms in human-like reasoning technology, while offering services at a significantly lower cost. However, concerns have been raised by several European privacy regulators about whether the company is using personal data from European citizens to train its AI models and if such data could be transferred to China.

The European Data Protection Board (EDPB), based in Brussels, has scheduled a session to address DeepSeek’s activities. During the meeting, national data protection authorities will share information on the actions they’ve taken in response to DeepSeek’s operations. Marie-Laure Denis, president of the French privacy watchdog CNIL, emphasized that the goal of the meeting is to harmonize responses and share insights on how to address privacy risks posed by the company.

The CNIL confirmed that it had reached out to DeepSeek for clarification on how the company’s AI system operates and whether it poses any privacy risks for users. Ireland’s data protection authority has also sought further information from the Chinese startup. Meanwhile, Italy’s data watchdog has taken more drastic action, ordering DeepSeek to block its chatbot in the country over concerns that its privacy policy failed to comply with data protection rules.

Europe has been known for its strong stance on data privacy, with its General Data Protection Regulation (GDPR) considered one of the strictest data protection laws in the world. The scrutiny of DeepSeek highlights the region’s commitment to safeguarding user privacy amid the rapid growth of AI technologies.