Articles

EU’s AI Code of Practice for Firms Likely Delayed Until End of 2025

The European Commission announced on Thursday that the Code of Practice designed to help companies comply with the EU’s Artificial Intelligence Act (AI Act) may not come into effect until late 2025. The code aims to guide thousands of businesses in meeting the new AI regulations, particularly those covering general-purpose AI (GPAI) models such as OpenAI’s ChatGPT and the AI systems built by Google and Mistral.

Background and Delay Calls

  • The Code of Practice was originally slated for publication on May 2, 2025, but its release has been delayed.

  • Major tech companies, including Alphabet (Google), Meta, and European firms such as Mistral and ASML, alongside some EU governments, have requested postponements due to the lack of clear compliance guidelines.

  • The European AI Board is currently debating the timeline, with the end of 2025 under consideration for full implementation.

Voluntary but Important

  • Signing up for the Code is voluntary, but companies that decline will not enjoy the legal certainty afforded to signatories.

  • The Code will clarify the expected quality standards AI service users can demand, reducing risks of misleading claims by providers, according to Nick Moës, Executive Director of AI advocacy group The Future Society.

  • The Code also involves oversight by legally mandated authorities to assess AI service quality.

EU’s Position and Industry Reaction

  • Despite calls for delay, the Commission insists it remains committed to the AI Act’s goals of harmonized, risk-based AI regulations and market safety.

  • Critics, such as campaign group Corporate Europe Observatory, accuse Big Tech of using delay tactics to weaken crucial AI safeguards.

Enforcement Timeline

  • The AI Act’s rules on GPAI models become legally binding on August 2, 2025, but enforcement will begin only a year later, on August 2, 2026, for new models entering the market.

  • Existing AI models have until August 2, 2027, to comply fully with the regulations.

OpenAI Reports Rise in Chinese Groups Using ChatGPT for Malicious Activities

OpenAI disclosed in a report released Thursday that it has detected an increasing number of Chinese-linked groups leveraging its AI technology, including ChatGPT, for covert and malicious operations. Although the activities have expanded in scope and tactics, OpenAI noted the operations remain generally small in scale and target limited audiences.

Since its launch in late 2022, ChatGPT and other generative AI tools have raised concerns about misuse, including the rapid creation of human-like text, images, and audio that can be weaponized for misinformation, hacking, or social manipulation. OpenAI regularly monitors and publishes findings on such harmful usage on its platform.

Among the examples cited by OpenAI:

  • Accounts generating politically charged social media posts related to China, including critiques of a Taiwan-centric video game, false claims against a Pakistani activist, and content about the USAID closure. Some posts also criticized U.S. President Donald Trump’s tariffs, with messages such as “Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who’s supposed to keep eating?”

  • Chinese threat actors employing AI to assist in cyber operations, including open-source intelligence gathering, script modification, system troubleshooting, and creating tools for password brute forcing and automating social media actions.

  • Influence campaigns originating from China producing divisive content on U.S. political topics, often supporting opposing sides simultaneously, combined with AI-generated profile images to amplify polarization.

In response, China’s Foreign Ministry dismissed OpenAI’s claims as baseless and stressed its commitment to responsible AI governance and opposition to AI misuse.

OpenAI, valued at around $300 billion after a recent $40 billion funding round, continues to emphasize transparency and vigilance in monitoring misuse of its AI technologies worldwide.

Vance Warns Europeans That Heavy AI Regulations Could Stifle Innovation

U.S. Vice President JD Vance warned European leaders on Tuesday that heavy regulation on artificial intelligence (AI) could stifle the industry’s potential, arguing that “massive” regulations in Europe might “kill a transformative industry.” Speaking at the AI summit in Paris, Vance expressed opposition to the European Union’s strict regulatory approach, particularly criticizing the Digital Services Act and GDPR privacy rules, which he argued impose legal compliance costs on smaller firms.

Vance emphasized that AI must remain free from ideological bias and rejected the idea of AI being used as a tool for “authoritarian censorship.” In his speech, he argued that while ensuring safety online is important, it should not extend to restricting access to opinions deemed “misinformation” by governments. The U.S. delegation, led by Vance, did not sign the final statement of the summit, which endorsed principles of inclusive, ethical, and safe AI, diverging from the positions of Europe and other countries.

Vance also took the opportunity to address competition from China, warning about partnering with authoritarian regimes, which he said could pose a risk to nations’ information infrastructure. His comments seemed to reference the recent rise of Chinese startup DeepSeek, which challenged U.S. AI leadership with its freely distributed AI model.

While European leaders such as French President Emmanuel Macron and European Commission President Ursula von der Leyen supported trimming regulatory red tape, they stressed that regulation is crucial for ensuring trust in AI. Macron called for “trustworthy AI,” while von der Leyen pledged that the EU would reduce bureaucracy and invest more in AI development.

The U.S. and the UK did not explain why they did not sign the final statement, but the decision aligns with their focus on encouraging innovation over regulatory measures. Russell Wald, executive director at the Stanford Institute for Human-Centered Artificial Intelligence, noted that the U.S. policy shift suggests a focus on accelerating innovation rather than safety-focused regulations.