Articles

EU’s AI Code of Practice for Firms Likely Delayed Until End of 2025

The European Commission said on Thursday that the Code of Practice designed to help companies comply with the EU’s Artificial Intelligence Act (AI Act) may not come into effect until late 2025. The code is intended to guide thousands of businesses in meeting the new AI rules, particularly providers of general-purpose AI (GPAI) models such as OpenAI’s ChatGPT and the AI systems of Google and Mistral.

Background and Delay Calls

  • The Code of Practice was originally slated for publication on May 2, 2025, but its release has been delayed.

  • Major tech companies, including Alphabet (Google), Meta, and European firms such as Mistral and ASML, alongside some EU governments, have requested postponements due to the lack of clear compliance guidelines.

  • The European AI Board is currently debating the timeline, with end of 2025 under consideration for full implementation.

Voluntary but Important

  • Signing up to the Code is voluntary, but companies that decline will forgo the legal certainty granted to signatories.

  • The Code will clarify the expected quality standards AI service users can demand, reducing risks of misleading claims by providers, according to Nick Moës, Executive Director of AI advocacy group The Future Society.

  • The Code also involves oversight by legally mandated authorities to assess AI service quality.

EU’s Position and Industry Reaction

  • Despite calls for delay, the Commission insists it remains committed to the AI Act’s goals of harmonized, risk-based AI regulations and market safety.

  • Critics, such as campaign group Corporate Europe Observatory, accuse Big Tech of using delay tactics to weaken crucial AI safeguards.

Enforcement Timeline

  • The AI Act’s rules on GPAI models become legally binding on August 2, 2025, but enforcement will begin only a year later, on August 2, 2026, for new models entering the market.

  • Existing AI models have until August 2, 2027, to comply fully with the regulations.

French Privacy Watchdog to Investigate DeepSeek Over AI and Data Protection

France’s data privacy authority, the CNIL, announced on Thursday that it will question DeepSeek to assess how its AI system works and what privacy risks it may pose to users. The Chinese AI startup gained international attention after disclosing that training its DeepSeek-V3 model cost less than $6 million in computing power on Nvidia H800 chips.

A CNIL spokesperson confirmed that its AI department is currently analyzing DeepSeek’s tool and will engage with the company to understand its system and data protection measures. The French regulator is among the most active in Europe, having previously fined tech giants like Google and Meta for privacy violations.

DeepSeek is also under scrutiny in other parts of Europe. Italy’s data protection authority recently requested details on its handling of personal data, while Ireland’s Data Protection Commission has inquired about data processing practices related to Irish users.

The European Union maintains strict privacy protections under its General Data Protection Regulation (GDPR), widely regarded as one of the world’s most comprehensive data privacy laws. GDPR violations can result in fines of up to 4% of a company’s global revenue. Additionally, new EU AI regulations impose transparency obligations on high-risk AI models, with penalties ranging from 7.5 million euros (or 1.5% of turnover) to 35 million euros (or 7% of global turnover), depending on the severity of violations.

As regulatory scrutiny intensifies, DeepSeek faces mounting pressure to demonstrate compliance with European data protection standards.