Articles

EU’s AI Code of Practice for Firms Likely Delayed Until End of 2025

The European Commission announced on Thursday that the Code of Practice designed to help companies comply with the EU’s Artificial Intelligence Act (AI Act) may come into effect only in late 2025. The code is intended to guide thousands of businesses in meeting the new AI rules, particularly those covering general-purpose AI (GPAI) models such as OpenAI’s ChatGPT and systems from Google and Mistral.

Background and Delay Calls

  • The Code of Practice was originally slated for publication on May 2, 2025, but its release has been delayed.

  • Major tech companies, including Alphabet (Google), Meta, and European firms such as Mistral and ASML, alongside some EU governments, have requested postponements due to the lack of clear compliance guidelines.

  • The European AI Board is currently debating the timeline, with the end of 2025 under consideration for full implementation.

Voluntary but Important

  • Signing up for the Code is voluntary, but companies that decline to sign will forgo the legal certainty extended to signatories.

  • The Code will clarify the expected quality standards AI service users can demand, reducing risks of misleading claims by providers, according to Nick Moës, Executive Director of AI advocacy group The Future Society.

  • The Code also involves oversight by legally mandated authorities to assess AI service quality.

EU’s Position and Industry Reaction

  • Despite calls for delay, the Commission insists it remains committed to the AI Act’s goals of harmonized, risk-based AI regulations and market safety.

  • Critics, such as campaign group Corporate Europe Observatory, accuse Big Tech of using delay tactics to weaken crucial AI safeguards.

Enforcement Timeline

  • The AI Act’s rules on GPAI models become legally binding on August 2, 2025, but enforcement will begin only a year later, on August 2, 2026, for new models entering the market.

  • Existing AI models have until August 2, 2027, to comply fully with the regulations.

EU Faces Mounting Pressure to Delay Enforcement of AI Act as Deadline Nears

With key provisions of the EU Artificial Intelligence Act (AI Act) set to begin on August 2, major tech companies and political figures are urging the European Commission to delay enforcement. Critics say the current framework lacks sufficient guidance, placing a heavy burden on businesses—especially startups—without clear rules on how to comply.

What Happens on August 2?

Although the AI Act was passed in 2024, its rules are being phased in gradually. On August 2, some of the first obligations come into force—specifically for General Purpose AI (GPAI) models such as those developed by Google, OpenAI, Mistral, and others.

These initial provisions require AI developers to:

  • Draw up technical documentation

  • Disclose training data summaries

  • Comply with EU copyright laws

  • Conduct testing for bias, toxicity, and robustness

More rigorous rules apply to high-impact and systemic-risk models, which will need:

  • Adversarial testing

  • Incident reporting

  • Risk assessments

  • Energy efficiency disclosures

However, full enforcement—particularly penalties and oversight powers—doesn’t begin until August 2, 2026.

Why Are Companies Pushing for a Delay?

Tech companies argue that they lack clarity on how to comply with the law. A promised AI Code of Practice, meant to serve as the act’s compliance manual, was due on May 2 but has not been published. The European AI Board is now discussing pushing the guidance release to late 2025.

In an open letter, 45 European AI firms called for a two-year “clock-stop”—a suspension of the countdown to enforcement—until key standards are finalized. They also asked for simpler regulations, warning that unclear requirements could damage European innovation.

Lobbying group CCIA Europe, which represents companies like Google and Meta, said:

“A bold ‘stop-the-clock’ intervention is urgently needed to give AI developers and deployers legal certainty.”

Will the EU Postpone It?

Officially, the European Commission has not signaled a postponement. It insists that the August 2 start date for GPAI obligations stands, although the lack of finalized guidance suggests informal delays in compliance expectations.

Some political figures—including Swedish Prime Minister Ulf Kristersson—have also expressed concern, calling the act “confusing” and backing the idea of a pause.

What Comes Next?

Even if the AI Act’s initial deadlines hold, enforcement might be soft or flexible in the early stages due to the lack of practical tools. The AI Code of Practice remains the critical next step for clarity.

Meanwhile, the tension highlights a broader EU challenge: balancing innovation with regulation, especially in fast-moving fields like artificial intelligence.

Trump, DeepSeek in Focus as Nations Gather at Paris AI Summit

The Paris AI Summit on February 10-11 is set to bring together nearly 100 countries to discuss the safe development and deployment of artificial intelligence (AI), with a particular spotlight on U.S. President Donald Trump’s administration and China’s DeepSeek. This summit follows the 2023 meeting at Bletchley Park in England, expanding the conversation globally.

France, alongside India, is hosting the event with a focus on areas where it holds a competitive edge: open-source systems and clean energy for data centers. The summit will also address labor disruptions and AI market sovereignty. Top executives, including those from Alphabet and Microsoft, are expected to attend, with keynotes including one from OpenAI CEO Sam Altman, whose company makes ChatGPT.

The U.S. delegation, led by Vice President JD Vance, faces challenges in reaching consensus with China and other nations amid ongoing political tensions. Since President Trump’s administration took office in January, several executive orders have reversed the Biden administration’s approach, including pulling out of the Paris Climate Agreement and revisiting AI export controls aimed at countering China.

A major point of discussion will be the creation of a non-binding communiqué on AI stewardship, which, if agreed upon, would mark significant progress. While the French presidency has emphasized that the summit will give a voice to all nations, it is clear that discussions will be influenced by the competition between the U.S. and China, particularly in AI development.

The summit will not focus on new regulations but will instead discuss frameworks for AI policy, aiming to balance innovation with safety. European nations, especially France, are keen to avoid regulations that might slow down the advancement of their national AI companies.

A notable highlight is the inclusion of China’s DeepSeek, which has recently disrupted the global AI scene by offering models that rival those of U.S. companies at a fraction of the cost. This has bolstered the argument that the global race for AI supremacy remains open, as DeepSeek challenges established leaders in advanced reasoning models.

At the summit, philanthropies and businesses are expected to commit substantial capital—starting with $500 million and potentially rising to $2.5 billion over five years—to fund public-interest AI projects across the globe. Additionally, energy concerns will be discussed, with France positioning its clean nuclear energy as a potential solution to the high power demands of AI models.