Articles

EU Unveils Draft AI Code of Practice Focusing on Copyright and Safety for Companies

The European Commission revealed a draft code of practice on Thursday aimed at helping companies comply with the European Union’s evolving artificial intelligence regulations. The voluntary code emphasizes safeguarding copyright-protected content and implementing measures to reduce systemic risks linked to AI technologies.

Developed by 13 independent experts, the code is part of the broader EU AI regulatory framework. While signing up is optional, companies that do not join will miss out on the legal certainty offered to adherents. The rules will apply to major AI providers including Alphabet (Google), Meta (Facebook), OpenAI, Anthropic, Mistral, and others.

Under the code, signatories must publish summaries detailing the data sources used to train their general-purpose AI models. They are required to ensure that copyright-protected materials are only used appropriately, especially when employing web crawlers, and must take steps to prevent outputs that infringe copyright.
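In practice, honoring copyright when crawling broadly means respecting machine-readable opt-outs such as robots.txt. As a minimal sketch of that check using Python's standard `urllib.robotparser` (the bot name and rules below are invented for illustration; a real crawler would fetch a site's actual robots.txt with `set_url()` and `read()`):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules for an invented crawler name.
rp = RobotFileParser()
rp.parse([
    "User-agent: ExampleAIBot",
    "Disallow: /private/",
])

# Check each URL before fetching it as training data.
print(rp.can_fetch("ExampleAIBot", "https://example.com/private/report"))  # False
print(rp.can_fetch("ExampleAIBot", "https://example.com/blog/post"))       # True
```

Checking `can_fetch()` before every request is the simplest way to honor a rights holder's crawl opt-out; the code of practice's actual requirements go further, but this is the baseline mechanism.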

To address systemic risks, companies will also need to establish frameworks to identify and analyze potential hazards. While transparency and copyright guidelines apply to all general-purpose AI providers, specific safety and security provisions target only providers of the most advanced systems, such as OpenAI’s ChatGPT, Meta’s Llama, Google’s Gemini, and Anthropic’s Claude.

The EU’s AI Act, which entered into force last June, imposes strict transparency requirements on high-risk AI systems and lighter obligations on general-purpose AI models. It also regulates AI use in military, crime, and security contexts. The new AI rules for large language models will become legally binding on August 2, with enforcement beginning a year later for new models. Existing models will have until August 2, 2027, to comply.

Henna Virkkunen, the EU’s technology commissioner, encouraged AI stakeholders to adopt the code, highlighting its collaborative design and its role in simplifying compliance with the EU AI Act. The code’s final approval by EU member states and the Commission is expected by the end of the year.

EU Faces Mounting Pressure to Delay Enforcement of AI Act as Deadline Nears

With key provisions of the EU Artificial Intelligence Act (AI Act) set to begin on August 2, major tech companies and political figures are urging the European Commission to delay enforcement. Critics say the current framework lacks sufficient guidance, placing a heavy burden on businesses—especially startups—without clear rules on how to comply.

What Happens on August 2?

Although the AI Act was passed in 2024, its rules are being phased in gradually. On August 2, some of the first obligations come into force—specifically for General Purpose AI (GPAI) models such as those developed by Google, OpenAI, Mistral, and others.

These initial provisions require AI developers to:

  • Draw up technical documentation

  • Disclose training data summaries

  • Comply with EU copyright laws

  • Conduct testing for bias, toxicity, and robustness
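As an illustration of the training-data disclosure obligation, a summary might be structured along the following lines. This is a purely hypothetical shape with invented field names, not the Commission's official template:

```python
import json

# Hypothetical training-data summary; every field name here is
# illustrative, not taken from any official EU disclosure template.
summary = {
    "model_name": "example-gpai-1",  # invented model identifier
    "data_sources": [
        {
            "type": "web_crawl",
            "description": "Publicly accessible web pages",
            "rights_reservations_honoured": True,  # e.g. robots.txt opt-outs
        },
        {
            "type": "licensed",
            "description": "Text corpora under commercial licence",
        },
    ],
    "copyright_policy": "Opt-out signals respected; infringing outputs filtered",
}

print(json.dumps(summary, indent=2))
```

A machine-readable format like this would let regulators and rights holders inspect data provenance claims consistently across providers, which is the apparent intent behind the summary requirement.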

More rigorous rules apply to high-impact and systemic-risk models, which will need:

  • Adversarial testing

  • Incident reporting

  • Risk assessments

  • Energy efficiency disclosures

However, full enforcement—particularly penalties and oversight powers—doesn’t begin until August 2, 2026.

Why Are Companies Pushing for a Delay?

Tech companies argue that they lack clarity on how to comply with the law. A promised AI Code of Practice, meant to serve as the act’s compliance manual, was due on May 2 but has not been published. The European AI Board is now discussing pushing the guidance release to late 2025.

In an open letter, 45 European AI firms called for a two-year “clock-stop”—a suspension of the countdown to enforcement—until key standards are finalized. They also asked for simpler regulations, warning that unclear requirements could damage European innovation.

Lobbying group CCIA Europe, which represents companies like Google and Meta, said:

“A bold ‘stop-the-clock’ intervention is urgently needed to give AI developers and deployers legal certainty.”

Will the EU Postpone It?

Officially, the European Commission has not signaled a postponement. It insists that the August 2 start date for GPAI obligations stands, although with guidance still unfinished, compliance expectations may well be relaxed informally at first.

Some political figures—including Swedish Prime Minister Ulf Kristersson—have also expressed concern, calling the act “confusing” and backing the idea of a pause.

What Comes Next?

Even if the AI Act’s initial deadlines hold, enforcement might be soft or flexible in the early stages due to the lack of practical tools. The AI Code of Practice remains the critical next step for clarity.

Meanwhile, the tension highlights a broader EU challenge: balancing innovation with regulation, especially in fast-moving fields like artificial intelligence.