Articles

EU Unveils Draft AI Code of Practice Focusing on Copyright and Safety for Companies

The European Commission revealed a draft code of practice on Thursday aimed at helping companies comply with the European Union’s evolving artificial intelligence regulations. The voluntary code emphasizes safeguarding copyright-protected content and implementing measures to reduce systemic risks linked to AI technologies.

Developed by 13 independent experts, the code is part of the broader EU AI regulatory framework. While signing up is optional, companies that do not join will miss out on the legal certainty offered to adherents. The rules will apply to major AI providers including Alphabet (Google), Meta (Facebook), OpenAI, Anthropic, Mistral, and others.

Under the code, signatories must publish summaries detailing the data sources used to train their general-purpose AI models. They are required to ensure that copyright-protected materials are only used appropriately, especially when employing web crawlers, and must take steps to prevent outputs that infringe copyright.

To address systemic risks, companies will also need to establish frameworks to identify and analyze potential hazards. While transparency and copyright guidelines apply to all general-purpose AI providers, specific safety and security provisions target providers of advanced models like OpenAI’s ChatGPT, Meta’s Llama, Google’s Gemini, and Anthropic’s Claude.

The EU’s AI Act, in force since June 2024, imposes strict transparency rules on high-risk AI systems and lighter obligations on general-purpose AI models. It also regulates AI use in military, crime, and security contexts. The new rules for large language models become legally binding on August 2, with enforcement beginning a year later for new models. Existing models will have until August 2, 2027, to comply.

Henna Virkkunen, the EU’s technology commissioner, encouraged AI stakeholders to adopt the code, highlighting its collaborative design and its role in simplifying compliance with the EU AI Act. The code’s final approval by EU member states and the Commission is expected by the end of the year.

EU’s AI Code of Practice for Firms Likely Delayed Until End of 2025

The European Commission announced on Thursday that the Code of Practice designed to help companies comply with the EU’s Artificial Intelligence Act (AI Act) may only come into effect by late 2025. The code is meant to guide thousands of businesses in meeting the new AI rules, particularly for general-purpose AI (GPAI) models such as OpenAI’s ChatGPT and comparable systems from Google and Mistral.

Background and Delay Calls

  • The Code of Practice was originally slated for publication on May 2, 2025, but its release has been delayed.

  • Major tech companies, including Alphabet (Google), Meta, and European firms such as Mistral and ASML, alongside some EU governments, have requested postponements due to the lack of clear compliance guidelines.

  • The European AI Board is currently debating the timeline, with the end of 2025 under consideration for full implementation.

Voluntary but Important

  • Signing up for the Code is voluntary, but companies that decline will forgo the legal certainty afforded to signatories.

  • The Code will clarify the expected quality standards AI service users can demand, reducing risks of misleading claims by providers, according to Nick Moës, Executive Director of AI advocacy group The Future Society.

  • Under the Code, legally mandated authorities will also oversee assessments of AI service quality.

EU’s Position and Industry Reaction

  • Despite calls for delay, the Commission insists it remains committed to the AI Act’s goals of harmonized, risk-based AI regulations and market safety.

  • Critics, such as campaign group Corporate Europe Observatory, accuse Big Tech of using delay tactics to weaken crucial AI safeguards.

Enforcement Timeline

  • The AI Act’s rules on GPAI models become legally binding on August 2, 2025, but enforcement will begin only a year later, on August 2, 2026, for new models entering the market.

  • Existing AI models have until August 2, 2027, to comply fully with the regulations.

Spain Moves to Fine Companies for Unlabelled AI-Generated Content

Spain’s government has approved a new bill imposing hefty fines on companies that fail to label AI-generated content properly. The measure, aimed at combating misinformation and the spread of deepfakes, aligns with the European Union’s AI Act, which enforces strict transparency rules for high-risk AI applications.

Digital Transformation Minister Oscar Lopez emphasized the dual nature of AI, describing it as both a powerful tool for improving lives and a potential threat to democracy through disinformation. Spain is among the first EU nations to implement these regulations, adopting a stricter standard than the United States’ largely voluntary approach.

The proposed law classifies the failure to properly label AI-generated content as a “serious offense,” punishable by fines of up to €35 million ($38.2 million) or 7% of a company’s global annual revenue. The bill also prohibits subliminal AI techniques used to manipulate vulnerable populations, such as chatbots that encourage gambling addiction or AI-powered toys that promote risky behavior among children.

Another key provision bans the use of AI to classify individuals based on biometric data for scoring purposes, preventing organizations from assessing a person’s eligibility for benefits or predicting criminal behavior. However, authorities will still be permitted to use real-time biometric surveillance for national security purposes.

Spain’s newly established AI supervisory agency, AESIA, will oversee enforcement, except in areas such as data privacy, elections, finance, and crime, which will remain under their existing regulators. The bill must still pass the lower house before becoming law.