Articles

EU Considers Pausing Parts of Landmark AI Act Amid Pressure from U.S. and Big Tech

The European Commission is considering pausing parts of its landmark Artificial Intelligence Act, following growing pressure from U.S. officials and major tech companies such as Meta and Alphabet, the Financial Times reported on Friday.

According to the report, the move comes after months of lobbying from Silicon Valley giants and warnings from the Trump administration that strict EU regulations could strain transatlantic trade relations.

A senior EU official told the FT that Brussels has been “engaging” with Washington on potential adjustments to the AI Act and related digital regulations as part of a broader simplification effort, which is expected to be adopted on November 19.

The AI Act, which became law in August 2024, is the world’s first comprehensive framework to regulate artificial intelligence technologies. It categorizes AI systems by risk level — from minimal to unacceptable — and imposes restrictions on areas like facial recognition, biometric surveillance, and generative AI transparency.

While a European Commission spokesperson had previously dismissed calls for delays, officials are now reportedly weighing temporary pauses for specific provisions, particularly those affecting companies developing large AI models.

An EU spokesperson told the FT that “various options” are being discussed but emphasized that the bloc remains “fully behind the AI Act and its objectives.”

The proposal reflects Europe’s balancing act between maintaining AI safety and innovation leadership while addressing geopolitical and trade pressures from the United States and industry stakeholders.

EU Unveils Draft AI Code of Practice Focusing on Copyright and Safety for Companies

The European Commission revealed a draft code of practice on Thursday aimed at helping companies comply with the European Union’s evolving artificial intelligence regulations. The voluntary code emphasizes safeguarding copyright-protected content and implementing measures to reduce systemic risks linked to AI technologies.

Developed by 13 independent experts, the code is part of the broader EU AI regulatory framework. While signing up is optional, companies that do not join will miss out on the legal certainty offered to adherents. The rules will apply to major AI providers including Alphabet (Google), Meta (Facebook), OpenAI, Anthropic, Mistral, and others.

Under the code, signatories must publish summaries detailing the data sources used to train their general-purpose AI models. They are required to ensure that copyright-protected materials are only used appropriately, especially when employing web crawlers, and must take steps to prevent outputs that infringe copyright.

To address systemic risks, companies will also need to establish frameworks to identify and analyze potential hazards. While transparency and copyright guidelines apply to all general-purpose AI providers, specific safety and security provisions target providers of advanced models like OpenAI’s ChatGPT, Meta’s Llama, Google’s Gemini, and Anthropic’s Claude.

The EU’s AI Act, in force since August 2024, imposes strict transparency rules on high-risk AI systems and lighter obligations for general-purpose AI models. It also regulates AI use in crime and security contexts. The new AI rules for large language models will become legally binding on August 2, with enforcement beginning a year later for new models. Existing models will have until August 2, 2027, to comply.

Henna Virkkunen, the EU’s technology commissioner, encouraged AI stakeholders to adopt the code, highlighting its collaborative design and its role in simplifying compliance with the EU AI Act. The code’s final approval by EU member states and the Commission is expected by the end of the year.

EU Firm on AI Rules Timeline Despite Industry Calls for Delay

The European Commission reaffirmed on Friday that it will adhere to the legal timeline for implementing the European Union’s groundbreaking Artificial Intelligence Act, rejecting recent appeals from major tech companies and some member states to postpone the rollout.

Key Points:

  • Major tech players including Alphabet (Google) and Meta (Facebook), as well as European firms such as Mistral and ASML, had urged the Commission to delay the AI Act by several years.

  • Commission spokesperson Thomas Regnier made clear at a press conference:

    • No pause, no grace period, and no stop-the-clock on the AI Act timeline.

    • Initial provisions took effect in February 2025.

    • Rules for general-purpose AI models will begin enforcement in August 2025.

    • Requirements for high-risk AI models will start in August 2026.

  • The Commission indicated plans to simplify digital rules later this year, potentially reducing reporting obligations for smaller companies.

  • Concerns from companies center on the compliance costs and strict regulations, as the AI Act seeks to regulate a technology critical to sectors dominated by the US and China.