
EU Faces Mounting Pressure to Delay Enforcement of AI Act as Deadline Nears

With key provisions of the EU Artificial Intelligence Act (AI Act) set to take effect on August 2, major tech companies and political figures are urging the European Commission to delay enforcement. Critics say the current framework lacks sufficient guidance, leaving businesses, and startups in particular, facing a heavy compliance burden without clear rules on how to meet it.

What Happens on August 2?

Although the AI Act was passed in 2024, its rules are being phased in gradually. On August 2, some of the first obligations come into force—specifically for General Purpose AI (GPAI) models such as those developed by Google, OpenAI, Mistral, and others.

These initial provisions require AI developers to:

  • Draw up technical documentation

  • Disclose training data summaries

  • Comply with EU copyright laws

  • Conduct testing for bias, toxicity, and robustness

More rigorous rules apply to GPAI models classified as posing systemic risk, which will need:

  • Adversarial testing

  • Incident reporting

  • Risk assessments

  • Energy efficiency disclosures

However, full enforcement—particularly penalties and oversight powers—doesn’t begin until August 2, 2026.

Why Are Companies Pushing for a Delay?

Tech companies argue that they lack clarity on how to comply with the law. A promised AI Code of Practice, meant to serve as the act’s compliance manual, was due on May 2 but has not been published. The European AI Board is now discussing pushing the guidance release to late 2025.

In an open letter, 45 European AI firms called for a two-year “clock-stop”—a suspension of the countdown to enforcement—until key standards are finalized. They also asked for simpler regulations, warning that unclear requirements could damage European innovation.

Lobbying group CCIA Europe, which represents companies like Google and Meta, said:

“A bold ‘stop-the-clock’ intervention is urgently needed to give AI developers and deployers legal certainty.”

Will the EU Postpone It?

Officially, the European Commission has not signaled a postponement. It insists that the August 2 start date for GPAI obligations stands, although the lack of finalized guidance suggests informal delays in compliance expectations.

Some political figures—including Swedish Prime Minister Ulf Kristersson—have also expressed concern, calling the act “confusing” and backing the idea of a pause.

What Comes Next?

Even if the AI Act’s initial deadlines hold, enforcement might be soft or flexible in the early stages due to the lack of practical tools. The AI Code of Practice remains the critical next step for clarity.

Meanwhile, the tension highlights a broader EU challenge: balancing innovation with regulation, especially in fast-moving fields like artificial intelligence.

US Senate Removes AI Regulation Ban from Trump Tax Bill in Overwhelming Vote

The Republican-controlled U.S. Senate voted 99-1 on Tuesday to eliminate a 10-year federal moratorium that would have prevented states from regulating artificial intelligence. This amendment, offered by Republican Senator Marsha Blackburn, was adopted during a lengthy “vote-a-rama” session on President Trump’s tax-cut and spending bill.

Only Senator Thom Tillis voted to keep the ban. The Senate later passed the broader tax legislation with a narrow 51-50 vote. The original bill sought to restrict states from accessing a $500 million fund for AI infrastructure if they imposed AI regulations.

Major AI companies like Google and OpenAI had supported the moratorium, arguing that a consistent federal approach would foster innovation without a patchwork of state rules. However, Democratic Senator Maria Cantwell opposed the ban, emphasizing the importance of state laws to protect consumers from risks like robocalls, deepfakes, and unsafe autonomous vehicles.

Seventeen Republican governors also called for abandoning the moratorium. Arkansas Governor Sarah Huckabee Sanders said the Senate’s decision would enable states to protect children from unregulated AI harms.

Blackburn introduced the amendment shortly after withdrawing support for a compromise that would have shortened the ban to five years and allowed limited state regulation on matters such as child safety and artist protections. She stated that until Congress enacts comprehensive federal legislation, states must retain the right to protect their citizens.

OpenAI Denies Plans to Use Google’s In-House AI Chips Despite Cloud Collaboration

OpenAI has clarified that it has no current plans to adopt Google’s in-house AI chips, known as Tensor Processing Units (TPUs), to power its products, pushing back against recent reports that suggested the ChatGPT maker was turning to its rival’s hardware to meet increasing computing demands.

A spokesperson for OpenAI stated on Sunday that while the company is in the early stages of testing Google’s TPUs, it has no plans to deploy them at scale for production use. Google, for its part, declined to comment on the matter.

Testing multiple AI chip platforms is standard industry practice, but shifting large-scale workloads to a new hardware platform would require significant architectural and software adjustments. Currently, OpenAI continues to rely heavily on Nvidia’s GPUs and is also utilizing AMD’s AI chips to fuel its operations. Additionally, OpenAI is actively developing its own custom AI chip, expected to reach the “tape-out” milestone later this year — marking the point where chip design is finalized for manufacturing.

Earlier this month, Reuters reported that OpenAI had signed on to use Google Cloud services, a move seen as a notable collaboration between two competitors in the generative AI space. However, the bulk of OpenAI’s computing needs are still being handled by CoreWeave, a cloud provider specializing in GPU-based infrastructure.

Google has recently begun expanding external access to its TPUs, previously used mostly for internal projects. This shift has attracted a number of high-profile customers, including Apple, as well as AI startups Anthropic and Safe Superintelligence (SSI) — both of which were founded by former OpenAI executives and are direct rivals in the AI field.