Articles

Vance Warns Europeans That Heavy AI Regulations Could Stifle Innovation

U.S. Vice President JD Vance warned European leaders on Tuesday that heavy regulation of artificial intelligence (AI) could stifle the industry’s potential, arguing that “massive” regulations in Europe might “kill a transformative industry.” Speaking at the AI Action Summit in Paris, Vance voiced opposition to the European Union’s strict regulatory approach, singling out the Digital Services Act and the GDPR privacy rules, which he argued impose heavy legal compliance costs on smaller firms.

Vance emphasized that AI must remain free from ideological bias and rejected the idea of AI being used as a tool for “authoritarian censorship.” In his speech, he argued that while ensuring safety online is important, it should not extend to restricting access to opinions that governments deem “misinformation.” The U.S. delegation, led by Vance, did not sign the summit’s final statement, which endorsed principles of inclusive, ethical, and safe AI, breaking with Europe and the other signatory countries.

Vance also took the opportunity to address competition from China, warning about partnering with authoritarian regimes, which he said could pose a risk to nations’ information infrastructure. His comments seemed to reference the recent rise of Chinese startup DeepSeek, which challenged U.S. AI leadership with its freely distributed AI model.

While European leaders such as French President Emmanuel Macron and European Commission chief Ursula von der Leyen supported trimming regulatory red tape, they stressed that regulation is crucial for ensuring trust in AI. Macron called for “trustworthy AI,” while von der Leyen pledged that the EU would reduce bureaucracy and invest more in AI development.

Neither the U.S. nor the UK explained why it declined to sign the final statement, but the decision aligns with both countries’ emphasis on encouraging innovation over regulatory measures. Russell Wald, executive director at the Stanford Institute for Human-Centered Artificial Intelligence, noted that the U.S. policy shift suggests a focus on accelerating innovation rather than on safety-focused regulation.

Capgemini CEO Criticizes EU’s AI Regulations as Too Restrictive

Aiman Ezzat, CEO of Capgemini, expressed concerns that the European Union has overreached with its artificial intelligence regulations, making it more challenging for global companies to deploy AI in the region. In an interview, Ezzat highlighted the difficulties businesses face as they navigate different AI laws across multiple countries. His remarks come ahead of the AI Action Summit in Paris and amidst growing frustration from the private sector regarding AI regulations.

The EU’s AI Act, which is touted as the world’s most comprehensive AI law, has been criticized by some companies for stifling innovation. Ezzat commented, “In Europe, we went too far and too fast on AI regulation,” emphasizing that the absence of global AI standards has made the regulatory landscape increasingly complex.

Capgemini, one of Europe’s largest IT services firms, partners with major companies like Microsoft, Google Cloud, and Amazon Web Services (AWS), and serves clients such as Heathrow Airport and Deutsche Telekom. At the upcoming summit in Paris, AI policy frameworks are expected to be discussed, and Ezzat anticipates efforts to align global policy on AI.

While the AI Act won’t be fully implemented for several years, concerns have already arisen over privacy law violations by AI companies. Several European data protection authorities are reviewing DeepSeek, a Chinese startup that has drawn attention for its ability to compete with U.S. companies at a fraction of the cost. Although DeepSeek’s model is open source, Ezzat noted its transparency limitations, such as the lack of access to the datasets used to train it.

Capgemini is in the early stages of exploring the integration of DeepSeek’s models with clients, according to Ezzat.

EU Announces Guidelines to Prevent AI Misuse by Employers, Websites, and Police

The European Commission unveiled new guidelines on Tuesday aimed at curbing the misuse of artificial intelligence (AI) in various sectors, including employment, online services, and law enforcement. As part of the European Union’s broader AI regulations, the guidelines prohibit practices such as using AI to track employees’ emotions or to manipulate consumers into spending money online.

The guidelines fall under the EU’s Artificial Intelligence Act, which has been legally binding since last year but will not become fully enforceable until August 2, 2026. Some provisions take effect earlier: the ban on prohibited AI practices, for example, has applied since February 2 of this year.

Prohibited practices under the guidelines include the use of AI to create “dark patterns” on websites designed to manipulate users into making financial commitments, as well as AI applications that exploit individuals based on factors like age, disability, or socio-economic status. Additionally, social scoring systems that use personal data, such as race or origin, to categorize individuals are banned, alongside the use of biometric data by police to predict criminal behavior without proper verification.

Employers are also restricted from using surveillance tools like webcams or voice recognition systems to monitor employees’ emotions. The guidelines further prohibit the use of mobile CCTV cameras equipped with facial recognition for law enforcement, except under strict conditions with safeguards in place.

The EU has given member countries until August 2 to designate market surveillance authorities to enforce these AI rules. Companies found in violation could face fines ranging from 1.5% to 7% of their global revenue. This comprehensive regulatory framework contrasts with the United States’ voluntary-compliance approach and with China’s focus on maintaining social stability through state-controlled AI.