US Tightens Control Over AI Chip Exports, Targeting Global Flow and China

The U.S. government announced on Monday new regulations aimed at tightening control over the global flow of artificial intelligence (AI) chips and technology, with a focus on limiting China’s access to these critical resources. The new rules, part of a broader U.S. effort to maintain its global leadership in AI, will cap the number of AI chips that can be exported to most countries while granting unlimited access to U.S. technology for its closest allies. This move, which intensifies the Biden administration’s previous restrictions, also ensures a continued blockade of China, Russia, Iran, and North Korea.

Strategic Implications and Global Impact

Commerce Secretary Gina Raimondo emphasized the importance of the U.S. maintaining its dominant position in AI, stating, “The U.S. leads AI now – both AI development and AI chip design, and it’s critical that we keep it that way.” The new regulations are the culmination of a four-year push to limit China’s access to advanced chips, which have military applications and could bolster the country’s capabilities in AI. These efforts also aim to close loopholes and introduce new safeguards to protect the U.S. AI industry’s competitive advantage.

The regulations, set to take effect 120 days after publication, divide the world into three categories: Tier 1 countries (Japan, South Korea, Britain, and the Netherlands), which will face minimal restrictions; a second tier including Singapore, Israel, and the UAE, which will be subject to country-specific caps; and nations such as China, Russia, and Iran, which will be barred entirely from accessing the technology.

Effects on AI Chip Manufacturers

Advanced graphics processing units (GPUs), which are crucial for training AI models and are predominantly produced by U.S. companies like Nvidia and AMD, are among the chips subject to the new rules. Nvidia shares dropped by 5%, while AMD saw a 1% decline in early trading, as investors reacted to the anticipated regulatory changes. Major cloud service providers such as Microsoft, Google, and Amazon can still seek global authorizations to build data centers in countries that are unable to import sufficient chips due to the U.S. quotas. Once approved, these companies would be able to operate without export licenses for AI chips, provided they meet stringent security, reporting, and human rights requirements.

Industry Pushback

The rules have sparked significant criticism from key players in the tech industry. Nvidia, in particular, voiced concerns about the regulations, calling them “sweeping overreach.” The company argues that the restrictions would limit access to technology already available in consumer hardware, potentially hindering global competition and benefitting Chinese competitors. Oracle, a data center provider, echoed similar concerns, stating that the restrictions would primarily benefit China’s competitors in the AI and GPU market. Notably, the new rules do not apply to gaming chips, which remain outside the scope of the restrictions.

National Security and Long-Term Strategy

U.S. officials have justified the new rules by highlighting the potential risks associated with the rapid advancement of AI, which can be used for both beneficial and harmful purposes, including the development of advanced weapons, cyberattacks, and surveillance. National Security Adviser Jake Sullivan emphasized the need for the U.S. to stay ahead in the rapidly evolving AI landscape to safeguard both national security and economic interests.

As the Trump administration prepares to take office, questions remain about how the new regulations will be enforced. However, given the shared concern about China’s growing technological capabilities, many expect continuity in the U.S. approach to AI exports.

iGenius to Complete $1B Data Centre Project with Nvidia by Summer

Italian AI startup iGenius is on track to complete a $1 billion data centre project in southern Italy by the summer, using Nvidia technology. The project, which will span five years, has prompted the company to extend its funding round beyond its initial target of 650 million euros. CEO Uljan Sharka said the new supercomputer built for the data centre will perform at an extraordinary rate, capable of executing 115 billion billion calculations per second. This marks a significant leap from Europe's top supercomputers, which until last year could handle only 0.5 billion billion calculations per second.

The data centre will be powered by Nvidia’s advanced Blackwell chips, providing 35 times more computing power than their predecessors, while using 25 times less energy. The facility will house 80 of Nvidia’s most powerful servers, each containing 72 Blackwell chips. Southern Italy was chosen as the site for the project due to its surplus of renewable energy capacity, which will help meet the high power demands of the supercomputing operation.

iGenius, founded in 2016, is one of the few AI startups in Europe valued at over $1 billion, competing with other industry players such as France’s Mistral and Germany’s DeepL. The company recently launched Colosseum 355B, a large language model designed specifically for industries with stringent data protection needs, including finance, heavy industry, and government sectors. iGenius differentiates itself from competitors like OpenAI by providing open-source AI models for companies to run on their own infrastructure, allowing for greater control over sensitive data.


Nvidia Shifts Focus to New Advanced Packaging Technology

Nvidia’s CEO Jensen Huang confirmed that while the company’s demand for advanced packaging from TSMC remains robust, the specific type of technology required is evolving. At an event in Taichung, Taiwan, Huang explained that Nvidia is transitioning its focus from CoWoS-S to CoWoS-L for its upcoming Blackwell AI chips. This shift, however, does not signal a reduction in capacity, but rather an increase in the use of CoWoS-L, a newer, more advanced version of TSMC’s chip packaging technology.

Nvidia had previously relied heavily on CoWoS-S for its AI chips, including the Hopper platform. As the company moves into Blackwell, which was unveiled in March 2024, it plans to transition existing CoWoS-S capacity to CoWoS-L. This change will impact TSMC’s supply chain but is seen as a step forward in Nvidia’s push to meet the growing demand for its AI chips.

Huang also noted that while packaging capacity for these advanced chips had previously been a bottleneck, it has expanded significantly in recent years; available capacity is now approximately four times greater than it was two years ago. Despite the increased demand, Nvidia has not been cutting orders but is instead increasing its reliance on CoWoS-L, which is expected to better suit Blackwell's design.

The move to CoWoS-L technology and changes in Nvidia’s order patterns have sparked speculation about the potential impact on TSMC’s revenue, particularly with analysts like Ming-Chi Kuo noting the shift in Nvidia’s focus. Huang declined to comment on recent U.S. export restrictions that limit AI chip sales to countries outside a select group of U.S. allies, but the company’s strategies continue to evolve in response to market demands and geopolitical factors.