Chip Design Software Stocks Surge After US Lifts Export Curbs on China

Shares of major chip design software companies Synopsys and Cadence Design Systems rose sharply on Thursday after the U.S. government lifted restrictions on chip design software exports to China, alleviating market uncertainty and preserving access to a critical revenue source.

Market Impact

  • The restrictions, introduced in late May, had cut off over 10% of revenue for these companies, negatively impacting forecasts and share prices.

  • Analysts from Mizuho noted the export resumption will limit revenue loss to just one month in the current quarter.

  • The easing of trade tensions could facilitate China’s approval of Synopsys’s $35 billion acquisition of engineering software firm Ansys, a deal pending regulatory clearance primarily in China.

Stock Movements

  • Synopsys shares rose 5.5%, even as the company continues to assess the financial impact of the curbs.

  • Cadence Design Systems surged 6.1%, reaching a record high of $330.09.

  • Ansys gained around 3.5%, while Germany’s Siemens, another key player in electronic design automation (EDA), rose 1.5%.

Expert Insights and Context

  • Susannah Streeter of Hargreaves Lansdown described the move as “a distinct warming of relations and a small ceasefire in the chips war.”

  • However, she cautioned that it does not represent a broad easing of restrictions on high-end chip exports, such as those made by Nvidia.

  • U.S. concerns persist over China’s technological advancements and potential military applications of American chip technology.

  • The curbs have driven increased domestic chip design efforts in China, supported by state subsidies. They have also raised fears of retaliatory actions that could affect regulatory decisions such as approval of the Synopsys-Ansys deal.

Deal Deadline

  • The Synopsys-Ansys merger has been cleared in all jurisdictions except China, with a closure deadline of July 15 and an option to extend to January next year.

Ilya Sutskever Takes Charge of Safe Superintelligence After CEO Daniel Gross Joins Meta Amid AI Talent War

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has stepped up to lead Safe Superintelligence (SSI), the AI startup he founded last year, following the departure of CEO Daniel Gross, who was poached by Meta Platforms to head its AI products division.

Key Developments

  • Daniel Gross left SSI to join Meta amid an intensifying AI talent war, where major tech companies compete fiercely with lucrative pay and strategic acquisitions.

  • Meta has also tried to recruit Sutskever and to acquire SSI, recently valued at $32 billion, but Sutskever said the startup remains focused on its mission despite the interest.

  • SSI raised $1 billion last year with the aim of building advanced AI systems that safely surpass human intelligence.

Background on Sutskever and Meta’s AI Push

  • Sutskever previously played a pivotal role at OpenAI but departed following internal leadership turmoil involving Sam Altman in late 2023.

  • Meta CEO Mark Zuckerberg recently created Meta Superintelligence Labs, consolidating the company’s AI efforts after challenges with its Llama 4 model and losing key talent.

  • This new unit will be led by Alexandr Wang (ex-Scale AI CEO) and Nat Friedman (ex-GitHub chief), with Meta investing $14.3 billion in Scale AI to bolster its AI capabilities.

Industry Connections and Meta’s Strategy

  • Gross and Friedman co-founded venture capital firm NFDG, which backs startups including SSI, Perplexity, and Figma.

  • Meta reportedly offered to buy a minority stake in NFDG’s funds, signaling a strategic push to influence key players in the AI startup ecosystem.

  • Gross’s background includes a 2013 startup acquisition by Apple and leadership roles in machine learning and AI at the tech giant.

EU’s AI Code of Practice for Firms Likely Delayed Until End of 2025

The European Commission announced on Thursday that the Code of Practice designed to help companies comply with the EU’s Artificial Intelligence Act (AI Act) may not take effect until late 2025. The code aims to guide thousands of businesses in meeting the new AI regulations, particularly for general-purpose AI (GPAI) models such as OpenAI’s ChatGPT and the AI systems of Google and Mistral.

Background and Delay Calls

  • The Code of Practice was originally slated for publication on May 2, 2025, but its release has been delayed.

  • Major tech companies, including Alphabet (Google), Meta, and European firms such as Mistral and ASML, alongside some EU governments, have requested postponements due to the lack of clear compliance guidelines.

  • The European AI Board is currently debating the timeline, with the end of 2025 under consideration for full implementation.

Voluntary but Important

  • Signing up for the Code is voluntary, but companies that refuse will not gain the legal certainty given to signatories.

  • The Code will clarify the expected quality standards AI service users can demand, reducing risks of misleading claims by providers, according to Nick Moës, Executive Director of AI advocacy group The Future Society.

  • The Code also involves oversight by legally mandated authorities to assess AI service quality.

EU’s Position and Industry Reaction

  • Despite calls for delay, the Commission insists it remains committed to the AI Act’s goals of harmonized, risk-based AI regulations and market safety.

  • Critics, such as campaign group Corporate Europe Observatory, accuse Big Tech of using delay tactics to weaken crucial AI safeguards.

Enforcement Timeline

  • The AI Act’s rules on GPAI models become legally binding on August 2, 2025, but enforcement will begin only a year later, on August 2, 2026, for new models entering the market.

  • Existing AI models have until August 2, 2027, to comply fully with the regulations.