Articles

Ilya Sutskever Takes Charge of Safe Superintelligence After CEO Daniel Gross Joins Meta Amid AI Talent War

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has stepped up to lead Safe Superintelligence (SSI), the AI startup he founded last year, following the departure of CEO Daniel Gross, who was poached by Meta Platforms to head its AI products division.

Key Developments

  • Daniel Gross left SSI to join Meta amid an intensifying AI talent war, in which major tech companies compete fiercely through lucrative pay packages and strategic acquisitions.

  • Meta has also tried to recruit Sutskever and to acquire SSI, which was recently valued at $32 billion, but Sutskever emphasized the startup’s focus on its mission despite the interest.

  • SSI raised $1 billion last year with the aim of building advanced AI systems that safely surpass human intelligence.

Background on Sutskever and Meta’s AI Push

  • Sutskever previously played a pivotal role at OpenAI but departed in May 2024, months after the late-2023 leadership turmoil involving Sam Altman.

  • Meta CEO Mark Zuckerberg recently created Meta Superintelligence Labs, consolidating the company’s AI efforts after challenges with its Llama 4 model and losing key talent.

  • This new unit will be led by Alexandr Wang (ex-Scale AI CEO) and Nat Friedman (ex-GitHub chief), with Meta investing $14.3 billion in Scale AI to bolster its AI capabilities.

Industry Connections and Meta’s Strategy

  • Gross and Friedman co-founded venture capital firm NFDG, which backs startups including SSI, Perplexity, and Figma.

  • Meta reportedly offered to buy a minority stake in NFDG’s funds, signaling a strategic push to influence key players in the AI startup ecosystem.

  • Gross’s background includes a startup acquired by Apple in 2013 and subsequent leadership roles in machine learning and AI at the tech giant.

AI Labs Wage Bidding War for Elite Researchers as Talent Becomes Key Battleground

The race to lead the artificial intelligence revolution is no longer just about compute power or datasets — it’s now centered on securing a small pool of elite AI researchers who can make or break the next generation of AI models. Companies like OpenAI, Google DeepMind, and Elon Musk’s xAI are aggressively courting this highly specialized talent, offering compensation packages in the tens of millions of dollars, luxury perks, and personal outreach from tech luminaries.

The explosive growth of generative AI following the 2022 release of ChatGPT has pushed the battle for talent to unprecedented levels, with some researchers receiving “professional athlete-style” incentives, including private jets, multimillion-dollar bonuses, and equity grants of over $20 million.

“The AI labs approach hiring like a game of chess,” said Ariel Herbert-Voss, a former OpenAI researcher. “They are like, do I have enough rooks? Enough knights?”

Elite Talent, Outsized Impact

Known internally as “ICs” (individual contributors), these researchers are seen as 10,000x engineers — a reference to the idea that in AI, the very best aren’t just 10 times better than average but can be 10,000 times more impactful, due to the leverage their innovations bring to large-scale model performance.

While the exact number of such researchers is debated, industry insiders estimate there are only a few dozen to a thousand worldwide. Given this scarcity, top labs are deploying every tool available to secure and retain them.

Top Offers and Retention Battles

  • OpenAI researchers have reportedly been offered retention bonuses of up to $2 million, plus equity increases exceeding $20 million, just to stay for one more year.

  • Google DeepMind has offered top researchers $20 million per year, while reducing vesting schedules on stock options to just 3 years, down from the typical 4.

  • ElevenLabs and SSI (founded by former OpenAI chief scientist Ilya Sutskever) have made competitive offers to lure away OpenAI talent, prompting preemptive counteroffers.

The bidding war has gotten so intense that OpenAI CEO Sam Altman famously tweeted in 2023 about the need for “10,000x researchers,” acknowledging their disproportionate value.

“It was actually financially not the best option that I had,” said Noam Brown, an OpenAI researcher recruited by several top labs, explaining that research resources and alignment with goals were more important to him than pure compensation.

Rising Stars and Strategic Hiring

To identify and cultivate new talent, data firms like Zeki Data have started applying sports-style recruitment analytics, akin to the “Moneyball” approach, to discover undervalued researchers. Some companies, like Anthropic, have been recruiting heavily from theoretical physics and quantum computing backgrounds.

Meanwhile, Mira Murati, OpenAI’s former CTO, has poached over 20 employees for her still-stealth-mode startup, which is reportedly closing a record-breaking seed round based solely on its team strength.

The Bigger Picture

This frenzied battle for researchers is reshaping the AI landscape in Silicon Valley and beyond. With venture capital surging into early-stage AI startups — sometimes before they even launch a product — and top labs competing over a few hundred minds, the next major AI breakthrough may hinge less on hardware or scale and more on who can assemble the right intellectual firepower.

OpenAI Co-founder Sutskever’s Startup SSI in Talks for $20 Billion Valuation

Safe Superintelligence (SSI), a startup co-founded by OpenAI’s former chief scientist Ilya Sutskever, is reportedly in discussions to raise funding at a valuation of $20 billion. This would mark a significant increase from its previous $5 billion valuation during a September funding round, where it raised $1 billion from investors like Sequoia Capital, Andreessen Horowitz, and DST Global.

SSI’s talks come at a time when high-profile AI ventures are facing a reappraisal of their valuations following Chinese startup DeepSeek’s release of a cost-effective AI model. Despite not yet generating any revenue, SSI’s mission is to develop “safe superintelligence” that is both smarter than humans and aligned with human interests. However, much of the company’s work and approach remains under wraps, fueling intrigue among investors.

The company’s founders include Daniel Gross, previously of Apple, and Daniel Levy, a former OpenAI researcher. While SSI’s approach to AI is still not widely known, Sutskever’s reputation for groundbreaking work in AI, particularly in scaling and inference techniques, has garnered significant attention. SSI’s focus on “scaling in peace” aims to insulate progress from short-term commercial pressures, a stark contrast to the trajectory of OpenAI, which shifted to commercial products after the success of ChatGPT in 2022.

The conversation around SSI’s valuation highlights the ongoing competition in the AI space, with OpenAI in talks to potentially double its valuation to $300 billion, and rival Anthropic nearing a funding round that would value it at $60 billion.