Articles

Google Names DeepMind CTO Koray Kavukcuoglu as Chief AI Architect to Lead AI-Powered Product Development

Alphabet’s Google has appointed Koray Kavukcuoglu, the chief technology officer of its DeepMind AI lab, as its new chief AI architect and senior vice president, according to an internal memo from CEO Sundar Pichai. Kavukcuoglu will relocate from London to California and report directly to Pichai, while continuing his role as DeepMind CTO under CEO Demis Hassabis.

In this expanded leadership position, Kavukcuoglu will drive faster integration and iteration of Google’s cutting-edge AI models across its wide array of products, aiming to improve efficiency and ensure seamless adoption as generative AI gains mainstream traction.

The move comes as Alphabet faces mounting pressure to justify its projected $75 billion AI investment this year by translating breakthroughs into tangible financial returns. Google must balance these efforts with maintaining profitability amid competition from rival AI developers and heightened antitrust scrutiny.

Google recently unveiled an AI subscription service priced at $249.99 per month targeting power users, alongside demonstrations of new AI-enhanced products like smart glasses during its May I/O conference. CEO Pichai emphasized that the ongoing generative AI expansion complements rather than replaces traditional online search.

Additionally, Google has formed a notable partnership with OpenAI — one of its biggest AI competitors — by agreeing to supply cloud computing resources to OpenAI’s operations, highlighting the evolving dynamics in the AI sector where collaboration and competition coexist.

This strategic leadership appointment signals Google’s intent to accelerate the transition into a new phase of AI platform development and adoption.

CoreWeave Gains Role in Google-OpenAI Cloud Deal to Supply AI Computing Power

CoreWeave, a specialized cloud computing company built on Nvidia GPUs, has become a key provider in Google’s new partnership with OpenAI, sources told Reuters. Under the deal, CoreWeave will supply computing capacity to Google Cloud, which will then sell these resources to OpenAI to support growing demand for AI services such as ChatGPT. Google will also contribute some of its own computing infrastructure directly to OpenAI.

This arrangement underscores the evolving relationship between major cloud hyperscalers like Google, Microsoft, and Amazon and emerging “neocloud” providers like CoreWeave, which focus heavily on AI workloads. CoreWeave went public in March and already has a significant presence in OpenAI’s infrastructure, holding a five-year $11.9 billion contract and an equity investment of $350 million from OpenAI.

The partnership was expanded last month with an additional agreement worth up to $4 billion through 2029. Bringing Google Cloud onboard as a customer helps CoreWeave diversify its revenue while leveraging Google’s deep pockets to secure better financing for data center expansions. For Google, it enhances its cloud business by tapping into the surging AI market and positions it as a neutral provider of compute resources amid competition with Amazon and Microsoft.

CoreWeave’s stock has surged over 270% since its IPO, reflecting strong investor confidence despite concerns over leverage and GPU demand shifts. Meanwhile, Microsoft, CoreWeave’s former largest customer, is reconsidering its data center strategy and renegotiating investment terms with OpenAI.

None of CoreWeave, Google, or OpenAI commented on the details of the deal.

Apple Opens Apple Intelligence to Developers, Keeps AI Rollout Cautious at WWDC

At its annual Worldwide Developers Conference (WWDC), Apple unveiled a series of incremental artificial intelligence updates, emphasizing practical features while keeping broader ambitions restrained compared to its tech rivals. The company announced that developers will now gain access to Apple Intelligence’s foundational on-device AI model, though cloud-based advanced capabilities remain out of reach.

Apple’s software chief Craig Federighi confirmed that third-party developers can integrate Apple’s on-device large language model (LLM), which operates at around 3 billion parameters. While this allows for enhanced privacy and offline functionality, it also limits the model’s capacity for more complex AI tasks that cloud-based systems can handle. Apple plans to supplement these with integrations from partners like OpenAI, allowing developers to use both Apple’s and OpenAI’s code completion tools directly within Apple’s developer platform, Xcode.

The updates reflect a shift from the sweeping promises made a year ago. Last year, Apple cast itself as an AI visionary, touting “AI agents.” This year, the company focused on concrete applications such as live translation during phone calls, call screening, and visual intelligence that helps users find products similar to those viewed online.

Federighi also announced a major design refresh across Apple’s operating systems, introducing a “Liquid Glass” aesthetic with semi-transparent icons and menus inspired by visionOS. Future OS versions will adopt year-based naming, replacing sequential version numbers.

While the AI additions may appear modest, Apple’s back-end infrastructure improvements suggest a longer-term strategy. The company prioritizes privacy-focused, on-device AI processing while allowing users to opt in when data is shared with third parties like OpenAI.

Despite these moves, analysts expressed mixed views. Some highlighted Apple’s cautious but practical approach, while others warned that Apple risks falling behind as competitors like OpenAI and Microsoft rapidly advance in AI development. Apple’s shares dipped 1.2% following the announcements.

In the broader context, OpenAI reported reaching a $10 billion annualized revenue run rate, underscoring the fast-paced evolution of the AI sector that Apple is cautiously navigating.