Articles

Google Introduces New AI Models for Rapidly Growing Robotics Industry

Google, a subsidiary of Alphabet, unveiled two new AI models on Wednesday, designed specifically for the rapidly advancing robotics industry. These models, based on Google’s Gemini 2.0 framework, aim to accelerate the development of robots across various sectors, especially in industrial settings.

The robotics field has experienced significant progress in recent years, with AI-driven advancements enabling faster commercialization of robots for tasks in factories and warehouses. Google’s new models are tailored to meet the growing demand for smarter robots capable of performing complex tasks.

The first model, Gemini Robotics, integrates vision, language, and physical action, enabling robots to interact with their environment through physical output. The second model, Gemini Robotics-ER, provides robots with a deeper spatial understanding, allowing them to reason and run programs with greater autonomy, expanding their capabilities.

These models cater to all types of robots, including humanoids and industrial robots, which are increasingly being adopted in warehouses and factories. Google emphasized that its AI models are designed to help startups reduce costs and speed up product development, which is crucial in a market where robotics innovation is moving quickly.

Google’s AI models have been tested on its ALOHA 2 bi-arm robotics platform but are versatile enough to be customized for other robots, such as Apptronik’s Apollo humanoid robot. Apptronik recently raised $350 million to scale production of its AI-powered robots, with Google participating in the funding round alongside other investors.

Though Google once owned the robotics firm Boston Dynamics, known for its advanced robot designs, it sold the company to SoftBank Group in 2017. However, the launch of these new AI models shows Google’s continued interest and involvement in the robotics space.

AI to Fuel Record Year for M&A in U.S. Power Sector

Dealmakers anticipate that 2025 will be a record year for mergers and acquisitions (M&A) in the U.S. power sector, driven by the surging demand for electricity to support artificial intelligence (AI). This growing appetite for power generation and infrastructure assets is fueled by the massive energy needs of data centers that power AI technologies.

According to sources in the industry and at the CERAWeek energy conference in Houston, the first two months of 2025 have already seen significant deal-making activity, with 27 power deals valued at $36.4 billion. A standout transaction was Constellation Energy’s acquisition of Calpine for $16.4 billion. This surge in deal volume contrasts sharply with the broader M&A market, which has experienced its weakest start since the global financial crisis.

Power sector deal flow is expected to increase as companies race to meet growing electricity consumption. Private equity firms and institutional investors such as KKR and PSP Investments are actively pursuing investments; their $2.8 billion acquisition of a 20% stake in American Electric Power’s (AEP) transmission network ranks among the major recent deals. Strong electricity price increases have boosted the shares of power companies, enabling larger transactions.

The influx of capital into energy investments is substantial, with $334 billion in dry powder (capital raised but not yet deployed) by the end of 2024. Much of this capital is earmarked for investments in power generation, infrastructure technologies, and renewable energy projects. These funds are also fueling the increasing trend of taking public power companies private, as seen in the $2.2 billion sale of Altus Power to TPG’s climate investment arm.

The demand for power infrastructure has also driven utilities to divest non-core business units. In early 2025, Eversource Energy agreed to sell its Aquarion Water unit for $2.4 billion, while National Grid announced the sale of its U.S. renewables business to Brookfield Asset Management.

Despite challenges, such as rising costs for essential components like steel, aluminum, and copper, and uncertainties around tax credits for renewable projects, the deal-making momentum in the power sector is expected to continue. Market volatility, including potential impacts from Trump administration policies and immigration reform, will likely make existing power assets even more valuable, spurring more deals.

Meta Tests Its First In-House AI Training Chip

Meta, the parent company of Facebook, has initiated testing of its first in-house chip designed specifically for training artificial intelligence (AI) systems. This development marks a significant step in Meta’s plan to reduce its reliance on external chip suppliers like Nvidia and move toward producing its own custom silicon. Sources told Reuters that Meta has begun a small deployment of the chip and plans to expand production if the test proves successful.

Meta’s push to develop in-house chips is part of a broader strategy to reduce the high infrastructure costs associated with its AI projects. The company has forecast total 2025 expenses between $114 billion and $119 billion, including up to $65 billion in capital expenditure largely driven by investments in AI infrastructure.

The new chip is a dedicated accelerator, meaning it is built specifically for AI tasks, making it more power-efficient than the graphics processing units (GPUs) typically used for AI workloads. Meta is collaborating with Taiwan-based TSMC to produce the chip. The design has reached “tape-out,” the stage at which a completed design is sent to the manufacturer, a crucial milestone in chip development. While a tape-out is expensive, costing tens of millions of dollars, it is an essential step toward testing the chip’s functionality.

Meta has experienced setbacks with its Meta Training and Inference Accelerator (MTIA) series in the past, even scrapping one chip after its initial tests failed. However, last year, Meta began using an MTIA inference chip for content recommendation systems on platforms like Facebook and Instagram. This progress has encouraged Meta to pursue further development of custom chips, aiming to use them for both training and inference of AI models, including generative AI products like Meta AI.

Meta plans to start using its own chips by 2026 for training purposes, aiming to reduce costs associated with AI model training. Chris Cox, Meta’s Chief Product Officer, discussed the company’s phased approach, noting that while progress has been slow, the success of the first-generation inference chip for recommendations has been a significant achievement. Despite the setbacks in developing custom chips, Meta continues to rely heavily on Nvidia’s GPUs for its AI needs, making it one of Nvidia’s largest customers.

The broader AI industry has raised questions about the effectiveness of scaling up large language models with ever more data and computing power. Chinese startup DeepSeek has introduced new, more efficient AI models that lean more heavily on inference than on the computationally expensive training process. This has sparked concerns about the future value of GPUs like those from Nvidia, which have faced significant market volatility this year.