Articles

Nvidia and Foxconn in Talks to Deploy Humanoid Robots at New Houston AI Server Factory

Taiwanese electronics manufacturer Foxconn and U.S. AI chipmaker Nvidia are reportedly in talks to deploy humanoid robots at Foxconn’s upcoming factory in Houston, which will produce Nvidia AI servers. According to sources familiar with the discussions, this would be the first time Nvidia products are manufactured with the help of humanoid robots, and the first time Foxconn uses such robots on an AI server production line.

The planned deployment, expected to be finalized within months, represents a significant advancement in the use of human-like robots in manufacturing, potentially transforming factory processes. Foxconn is working on its own humanoid robots in collaboration with Nvidia and has also tested humanoids developed by China’s UBTech. Details on the specific types, appearances, and number of robots planned for the Houston facility remain unclear.

The goal is to have the humanoid robots operational by the first quarter of next year, coinciding with the start of production for Nvidia’s GB300 AI servers at the Houston plant. While precise tasks have not been confirmed, Foxconn has been training humanoid robots for activities such as picking and placing objects, cable insertion, and assembly, according to a May company presentation.

Foxconn’s Houston factory is particularly well suited to humanoid robot deployment because it is new and offers more space than existing AI server production sites. Nvidia and Foxconn declined to comment, and the sources spoke anonymously because they were not authorized to discuss the matter publicly.

At a recent event in Taipei, Leo Guo, general manager of Foxconn Industrial Internet’s robotics unit, revealed plans to showcase two humanoid robot models at Foxconn’s annual tech event in November—one with legs and another on a wheeled autonomous mobile robot base, the latter being the more cost-effective option.

Nvidia announced in April its plans to build AI supercomputer manufacturing plants in Texas, partnering with Foxconn in Houston and Wistron in Dallas, with production ramp-up expected within 12 to 15 months.

For Nvidia, integrating humanoid robots in AI server manufacturing signifies a deeper commitment to robotics technology, building on its existing platform that supports humanoid robot development. Nvidia CEO Jensen Huang has forecasted that widespread use of humanoid robots in manufacturing is less than five years away.

Several automakers, including Mercedes-Benz and BMW, have experimented with humanoid robots on production lines, while Tesla is developing its own. China also heavily supports humanoid robotics, anticipating that many factory tasks will eventually be carried out by these robots.

Honda-backed Helm.ai Unveils Vision System for Self-Driving Cars

Helm.ai, a California-based startup backed by Honda Motor, introduced its new camera-based urban environment interpretation system called Helm.ai Vision. The company is negotiating with multiple automakers to integrate its self-driving technology into mass-market vehicles.

Helm.ai is collaborating with Honda to embed the system in the upcoming 2026 Honda Zero series of electric vehicles, which will enable hands-free driving and allow drivers to take their eyes off the road.

CEO and founder Vladislav Voroninski told Reuters that the company’s business model centers on licensing this software, including foundation model software, to automakers. Helm.ai’s vision-first system mirrors Tesla’s approach, relying on cameras rather than additional sensors such as lidar or radar, which can add significant cost.

Voroninski acknowledged Helm.ai’s foundation models can work with other sensors but emphasized that the primary offering remains vision-centric. Industry experts, however, highlight that supplementary sensors such as lidar and radar are vital for safety, especially under poor visibility conditions.

In contrast, robotaxi companies like Alphabet’s Waymo and May Mobility use a sensor fusion approach combining radar, lidar, and cameras to ensure comprehensive environment perception.

Helm.ai has raised $102 million to date, with investors including Goodyear Ventures, Korean auto parts maker Sungwoo HiTech, and Amplo.

The Helm.ai Vision system merges inputs from multiple cameras to create a bird’s-eye view map that enhances vehicle planning and control. It is optimized for hardware platforms from Nvidia, Qualcomm, and others, facilitating automakers’ integration of Helm.ai Vision into existing vehicle systems.
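The article does not describe Helm.ai’s internals, but the basic geometry behind turning a forward-facing camera image into a bird’s-eye ground-plane map (inverse perspective mapping) can be sketched as follows. All camera parameters here are hypothetical, chosen only to illustrate the projection; a production system would fuse many calibrated cameras and learned features rather than single pixels.

```python
import numpy as np

# Hypothetical camera intrinsics: 800 px focal length, principal point (640, 360).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsics: camera 1.5 m above the ground, facing the world x-axis.
# Rows map world axes into camera axes (camera z forward, y down, x right).
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
cam_height = 1.5
t = -R @ np.array([0.0, 0.0, cam_height])  # camera center at (0, 0, 1.5) in world

def pixel_to_ground(u, v):
    """Intersect the viewing ray of pixel (u, v) with the ground plane z = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project the pixel
    ray_world = R.T @ ray_cam                           # rotate ray into world frame
    cam_center = -R.T @ t                               # camera position in world
    s = -cam_center[2] / ray_world[2]                   # scale so the ray hits z = 0
    return cam_center + s * ray_world

# A pixel below the principal point lands on the road some metres ahead.
pt = pixel_to_ground(640, 500)
```

Repeating this projection for every camera and accumulating the results on a common ground-plane grid is one simple way to build the kind of top-down map the system reportedly feeds into planning and control.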

Nvidia-Backed SandboxAQ Generates Synthetic Data to Accelerate Drug Discovery

Artificial intelligence startup SandboxAQ, spun out of Alphabet’s Google and backed by Nvidia, unveiled a large synthetic dataset designed to speed up drug discovery by improving predictions of how drugs bind to proteins. This crucial step helps scientists determine whether a drug candidate will effectively target biological processes involved in diseases.

Although the dataset is rooted in real-world experimental science, SandboxAQ created it computationally using Nvidia’s powerful chips rather than through lab experiments. By combining traditional scientific computing with advanced AI, the startup generated approximately 5.2 million new three-dimensional molecular structures that have not been observed naturally but are scientifically plausible based on existing data.

This synthetic data is being released publicly to train AI models that can rapidly and accurately predict drug-protein interactions, a calculation that would otherwise take far longer, even on the fastest computers. SandboxAQ plans to monetize its own AI models built on this data, offering a faster, more cost-effective alternative to lab experiments.

Nadia Harhen, SandboxAQ’s general manager of AI simulation, explained the breakthrough: “This is a long-standing problem in biology that the industry has been trying to solve. Our synthetic data is tagged with ground-truth experimental results, enabling models trained on this data to achieve unprecedented accuracy.”

The approach represents a promising intersection of scientific computation and AI, potentially accelerating the development of new medicines and improving outcomes in pharmaceutical research.