Articles

Google Integrates SandboxAQ’s Quantitative AI Models into Cloud Services

Google Cloud has expanded its offerings by integrating SandboxAQ’s large quantitative models (LQMs), designed to process complex numerical data and perform advanced statistical analysis. This move highlights the growing interest of cloud providers in AI technology as a key driver of future growth.

Key Points:

  • Partnership with SandboxAQ: Quantum startup SandboxAQ has announced that its LQMs will be available on Google Cloud, making it easier for businesses to use and deploy these models. SandboxAQ, a spin-off of Google-parent Alphabet, is seeking to expand its reach and customer base through this collaboration.
  • Capabilities of LQMs: The models are designed to handle large-scale datasets and perform intricate calculations, ideal for creating advanced financial models, automating trading strategies, and addressing complex business problems. These models are particularly useful in industries like life sciences, financial services, and navigation.
  • Quantum AI Synergy: According to SandboxAQ CEO Jack Hidary, quantitative AI is essential for many sectors of the economy, especially where mathematical and quantitative relationships are fundamental. He emphasized the complementary nature of quantitative AI and language models in solving complex challenges.
  • SandboxAQ’s Growth: In the previous month, SandboxAQ raised $300 million in funding, which boosted its valuation to $5.6 billion. The company is backed by prominent investors including Fred Alger Management, T. Rowe Price, and Breyer Capital.
  • Broader Industry Impacts: Google’s push into quantum computing, including progress on new quantum chips, is seen as part of its broader strategy to lead in this emerging field. Competitors such as Microsoft and Nvidia have also been active in exploring quantum computing, although practical applications are still seen as years away.

Nvidia Unveils New Robotics, Gaming Chips, and Toyota Deal at CES 2025

At CES 2025, Nvidia CEO Jensen Huang revealed several groundbreaking products, showcasing the company’s ambitions to expand its business across robotics, gaming, and automotive technology. The announcements highlighted innovations in AI, gaming chips, and collaborations, including a new deal with Toyota.

One of the key highlights was the introduction of Nvidia’s Cosmos foundation models, which use artificial intelligence to generate photo-realistic video for robot and self-driving car training. By creating “synthetic” training data, these models simulate physical environments much more affordably than traditional data collection methods. Unlike the typical approach of placing cars on the road or having humans demonstrate tasks, Cosmos can generate videos based on a text description, adhering to the laws of physics. The models will be made available on an “open license,” much like Meta Platforms’ Llama 3 language models, which have seen widespread use in the tech industry. Huang expressed hopes that Cosmos could revolutionize robotics and industrial AI similarly to the impact Llama 3 has had on enterprise AI.

Despite the excitement, analysts, including Vivek Arya from Bank of America, raised concerns about whether the new robotics technology would substantially boost Nvidia’s sales. Arya questioned whether the products could be made reliable and affordable enough to support viable business models, likening the opportunity to the niche markets of autonomous vehicles and the metaverse.

In addition to robotics, Nvidia unveiled new gaming chips, part of the RTX 50 series, that use Nvidia’s Blackwell AI technology. These chips aim to enhance gaming graphics, particularly through ‘shaders’ that add realistic imperfections to objects in video games, such as fingerprint smudges on surfaces. The new chips are also designed to improve the realism of human faces, which is a critical area of focus for developers. Prices for the chips range from $549 to $1,999, with the high-end models set to launch on January 30, followed by lower-tier models in February. Analysts, including Ben Bajarin of Creative Strategies, expect these chips to drive short-term sales growth for Nvidia.

Nvidia also debuted its first desktop computer, Project DIGITS, which is designed for software developers rather than regular consumers. Priced at $3,000, the computer runs on Nvidia’s Linux-based operating system and includes the same AI chip used in the company’s data center products. The desktop, which features a central processor co-designed with Taiwan’s MediaTek, is expected to help individual developers quickly test their AI systems. Project DIGITS will be available in March.

Additionally, Huang announced that Toyota Motor will integrate Nvidia’s Orin chips and automotive operating system into several of its models to power advanced driver assistance features. Although the company did not specify which models would feature the technology, the partnership signifies a growing presence in the automotive sector. Nvidia projects automotive hardware and software revenue will reach $5 billion by fiscal 2026, up from an expected $4 billion in the current year.

Nvidia’s stock surged to a record high of $149.43, increasing its market valuation to $3.66 trillion, making it the second-most valuable listed company in the world, behind Apple.

US Implements New AI Chip Regulation to Control Global Access

The U.S. government has introduced a new regulation to restrict global access to U.S.-designed artificial intelligence (AI) chips and technology. This regulation targets the export of advanced graphics processing units (GPUs), essential for building AI models, and aims to ensure that cutting-edge AI capabilities are developed and deployed securely and in trusted environments.

Which Chips Are Restricted?

The regulation focuses on GPUs, which were initially created to accelerate graphics rendering but have become critical for AI due to their ability to process large amounts of data simultaneously. U.S. companies, particularly Nvidia, dominate the production of these chips. GPUs like Nvidia’s H100 are used extensively in training advanced AI models, such as OpenAI’s ChatGPT.

What Is the U.S. Doing?

To regulate global access, the U.S. is extending restrictions on advanced GPUs, specifically those used in AI training clusters. The new rule sets limits based on compute power, measured by Total Processing Performance (TPP). For most countries, the cap is set at 790 million TPP until 2027, equivalent to roughly 50,000 H100 GPUs. These restrictions are meant to control access to the computing power required for large-scale AI research and applications.
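The relationship between the TPP cap and the GPU count quoted above can be checked with a quick back-of-the-envelope calculation. A minimal sketch, assuming a per-GPU TPP of roughly 15,800 for an H100 (an approximate figure used here only for illustration):

```python
# Back-of-the-envelope check: how many H100-class GPUs fit under the cap?
COUNTRY_CAP_TPP = 790_000_000   # per-country cap through 2027, per the rule
TPP_PER_H100 = 15_800           # approximate TPP of one H100 (illustrative)

gpus_under_cap = COUNTRY_CAP_TPP // TPP_PER_H100
print(f"Roughly {gpus_under_cap:,} H100-equivalent GPUs under the cap")
# → Roughly 50,000 H100-equivalent GPUs under the cap
```

With these assumed numbers the cap works out to exactly the roughly 50,000 H100-equivalents the rule is reported to allow.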

However, certain companies, such as Amazon Web Services and Microsoft Azure, that meet the requirements for special authorizations (called “Universal Verified End User” status) are exempt from these caps. Additionally, entities granted “National Verified End User” status are allowed more advanced GPUs, roughly 320,000 over the next two years.

Exceptions to Licensing

There are exceptions for small GPU orders, such as those for universities or research institutions. Orders totaling no more than 1,700 H100-class chips require only government notification and do not count toward the caps. This exception is designed to keep AI technology flowing globally for low-risk purposes.

GPUs intended for gaming are also excluded from the restrictions, ensuring that the gaming sector remains unaffected by the new rules.

Which Places Can Get Unlimited AI Chips?

Eighteen countries are exempt from the country-specific caps on GPUs, including Australia, Canada, Japan, South Korea, Taiwan, and several European Union members. This list reflects nations the U.S. considers aligned in terms of AI development and security.

What Is Being Done with ‘Model Weights’?

In addition to GPUs, the U.S. is regulating “model weights,” the numerical parameters an AI model learns during training. Because these weights effectively encode a trained model’s capabilities, they are considered sensitive information. The new rule establishes security requirements for storing and handling them, so that only trusted entities manage the most advanced AI systems.
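To make “model weights” concrete: they are simply the arrays of learned numbers a trained network stores. A minimal sketch with NumPy, using toy layer sizes and random values standing in for trained parameters:

```python
import numpy as np

# Toy two-layer network. The arrays w1 and w2 are the "model weights":
# in a real model they would be the values learned during training.
rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 8))   # layer 1: 4 inputs -> 8 hidden units
w2 = rng.normal(size=(8, 2))   # layer 2: 8 hidden units -> 2 outputs

x = rng.normal(size=(1, 4))           # one input example
hidden = np.maximum(x @ w1, 0.0)      # ReLU activation
output = hidden @ w2                  # model's raw predictions

print(output.shape)  # (1, 2)
```

The point of the regulation is that the arrays themselves, not the surrounding code, are the sensitive artifact: whoever holds the weights of a frontier model effectively holds the model.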

Conclusion

The U.S. regulation reflects growing concerns over AI technology’s potential misuse and aims to ensure its responsible development. By controlling the flow of critical AI resources like GPUs and model weights, the U.S. seeks to maintain dominance in the AI field while preventing sensitive technology from reaching adversarial nations.