Articles

Oracle to Purchase $40 Billion Worth of Nvidia Chips for OpenAI’s US Data Center

Oracle is set to invest approximately $40 billion in purchasing Nvidia’s high-performance chips to support OpenAI’s new data center in the United States, according to a report by the Financial Times. This significant investment highlights the growing collaboration between cloud service providers and AI companies as they race to build advanced infrastructure to power next-generation artificial intelligence applications.

The new data center will be located in Abilene, Texas, and forms a critical part of the U.S. Stargate Project, a government-backed initiative involving leading AI companies aimed at strengthening America’s position in the global AI race. This project reflects the increasing emphasis on domestic AI capabilities amid intensifying competition with other countries developing their own AI technologies.

Oracle plans to acquire roughly 400,000 of Nvidia’s most advanced GB200 chips, which will be leased to OpenAI to provide the massive computing power required for AI workloads. While Oracle and OpenAI have not publicly commented on the deal, sources familiar with the arrangement confirmed the details to the Financial Times. Nvidia also declined to comment on the specifics.

The data center is expected to become fully operational by mid-2026, with Oracle securing a 15-year lease on the site. Financing for the project is backed primarily by JPMorgan, which has extended two loans totaling $9.6 billion, while the facility’s owners, Crusoe and Blue Owl Capital, have contributed approximately $5 billion in cash. This large-scale investment underscores the commitment of both private and public sectors to accelerate AI development on U.S. soil.

Anthropic CEO Dario Amodei Claims AI Models Experience Fewer Hallucinations Than Humans: Report

Anthropic CEO Dario Amodei recently stated that artificial intelligence (AI) models tend to hallucinate less frequently than humans do. This remark was made during the company’s first-ever Code With Claude event, held on Thursday. At this event, the San Francisco-based AI firm unveiled two new versions of its Claude 4 models, alongside several upgraded features such as enhanced memory and better tool integration. Amodei also commented on the skepticism surrounding AI development, suggesting that despite critics searching for obstacles, no significant barriers to AI progress have emerged so far.

During a press briefing reported by TechCrunch, Amodei elaborated on the nature of hallucinations in AI systems, explaining that these errors do not prevent AI from achieving artificial general intelligence (AGI). When asked about hallucinations, he said, “It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways.” This perspective highlights that while AI does make mistakes, the frequency might be lower than commonly assumed, though the mistakes can sometimes be unexpected.

Amodei also pointed out that errors are a common part of human activity, with TV presenters, politicians, and professionals making mistakes regularly. Therefore, the presence of errors in AI responses does not necessarily undermine its overall intelligence. Nonetheless, he acknowledged that the tendency of AI to confidently present false information remains a challenge. A recent incident highlighted this when Anthropic’s lawyer had to apologize in court after the company’s Claude chatbot generated an incorrect citation in a legal filing. The mishap took place in the ongoing lawsuit brought by music publishers against Anthropic over alleged copyright violations related to hundreds of song lyrics.

Looking ahead, Amodei remains optimistic about the future of AI. In a paper published in October 2024, he claimed that Anthropic could achieve artificial general intelligence as early as 2026. AGI represents a breakthrough form of AI capable of understanding, learning, and performing a broad spectrum of tasks autonomously, without human assistance. If realized, this development would mark a significant milestone in AI research and its practical applications.

MediaTek Unveils Hybrid AI Computing Strategy at Computex 2025

At Computex 2025, MediaTek unveiled a series of cutting-edge artificial intelligence (AI) innovations designed to power a broad range of devices and platforms. Central to its showcase was the company’s first hybrid computing solution, aimed at enhancing AI performance at the edge by combining on-device AI processing with high-speed 5G connectivity. This approach promises to deliver faster, low-latency AI experiences while ensuring data privacy, setting the stage for more responsive and secure AI applications in smart homes and beyond.

MediaTek’s hybrid computing concept integrates its AI-powered Fixed Wireless Access (FWA) gateway with advanced AI capabilities directly on the device. Currently in the prototype phase, this solution has already undergone successful proof-of-concept trials with a leading telecommunications infrastructure provider. The company demonstrated an AI Hub that acts as a centralized control center, enabling on-device AI agents to work together and serve as personal assistants. This technology aims to allow users to control multiple smart home devices simultaneously through natural language commands, making connected living more intuitive and efficient.

Beyond hybrid computing, MediaTek revealed new AI-focused chipsets targeting various markets. One notable announcement was a new chipset designed specifically for in-car infotainment systems, bringing enhanced AI-powered features such as voice recognition, personalized user interfaces, and real-time data processing to vehicles. This expansion into automotive AI aligns with MediaTek’s broader strategy to embed AI capabilities across diverse platforms, including Internet of Things (IoT) devices, which the company also highlighted at the event.

MediaTek also showcased its collaboration with Nvidia on the GB10 Grace Blackwell Superchip, touted as the world’s smallest AI supercomputer. Designed for the DGX Spark system, this superchip can deliver up to 1,000 tera operations per second (TOPS) and run AI models containing up to 200 billion parameters locally, a significant leap for on-premise AI processing. Collectively, these innovations underscore MediaTek’s commitment to advancing AI technology at multiple levels—from edge devices and smart homes to automotive and cloud computing—driving forward its vision of a seamlessly AI-powered future.