Articles

IBM’s ‘Loon’ Chip Marks Major Step Toward Practical Quantum Computers by 2029

IBM has unveiled a new experimental quantum computing chip, dubbed “Loon,” that the company says achieves a critical milestone toward building useful, error-corrected quantum computers by 2029.

Quantum computers hold the potential to solve complex problems in chemistry, physics, and logistics that would take traditional supercomputers thousands of years to complete. However, the fragile quantum states that power these machines are notoriously prone to errors — a challenge that has long stood in the way of practical applications.

To address this, IBM in 2021 proposed an innovative approach to error correction, adapting algorithms originally developed to improve cellphone signal reliability. The method uses a hybrid system combining quantum and classical chips to stabilize qubits — the basic units of quantum computation.
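The core idea, encoding information redundantly so a classical decoder can undo errors, can be sketched with a toy repetition code. This is a deliberate simplification: the cellphone-derived codes referenced above belong to the far more efficient LDPC family, and real qubits also suffer phase errors that a bit-level toy cannot catch.

```python
# Toy illustration of classical decoding stabilizing noisy bits.
# NOTE: this is a 3-bit repetition code, not IBM's actual scheme.
from collections import Counter

def encode(bit: int) -> list[int]:
    """Protect one logical bit by storing three redundant copies."""
    return [bit] * 3

def decode(copies: list[int]) -> int:
    """Majority vote recovers the logical bit despite a single flip."""
    return Counter(copies).most_common(1)[0][0]

# One physical copy flips (noise), yet the logical bit survives:
noisy = encode(1)
noisy[1] ^= 1          # flip the middle copy: [1, 0, 1]
assert decode(noisy) == 1
```

The hybrid system described above works analogously at a much larger scale: classical chips run the decoding step continuously while the quantum chip holds the encoded state.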

According to Jay Gambetta, IBM Research director and IBM Fellow, the Loon chip was fabricated at the Albany NanoTech Complex in New York, using the same advanced semiconductor tools found in cutting-edge commercial fabs.

“Loon remains in early stages,” Gambetta said, “but it demonstrates a critical step toward error-corrected quantum computing that can outperform classical systems.”

IBM also introduced another chip, “Nighthawk,” which will be made available by the end of this year. The company expects Nighthawk to surpass classical computers on specific tasks by late 2026.

Analyst Mark Horvath of Gartner called the new design “very clever,” noting that the inclusion of quantum interconnections between qubits makes the chips harder to build but exponentially more capable.

IBM plans to make Nighthawk’s code openly available to researchers and startups, fostering a community-driven testing model to validate claims of quantum advantage — when quantum systems outperform classical ones.

Intel Unveils New AI Data Center Chip "Crescent Island" to Relaunch AI Ambitions

Intel has announced plans to launch a new artificial intelligence chip for data centers next year, marking a renewed effort to reclaim ground in the booming AI hardware market dominated by Nvidia and AMD.

The new GPU, named Crescent Island, will prioritize energy efficiency and be optimized for AI inference workloads, Intel Chief Technology Officer Sachin Katti said at the Open Compute Summit on Tuesday. “It emphasizes our focus on inference, optimized for AI, and for delivering the best performance per dollar,” Katti said.

The announcement represents Intel’s latest bid to reenter the AI race after CEO Lip-Bu Tan pledged to restart the company’s stalled AI programs, including the Gaudi and Falcon Shores lines. Despite trailing competitors, Intel hopes to capture a meaningful share of the rapidly expanding data center market fueled by generative AI adoption since ChatGPT’s 2022 debut.

Crescent Island will feature 160 gigabytes of memory, though that memory will be slower than the high-bandwidth memory (HBM) used in AMD's and Nvidia's top-tier AI chips. The chip will be based on Intel's existing consumer GPU architecture, underscoring the company's modular approach that allows customers to mix and match chips from multiple vendors.

Intel also committed to releasing new data center AI chips annually, matching the cadence of rivals AMD, Nvidia, and major cloud providers developing their own silicon.

The move follows Nvidia's $5 billion investment in Intel, which gave Nvidia a 4% stake and launched a partnership to co-develop future AI and PC chips. Katti said the collaboration aims to ensure Intel CPUs remain integrated into AI systems worldwide as the company seeks to position itself as an indispensable player in next-generation computing.

AI Chipmaker Cerebras Withdraws U.S. IPO Filing After $1.1 Billion Fundraising Round

Cerebras Systems, the California-based AI chip startup seen as one of the most promising challengers to Nvidia, has withdrawn its planned U.S. initial public offering (IPO), according to a regulatory filing on Friday. The decision takes effect immediately and comes just days after the company closed a massive $1.1 billion funding round.

The move surprised some investors given that U.S. IPO activity has recently rebounded sharply, buoyed by surging enthusiasm for AI-related stocks. Recent debuts, such as Fermi’s data center REIT listing, have drawn strong investor demand, reversing a slump caused by trade-policy and market uncertainty earlier in the year.

Analysts said the withdrawal likely reflects strategic timing rather than weak market sentiment. “Given that Cerebras just very recently completed a sizeable fund raise, it is of no surprise that they are holding off to pursue the IPO at this time,” said Josef Schuster, CEO of IPO research firm IPOX.

Cerebras’ latest financing round—led by Fidelity Management & Research and Atreides Management—valued the company at $8.1 billion and included participation from Tiger Global, Valor Equity Partners, and 1789 Capital, a fund partially linked to Donald Trump Jr.

Despite withdrawing the IPO filing, CEO Andrew Feldman emphasized that the company still intends to go public eventually. “We’re continuing to execute on our roadmap,” he said earlier in the week, noting that Cerebras’ focus remains on scaling production and commercialization of its high-performance AI chips designed to accelerate the training of large models.

The company had initially filed for a Nasdaq listing last year, but the process was delayed by a U.S. national security review of a $335 million investment from G42, an Abu Dhabi-based cloud and AI firm. That review reportedly examined potential concerns about foreign influence and technology transfer.

Industry observers view Cerebras’ decision as a pause, not a retreat. “This is more a company-specific strategic decision and does not tell us anything about the state of U.S. IPO sentiment, which we view as exceptionally strong,” Schuster added.

Founded in Sunnyvale, California, Cerebras Systems specializes in ultra-large AI processors and computing systems, including its flagship Wafer Scale Engine (WSE), a chip designed to massively outperform traditional GPUs in AI workloads. The company has become a key player in the rapidly expanding AI hardware ecosystem—one now defined by fierce competition, colossal valuations, and geopolitical scrutiny.