Articles

Experts Divided Over Whether AI Boom Is the Next Big Bubble

The record-breaking wave of artificial intelligence investments has sparked fierce debate across global markets, with opinions divided over whether the sector is inflating into a bubble reminiscent of the early 2000s dot-com frenzy.

According to Bank of America Global Research, 54% of surveyed fund managers now believe AI stocks are in a bubble, compared to 38% who disagree. The discussion has gained urgency as companies pour hundreds of billions into AI infrastructure, data centers, and startups, pushing valuations to new extremes.

The Bank of England warned that a sharp market correction tied to fading AI optimism could ripple through the global financial system. “The risk of a sharp market correction has increased,” its Financial Policy Committee said in an October update.

Singapore’s GIC investment chief Bryan Yeo also described “a little bit of a hype bubble” in the venture space, saying startups labeled as AI firms are being valued “at huge multiples” of modest revenue.

Amazon founder Jeff Bezos offered a nuanced view, saying industrial bubbles often leave lasting benefits even if many investors lose money. “When the dust settles and you see who are the winners, society benefits from those inventions,” he said.

Others, such as Goldman Sachs economist Joseph Briggs and ABB CEO Morten Wierod, argue the AI investment surge remains justified given long-term potential — though both caution about bottlenecks in infrastructure and human resources.

By contrast, Michael Burry — famed for predicting the 2008 financial crisis — has bet against high-flying AI stocks like Nvidia and Palantir, warning that the boom mirrors past speculative manias.

IMF chief economist Pierre-Olivier Gourinchas agreed that a correction could come but emphasized it would likely be contained. “This is not financed by debt,” he said, meaning any fallout would primarily hurt equity investors.

OpenAI CEO Sam Altman echoed that sentiment, admitting that investors may be “overexcited” and predicting that “someone is going to lose a phenomenal amount of money.”

Yet, UBS strategists note that even among those who believe in an AI bubble, about 90% are still invested — a sign of the sector’s magnetic pull despite growing caution.

OpenAI to Offer UK Data Residency Through Government Partnership

OpenAI is introducing a new UK data residency option, allowing businesses and government bodies to store their data locally. The initiative, officially announced by Deputy Prime Minister David Lammy, stems from a partnership between OpenAI and the UK Ministry of Justice (MoJ). It aims to enhance privacy, cybersecurity, and national resilience while unlocking greater potential for AI innovation across the public sector.

Lammy highlighted how AI is already transforming operations within the MoJ. Over 1,000 probation officers will use “Justice Transcribe,” an AI-powered tool that records and transcribes conversations, cutting administrative time and improving efficiency. “By adopting AI, we’re freeing up staff to focus on what truly matters—protecting the public,” Lammy said.

OpenAI CEO Sam Altman noted a fourfold increase in UK users over the past year and expressed excitement about how local businesses are leveraging AI for productivity gains. The UK data residency option will be available for customers using OpenAI’s API Platform, ChatGPT Enterprise, and ChatGPT Edu. The move comes as OpenAI continues to expand its product ecosystem, recently launching ChatGPT Atlas, an AI-driven browser designed to transform online search.

India Proposes Tough AI Labelling Rules to Curb Deepfakes and Misinformation

India’s government has unveiled draft regulations requiring artificial intelligence and social media platforms to clearly label AI-generated content, in a sweeping effort to combat deepfakes and misinformation amid rising concerns over the technology’s misuse.

The proposed rules, released Wednesday by the Ministry of Electronics and Information Technology, would compel companies such as OpenAI, Google, Meta, and X to include visible AI markers covering at least 10% of a video or image’s surface area, or the first 10% of an audio clip’s duration, to indicate that the material was artificially created.

India — home to nearly 1 billion internet users — has faced an explosion of AI-generated deepfakes and false information, particularly during elections, in a country already divided along ethnic and religious lines. Officials warn that manipulated videos and fake news could incite violence and erode public trust.

Under the proposal, platforms must also ask users to declare whether their uploads are AI-generated and introduce technical safeguards to verify authenticity. The ministry said the rules aim to ensure “visible labelling, metadata traceability, and transparency for all public-facing AI media.”

The government cited a growing threat from generative AI tools capable of impersonating individuals, spreading propaganda, or manipulating elections. “The potential for harm has grown significantly,” it said in a statement inviting public and industry feedback by November 6.

Legal experts noted that the new labelling rule is one of the first in the world to set a quantifiable visibility standard. Dhruv Garg, founding partner of the Indian Governance and Policy Project, said it would require AI platforms to develop automated detection and tagging systems that identify synthetic content at the moment of creation.

The issue has already reached India’s courts. Bollywood actors Abhishek Bachchan and Aishwarya Rai Bachchan recently sued to block AI-generated videos using their likenesses, while challenging YouTube’s AI training policies.

India’s fast-growing digital landscape has made it a major market for AI firms. OpenAI CEO Sam Altman said in February that the country is the company’s second-largest market by user numbers, which have tripled in the past year.