Articles

Right-Wing Media Figures and AI Pioneers Unite to Call for Superintelligent AI Ban

A coalition of U.S. right-wing media figures and AI pioneers has issued a joint statement urging a global ban on developing superintelligent artificial intelligence, warning that progress toward machines exceeding human cognition must halt until society can ensure safety and democratic oversight.

The initiative, announced Wednesday by the Future of Life Institute (FLI), includes signatures from Steve Bannon, Glenn Beck, and tech luminaries Geoffrey Hinton and Yoshua Bengio—two of the so-called “godfathers of AI.” The non-profit, founded in 2014 and initially supported by Elon Musk and tech investor Jaan Tallinn, has long advocated for responsible AI development and limits on advanced machine intelligence.

The statement calls for governments worldwide to prohibit the creation of AI systems capable of surpassing human intelligence until “science shows a safe way forward” and “the public demands it.” It argues that current AI development races are reckless and could produce technologies that threaten human autonomy, stability, and safety.

The unusual alliance between conservative media figures and leading scientists highlights the broadening political and cultural anxiety surrounding AI’s rapid evolution. It also reflects growing skepticism on the populist right, where some commentators have warned that unchecked AI could concentrate power in corporate and political elites.

While many in the technology industry and the U.S. government have dismissed calls for AI moratoriums as harmful to innovation and economic competitiveness, the involvement of influential figures like Bannon, Beck, and Apple co-founder Steve Wozniak could amplify public debate. Other signatories include former Irish President Mary Robinson and Virgin Group founder Richard Branson.

Supporters of the ban say the move is not anti-technology but a precautionary measure. “The race to build superintelligent AI must not outpace our ability to control it,” said an FLI spokesperson. “Without democratic input and safety guarantees, the risks are existential.”

The statement follows a broader series of warnings from experts and public figures, including Musk and OpenAI co-founder Sam Altman, who have both urged the creation of global AI safety frameworks.

MI5 chief warns AI could pose future security risks, but dismisses “Hollywood doom”

The head of Britain’s domestic intelligence agency, MI5, has warned that artificial intelligence systems acting independently of human oversight could one day pose serious national security challenges — though he dismissed notions of a “Terminator”-style apocalypse.

In his annual speech on national threats, MI5 Director General Ken McCallum said that while AI is already being used to strengthen British security operations, it is also being exploited by terrorists, hostile states, and cybercriminals. He said AI tools are helping adversaries spread propaganda, conduct reconnaissance, and manipulate elections.

“But in 2025, while contending with today’s threats, we also need to scope out the next frontier: potential future risks from non-human, autonomous AI systems which may evade human oversight and control,” McCallum said.

He emphasized that his warning was not a prediction of science-fiction-style catastrophe, but a call for preparedness as AI technology rapidly evolves. “Given the risk of hype and scare-mongering, I will choose my words carefully. I am not forecasting Hollywood movie scenarios,” he noted.

McCallum added that while AI systems may never intend harm, ignoring their potential dangers would be “reckless.” MI5 and other intelligence agencies are studying the long-term implications of increasingly autonomous systems.

The remarks reflect a broader debate within global intelligence and tech circles about balancing the benefits of AI innovation with the risks of automation and loss of control over powerful systems.

Global regulators step up oversight of AI risks in finance

Global financial watchdogs are intensifying their scrutiny of artificial intelligence (AI) in the banking sector, warning that heavy reliance on shared AI systems could threaten financial stability. As the use of AI accelerates across global markets, regulators are moving to monitor systemic risks and strengthen their own technological capabilities.

In a report published Friday, the Financial Stability Board (FSB) — which advises G20 governments — said widespread adoption of the same AI models and infrastructure could create “herd-like behaviour” across financial institutions. “This heavy reliance can create vulnerabilities if there are few alternatives available,” the FSB cautioned, warning that such concentration could amplify shocks during market stress.

A separate study by the Bank for International Settlements (BIS) urged regulators and central banks to “raise their game” in monitoring and using AI. The BIS said authorities must not only understand AI’s potential to reshape markets but also adopt the technology themselves to improve supervision and data analysis.

The report comes amid an international race — led by the United States and China — to dominate next-generation AI tools and applications, including those that underpin financial services.

While the FSB said there is currently “little empirical evidence” that AI-driven correlations have directly impacted market outcomes, it warned that AI could increase exposure to cyberattacks and algorithmic fraud.

Some jurisdictions have already acted. The European Union’s Digital Operational Resilience Act (DORA), which took effect in January 2025, establishes new rules for digital and AI-based systems used by financial institutions.

The emerging consensus among regulators is clear: AI promises efficiency and insight, but without vigilant oversight, it could become a new source of systemic risk in global finance.