Right-Wing Media Figures and AI Pioneers Unite to Call for Superintelligent AI Ban

A coalition of U.S. right-wing media figures and AI pioneers has issued a joint statement urging a global ban on developing superintelligent artificial intelligence, warning that progress toward machines exceeding human cognition must halt until society can ensure safety and democratic oversight.

The initiative, announced Wednesday by the Future of Life Institute (FLI), includes signatures from Steve Bannon, Glenn Beck, and AI pioneers Geoffrey Hinton and Yoshua Bengio, two of the so-called “godfathers of AI.” The non-profit, founded in 2014 and initially supported by Elon Musk and tech investor Jaan Tallinn, has long advocated for responsible AI development and limits on advanced machine intelligence.

The statement calls for governments worldwide to prohibit the creation of AI systems capable of surpassing human intelligence until “science shows a safe way forward” and “the public demands it.” It argues that the current race to develop ever more powerful AI is reckless and could produce technologies that threaten human autonomy, stability, and safety.

The unusual alliance between conservative media figures and leading scientists highlights the broadening political and cultural anxiety surrounding AI’s rapid evolution. It also reflects growing skepticism on the populist right, where some commentators have warned that unchecked AI could concentrate power in corporate and political elites.

While many in the technology industry and the U.S. government have dismissed calls for AI moratoriums as harmful to innovation and economic competitiveness, the involvement of influential figures such as Bannon, Beck, and Apple co-founder Steve Wozniak could amplify public debate. Other signatories include former Irish President Mary Robinson and Virgin Group founder Richard Branson.

Supporters of the ban say the move is not anti-technology but a precautionary measure. “The race to build superintelligent AI must not outpace our ability to control it,” said an FLI spokesperson. “Without democratic input and safety guarantees, the risks are existential.”

The statement follows a broader series of warnings from experts and public figures, including Musk and OpenAI co-founder Sam Altman, who have both urged the creation of global AI safety frameworks.