Ilya Sutskever Predicts Unpredictable AI With Advanced Reasoning Power

Ilya Sutskever, a prominent figure in artificial intelligence and co-founder of OpenAI, made a bold prediction during the NeurIPS conference in Vancouver on Friday: the rise of reasoning capabilities in AI will make the technology significantly less predictable.

Accepting the “Test of Time” award for his influential 2014 paper with Google’s Oriol Vinyals and Quoc Le, Sutskever reflected on the evolution of AI. He acknowledged that the idea his team explored a decade ago—scaling up data for pre-training AI systems—has fueled groundbreaking advancements like OpenAI’s ChatGPT, launched in 2022. However, he warned that this approach is nearing its limits.

“Pre-training as we know it will unquestionably end,” Sutskever stated, explaining that while computing power continues to grow, data availability is constrained. “We have but one internet,” he noted, highlighting the finite nature of online data.

Sutskever proposed potential solutions to this challenge. He suggested that AI itself could generate new training data, or that models could evaluate multiple candidate answers before selecting the best one, thereby improving accuracy (a rough sketch of this idea follows below). Other researchers, he noted, are exploring the integration of real-world data to expand AI’s capabilities.
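
To make the "evaluate multiple answers" idea concrete, here is a minimal best-of-N sampling sketch in Python. It is an illustration only, not Sutskever's or OpenAI's actual method: the generator and scorer (`generate_candidates`, `score_answer`) are hypothetical stand-ins for a language model and a verifier or reward model.

```python
# Minimal best-of-N sketch: sample several answers, keep the highest-scoring one.
# The generator and scorer below are toy placeholders, not any real model or API.
import random

def generate_candidates(prompt: str, n: int = 5) -> list[str]:
    """Stand-in for sampling n diverse answers from a language model."""
    return [f"{prompt} -> candidate answer #{i}" for i in range(n)]

def score_answer(answer: str) -> float:
    """Stand-in for a verifier/reward model that rates answer quality."""
    return random.random()  # a real scorer would check correctness or preference

def best_of_n(prompt: str, n: int = 5) -> str:
    """Sample n candidate answers and return the one the scorer rates highest."""
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=score_answer)

if __name__ == "__main__":
    print(best_of_n("What is 12 * 7?", n=5))
```

In practice, the value of this approach depends almost entirely on the quality of the scoring step: with a reliable verifier, spending more compute on candidates at inference time can substitute for some of the gains that previously came from ever-larger pre-training runs.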

Looking ahead, Sutskever envisioned a future of superintelligent machines with profound reasoning capabilities and self-awareness. These AI systems, he claimed, will reason through problems in ways similar to humans but will inherently become more unpredictable as a result.

“The more it reasons, the more unpredictable it becomes,” Sutskever explained, referencing how systems like DeepMind’s AlphaGo astounded experts during its historic match against Lee Sedol in 2016 with its unconventional and inscrutable moves. Similarly, advanced chess AIs often make decisions that defy human logic, even at the highest levels of play.

Sutskever also hinted at long-awaited advancements in AI agents, which he believes will eventually demonstrate deeper understanding and stronger problem-solving abilities. He emphasized that these developments will lead to AI systems that are “radically different” from current technology.

Sutskever’s comments come as he embarks on a new venture, Safe Superintelligence Inc., a company he co-founded this year after his role in the brief, controversial ousting of Sam Altman from OpenAI. He has since publicly expressed regret for his part in that decision.

While Sutskever’s outlook on AI’s future is ambitious, it also underscores a growing debate in the AI community. As reasoning power increases, the trade-off between predictability and capability raises questions about how these systems will integrate into society and impact human decision-making.