OpenAI Launches Open-Weight Reasoning Models Optimized for Laptop Use
OpenAI announced on Tuesday the release of two open-weight language models designed for advanced reasoning tasks and optimized to run efficiently on laptops, delivering performance comparable to its smaller proprietary reasoning models. Unlike fully open-source models, open-weight models provide publicly accessible trained parameters (weights) but do not include full source code or training data, allowing developers to run and fine-tune them locally or behind their own firewalls.
OpenAI co-founder Greg Brockman highlighted that the ability to operate these models locally offers users greater control over security and infrastructure. The two models, gpt-oss-120b and gpt-oss-20b, differ in size: the larger model runs on a single GPU, while the smaller one can run directly on personal computers. Both excel at coding, competitive mathematics, and health-related questions, having been trained on text-focused datasets with an emphasis on science and math.
Separately, Amazon Web Services (AWS) announced that OpenAI’s open-weight models are now available on its Bedrock generative AI marketplace—a first for OpenAI on the platform. Bedrock director Atul Deo praised the models as strong open-weight options for AWS customers.
This launch marks OpenAI’s first release of open models since GPT-2 in 2019, entering a competitive landscape that includes Meta’s Llama series and China’s DeepSeek-R1, both of which have influenced open-weight and open-source AI development trajectories this year.
OpenAI, backed by Microsoft and valued at around $300 billion, is currently seeking to raise up to $40 billion in a funding round led by SoftBank Group.