
OpenAI releases two open weight reasoning models

OpenAI is joining the open weight model game with the launch of gpt-oss-120b and gpt-oss-20b. 

gpt-oss-120b is optimized for production, high-reasoning use cases, while gpt-oss-20b is designed for lower-latency or local use cases. 

According to the company, these open models are comparable to its closed models in terms of performance and capability, but at a much lower cost. For example, gpt-oss-120b running on an 80 GB GPU achieved similar performance to o4-mini on core reasoning benchmarks, while gpt-oss-20b running on an edge device with 16 GB of memory was comparable to o3-mini on several common benchmarks. 

“Releasing gpt-oss-120b and gpt-oss-20b marks a significant step forward for open-weight models,” OpenAI wrote in a post. “At their size, these models deliver meaningful advancements in both reasoning capabilities and safety. Open models complement our hosted models, giving developers a wider range of tools to accelerate leading edge research, foster innovation and enable safer, more transparent AI development across a wide range of use cases.”

The new open models are ideal for developers who want to customize and deploy models in their own environment, while developers looking for multimodal support, built-in tools, and integration with OpenAI’s platform would be better served by the company’s closed models. 

Both new models are available under the Apache 2.0 license, are compatible with OpenAI’s Responses API, can be used within agentic workflows, and provide full chain-of-thought.
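Because the models speak the same Responses API, existing client code can point at them with little change. A minimal sketch using the official OpenAI Python client, assuming the model is served behind a hypothetical OpenAI-compatible local endpoint and exposed under the name gpt-oss-20b:

```python
# Minimal sketch: calling an open-weight gpt-oss model through the OpenAI
# Python client. The base_url and model name are assumptions -- substitute
# whatever your local server actually exposes.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint
    api_key="not-needed-for-local",       # many local servers ignore the key
)

response = client.responses.create(
    model="gpt-oss-20b",
    input="Summarize what an open-weight reasoning model is in two sentences.",
)

print(response.output_text)
```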

According to OpenAI, these models were trained using its advanced pre- and post-training techniques, with a focus on reasoning, efficiency, and real-world usability in various types of deployment environments. 

Both models are available for download on Hugging Face and are quantized in MXFP4, enabling gpt-oss-120b to run with 80 GB of memory and gpt-oss-20b to run with 16 GB. OpenAI also created a playground for developers to experiment with the models online. 
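Fetching the weights from Hugging Face can be done with the huggingface_hub library. A minimal sketch, assuming the smaller model is published under the repository id openai/gpt-oss-20b (check the actual model page before running):

```python
# Minimal sketch: downloading model weights from Hugging Face.
# The repo_id is an assumption -- confirm the exact id on the model listing.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="openai/gpt-oss-20b",
    local_dir="./gpt-oss-20b",
)

print(f"Model files downloaded to {local_dir}")
```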

The company partnered with several deployment providers for these models, including Azure, vLLM, Ollama, llama.cpp, LM Studio, AWS, Fireworks, Together AI, Baseten, Databricks, Vercel, Cloudflare, and OpenRouter. It also worked with NVIDIA, AMD, Cerebras, and Groq to help ensure consistent performance across different systems. 
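For local inference, one of the listed providers such as vLLM can load the downloaded weights directly. A minimal sketch using vLLM's offline Python API, assuming the hypothetical repository id above and enough GPU memory for the 20b model:

```python
# Minimal sketch: running the model offline with vLLM.
# The model id and sampling settings are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(model="openai/gpt-oss-20b")
params = SamplingParams(max_tokens=256, temperature=0.7)

outputs = llm.generate(["Explain what MXFP4 quantization is."], params)
print(outputs[0].outputs[0].text)
```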

As part of the initial release, Microsoft will be providing GPU-optimized versions of the smaller model to Windows devices. 

“A healthy open model ecosystem is one dimension to helping make AI widely accessible and beneficial for everyone. We invite developers and researchers to use these models to experiment, collaborate and push the boundaries of what’s possible. We look forward to seeing what you build,” the company wrote.



