Description (Olmo 3)
Olmo 3 is a family of fully open models in 7-billion- and 32-billion-parameter sizes, released as Base, Instruct, Think (reasoning), and reinforcement-learning variants. Beyond the weights, the release exposes the entire development pipeline: raw training datasets, intermediate checkpoints, training scripts, and provenance tools, along with a 65,536-token context window. The models are pre-trained on Dolma 3, a corpus of roughly 9 trillion tokens blending web content, scientific papers, code, and long-form documents; pre-training, mid-training, and long-context stages produce the base models, which are then post-trained with supervised fine-tuning, preference optimization, and reinforcement learning with verifiable rewards to yield the Think and Instruct variants. The 32B Think model is positioned as the strongest fully open reasoning model to date, approaching proprietary counterparts on mathematics, coding, and complex reasoning tasks.
Description (Phi-4-reasoning-plus)
Phi-4-reasoning-plus is a 14-billion-parameter reasoning model that extends Phi-4-reasoning with an additional reinforcement-learning stage. It generates roughly 1.5 times as many inference tokens as its predecessor, trading some latency for improved accuracy. Across benchmarks in mathematical reasoning and advanced scientific question answering it outperforms both OpenAI's o1-mini and DeepSeek-R1, and on AIME 2025, a qualifier for the USA Math Olympiad, it even surpasses the far larger 671-billion-parameter DeepSeek-R1. The model is available on Azure AI Foundry and Hugging Face, making it easy for developers and researchers to adopt.
API Access
Has API
API Access
Has API
Integrations
Hugging Face
Microsoft Azure
Microsoft Foundry
Pricing Details
Free
Free Trial
Free Version
Pricing Details
No price information available.
Free Trial
Free Version
Deployment
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Deployment
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Customer Support
Business Hours
Live Rep (24/7)
Online Support
Customer Support
Business Hours
Live Rep (24/7)
Online Support
Types of Training
Training Docs
Webinars
Live Training (Online)
In Person
Types of Training
Training Docs
Webinars
Live Training (Online)
In Person
Vendor Details
Company Name
Ai2
Founded
2014
Country
United States
Website
allenai.org/blog/olmo3
Vendor Details
Company Name
Microsoft
Founded
1975
Country
United States
Website
azure.microsoft.com/en-us/blog/one-year-of-phi-small-language-models-making-big-leaps-in-ai/