Description (Phi-4-reasoning)
Phi-4-reasoning is a 14-billion-parameter open-weight transformer model built for complex reasoning tasks such as mathematics, coding, algorithm design, and planning. It was supervised fine-tuned on a curated set of "teachable" prompts paired with reasoning demonstrations generated using o3-mini, training it to produce detailed reasoning chains that make effective use of inference-time compute. An additional outcome-based reinforcement-learning stage enables it to generate longer reasoning traces. On many reasoning benchmarks it substantially outperforms much larger open-weight models such as DeepSeek-R1-Distill-Llama-70B and approaches the performance of the full DeepSeek-R1. The model is designed for environments with limited compute or tight latency budgets, delivering precise, step-by-step problem solving efficiently.
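A minimal client-side sketch of prompting such a reasoning model, assuming an OpenAI-style chat-completions interface. The system prompt, sampling parameters, and deployment name are illustrative assumptions, not values documented by Microsoft; only the Hugging Face model id `microsoft/Phi-4-reasoning` comes from the listing above.

```python
# Hypothetical sketch: building a chat-completions request for Phi-4-reasoning.
# Reasoning models emit long intermediate traces, so the token budget is
# deliberately generous. All parameter values are illustrative assumptions.

def build_reasoning_request(problem: str) -> dict:
    """Return an OpenAI-style chat payload asking the model to reason step by step."""
    return {
        "model": "microsoft/Phi-4-reasoning",  # Hugging Face model id
        "messages": [
            {
                "role": "system",
                "content": "You are a careful assistant. Think step by step before answering.",
            },
            {"role": "user", "content": problem},
        ],
        "max_tokens": 4096,   # room for the reasoning trace plus the final answer
        "temperature": 0.8,   # assumed sampling setting, not a documented default
    }

payload = build_reasoning_request("Prove that the sum of two even integers is even.")
```

The payload could then be posted to any endpoint hosting the model (e.g. an Azure deployment); the endpoint URL and authentication are deployment-specific and omitted here.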
Description (Sarvam-M)
Sarvam-M is a multilingual large language model that combines hybrid reasoning with strong performance on Indian languages, mathematics, and programming in a single model. Built on Mistral-Small with 24 billion parameters, it was refined through supervised fine-tuning, reinforcement learning with verifiable rewards, and inference optimizations to improve both accuracy and efficiency. The model is trained to handle ten or more major Indic languages, supporting native scripts, romanized text, and code-mixed input for smooth multilingual interaction across varied linguistic settings. Its hybrid reasoning design lets it switch between a deliberate "thinking" mode for demanding tasks such as mathematics, logic puzzles, and programming, and a fast response mode for everyday queries, balancing speed against quality.
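The mode switch described above can be sketched from a client's perspective. The `"mode"` field below is a hypothetical parameter name for toggling between the deep "thinking" path and the fast path, not Sarvam's documented API; only the Hugging Face model id `sarvamai/sarvam-m` is taken from the listing.

```python
# Sketch of Sarvam-M's hybrid reasoning as seen by a client.
# The "mode" field is a HYPOTHETICAL switch name, used here only to
# illustrate routing hard tasks to the slow reasoning path and everyday
# queries to the fast path.

def build_sarvam_request(text: str, deep_thinking: bool) -> dict:
    """Return an OpenAI-style chat payload with an assumed reasoning-mode flag."""
    return {
        "model": "sarvamai/sarvam-m",  # Hugging Face model id
        "messages": [{"role": "user", "content": text}],
        "mode": "think" if deep_thinking else "non-think",  # assumed flag name
    }

# Math and code benefit from the slow, deliberate mode...
math_req = build_sarvam_request("Solve 3x + 7 = 22 and show your steps.", deep_thinking=True)
# ...while casual queries take the fast path.
chat_req = build_sarvam_request("Suggest a name for a tea shop in Mumbai.", deep_thinking=False)
```

In practice the client (or a small classifier in front of it) decides per request which mode to ask for, trading latency for answer quality.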
API Access (Phi-4-reasoning)
Has API
API Access (Sarvam-M)
Has API
Integrations (Phi-4-reasoning)
Hugging Face
Microsoft Azure
Microsoft Foundry
Microsoft Foundry Models
Sarvam AI
Integrations (Sarvam-M)
Hugging Face
Microsoft Azure
Microsoft Foundry
Microsoft Foundry Models
Sarvam AI
Pricing Details (Phi-4-reasoning)
No price information available.
Free Trial
Free Version
Pricing Details (Sarvam-M)
No price information available.
Free Trial
Free Version
Deployment (Phi-4-reasoning)
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Deployment (Sarvam-M)
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Customer Support (Phi-4-reasoning)
Business Hours
Live Rep (24/7)
Online Support
Customer Support (Sarvam-M)
Business Hours
Live Rep (24/7)
Online Support
Types of Training (Phi-4-reasoning)
Training Docs
Webinars
Live Training (Online)
In Person
Types of Training (Sarvam-M)
Training Docs
Webinars
Live Training (Online)
In Person
Vendor Details (Phi-4-reasoning)
Company Name
Microsoft
Founded
1975
Country
United States
Website
azure.microsoft.com/en-us/blog/one-year-of-phi-small-language-models-making-big-leaps-in-ai/
Vendor Details (Sarvam-M)
Company Name
Sarvam
Founded
2023
Country
India
Website
www.sarvam.ai/blogs/sarvam-m