Description (Phi-4-mini-reasoning)
Phi-4-mini-reasoning is a 3.8-billion-parameter transformer-based language model designed to excel at mathematical reasoning and methodical problem-solving in compute- or latency-constrained environments. It is fine-tuned on synthetic data produced by the DeepSeek-R1 model, balancing efficiency with sophisticated reasoning capability. Trained on more than one million math problems ranging from middle-school to Ph.D.-level difficulty, Phi-4-mini-reasoning surpasses its base model at long sentence generation across multiple evaluations and outperforms larger models such as OpenThinker-7B, Llama-3.2-3B-instruct, and DeepSeek-R1. It offers a 128K-token context window and supports function calling, enabling integration with external tools and APIs. Phi-4-mini-reasoning can also be quantized with Microsoft Olive or the Apple MLX framework for deployment on edge devices such as IoT hardware, laptops, and smartphones. This design broadens accessibility and opens up new applications in mathematical domains.
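For context, the sketch below shows one way to run Phi-4-mini-reasoning locally with the Hugging Face transformers library, reflecting the Hugging Face integration listed further down. The model repository name, chat-template usage, and generation settings are assumptions based on common transformers conventions, not details stated in this listing.

```python
# Minimal sketch: running Phi-4-mini-reasoning with Hugging Face transformers.
# The repo name "microsoft/Phi-4-mini-reasoning" and the settings below are
# assumptions; check the model card before use. Requires torch, transformers,
# and accelerate (for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-reasoning"  # assumed Hugging Face repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # lower-precision weights to reduce memory use
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Solve for x: 2x + 7 = 19. Show your reasoning."}
]
# Build the prompt with the model's chat template and tokenize it.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```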
Description (Sonar)
Perplexity has unveiled an improved AI search model called Sonar, built on Llama 3.3 70B. This version of Sonar has been further trained to improve factual accuracy and response clarity in Perplexity's default search mode, with the goal of delivering more accurate and readable answers while preserving the platform's characteristic speed and efficiency. Sonar also supports real-time, large-scale web research and question answering, which developers can embed in their own applications through a lightweight, cost-effective API. The Sonar API additionally offers advanced models, sonar-reasoning-pro and sonar-pro, designed for complex tasks that demand deep understanding and retention of context. These models return more detailed answers and, on average, twice as many citations as earlier versions, improving the transparency and reliability of the information presented. With these updates, Sonar positions itself as a leader in high-quality search experiences.
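As an illustration of the developer-facing API mentioned above, here is a minimal Python sketch of querying Sonar through Perplexity's OpenAI-style chat-completions endpoint. The endpoint URL, model names, and response fields follow Perplexity's public API conventions but should be treated as assumptions and verified against the current documentation.

```python
# Minimal sketch: calling the Sonar API via Perplexity's chat-completions
# endpoint. Endpoint URL, model names, and response shape are assumptions
# based on Perplexity's documented OpenAI-style API.
import os
import requests

API_KEY = os.environ["PERPLEXITY_API_KEY"]  # assumed environment variable

response = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "sonar-pro",  # or "sonar" / "sonar-reasoning-pro" for harder tasks
        "messages": [
            {"role": "user", "content": "Summarize recent progress in small reasoning models."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
data = response.json()

# The answer follows the OpenAI-style choices/message layout.
print(data["choices"][0]["message"]["content"])
# Web citations, when returned, are assumed to appear alongside the message payload.
print(data.get("citations", []))
```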
API Access (Phi-4-mini-reasoning)
Has API
API Access (Sonar)
Has API
Integrations (Phi-4-mini-reasoning)
Azure AI Foundry
Cerebras
DeepSeek R1
Hugging Face
Llama 3.3
Microsoft Azure
NinjaTools.ai
Perplexity
Perplexity Deep Research
Perplexity Pro
Integrations (Sonar)
Azure AI Foundry
Cerebras
DeepSeek R1
Hugging Face
Llama 3.3
Microsoft Azure
NinjaTools.ai
Perplexity
Perplexity Deep Research
Perplexity Pro
Pricing Details (Phi-4-mini-reasoning)
No price information available.
Free Trial
Free Version
Pricing Details (Sonar)
Free
Free Trial
Free Version
Deployment (Phi-4-mini-reasoning)
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Deployment (Sonar)
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Customer Support (Phi-4-mini-reasoning)
Business Hours
Live Rep (24/7)
Online Support
Customer Support (Sonar)
Business Hours
Live Rep (24/7)
Online Support
Types of Training (Phi-4-mini-reasoning)
Training Docs
Webinars
Live Training (Online)
In Person
Types of Training (Sonar)
Training Docs
Webinars
Live Training (Online)
In Person
Vendor Details (Phi-4-mini-reasoning)
Company Name
Microsoft
Founded
1975
Country
United States
Website
azure.microsoft.com/en-us/blog/one-year-of-phi-small-language-models-making-big-leaps-in-ai/
Vendor Details (Sonar)
Company Name
Perplexity
Founded
2022
Country
United States
Website
www.perplexity.ai/hub/blog/meet-new-sonar