Average Ratings 0 Ratings
Description
DeepCoder is a fully open-source model for code reasoning and generation, developed in partnership between the Agentica Project and Together AI. Built on DeepSeek-R1-Distill-Qwen-14B, it was fine-tuned with distributed reinforcement learning and reaches 60.6% accuracy on LiveCodeBench, an 8% improvement over its base model. That performance rivals proprietary models such as o3-mini (2025-01-31, Low) and o1 while using only 14 billion parameters. Training ran for 2.5 weeks on 32 H100 GPUs over a curated dataset of roughly 24,000 coding problems drawn from verified sources, including TACO-Verified, PrimeIntellect SYNTHETIC-1, and LiveCodeBench submissions. Each problem had to include a valid reference solution and at least five unit tests, keeping reward signals reliable during reinforcement learning. To manage long-range context, DeepCoder also employs techniques such as iterative context lengthening and overlong filtering, which keep it effective on complex coding tasks.
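To illustrate the kind of verification gate the dataset description implies, here is a minimal Python sketch. The Problem type, the passes_all_tests helper, and the naive exec-based test runner are assumptions for illustration only, not the Agentica/Together AI pipeline; a real pipeline would run candidate code in a sandbox.

    from dataclasses import dataclass

    @dataclass
    class Problem:
        prompt: str            # natural-language problem statement
        solution: str          # reference solution source code
        unit_tests: list[str]  # executable assertion snippets

    def passes_all_tests(solution: str, unit_tests: list[str]) -> bool:
        """Naively run the solution against each test (sketch only; sandbox in practice)."""
        for test in unit_tests:
            namespace: dict = {}
            try:
                exec(solution, namespace)  # define the candidate functions
                exec(test, namespace)      # tests raise AssertionError on failure
            except Exception:
                return False
        return True

    def keep(problem: Problem, min_tests: int = 5) -> bool:
        # Each problem must carry a working solution and at least five
        # unit tests before it enters RL training.
        return (
            len(problem.unit_tests) >= min_tests
            and passes_all_tests(problem.solution, problem.unit_tests)
        )

    # Example: a toy problem that passes the gate.
    toy = Problem(
        prompt="Return the sum of two integers.",
        solution="def add(a, b):\n    return a + b",
        unit_tests=[f"assert add({i}, {i}) == {2 * i}" for i in range(5)],
    )
    print(keep(toy))  # True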
Description
StarCoder and StarCoderBase are Large Language Models for code, trained on permissively licensed data from GitHub spanning more than 80 programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to LLaMA, we trained a roughly 15-billion-parameter model on 1 trillion tokens. We then fine-tuned StarCoderBase on 35 billion Python tokens to create StarCoder.
Our evaluations show that StarCoderBase outperforms existing open Code LLMs on popular programming benchmarks and matches or exceeds proprietary models such as code-cushman-001 from OpenAI, the original Codex model that powered early versions of GitHub Copilot. With a context length of over 8,000 tokens, the StarCoder models can process more input than any other open LLM, enabling a wide range of applications. For example, by prompting the StarCoder models with a series of dialogues, we turned them into capable technical assistants for diverse programming tasks.
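As a sketch of the dialogue-style prompting mentioned above, the following Python snippet loads StarCoder through the Hugging Face transformers library. The dialogue template is illustrative, not the official Tech Assistant prompt, and the bigcode/starcoder checkpoint requires accepting the model license on Hugging Face first.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "bigcode/starcoder"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    # device_map="auto" needs the accelerate package installed
    model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

    # A simple dialogue-style prompt that frames the model as an assistant.
    prompt = (
        "Below is a dialogue between a human and a helpful technical assistant.\n\n"
        "Human: How do I reverse a list in Python?\n"
        "Assistant:"
    )

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))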
API Access
Has API
Integrations
ChatGPT
CodeQwen
Git
GitHub
Hugging Face
LM Studio
OpenAI
Python
Tabby
Taylor AI
Pricing Details
Free
Free Trial
Free Version
Deployment
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Customer Support
Business Hours
Live Rep (24/7)
Online Support
Types of Training
Training Docs
Webinars
Live Training (Online)
In Person
Vendor Details
Company Name
Agentica Project
Founded
2025
Country
United States
Website
agentica-project.com
Vendor Details
Company Name
BigCode
Founded
2023
Website
huggingface.co/blog/starcoder