Best Mistral NeMo Alternatives in 2024
Find the top alternatives to Mistral NeMo currently available. Compare ratings, reviews, pricing, and features of Mistral NeMo alternatives in 2024. Slashdot lists the best Mistral NeMo alternatives on the market that offer competing products similar to Mistral NeMo. Sort through the alternatives below to make the best choice for your needs.
-
1
Jamba
AI21 Labs
Jamba is a powerful and efficient long-context model, open to builders but built for enterprises. Jamba's latency is superior to all other leading models of comparable size, and its 256K context window is the longest available. Jamba's hybrid Mamba-Transformer MoE architecture is designed to increase efficiency and reduce costs. Jamba includes key enterprise features out of the box (OOTB), including function calling, JSON mode output, document objects, and citation mode. Jamba 1.5 models deliver high performance throughout the entire context window and score highly on common quality benchmarks. Secure deployment can be tailored to your enterprise: start using Jamba immediately on our production-grade SaaS platform, or deploy the Jamba model family through our strategic partners. For enterprises that require custom solutions, we offer VPC and on-premise deployments, and for organizations with unique, bespoke needs we offer hands-on management and continuous pre-training.
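For illustration, a minimal sketch of calling Jamba from Python. The `ai21` SDK interface and the `jamba-1.5-mini` model name are assumptions here; check AI21's documentation for the current API.
```python
# Hypothetical sketch: querying a Jamba 1.5 model through AI21's Python SDK.
# The package interface and "jamba-1.5-mini" model id are assumptions.
from ai21 import AI21Client
from ai21.models.chat import ChatMessage

client = AI21Client(api_key="YOUR_AI21_API_KEY")

response = client.chat.completions.create(
    model="jamba-1.5-mini",
    messages=[ChatMessage(role="user", content="Summarize this contract in three bullet points.")],
    max_tokens=300,
)
print(response.choices[0].message.content)
```
-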
2
OLMo 2
Ai2
OLMo 2 is an open language model family developed by the Allen Institute for AI (Ai2). It provides researchers and developers with fully open source code and reproducible training recipes. The models are trained on up to 5 trillion tokens and are competitive with other open-weight models such as Llama 3.1 on English academic benchmarks. OLMo 2 focuses on training stability by implementing techniques that prevent loss spikes during long training runs, and it uses staged training interventions to address capability deficits during late pretraining. The models incorporate the latest post-training methods from Ai2's Tulu 3, resulting in OLMo 2-Instruct. The Open Language Modeling Evaluation System (OLMES), consisting of 20 evaluation benchmarks assessing key capabilities, was created to guide improvements throughout the development stages. -
3
Mistral Large 2
Mistral AI
Pricing: Free. Mistral Large 2 comes with a 128k context window and supports dozens of languages, including French, German, Spanish, Arabic, Hindi, Russian, and Chinese, as well as 80+ programming languages, including Python, Java, and C++. Mistral Large 2 was designed with single-node inference in mind; its size of 123 billion parameters allows it to run at high throughput on a single node. Mistral Large 2 is released under the Mistral Research License, which allows modification and use for research and noncommercial purposes. -
4
Mathstral
Mistral AI
As a tribute to Archimedes, whose 2311th anniversary we are celebrating this year, we release our first Mathstral 7B model, designed specifically for math reasoning and scientific discovery. The model comes with a 32k context window and is published under the Apache 2.0 license. Mathstral is a tool we are contributing to the science community to help tackle complex mathematical problems that require multi-step logical reasoning. The Mathstral release is part of a broader effort to support academic projects and was produced in the context of our collaboration with Project Numina. Like Isaac Newton in his time, Mathstral stands on the shoulders of Mistral 7B and specializes in STEM subjects. It achieves the strongest reasoning in its size category on industry-standard benchmarks, scoring 56.6% on MATH and 63.47% on MMLU. -
5
Pixtral 12B
Mistral AI
Pricing: Free. Pixtral 12B is a groundbreaking multimodal AI model pioneered by Mistral AI, designed to process and understand both text and image data seamlessly. It represents a significant advance in the integration of different data types, allowing more intuitive interaction and enhanced content-creation capabilities. Pixtral 12B is based on Mistral's NeMo 12B text model and incorporates an additional vision adapter of 400 million parameters, allowing it to handle visual inputs of up to 1024x1024 pixels. The model supports a wide range of applications, from image analysis to answering questions about visual content, and its versatility has been demonstrated in real-world scenarios. Pixtral 12B is a powerful tool for developers: it offers a large context window of 128k tokens and uses innovative techniques such as GeLU activation and 2D RoPE for its vision components. -
6
Mistral 7B
Mistral AI
We solve the most difficult problems to make AI models efficient, helpful, and reliable. We are pioneers of open models: we give them to our users and empower them to share their ideas. Mistral 7B is a powerful, small model that can be adapted to many different use cases. Mistral 7B outperforms Llama 2 13B on all benchmarks, has an 8k sequence length and natural coding abilities, and is faster than Llama 2. It is released under the Apache 2.0 license, and we made it simple to deploy on any cloud.
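For illustration, a minimal sketch of running Mistral 7B locally with Hugging Face transformers; the `mistralai/Mistral-7B-Instruct-v0.2` checkpoint is used here as an example, and any Mistral 7B variant can be substituted.
```python
# Sketch: local inference with a Mistral 7B instruct checkpoint via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain sliding-window attention in two sentences."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
-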
7
LongLLaMA
LongLLaMA
Pricing: Free. This repository contains a research preview of LongLLaMA, a large language model capable of handling contexts of up to 256k tokens. LongLLaMA is built on the foundation of OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method; LongLLaMA Code is built on the foundation of Code Llama. We release a smaller base variant of LongLLaMA (not instruction-tuned) under a permissive Apache 2.0 license, along with inference code supporting longer contexts on Hugging Face. Our model weights can serve as a drop-in replacement for LLaMA (for short contexts up to 2048 tokens) in existing implementations. We also provide evaluation results and comparisons against the original OpenLLaMA models.
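For illustration, a minimal loading sketch with transformers; the `syzymon/long_llama_3b` repository id and the `trust_remote_code` flag reflect the project's Hugging Face release and should be treated as assumptions.
```python
# Sketch: loading LongLLaMA as a drop-in LLaMA replacement via transformers.
# trust_remote_code pulls in the Focused Transformer inference code published
# alongside the weights.
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

repo_id = "syzymon/long_llama_3b"  # assumed checkpoint name
tokenizer = LlamaTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float32, trust_remote_code=True
)

prompt = "My favourite animal is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
-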
8
NVIDIA NeMo
NVIDIA
NVIDIA NeMo LLM is a service that allows you to quickly customize and use large language models trained on multiple frameworks. Developers can use NeMo LLM to deploy enterprise AI applications on both public and private clouds. They can also experiment with Megatron 530B, one of the largest language models, via the cloud API or the LLM service. You can choose from a variety of NVIDIA models or community-developed models to best suit your AI applications, and get better answers in minutes to hours by using prompt-learning techniques and providing context for specific use cases. Models for drug discovery are available in the NVIDIA BioNeMo framework and through the cloud API. -
9
Codestral Mamba
Mistral AI
Codestral Mamba is a Mamba2 model specialized in code generation, available under the Apache 2.0 license. Codestral Mamba is another step in our effort to study and provide new architectures, and we hope it will open new perspectives in architecture research. Mamba models offer the advantage of linear-time inference and the theoretical ability to model sequences of unlimited length, so users can interact with the model extensively and get rapid responses regardless of input length. This efficiency is particularly relevant for code-productivity use cases. We trained this model with advanced code and reasoning capabilities, enabling it to perform on par with SOTA Transformer-based models. -
10
NVIDIA NeMo Megatron
NVIDIA
NVIDIA NeMo Megatron is an end-to-end framework for training and deploying LLMs with billions or trillions of parameters. Part of the NVIDIA AI platform, it offers an efficient, cost-effective, containerized approach to building and deploying LLMs. Designed for enterprise application development, it builds upon the most advanced technologies from NVIDIA research and provides an end-to-end workflow for automated distributed data processing, training large-scale customized GPT-3 and T5 models, and deploying models for inference at scale. Validated, converged recipes for training and inference are key to unlocking the power and potential of LLMs. The hyperparameter tool makes it easy to customize models: it automatically searches for the optimal hyperparameter configurations and training/inference performance for any given distributed GPU cluster configuration. -
11
Falcon-40B
Technology Innovation Institute (TII)
Pricing: Free. Falcon-40B is a 40B-parameter causal decoder-only model built by TII. It was trained on 1,000B tokens of RefinedWeb enhanced with curated corpora and is available under the Apache 2.0 license. Why use Falcon-40B? It is the best open-source model available, outperforming LLaMA, StableLM, RedPajama, MPT, and others on the OpenLLM Leaderboard. It has an architecture optimized for inference, with FlashAttention and multiquery attention, and its Apache 2.0 license allows commercial use without restrictions or royalties. This is a raw, pretrained model that should be fine-tuned for most use cases. If you are looking for a model that takes generic instructions in a chat format, we suggest Falcon-40B-Instruct.
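For illustration, a minimal sketch of generating text with the instruction-tuned variant through the transformers pipeline API; `tiiuae/falcon-40b-instruct` is the published checkpoint name, and a multi-GPU node is assumed for the full 40B model.
```python
# Sketch: text generation with Falcon-40B-Instruct via a transformers pipeline.
import torch
from transformers import AutoTokenizer, pipeline

model_id = "tiiuae/falcon-40b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spreads the 40B weights across available GPUs
)

result = generator("Write a haiku about open-source AI.", max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])
```
-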
12
Llama 2
Meta
Pricing: Free. The next generation of our open-source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models, ranging from 7B to 70B parameters. Llama 2 was pretrained on 2 trillion tokens and has double the context length of Llama 1, and the fine-tuned Llama 2 models have additionally been trained on over 1 million human annotations. Llama 2 outperforms many other open-source language models on external benchmarks, including tests of reasoning, coding, proficiency, and knowledge. Llama 2 was pretrained on publicly available online data sources; Llama 2-Chat, the fine-tuned version of the model, leverages publicly available instruction datasets and more than 1 million human annotations. We have a broad range of supporters around the world who are committed to our open approach to today's AI; these companies have provided early feedback and are excited to build with Llama 2. -
13
Baichuan-13B
Baichuan Intelligent Technology
Pricing: Free. Baichuan-13B is an open-source, commercially available large-scale language model with 13 billion parameters, developed by Baichuan Intelligent Technology following Baichuan-7B. It achieves the best results among models of the same size on authoritative Chinese and English benchmarks. This release includes two versions: pretraining (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). Baichuan-13B uses more data and a larger size: it expands the parameter count to 13 billion on the basis of Baichuan-7B and is trained on 1.4 trillion tokens of high-quality corpus, 40% more than LLaMA-13B, making it the open-source 13B model trained on the most data to date. It supports both Chinese and English, uses ALiBi position encoding, and has a context window of 4096. -
14
CodeGemma
Google
CodeGemma is a collection of powerful, lightweight models that can perform a variety of coding tasks, including fill-in-the-middle code completion, code generation, natural language understanding, and mathematical reasoning. CodeGemma comes in three variants: a 7B model pretrained for code completion and generation, a 7B instruction-tuned model for instruction following and natural-language-to-code chat, and a 2B model pretrained for fast code completion. You can complete lines, functions, and even entire blocks of code, whether you are working locally or with Google Cloud resources. Trained on 500 billion tokens of primarily English-language data from web documents, mathematics, and code, CodeGemma models generate code that is not only syntactically correct but also semantically meaningful, reducing errors and debugging time.
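For illustration, a minimal fill-in-the-middle sketch; the `google/codegemma-2b` repository id and the FIM control tokens follow the published model card and should be verified against the current documentation.
```python
# Sketch: fill-in-the-middle completion with CodeGemma (control tokens assumed
# from the model card: <|fim_prefix|>, <|fim_suffix|>, <|fim_middle|>).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/codegemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Ask the model to fill in the function body between the signature and the return.
prompt = (
    "<|fim_prefix|>def mean(values):\n    "
    "<|fim_suffix|>\n    return total / len(values)<|fim_middle|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
-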
15
MPT-7B
MosaicML
Pricing: Free. Introducing MPT-7B, the latest entry in our MosaicML Foundation Series. MPT-7B is a transformer trained from scratch on 1T tokens of text and code. It is open source, available for commercial use, and matches the quality of LLaMA-7B. MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of roughly $200k. You can now train, fine-tune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch. For inspiration, we are also releasing three fine-tuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-StoryWriter-65k+, the last of which uses a context length of 65k tokens! -
16
GPT-5
OpenAI
Pricing: $0.0200 per 1,000 tokens. GPT-5 is OpenAI's next Generative Pre-trained Transformer, a large language model (LLM) that is still in development. LLMs are trained on massive amounts of text and can generate realistic and coherent text, translate languages, produce different kinds of creative content, and answer questions in an informative way. GPT-5 is not yet available to the public, and OpenAI has not announced a release date, though some expect it could launch in 2024. It is expected to be even more powerful than GPT-4, which has already proven impressive at writing creative content, translating languages, and generating human-quality text. GPT-5 is expected to improve on these abilities, with better reasoning, factual accuracy, and ability to follow instructions. -
17
Hippocratic AI
Hippocratic AI
Hippocratic AI's new state-of-the-art (SOTA) model outperforms GPT-4 on 105 of 114 healthcare certifications and exams, by a margin of more than five percent on 74 of them and by an even larger margin on 43. Most language models are pretrained on a common crawl of the Internet, which may include incorrect or misleading information. Unlike these LLMs, Hippocratic AI is investing heavily in legally acquiring evidence-based healthcare content. We use healthcare professionals both to train the model and to validate its readiness for deployment, an approach we call RLHF-HP. Hippocratic AI will not release the model until many of these licensed professionals have deemed it safe. -
18
Megatron-Turing
NVIDIA
The Megatron-Turing Natural Language Generation model (MT-NLG) is the largest and most powerful monolithic English language model, with 530 billion parameters. This 105-layer, transformer-based MT-NLG improves upon prior state-of-the-art models in zero-, one-, and few-shot settings and demonstrates unmatched accuracy across a broad set of natural language tasks, including completion prediction and reading comprehension. NVIDIA has announced an Early Access program for its managed API service to the MT-NLG model, allowing customers to experiment with, employ, and apply this large language model to downstream language tasks. -
19
LLaVA
LLaVA
Pricing: Free. LLaVA is a multimodal model that combines a Vicuna language model with a vision encoder to facilitate comprehensive visual-language understanding. LLaVA's chat capabilities are impressive, emulating the multimodal functionality of models such as GPT-4. LLaVA-1.5 achieves state-of-the-art performance on 11 benchmarks using only publicly available data, completing training on a single node with eight A100 GPUs in about a day and beating methods that rely on billion-scale datasets. The development of LLaVA involved creating a multimodal instruction-following dataset generated with language-only GPT-4, comprising 158,000 unique language-image instruction-following samples spanning conversations, detailed descriptions, and complex reasoning tasks. This data has been crucial in training LLaVA for a wide range of visual and linguistic tasks. -
20
Hermes 3
Nous Research
Pricing: Free. Hermes 3 features advanced long-term context retention and multi-turn conversation capabilities, complex roleplaying and internal monologue abilities, and enhanced agentic function calling. Our training data aggressively encourages the model to follow system prompts and instructions exactly and in a highly adaptive manner. Hermes 3 was created by fine-tuning Llama 3.1 8B, 70B, and 405B on a dataset composed primarily of synthetically generated responses. The model delivers performance comparable to Llama 3.1 while unlocking deeper reasoning and creative abilities. Hermes 3 is an instruct and tool-use model series with strong reasoning and creative capabilities. -
21
Falcon-7B
Technology Innovation Institute (TII)
Pricing: Free. Falcon-7B is a 7B-parameter causal decoder-only model built by TII. It was trained on 1,500B tokens of RefinedWeb enhanced with curated corpora and is available under the Apache 2.0 license. Why use Falcon-7B? It outperforms comparable open-source models such as MPT-7B, StableLM, and RedPajama on the OpenLLM Leaderboard, a result of being trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It has an architecture optimized for inference, with FlashAttention and multiquery attention, and its Apache 2.0 license allows commercial use without restrictions or royalties. -
22
InstructGPT
OpenAI
Pricing: $0.0200 per 1,000 tokens. InstructGPT is OpenAI's family of GPT-3 models fine-tuned to follow natural language instructions. Starting from a pretrained GPT-3 model, OpenAI fine-tunes it on demonstrations written by human labelers and then further optimizes it with reinforcement learning from human feedback (RLHF), so its outputs are more helpful, truthful, and aligned with user intent than those of the base model. InstructGPT models are designed to be useful across many domains, including drafting text, answering questions, summarization, and step-by-step explanations of events or processes. -
23
CodeQwen
QwenLM
Pricing: Free. CodeQwen, developed by the Qwen team at Alibaba Cloud, is the code-specialized version of Qwen. It is a transformer-based, decoder-only language model pretrained on a large volume of code, and a series of benchmarks shows that its code generation is strong. It supports long-context understanding and generation with a context length of 64K tokens, covers 92 programming languages, and provides excellent performance on text-to-SQL, bug fixing, and more. Chatting with CodeQwen takes only a few lines of code with transformers: build the tokenizer and model from the pretrained checkpoints, apply the chat template provided by the tokenizer (following our previous practice, chat models use the ChatML template), and call the generate method. The model then completes code snippets according to the prompts without any additional formatting, as in the sketch below.
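A minimal sketch of that flow follows; the `Qwen/CodeQwen1.5-7B-Chat` checkpoint name is an assumption, so substitute whichever CodeQwen chat checkpoint you are using.
```python
# Sketch: chatting with a CodeQwen chat model using the tokenizer's ChatML
# template and the generate method, as described above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/CodeQwen1.5-7B-Chat"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a SQL query returning the ten most recent orders."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```
-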
24
ERNIE 3.0 Titan
Baidu
Pretrained language models have achieved state-of-the-art results on various Natural Language Processing (NLP) tasks, and GPT-3 has demonstrated that scaling up pretrained language models can further exploit their immense potential. A framework named ERNIE 3.0 was recently proposed for pretraining large-scale, knowledge-enhanced models and was used to train a model with 10 billion parameters; ERNIE 3.0 outperformed the state-of-the-art models on a variety of NLP tasks. To explore the performance of scaling up ERNIE 3.0, we train a hundred-billion-parameter model called ERNIE 3.0 Titan, with up to 260 billion parameters, on the PaddlePaddle platform. We also design a self-supervised adversarial loss and a controllable language modeling loss to make ERNIE 3.0 Titan generate credible texts. -
25
Teuken 7B
OpenGPT-X
Pricing: Free. Teuken-7B is a multilingual, open-source language model developed under the OpenGPT-X project, specifically designed to accommodate Europe's diverse linguistic landscape. It was trained on a dataset of which more than 50% is non-English text, covering all 24 official European Union languages, to ensure robust performance across them. A key innovation is Teuken-7B's custom multilingual tokenizer, which has been optimized for European languages and improves training efficiency. The model comes in two versions: Teuken-7B-Base, the pretrained foundational model, and Teuken-7B-Instruct, which has been instruction-tuned to better follow user prompts. Both versions are available on Hugging Face, promoting transparency and cooperation within the AI community. The development of Teuken-7B demonstrates a commitment to creating AI models that reflect Europe's diversity. -
26
GPT-4
OpenAI
GPT-4 (Generative Pre-trained Transformer 4) is a large-scale language model and the successor to GPT-3 in OpenAI's GPT-n series of natural language processing models. It was trained on a dataset of roughly 45TB of text to produce human-like text generation and understanding abilities. Unlike many other NLP models, GPT-4 does not depend on additional task-specific training data; it can generate text and answer questions using its own context. GPT-4 has demonstrated the ability to perform a wide range of tasks without any task-specific training data, such as translation, summarization, and sentiment analysis.
-
27
Cerebras-GPT
Cerebras
Pricing: Free. Training state-of-the-art language models is extremely difficult: it requires large compute budgets, complex distributed computing techniques, and deep ML expertise. Few organizations are able to train large language models from scratch, and increasingly, those that have the expertise and resources do not open source their results. At Cerebras, we believe in open access to the latest models. We are proud to announce the release of Cerebras-GPT, a family of GPT models with 111 million to 13 billion parameters, to the open-source community. Trained using the Chinchilla formula, these models provide the highest accuracy for a given compute budget. Cerebras-GPT has faster training times and lower training costs, and consumes less energy, than any publicly available model to date. -
28
Phi-2
Microsoft
Phi-2 is a 2.7-billion-parameter language model that demonstrates outstanding reasoning and language-understanding capabilities, representing state-of-the-art performance among base language models with fewer than 13 billion parameters. On complex benchmarks, Phi-2 matches or outperforms models up to 25x larger, thanks to innovations in model scaling. Phi-2's compact size makes it an ideal playground for researchers: it can be used to explore mechanistic interpretability, safety improvements, or fine-tuning experiments on a variety of tasks. We have included Phi-2 in the Azure AI Studio model catalog to encourage research and development on language models. -
29
Qwen2
Alibaba
Pricing: Free. Qwen2 is an extensive series of large language models developed by the Qwen team at Alibaba Cloud. It includes both base models and instruction-tuned versions, with parameter counts ranging from 0.5 to 72 billion, and features both dense models and a Mixture-of-Experts model. The Qwen2 series is designed to surpass previous open-weight models, including its predecessor Qwen1.5, and to compete with proprietary models across a wide spectrum of benchmarks covering language understanding, generation, and multilingual capabilities. -
30
StarCoder
BigCode
Pricing: Free. StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including data from 80+ programming languages, Git commits and issues, and Jupyter notebooks. Similar to LLaMA, we trained a 15B-parameter model on 1 trillion tokens, and we then fine-tuned StarCoderBase on 35B Python tokens, resulting in a new model we call StarCoder. StarCoderBase outperforms other open Code LLMs on popular programming benchmarks and matches or exceeds closed models such as code-cushman-001 from OpenAI, the original Codex model that powered early versions of GitHub Copilot. With a context length of over 8,000 tokens, StarCoder models can process more input than any other open LLM, enabling a variety of interesting applications; for example, by prompting the models with a series of dialogues, we enabled them to act as a technical assistant. A short loading sketch follows below.
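For illustration, a minimal completion sketch with transformers; the `bigcode/starcoder` checkpoint is gated on Hugging Face, so access must be requested before downloading.
```python
# Sketch: code completion with StarCoder via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigcode/starcoder"  # gated repository; accept the license to download
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
-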
31
Mixtral 8x7B
Mistral AI
Pricing: Free. Mixtral 8x7B is a high-quality sparse mixture-of-experts model (SMoE) with open weights, licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall in terms of cost/performance trade-offs; in particular, it matches or exceeds GPT-3.5 on most standard benchmarks. -
32
Gemma 2
Google
Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. These models include comprehensive safety measures and help ensure responsible and reliable AI through curated datasets. Gemma models achieve exceptional benchmark results at their 2B and 7B sizes, even surpassing some larger open models. Keras 3.0 offers seamless compatibility with JAX, TensorFlow, and PyTorch. Gemma 2 has been redesigned to deliver unmatched performance and efficiency and is optimized for fast inference on a variety of hardware. The Gemma family includes a variety of models that can be customized to meet your specific needs: lightweight, text-to-text, decoder-only large language models trained on a large corpus of text, code, and mathematical content. -
33
NVIDIA Nemotron
NVIDIA
NVIDIA Nemotron is a family of open-source models created by NVIDIA, designed to generate synthetic data for training language models intended for commercial applications. The Nemotron-4 340B model is an important release by NVIDIA, offering developers a powerful tool for generating high-quality data and filtering it based on various attributes using a reward model. -
34
GPT4All
Nomic AI
Pricing: Free. GPT4All provides an ecosystem for training and deploying large language models that run locally on consumer CPUs. The goal is to be the best assistant-style language model that any person or enterprise can freely use and distribute. A GPT4All model is a 3GB to 8GB file that you can download and plug into the GPT4All ecosystem software. Nomic AI maintains and supports this software ecosystem in order to enforce quality and safety, and to enable any person or company to easily train and deploy large language models on the edge. Data is a key ingredient in building a powerful and general-purpose large language model, and the GPT4All community has created the GPT4All Open Source Data Lake as a staging area for contributing instruction and assistant tuning data for future GPT4All model trains.
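For illustration, a minimal sketch using the `gpt4all` Python bindings; the model filename below is an assumption, and any model listed in the GPT4All catalog can be used instead.
```python
# Sketch: running a quantized model locally on CPU with the gpt4all bindings.
from gpt4all import GPT4All

# Downloads the model file (a few GB) on first use; the filename is illustrative.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate("Name three uses for a locally hosted language model.", max_tokens=200)
    print(reply)
```
-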
35
LTM-1
Magic AI
Magic's LTM-1 provides context windows 50x larger than transformers. Magic has trained a Large Language Model that can take in enormous amounts of context when generating suggestions; with it, Magic's coding assistant can now see all of your code. With larger context windows, AI models can refer to more explicit, factual information as well as their own action history. We hope this research will improve reliability and coherence. -
36
XLNet
XLNet
Pricing: Free. XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. XLNet employs Transformer-XL as its backbone model, which exhibits excellent performance on language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks, including question answering, natural language inference, sentiment analysis, and document ranking. -
37
OPT
Meta
Large language models, often trained for hundreds of thousands of compute days, have shown remarkable zero- and few-shot learning abilities. Given their high computational cost, these models are difficult to replicate, and the few models available via APIs do not grant access to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pretrained transformers ranging from 125M to 175B parameters, which we aim to share fully and responsibly with interested researchers. We show that OPT-175B has roughly 1/7th the carbon footprint of GPT-3. We are also releasing our logbook, which details the infrastructure challenges we encountered, as well as code for experimenting with all of the released models. -
38
GPT-J
EleutherAI
Pricing: Free. GPT-J is a cutting-edge language model developed by EleutherAI. Its performance is comparable to OpenAI's GPT-3 on a variety of zero-shot tasks, and it has even been shown to surpass GPT-3 on tasks related to code generation. The latest version, GPT-J-6B, is built on The Pile, a publicly available linguistic dataset containing 825 gibibytes of language data organized into 22 subsets. GPT-J shares some similarities with ChatGPT, but it is not intended to be a chatbot; its primary function is to predict text. In a related development, Databricks introduced Dolly, an Apache-licensed instruction-following model, in March 2023. -
39
Qwen-7B
Alibaba
Pricing: Free. Qwen-7B is the 7B-parameter variant of Qwen (Tongyi Qianwen), the large language model series proposed by Alibaba Cloud. Qwen-7B is a Transformer-based language model pretrained on a large volume of data, including web texts, books, code, and more. Qwen-7B is also used as the basis for Qwen-7B-Chat, an AI assistant trained with alignment techniques. Key features of Qwen-7B include: pretraining on high-quality data, using a self-constructed large-scale dataset of over 2.2 trillion tokens that contains plain text and code and covers a wide range of domains, including both general and professional domain data; and strong performance, outperforming competing models of similar size on a series of benchmark datasets evaluating natural language understanding, mathematics, coding, and more. -
40
Dolly
Databricks
Pricing: Free. Dolly is an inexpensive LLM that demonstrates a surprising degree of the instruction-following capability exhibited by ChatGPT. Whereas the work from the Alpaca team showed that state-of-the-art models could be coaxed into high-quality instruction-following behavior, we find that even years-old open-source models with much earlier architectures exhibit striking behaviors when fine-tuned on a small corpus of instruction training data. Dolly uses an existing open-source 6-billion-parameter model from EleutherAI, modified to elicit capabilities such as brainstorming and text generation that were not present in the original model. -
41
Code Llama
Meta
Pricing: Free. Code Llama is a large language model (LLM) that can generate code, and natural language about code, from both code and natural language prompts. Code Llama is state-of-the-art among publicly available LLMs for code tasks and has the potential to speed up workflows for developers and lower the barrier for people learning to code. It can be used as a productivity and educational tool to help programmers write more robust, well-documented software, and it is free for research and commercial use. Code Llama is built on top of Llama 2 and is available in three versions: Code Llama, the foundational code model; Code Llama - Python, specialized for Python; and Code Llama - Instruct, fine-tuned for understanding natural language instructions. A short usage sketch follows below.
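For illustration, a minimal sketch prompting the instruction-tuned variant with transformers; the `codellama/CodeLlama-7b-Instruct-hf` checkpoint name is used as an example, and the base or Python-specialized variants can be swapped in.
```python
# Sketch: asking Code Llama - Instruct for a function via its chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-Instruct-hf"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```
-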
42
Le Chat
Mistral AI
Pricing: Free. Le Chat is an interactive conversational interface for interacting with Mistral AI's models. It is a fun and pedagogical way to explore Mistral AI's technology. Le Chat can use Mistral Large, Mistral Small, or a prototype called Mistral Next, a model designed to be brief and concise. We are constantly improving our models to make them as useful and as little opinionated as possible, though much remains to improve! Le Chat's system-level moderation lets you customize how you are warned when you push the conversation in a direction where the assistant could produce sensitive or controversial content. -
43
DBRX
Databricks
Databricks has created DBRX, an open, general-purpose LLM. DBRX sets a new benchmark for open LLMs and gives open communities and enterprises building their own LLMs capabilities that were previously available only through closed model APIs. According to our measurements, DBRX surpasses GPT-3.5 and is competitive with Gemini 1.0 Pro. It is an especially capable code model, surpassing specialized models such as CodeLLaMA-70B on programming tasks, in addition to its strength as a general-purpose LLM. This state-of-the-art quality comes with marked improvements in both training and inference performance. Thanks to its fine-grained mixture-of-experts (MoE) architecture, DBRX is the most efficient open model available: inference is up to 2x faster than LLaMA2-70B, and DBRX is about 40% of the size of Grok-1 in both total and active parameter counts. -
44
IBM Granite
IBM
Pricing: Free. IBM® Granite™ is a family of AI models designed from the ground up for business applications, helping ensure trust and scalability in AI-driven applications. Open-source Granite models are available today. We want to make AI accessible to as many developers as possible, so we have released the core Granite Code, Time Series, Language, and GeoSpatial models on Hugging Face under a permissive Apache 2.0 license that allows broad commercial use. All Granite models are trained on carefully curated data, with a level of transparency about that data that is unmatched in the industry, and we have also released the tools we use to ensure the data is high quality and meets the standards required by enterprise-grade applications. -
45
Chinchilla
Google DeepMind
Chinchilla is a large language model. Trained with the same compute budget as Gopher but with 70B parameters and 4x more data, Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a wide range of downstream evaluation tasks. Chinchilla also uses substantially less compute for fine-tuning and inference, greatly facilitating downstream use. Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, a greater than 7% improvement over Gopher. -
46
GPT-4o mini
OpenAI
GPT-4o mini is a small model with superior textual intelligence and multimodal reasoning. Its low cost and low latency enable a broad range of tasks, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., a full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots). GPT-4o mini supports text and vision in the API today, with support for text, image, and video inputs and outputs coming in the future. The model has a context window of 128K tokens, supports up to 16K output tokens per request, and has knowledge up to October 2023. The improved tokenizer shared with GPT-4o makes handling non-English text more efficient. A minimal API sketch follows below.
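For illustration, a minimal sketch of a chat completion request with the OpenAI Python SDK; the customer-support prompt is illustrative.
```python
# Sketch: a single chat completion with gpt-4o-mini via the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    max_tokens=300,
    messages=[
        {"role": "system", "content": "You are a concise customer-support assistant."},
        {"role": "user", "content": "My invoice shows a duplicate charge. What should I do?"},
    ],
)
print(response.choices[0].message.content)
```
-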
47
Claude 3.5 Sonnet
Anthropic
Pricing: Free. Claude 3.5 Sonnet sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It is exceptional at writing high-quality content with a natural, relatable tone, and it shows marked improvements in grasping nuance, humor, and complex instructions. Claude 3.5 Sonnet operates at twice the speed of Claude 3 Opus, making it ideal for complex tasks such as providing context-sensitive customer support and orchestrating multi-step workflows. Claude 3.5 Sonnet is available for free on Claude.ai and the Claude iOS app, while Claude Pro and Team plan subscribers can access it with significantly higher rate limits. It is also accessible via the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI. The model costs $3 per million input tokens and $15 per million output tokens, with a 200K token context window. A minimal API sketch follows below.
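For illustration, a minimal sketch of a Messages API call with the Anthropic Python SDK; `claude-3-5-sonnet-20240620` was the model identifier at launch, so confirm the current name in Anthropic's documentation.
```python
# Sketch: one Messages API call to Claude 3.5 Sonnet.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Outline a workflow for triaging support tickets."}],
)
print(message.content[0].text)

# Rough cost at the listed rates ($3 / $15 per million tokens):
# 1,000 input + 500 output tokens ≈ 1000/1e6 * 3 + 500/1e6 * 15 = $0.0105
```
-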
48
Aya
Cohere AI
Aya is an open-source, state-of-the-art, massively multilingual research large language model (LLM) covering 101 different languages, more than twice as many as are covered by existing open-source models. Aya helps researchers unlock the powerful potential of LLMs for dozens of languages and cultures largely ignored by the most advanced models available today. We are open-sourcing both the Aya model and the most comprehensive multilingual instruction dataset to date, with 513 million instances covering 114 languages. This data collection includes rare annotations from native and fluent speakers around the world, helping ensure that AI technology can effectively serve a global audience that has had limited access until now. -
49
LLaMA
Meta
LLaMA (Large Language Model Meta AI) is a state-of-the-art foundational large language model created to help researchers advance their work in this subfield of AI. LLaMA allows researchers to study these models using smaller, more efficient ones, further democratizing access to this rapidly changing field. Training smaller foundation models like LLaMA is desirable because it requires far less computing power and resources to test new approaches, validate others' work, and explore new use cases. Foundation models are trained on large amounts of unlabeled data, which makes them ideal for fine-tuning on many tasks. We make LLaMA available in several sizes (7B, 13B, 33B, and 65B parameters) and also share a LLaMA model card that explains how the model was built, in line with our Responsible AI practices. -
50
Granite Code
IBM
Pricing: Free. We introduce the Granite family of decoder-only code models for code-generation tasks (e.g., fixing bugs, explaining code, documenting code), trained with code written in 116 programming languages. Evaluation of the Granite Code family on a wide variety of tasks demonstrates that the models are consistently among the best open-source code LLMs. Granite Code models have two key advantages: they are all-around performers, achieving competitive or state-of-the-art results on a variety of code-related tasks including code generation, explanation, fixing, translation, and editing, demonstrating the ability to solve diverse coding problems; and they are built for trustworthy enterprise use, with all models trained on license-permissible data collected in accordance with IBM's AI Ethics Principles and guided by IBM's Corporate Legal team.