Best RedPajama Alternatives in 2025

Find the top alternatives to RedPajama currently available. Compare ratings, reviews, pricing, and features of RedPajama alternatives in 2025. Slashdot lists the best RedPajama alternatives on the market that offer competing products similar to RedPajama. Sort through the RedPajama alternatives below to make the best choice for your needs.

  • 1
    Dolly Reviews
    Dolly is an inexpensive LLM that demonstrates a surprising amount of the capability of ChatGPT. Whereas the work from the Alpaca team showed that state-of-the-art models could be coaxed into high-quality instruction-following behavior, we find that even years-old open-source models with much earlier architectures exhibit striking behaviors when fine-tuned on a small corpus of instruction training data. Dolly uses an open-source 6-billion-parameter model from EleutherAI, modified to exhibit new capabilities such as brainstorming and text generation that were not present in the original.
  • 2
    Alpaca Reviews

    Alpaca

    Stanford Center for Research on Foundation Models (CRFM)

    Instruction-following models such as GPT-3.5 (text-davinci-003), ChatGPT, Claude, and Bing Chat have become increasingly powerful. These models are now used by many people, some even for work. However, despite their widespread deployment, instruction-following models still have many deficiencies: they can generate false information, propagate social stereotypes, and produce toxic language. It is vital that the academic community engages in order to make maximum progress on these pressing issues. Unfortunately, doing research on instruction-following models in academia has been difficult, as there is no easily accessible model that comes close in capabilities to closed-source models such as OpenAI's text-davinci-003. We are releasing our findings about an instruction-following language model, dubbed Alpaca, which is fine-tuned from Meta's LLaMA 7B model.
  • 3
    OpenLLaMA Reviews
    OpenLLaMA is a permissively licensed open-source reproduction of Meta AI's LLaMA 7B, trained on the RedPajama dataset. Our model weights are a drop-in replacement for LLaMA 7B in existing implementations; a minimal loading sketch follows. We also offer a smaller 3B version of the model.
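    As a rough sketch of that drop-in use with Hugging Face transformers (model IDs assumed from the openlm-research releases; exact settings may vary):

    ```python
    # Minimal sketch: loading OpenLLaMA as a drop-in LLaMA replacement.
    import torch
    from transformers import LlamaForCausalLM, LlamaTokenizer

    model_id = "openlm-research/open_llama_7b"  # or open_llama_3b for the smaller variant
    tokenizer = LlamaTokenizer.from_pretrained(model_id)
    model = LlamaForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    inputs = tokenizer("RedPajama is a dataset of", return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```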
  • 4
    Falcon-40B Reviews

    Falcon-40B

    Technology Innovation Institute (TII)

    Free
    Falcon-40B is a 40B-parameter causal decoder-only model built by TII. It was trained on 1,000B tokens from RefinedWeb enhanced with curated corpora, and it is available under the Apache 2.0 license. Why use Falcon-40B? It is the best open-source model available: Falcon-40B outperforms LLaMA, StableLM, RedPajama, MPT, and others on the OpenLLM Leaderboard. It has an architecture optimized for inference, with FlashAttention and multiquery attention. Its Apache 2.0 license allows commercial use without any restrictions or royalties. This is a raw model that should be fine-tuned for most use cases; a loading sketch follows. If you're looking for a model that can take generic instructions in a chat format, we suggest Falcon-40B-Instruct.
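    A minimal sketch of loading the raw model with Hugging Face transformers; the tiiuae/falcon-40b model ID reflects TII's Hugging Face release, and the hardware assumption (multiple GPUs or aggressive quantization) is ours:

    ```python
    # Minimal sketch: text generation with Falcon-40B via transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "tiiuae/falcon-40b"  # tiiuae/falcon-40b-instruct for chat-style prompts
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"  # spreads across available GPUs
    )

    inputs = tokenizer("The best open-source LLM is", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    ```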
  • 5
    MPT-7B Reviews
    Introducing MPT-7B, the latest addition to our MosaicML Foundation Series: a transformer trained from scratch on 1T tokens of text and code. It is open source, available for commercial use, and matches the quality of LLaMA-7B. MPT-7B was trained on the MosaicML Platform in 9.5 days, with zero human intervention, at a cost of ~$200k. You can now train, fine-tune, and deploy your own private MPT models, starting either from one of our checkpoints or from scratch; a loading sketch follows. For inspiration, we are also releasing three fine-tuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-StoryWriter-65k+, the last of which uses a context length of 65k tokens!
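    A minimal loading sketch, assuming the mosaicml/mpt-7b checkpoint on Hugging Face; MPT's custom model code requires trust_remote_code=True:

    ```python
    # Minimal sketch: loading MPT-7B with transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mosaicml/mpt-7b"  # or mpt-7b-instruct / mpt-7b-chat / mpt-7b-storywriter
    # MPT reuses the GPT-NeoX tokenizer, per the model card
    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
    )

    inputs = tokenizer("Here is a recipe for", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    ```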
  • 6
    Vicuna Reviews
    Vicuna-13B is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Preliminary evaluation using GPT-4 as a judge shows that Vicuna-13B achieves more than 90%* of the quality of OpenAI's ChatGPT and Google Bard, and that it outperforms other models such as LLaMA and Stanford Alpaca. Vicuna-13B costs around $300 to train. The online demo, code, and weights are available for non-commercial use.
  • 7
    TinyLlama Reviews
    The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some optimization, we can achieve this in "just" 90 days using 16 A100-40G GPUs. We use exactly the same architecture and tokenizer as Llama 2, so TinyLlama is compatible with many open-source projects built on Llama. With only 1.1B parameters, TinyLlama is compact enough for a variety of applications that demand a small computation and memory footprint; a quick-start sketch follows.
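    A quick-start sketch using the chat variant published on Hugging Face (model ID assumed from the TinyLlama organization); its small footprint means this runs on a single consumer GPU or even CPU:

    ```python
    # Minimal sketch: chatting with TinyLlama via the transformers pipeline.
    import torch
    from transformers import pipeline

    pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
                    torch_dtype=torch.bfloat16, device_map="auto")

    messages = [{"role": "user", "content": "Explain what a tokenizer does in one sentence."}]
    # Format the chat turns with the model's own template before generating
    prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False,
                                                add_generation_prompt=True)
    print(pipe(prompt, max_new_tokens=64)[0]["generated_text"])
    ```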
  • 8
    Falcon-7B Reviews

    Falcon-7B

    Technology Innovation Institute (TII)

    Free
    Falcon-7B is a 7B-parameter causal decoder-only model built by TII. It was trained on 1,500B tokens from RefinedWeb enhanced with curated corpora, and it is available under the Apache 2.0 license. Why use Falcon-7B? It outperforms comparable open-source models such as MPT-7B, StableLM, and RedPajama on the OpenLLM Leaderboard, a result of being trained on 1,500B tokens from RefinedWeb enhanced with curated corpora. It has an architecture optimized for inference, with FlashAttention and multiquery attention, and its Apache 2.0 license allows commercial use without any restrictions or royalties.
  • 9
    Janus-Pro-7B Reviews
    Janus-Pro-7B is a trailblazing AI model by DeepSeek, crafted to master the art of multimodal interaction, seamlessly blending text, imagery, and video into a unified processing experience. Its innovative design splits visual processing into dedicated streams for understanding and creation, allowing it to shine in generative tasks and complex visual interpretation. Outshining peers such as DALL-E 3 and Stable Diffusion, this model comes in scalable sizes from 1 to 7 billion parameters, ensuring flexibility for diverse computational needs. Freely accessible under the MIT License, Janus-Pro-7B invites both researchers and developers to explore its potential across platforms like Linux, MacOS, and Windows with Docker support, marking a new era in open-source AI innovation.
  • 10
    PygmalionAI Reviews
    PygmalionAI is a community of open-source projects based on EleutherAI's GPT-J 6B and Meta's LLaMA models, designed for roleplay and chat. The 7B variant of Pygmalion, based on Meta AI's LLaMA model, is currently actively supported. Pygmalion's chat capabilities rival those of much larger language models that require far more resources, and our curated datasets of high-quality roleplay data ensure that your bot is the best RP partner. Both the model weights and the code used to train them are open source, and you can modify or redistribute them for any purpose you like. Pygmalion, like other language models, runs on GPUs, since producing coherent text at a reasonable speed requires fast memory and massive processing.
  • 11
    OLMo 2 Reviews
    OLMo 2 is an open language model family developed by the Allen Institute for AI (AI2), providing researchers and developers with open-source code and reproducible training recipes. The models are trained on up to 5 trillion tokens and are competitive with other open-weight models such as Llama 3.1 on English academic benchmarks. OLMo 2 focuses on training stability, implementing techniques that prevent loss spikes during long training runs, and uses staged training interventions to address capability deficits during late pretraining. The models incorporate the latest post-training methods from AI2's Tülu 3, resulting in OLMo 2-Instruct. The Open Language Modeling Evaluation System (OLMES), consisting of 20 benchmarks assessing key capabilities, was created to guide improvements throughout development.
  • 12
    IBM Granite Reviews
    IBM® Granite™ is a family of AI models designed from scratch for business applications, helping to ensure trust and scalability in AI-driven apps. Granite models are open source and available today. We want to make AI accessible to as many developers as possible, so we have made the core Granite Code, Time Series, Language, and GeoSpatial models available on Hugging Face under a permissive Apache 2.0 license that allows broad commercial use. All Granite models are trained on carefully curated data, with transparency about that training data at a level unmatched in the industry. We have also made available the tools we use to ensure the data is high quality and meets the standards required by enterprise-grade applications.
  • 13
    DBRX Reviews
    Databricks has created DBRX, an open, general-purpose LLM that sets a new benchmark for open LLMs. It gives open communities and enterprises building their own LLMs capabilities that were previously available only through closed model APIs. According to our measurements, DBRX surpasses GPT-3.5 and is competitive with Gemini 1.0 Pro. It is more capable at code than specialized models such as CodeLLaMA-70B while retaining the strength of a general-purpose LLM. This state-of-the-art quality comes with marked improvements in both training and inference performance. Thanks to its fine-grained mixture-of-experts (MoE) architecture, DBRX is the most efficient open model: inference is up to 2x faster than LLaMA2-70B, and DBRX has about 40% fewer parameters than Grok-1 in both total and active counts.
  • 14
    Baichuan-13B Reviews

    Baichuan-13B

    Baichuan Intelligent Technology

    Free
    Baichuan-13B is an open-source, commercially available large language model with 13 billion parameters, developed by Baichuan Intelligent Technology as the successor to Baichuan-7B. It achieves the best results among language models of its size on authoritative Chinese and English benchmarks. This release includes two versions: pretraining (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). Baichuan-13B has more data and a larger size: it expands the parameter count to 13 billion on the basis of Baichuan-7B and trains on 1.4 trillion tokens of high-quality corpus, 40% more than LLaMA-13B, making it the open-source 13B model with the most training data to date. It supports both Chinese and English, uses ALiBi position encoding, and has a context window of 4,096 tokens.
  • 15
    Tülu 3 Reviews
    Tülu 3 is a cutting-edge instruction-following language model created by the Allen Institute for AI (AI2), designed to enhance reasoning, coding, mathematics, knowledge retrieval, and safety. Built on the Llama 3 Base model, Tülu 3 undergoes a four-stage post-training process that includes curated prompt synthesis, supervised fine-tuning, preference tuning with diverse datasets, and reinforcement learning to improve targeted skills with verifiable results. As an open-source model, it prioritizes transparency by providing access to training data, evaluation tools, and code, bridging the gap between open and proprietary AI fine-tuning techniques. Performance evaluations demonstrate that Tülu 3 surpasses other similarly sized open-weight models, including Llama 3.1-Instruct and Qwen2.5-Instruct, across multiple benchmarks.
  • 16
    Llama 2 Reviews
    The next generation of the large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models ranging from 7B to 70B parameters. Llama 2 models are trained on 2 trillion tokens and have double the context length of Llama 1. The fine-tuned Llama 2 models have additionally been trained on over 1 million human annotations. Llama 2 outperforms many other open-source language models on external benchmarks, including tests of reasoning, coding, proficiency, and knowledge. Llama 2 was pretrained on publicly available online data sources. Llama 2 Chat, the fine-tuned version of the model, leverages publicly available instruction datasets and more than 1 million human annotations. We have a wide range of supporters around the world who are committed to our open approach to today's AI; these companies have provided early feedback and expressed excitement to build with Llama 2.
  • 17
    Aya Reviews
    Aya is an open-source, state-of-the-art, massively multilingual large language research model (LLM) covering 101 different languages, more than twice the number covered by existing open-source models. Aya helps researchers unlock the powerful potential of LLMs for dozens of languages and cultures that are largely ignored by today's most advanced models. We are open-sourcing both the Aya model and the most comprehensive multilingual instruction dataset to date, with 513 million entries covering 114 languages. This data collection includes rare annotations from native and fluent speakers around the world, helping ensure that AI technology can effectively serve a global audience that has had limited access until now.
  • 18
    NVIDIA Nemotron Reviews
    NVIDIA Nemotron is a family of open-source models created by NVIDIA, designed to generate synthetic data for training commercial language models. The Nemotron-4 340B model is an important release from NVIDIA, offering developers a powerful tool for generating high-quality data and filtering it based on various attributes using a reward model.
  • 19
    Sky-T1 Reviews
    Sky-T1-32B is an open-source reasoning model developed by the NovaSky group at UC Berkeley's Sky Computing Lab. It is comparable to proprietary models such as o1-preview on reasoning and coding benchmarks, yet was trained for less than $450, demonstrating that high-level reasoning ability can be achieved cost-effectively. The model was fine-tuned from Qwen2.5-32B-Instruct on a curated dataset of 17,000 examples from diverse domains, including math and coding. Training took 19 hours on eight H100 GPUs with DeepSpeed ZeRO-3 offloading. All aspects of the project, including the data, code, and model weights, are open source, allowing the academic and open-source communities to replicate and build on its performance.
  • 20
    Llama 3.2 Reviews
    There are now more versions of the open-source AI model that you can fine-tune, distill, and deploy anywhere. Llama 3.2 is a collection of pretrained and fine-tuned large language models (LLMs) available in 1B and 3B sizes, which are multilingual and text-only, and in 11B and 90B sizes, which accept both text and images as inputs and produce text. Our latest release lets you create highly efficient and performant applications: use the 1B and 3B models to build on-device applications, such as summarizing a conversation from your phone or calling on-device features like the calendar, and use the 11B and 90B models to transform an existing image or get more information from a picture of your surroundings. A quick-start sketch for the 1B model follows.
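    A minimal sketch for the 1B instruct variant with a recent transformers version; the model ID follows Meta's Hugging Face naming and is gated, so access must be requested first:

    ```python
    # Minimal sketch: on-device-scale chat with Llama 3.2 1B Instruct.
    import torch
    from transformers import pipeline

    pipe = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct",
                    torch_dtype=torch.bfloat16, device_map="auto")

    messages = [{"role": "user", "content": "Summarize: we agreed to ship the beta on Friday."}]
    out = pipe(messages, max_new_tokens=128)
    print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
    ```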
  • 21
    DeepSeek-V2 Reviews
    DeepSeek-V2, developed by DeepSeek-AI, is a cutting-edge Mixture-of-Experts (MoE) language model designed for cost-effective training and high-speed inference. Boasting a massive 236 billion parameters—though only 21 billion are active per token—it efficiently handles a context length of up to 128K tokens. The model leverages advanced architectural innovations such as Multi-head Latent Attention (MLA) to optimize inference by compressing the Key-Value (KV) cache and DeepSeekMoE to enable economical training via sparse computation. Compared to its predecessor, DeepSeek 67B, it slashes training costs by 42.5%, shrinks the KV cache by 93.3%, and boosts generation throughput by 5.76 times. Trained on a vast 8.1 trillion token dataset, DeepSeek-V2 excels in natural language understanding, programming, and complex reasoning, positioning itself as a premier choice in the open-source AI landscape.
  • 22
    OpenEuroLLM Reviews
    OpenEuroLLM is an initiative that brings together Europe's top AI companies and research institutes to create a series of open-source foundation models for transparent AI in Europe. The project emphasizes transparency by sharing data, documentation, and training, testing, and evaluation metrics, which encourages community involvement. It ensures compliance with EU regulations and aims to provide large language models aligned with European standards. The focus is on linguistic and cultural diversity, with multilingual capabilities extended to all official EU languages and beyond. The initiative aims to improve access to foundation models that can be fine-tuned for various applications, expand evaluation results across multiple languages, and increase the availability of training datasets. Transparency is maintained throughout the training process by sharing tools, methodologies, and intermediate results.
  • 23
    Falcon Mamba 7B Reviews

    Falcon Mamba 7B

    Technology Innovation Institute (TII)

    Free
    Falcon Mamba 7B is the first open-source State Space Language Model (SSLM), introducing a revolutionary advancement in Falcon's architecture. Independently ranked as the top-performing open-source SSLM by Hugging Face, it redefines efficiency in AI language models. With low memory requirements and the ability to generate long text sequences without additional computational costs, Falcon Mamba 7B outperforms traditional transformer models like Meta’s Llama 3.1 8B and Mistral’s 7B. This cutting-edge model highlights Abu Dhabi’s leadership in AI research and innovation, pushing the boundaries of what’s possible in open-source machine learning.
  • 24
    DeepSeek R1 Reviews
    DeepSeek-R1 is a cutting-edge open-source reasoning model crafted by DeepSeek, designed to compete with leading models like OpenAI's o1. Available through web platforms, applications, and APIs, it excels in tackling complex challenges such as mathematics and programming. With outstanding performance on benchmarks like the AIME and MATH, DeepSeek-R1 leverages a mixture of experts (MoE) architecture, utilizing 671 billion total parameters while activating 37 billion parameters per token for exceptional efficiency and accuracy. This model exemplifies DeepSeek’s dedication to driving advancements in artificial general intelligence (AGI) through innovative and open source solutions.
  • 25
    Stable LM Reviews
    StableLM: Stability AI language models. StableLM builds on our experience open-sourcing earlier language models in collaboration with EleutherAI, a nonprofit research hub. Those models include GPT-J, GPT-NeoX, and the Pythia suite, all trained on The Pile dataset, and recent open-source models such as Cerebras-GPT and Dolly-2 continue to build on these efforts. StableLM is trained on a new dataset that is three times bigger than The Pile, containing 1.5 trillion tokens; we will provide more details about the dataset at a later date. The richness of this dataset allows StableLM to perform well in conversational and coding tasks despite its small size (3 to 7 billion parameters, compared to GPT-3's 175 billion). The development of Stable LM 3B broadens the range of applications that are viable on the edge or on home PCs, meaning individuals and companies can now develop cutting-edge technologies with strong conversational capabilities, like creative writing assistance, while keeping costs low and performance high.
  • 26
    Stable Beluga Reviews
    Stability AI, in collaboration with its CarperAI lab, announces Stable Beluga 1 (formerly codenamed FreeWilly) and its successor Stable Beluga 2, two powerful new large language models. Both models show exceptional reasoning ability across a variety of benchmarks. Stable Beluga 1 leverages the original LLaMA 65B foundation model and was carefully fine-tuned with a new synthetically generated dataset using Supervised Fine-Tuning (SFT) in standard Alpaca format. Stable Beluga 2 uses the LLaMA 2 70B foundation model for industry-leading performance.
  • 27
    StarCoder Reviews
    StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to LLaMA, we trained a 15B-parameter model on 1 trillion tokens, then fine-tuned StarCoderBase on 35B Python tokens; the result is a new model we call StarCoder. StarCoderBase outperforms other open Code LLMs on popular programming benchmarks and matches or exceeds closed models such as OpenAI's code-cushman-001, the original Codex model that powered early versions of GitHub Copilot. With a context length of over 8,000 tokens, StarCoder models can process more input than any other open LLM, enabling a variety of interesting applications; for example, by prompting the StarCoder models with a series of dialogues, we enabled them to act as a technical assistant.
  • 28
    Falcon 2 Reviews

    Falcon 2

    Technology Innovation Institute (TII)

    Free
    Falcon 2 11B is a cutting-edge open-source AI model, designed for multilingual and multimodal tasks, and the only one featuring vision-to-language capabilities. It outperforms Meta’s Llama 3 8B and rivals Google’s Gemma 7B, as verified by the Hugging Face Leaderboard. The next step in its evolution includes integrating a 'Mixture of Experts' framework to further elevate its performance and expand its capabilities.
  • 29
    Hermes 3 Reviews
    Hermes 3 is an instruct and tool-use model series with strong reasoning and creative abilities. It features advanced long-term context retention and multi-turn conversation capabilities, complex roleplaying and internal monologue abilities, and enhanced agentic function-calling. Our training data aggressively encourages the model to follow system prompts and instructions exactly and in a highly adaptive manner. Hermes 3 was developed by fine-tuning Llama 3.1 8B, 70B, and 405B on a dataset consisting primarily of synthetic responses. The model performs comparably to Llama 3.1, but with deeper reasoning and creative abilities.
  • 30
    Sarvam AI Reviews
    We are developing large language models that are efficient for India's linguistic and cultural diversity and enabling GenAI applications with bespoke enterprise models. We are building an enterprise-grade platform on which you can develop and evaluate applications. We believe that open source can accelerate AI innovation, and we will contribute open-source datasets and models and lead large-scale data curation efforts in the public-good space. We are a dynamic team of AI experts combining expertise in research, product design, engineering, and business operations, our diverse backgrounds united by a commitment to excellence in science and to creating societal impact. We create an environment in which tackling complex tech problems is not only a job but a passion.
  • 31
    Arcee-SuperNova Reviews
    Our new flagship model is a small language model (SLM) with all the power and performance you would expect from a leading LLM. It excels at generalized tasks, instruction-following, and human preference alignment, and it is the best 70B model available. SuperNova is a general-purpose AI, similar to OpenAI's GPT-4o and Claude 3.5 Sonnet, trained with the most advanced optimization and learning techniques to generate highly accurate responses. It is among the most flexible, cost-effective, and secure language models available: customers can save up to 95% in total deployment costs compared with traditional closed-source models. SuperNova can be used to integrate AI into apps and products, for general chat, and for a variety of other uses. Update your models regularly with the latest open-source tech to avoid being locked into a single solution, and protect your data with industry-leading privacy features.
  • 32
    Open R1 Reviews
    Open R1 is an open-source, community-driven initiative that aims to replicate the advanced AI capabilities of DeepSeek-R1 using transparent methodologies. The Open R1 model and DeepSeek R1 online chat are available for free on Open R1. The project provides a comprehensive implementation of DeepSeek's reasoning-optimized training pipeline under the MIT license, including tools for GRPO training, SFT fine-tuning, and synthetic data generation. Open R1 offers users the entire toolchain to create and fine-tune models on their own data.
  • 33
    Ai2 OLMoE Reviews

    Ai2 OLMoE

    The Allen Institute for Artificial Intelligence

    Free
    Ai2 OLMoE is an open-source mixture-of-experts language model that runs entirely on-device, so you can test the model in a private and secure environment. Our app is designed to help researchers explore ways to improve on-device intelligence and to let developers quickly prototype AI experiences, all without cloud connectivity. OLMoE is the highly efficient mixture-of-experts model in the Ai2 OLMo family. Discover what real-world tasks are possible with state-of-the-art local models, learn how to improve AI models for small systems, and test your own models using our open-source codebase. You can also integrate OLMoE into other iOS applications. Because it operates entirely on-device, the Ai2 OLMoE application provides privacy and security, and you can share the output of your conversations with friends and colleagues. Both the OLMoE application code and the model are open source.
  • 34
    OpenGPT-X Reviews
    OpenGPT-X is a German initiative focused on developing large AI language models tailored to European requirements, with an emphasis on versatility, trustworthiness, multilingual capability, and open-source accessibility. The project brings together partners covering the whole generative AI value chain, from scalable GPU-based infrastructure and data for training large language models to model design, practical applications, and prototypes and proofs of concept. OpenGPT-X aims to advance cutting-edge research with a focus on business applications, accelerating the adoption of generative AI within the German economy. The project also stresses responsible AI development to ensure the models are reliable and aligned with European values and laws. It provides resources such as the LLM Workbook and a three-part reference guide, with examples to help users better understand the key features and characteristics of large AI language models.
  • 35
    FLUX.1 Reviews

    FLUX.1

    Black Forest Labs

    Free
    FLUX.1, built by Black Forest Labs, emerges as a revolutionary set of open-source text-to-image AI models, boasting 12 billion parameters to redefine visual creativity. It eclipses competitors like Midjourney V6 and DALL-E 3 with its unmatched image quality, intricate detail, and adherence to user prompts, spanning an expansive spectrum of artistic styles and scenes. It is offered in three distinct editions: Pro for premium commercial applications, Dev for academic research with Pro-like performance, and Schnell for swift personal projects under the permissive Apache 2.0 license. FLUX.1 leverages novel techniques like flow matching and rotary positional embeddings, making it a pivotal tool for anyone looking to push the boundaries of AI-generated art.
  • 36
    Llama 3.1 Reviews
    An open-source AI model that you can fine-tune and distill anywhere. Our latest instruction-tuned models are available in 8B, 70B, and 405B versions. Our open ecosystem lets you build faster with a variety of differentiated product offerings that support your use cases. Choose between real-time and batch inference, download model weights for further cost-per-token optimization, adapt the model to your application, improve it with synthetic data, and deploy on-prem. Use Llama components and extend the model with RAG and zero-shot tool use to build agentic behavior. Use the 405B model's high-quality data to improve specialized models for specific use cases.
  • 37
    ChatGLM Reviews
    ChatGLM-6B is a Chinese-English bilingual dialogue model based on the General Language Model (GLM) architecture, with 6.2 billion parameters. With model quantization, users can deploy it locally on consumer-grade graphics cards (only 6GB of video memory is required at the INT4 quantization level); a deployment sketch follows. ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese dialogue and Q&A. After bilingual training on approximately 1T tokens of Chinese and English, supplemented by supervised fine-tuning, feedback bootstrapping, and reinforcement learning from human feedback, ChatGLM-6B is able to generate answers that align with human preferences.
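    A local-deployment sketch following the pattern documented on the THUDM/chatglm-6b model card; the INT4 quantization call assumes the custom model code loaded via trust_remote_code:

    ```python
    # Minimal sketch: local ChatGLM-6B inference with INT4 quantization.
    from transformers import AutoModel, AutoTokenizer

    model_id = "THUDM/chatglm-6b"
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    # quantize(4) keeps VRAM usage around 6GB, per the model card
    model = AutoModel.from_pretrained(model_id, trust_remote_code=True).half().quantize(4).cuda()
    model = model.eval()

    # The custom chat API keeps the dialogue history for multi-turn Q&A
    response, history = model.chat(tokenizer, "Hello", history=[])
    print(response)  # Chinese and English prompts both work
    ```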
  • 38
    GPT4All Reviews
    GPT4All provides an ecosystem for training and deploying large language models that run locally on consumer CPUs. The goal is to be the best assistant-style language model that any person or enterprise can freely use and distribute. A GPT4All model is a 3GB to 8GB file that you can download and plug into the GPT4All ecosystem software; a usage sketch follows. Nomic AI maintains and supports this software ecosystem to enforce quality and safety and to enable any person or company to easily train and deploy large language models at the edge. Data is a key ingredient in building a powerful, general-purpose large language model, and the GPT4All community has created the GPT4All Open Source Data Lake as a staging area for contributing instruction and assistant tuning data for future GPT4All model trains.
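    A minimal sketch with the gpt4all Python package, which downloads a quantized model file (a few GB) on first use; the model filename here is an example and may differ by release:

    ```python
    # Minimal sketch: CPU-local inference with the gpt4all package.
    from gpt4all import GPT4All

    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # downloaded on first run
    with model.chat_session():  # keeps multi-turn context within the block
        print(model.generate("Name three uses for a local LLM.", max_tokens=128))
    ```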
  • 39
    Cohere Reviews
    Cohere is an AI company that provides advanced language models designed to help businesses and developers create intelligent text-based applications. Their models support tasks like text generation, summarization, and semantic search, with options such as the Command family for high-performance applications and Aya Expanse for multilingual capabilities across 23 languages. Cohere emphasizes flexibility and security, offering deployment on cloud platforms, private environments, and on-premises systems. The company partners with major enterprises like Oracle and Salesforce to enhance automation and customer interactions through generative AI. Additionally, its research division, Cohere For AI, contributes to machine learning innovation by fostering global collaboration and open-source advancements.
  • 40
    Lune AI Reviews
    A community-managed marketplace of developer-created LLMs on technical topics that outperforms standalone AI models. Lunes are continually updated from the latest technical knowledge sources, such as GitHub repositories and documentation, which reduces hallucinations on technical queries, and you get references back, just like with Perplexity. Use hundreds of Lunes created by other users, ranging from Lunes trained on open-source software to curated collections built from tech blog posts, or get exposure by creating one from a variety of sources, including your own projects. Our API can be hot-swapped with OpenAI's (see the sketch below), so you can integrate with Cursor, Continue, and other tools that support OpenAI-compatible models, and you can continue your conversation from your IDE on Lune Web anytime. Get paid for each approved comment you make directly in the chat, or create a public Lune, share it, and get paid based on its popularity.
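    A hedged sketch of that hot-swap: the base URL, model name, and environment variable below are placeholders rather than documented Lune values, so consult Lune's docs for the real endpoint:

    ```python
    # Minimal sketch: pointing an OpenAI-compatible client at a Lune endpoint.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.lune.example/v1",  # hypothetical endpoint; check Lune's docs
        api_key=os.environ["LUNE_API_KEY"],      # hypothetical key variable
    )
    resp = client.chat.completions.create(
        model="example-lune",  # a Lune chosen from the marketplace
        messages=[{"role": "user", "content": "How do I paginate this GitHub API call?"}],
    )
    print(resp.choices[0].message.content)
    ```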
  • 41
    CodeGen Reviews
    CodeGen is an open-source model for program synthesis. It was trained on TPU-v4 and is competitive with OpenAI Codex.
  • 42
    Jurassic-2 Reviews
    Jurassic-2 is the latest generation of AI21 Studio's foundation models, a game-changer in the field of AI with new capabilities and top-tier quality. We're also releasing task-specific APIs with superior reading and writing capabilities. AI21 Studio's focus is to help businesses and developers leverage reading and writing AI to build real-world, tangible products. The release of Jurassic-2 and the Task-Specific APIs marks two significant milestones that will enable you to bring generative AI into production. Jurassic-2 (or J2, as we like to call it) is the next generation of our foundation models, with significant improvements in quality and new capabilities including zero-shot instruction-following, reduced latency, and multi-language support. The Task-Specific APIs offer developers industry-leading APIs for specialized reading and writing tasks.
  • 43
    OpenELM Reviews
    OpenELM is a family of open-source language models developed by Apple. It uses a layer-wise scaling strategy to allocate parameters efficiently within each layer of the transformer model, leading to improved accuracy compared to other open language models. OpenELM is trained on publicly available datasets and achieves the best performance for its size.
  • 44
    Code Llama Reviews
    Code Llama is a large language model (LLM) that can generate code from text prompts. As the most advanced publicly available LLM for code tasks, Code Llama has the potential to improve workflows for developers and lower the barrier for people learning to code. It can be used to improve productivity and to help programmers create more robust, well-documented software. Code Llama is a state-of-the-art LLM capable of generating both code and natural language about code from either code or natural-language prompts, and it is free for research and commercial use. Built on Llama 2, it is available in three variants: Code Llama, the foundational code model; Code Llama - Python, specialized for Python; and Code Llama - Instruct, fine-tuned to follow natural-language instructions.
  • 45
    Mixtral 8x7B Reviews
    Mixtral 8x7B is a high-quality sparse mixture-of-experts (SMoE) model with open weights, licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model and the best model overall in terms of cost/performance trade-offs, matching or exceeding GPT-3.5 on most standard benchmarks; a loading sketch follows.
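    A minimal loading sketch with transformers (model ID from the mistralai Hugging Face organization); note that while only ~13B of the ~47B total parameters are active per token, all experts must still fit in memory:

    ```python
    # Minimal sketch: chatting with Mixtral 8x7B Instruct via transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    messages = [{"role": "user", "content": "What is a sparse mixture of experts?"}]
    inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
    out = model.generate(inputs, max_new_tokens=64)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    ```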
  • 46
    LongLLaMA Reviews
    This repository contains a research preview of LongLLaMA, a large language model capable of handling contexts of up to 256k tokens. LongLLaMA is built on the foundation of OpenLLaMA and fine-tuned with the Focused Transformer method, and the LongLLaMA code builds on Code Llama. We release a smaller 3B base variant of LongLLaMA (not instruction-tuned) under a permissive Apache 2.0 license, along with inference code supporting longer contexts on Hugging Face. Our model weights are a drop-in replacement for LLaMA (for short contexts up to 2048 tokens) in existing implementations. We also provide evaluation results and comparisons against the original OpenLLaMA model.
  • 47
    R1 1776 Reviews
    Perplexity AI has introduced R1 1776 as an open-source variant of DeepSeek R1, aiming to advance transparency and collaboration in AI research. By making the model’s architecture and codebase publicly accessible, developers and researchers can refine and expand its capabilities for diverse applications. This initiative not only encourages community-driven innovation but also reinforces ethical AI development by building on the foundation of DeepSeek R1.
  • 48
    Qwen2.5-1M Reviews
    Qwen2.5-1M is an advanced open-source language model developed by the Qwen team, capable of handling up to one million tokens of context. This release introduces two upgraded variants, Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M, marking a significant expansion of Qwen's capabilities. To enhance efficiency, the team has also released an optimized inference framework built on vLLM, incorporating sparse attention techniques that accelerate processing of long-context inputs by 3x to 7x. The update enables more efficient handling of extensive text sequences, making it ideal for complex tasks requiring deep contextual understanding; a loading sketch follows. Additional insights into the model's architecture and performance improvements are detailed in the accompanying technical report.
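    A minimal loading sketch, assuming the Qwen/Qwen2.5-7B-Instruct-1M checkpoint on Hugging Face; genuinely feeding near-1M-token contexts requires the team's vLLM-based stack and far more GPU memory than shown here:

    ```python
    # Minimal sketch: long-document summarization with Qwen2.5-1M.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2.5-7B-Instruct-1M"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

    long_document = open("report.txt").read()  # placeholder: any book-length input
    messages = [{"role": "user", "content": f"Summarize:\n{long_document}"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    ```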
  • 49
    Cerebras-GPT Reviews
    Training state-of-the-art language models is extremely difficult: it requires large compute budgets, complex distributed computing techniques, and deep ML expertise. Few organizations are able to train large language models from scratch, and increasingly, those that have the expertise and resources do not open-source their results. We at Cerebras believe in open access to the latest models. Cerebras is proud to announce the release of Cerebras-GPT, a family of GPT models with 111 million to 13 billion parameters, to the open-source community. Trained using the Chinchilla formula, these models provide the highest accuracy for a given compute budget. Cerebras-GPT has faster training times and lower training costs, and it consumes less power, than any other publicly available model.
  • 50
    fullmoon Reviews
    Fullmoon, an open-source, free application, allows users to interact directly with large language models on their devices. This ensures privacy and offline accessibility. It is optimized for Apple silicon and works seamlessly across iOS, iPadOS macOS, visionOS platforms. Users can customize the app with themes, fonts and system prompts. It also integrates with Apple Shortcuts to enhance functionality. Fullmoon supports models like Llama-3.2-1B-Instruct-4bit and Llama-3.2-3B-Instruct-4bit, facilitating efficient on-device AI interactions without the need for an internet connection.