Best Large Language Models for Linux of 2024

Find and compare the best Large Language Models for Linux in 2024

Use the comparison tool below to compare the top Large Language Models for Linux on the market. You can filter results by user reviews, pricing, features, platform, region, support options, integrations, and more.

  • 1
    BLACKBOX AI Reviews

    BLACKBOX AI

    BLACKBOX AI

    Free
    Available in more than 20 programming languages, including Python, JavaScript, TypeScript, Ruby, Go, and many others. BLACKBOX AI code search was created so that developers can find the best code fragments to use when building products. Integrations include IDEs and environments such as VS Code, GitHub Codespaces, Jupyter Notebook, Paperspace, and many more. Code can be searched in languages such as Python, Java, C++, C#, SQL, PHP, Go, and TypeScript. There is no need to leave your coding environment to search for a specific function. Blackbox also lets you select code from any video and copy it straight into your text editor; it supports all programming languages and preserves the correct indentation. The Pro plan allows you to copy text in over 200 languages and all programming languages.
  • 2
    Stable LM Reviews

    Stable LM

    Stability AI

    Free
    StableLM: Stability AI language models. StableLM builds on our experience open-sourcing earlier language models in collaboration with EleutherAI, a nonprofit research hub. These models include GPT-J, GPT-NeoX, and the Pythia suite, which were all trained on The Pile dataset. Cerebras-GPT and Dolly-2 are two recent open-source models that continue to build on these efforts. StableLM was trained on a new dataset that is three times bigger than The Pile and contains 1.5 trillion tokens. We will provide more details about the dataset at a later date. The richness of this dataset allows StableLM to perform well in conversational and coding tasks despite the small size of the models (3 to 7 billion parameters, compared to GPT-3's 175 billion). The development of Stable LM 3B broadens the range of applications that are viable on the edge or on home PCs. This means that individuals and companies can now develop cutting-edge technologies with strong conversational capabilities, such as creative writing assistance, while keeping costs low and performance high.
  • 3
    GPT4All Reviews

    GPT4All

    Nomic AI

    Free
    GPT4All provides an ecosystem for training and deploying large language models that run locally on consumer CPUs. The goal is to be the best assistant-style language model that anyone or any enterprise can freely use and distribute. A GPT4All model is a 3GB to 8GB file that you download and plug into the GPT4All ecosystem software. Nomic AI maintains and supports this software ecosystem to enforce quality and safety and to enable any person or company to easily train and deploy large language models on the edge. Data is a key ingredient in building a powerful and general-purpose large language model. The GPT4All community has created the GPT4All Open Source Data Lake as a staging area for contributing instruction and assistant tuning data for future GPT4All model training runs. A hedged usage sketch follows below.
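    A minimal sketch of what local use could look like with the gpt4all Python bindings (pip install gpt4all). The specific model file name and prompt below are illustrative assumptions, not part of the listing; the library fetches the model file on first use if it is not already present.

    ```python
    # Hedged sketch: local CPU inference with the gpt4all Python bindings.
    from gpt4all import GPT4All

    # Example model name (an assumption); gpt4all downloads it on first use.
    model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

    with model.chat_session():
        reply = model.generate("Explain what a context window is.", max_tokens=200)
        print(reply)
    ```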
  • 4
    Qwen-7B Reviews

    Qwen-7B

    Alibaba

    Free
    Qwen-7B is the 7B-parameter variant of Qwen (Tongyi Qianwen), the large language model series proposed by Alibaba Cloud. Qwen-7B is a Transformer-based language model pretrained on a large volume of data, including web texts, books, code, and more. Qwen-7B is also used to train Qwen-7B-Chat, an AI assistant built with large-model alignment techniques. Key features of Qwen-7B include pretraining on high-quality data: we pretrained Qwen-7B on a self-constructed, large-scale, high-quality dataset of over 2.2 trillion tokens. The dataset contains plain text and code and covers a wide range of domains, from general to professional data. It also offers strong performance, outperforming comparable models on a series of benchmark datasets that evaluate natural language understanding, mathematics, coding, and more. A hedged loading sketch follows below.
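    As a rough sketch, the chat variant can reportedly be loaded through Hugging Face transformers with trust_remote_code enabled. The repository name and the chat method are taken from the public model card and should be treated as assumptions here.

    ```python
    # Hedged sketch: loading Qwen-7B-Chat via transformers (custom remote code).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "Qwen/Qwen-7B-Chat"  # assumed Hugging Face repository name
    tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        repo, device_map="auto", trust_remote_code=True
    ).eval()

    # The .chat helper is provided by the model's remote code, per its card.
    response, history = model.chat(
        tokenizer, "Give me a short introduction to large language models.", history=None
    )
    print(response)
    ```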
  • 5
    Jurassic-2 Reviews

    Jurassic-2

    AI21

    $29 per month
    Jurassic-2 is the latest generation of AI21 Studio's foundation models. It's a game changer in the field of AI, with new capabilities and top-tier quality. We're also releasing task-specific APIs with superior reading and writing capabilities. AI21 Studio's focus is to help businesses and developers leverage reading and writing AI to build real-world, tangible products. The release of Jurassic-2 and the Task-Specific APIs marks two significant milestones that will enable you to bring generative AI into production. Jurassic-2 (or J2, as we like to call it) is the next generation of our foundation models, with significant improvements in quality and new capabilities including zero-shot instruction-following, reduced latency, and multi-language support. The task-specific APIs offer developers industry-leading endpoints for specialized reading and writing tasks. A hedged API-call sketch follows below.
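    A hedged sketch of calling a Jurassic-2 completion endpoint over HTTP. The URL path, model name, payload fields, and response shape are assumptions based on the REST style of AI21 Studio's API and may differ from the current documentation.

    ```python
    # Hedged sketch: text completion against an assumed AI21 Studio endpoint.
    import os
    import requests

    api_key = os.environ["AI21_API_KEY"]  # assumes a key issued by AI21 Studio

    resp = requests.post(
        "https://api.ai21.com/studio/v1/j2-mid/complete",  # assumed endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "prompt": "Write a one-sentence product description for a smart kettle.",
            "maxTokens": 60,
        },
    )
    resp.raise_for_status()
    # Assumed response shape: completions -> data -> text.
    print(resp.json()["completions"][0]["data"]["text"])
    ```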
  • 6
    Dolly Reviews

    Dolly

    Databricks

    Free
    Dolly is an inexpensive LLM that demonstrates a surprising degree of the instruction-following capability exhibited by ChatGPT. Whereas the work from the Alpaca team showed that state-of-the-art models can be coaxed into high-quality instruction-following behavior, we find that even years-old open-source models with much earlier architectures exhibit striking behaviors when fine-tuned on a small corpus of instruction training data. Dolly uses an open-source, 6-billion-parameter model from EleutherAI, modified to exhibit new capabilities such as brainstorming and text generation that were not present in the original.
  • 7
    mT5 Reviews

    mT5

    Google

    Free
    Multilingual T5 (mT5) is a massively multilingual pretrained text-to-text transformer model, trained following a recipe similar to T5. This repo can be used to reproduce the experiments described in the mT5 paper; a hedged loading sketch follows below. The mC4 corpus covers 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, and more.
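    A minimal sketch of loading a released mT5 checkpoint with Hugging Face transformers and computing one text-to-text training loss, in the spirit of the T5 recipe. The checkpoint name and the example sentence pair are illustrative; the released mT5 models are pretrained only, so they generally need fine-tuning before producing useful output.

    ```python
    # Hedged sketch: one supervised text-to-text step with an mT5 checkpoint.
    from transformers import AutoTokenizer, MT5ForConditionalGeneration

    tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
    model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

    inputs = tokenizer(
        "translate English to German: The weather is nice today.",
        return_tensors="pt",
    )
    labels = tokenizer("Das Wetter ist heute schön.", return_tensors="pt").input_ids

    # Forward pass with labels returns the cross-entropy loss used for fine-tuning.
    loss = model(**inputs, labels=labels).loss
    print(float(loss))
    ```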
  • 8
    Cerebras-GPT Reviews
    Training state-of-the-art language models is extremely difficult: it requires large compute budgets, complex distributed computing techniques, and deep ML expertise. As a result, few organizations train large language models from scratch, and increasingly, even those with the expertise and resources to do so choose not to open source their results. We at Cerebras believe in open access to the latest models. Cerebras is proud to announce the release of Cerebras-GPT, a family of GPT models with 111 million to 13 billion parameters, to the open-source community. Trained using the Chinchilla formula, these models provide the highest accuracy for a given compute budget. Cerebras-GPT has faster training times and lower training costs, and consumes less power, than any other publicly available model.
  • 9
    Falcon-40B Reviews

    Falcon-40B

    Technology Innovation Institute (TII)

    Free
    Falcon-40B is a 40B-parameter causal decoder-only model built by TII. It was trained on 1,000B tokens of RefinedWeb enhanced with curated corpora and is available under the Apache 2.0 licence. Why use Falcon-40B? It is the best open-source model available, outperforming LLaMA, StableLM, RedPajama, MPT, and others on the OpenLLM Leaderboard. It features an architecture optimized for inference, with FlashAttention and multiquery attention. It is available under an Apache 2.0 licence that allows commercial use without any restrictions or royalties. This is a raw model that should be fine-tuned for most uses. If you are looking for a version that can take generic instructions in a chat format, we suggest Falcon-40B-Instruct.
  • 10
    Falcon-7B Reviews

    Falcon-7B

    Technology Innovation Institute (TII)

    Free
    Falcon-7B is a 7B-parameter causal decoder-only model built by TII. It was trained on 1,500B tokens of RefinedWeb enhanced with curated corpora and is available under the Apache 2.0 licence. Why use Falcon-7B? It outperforms comparable open-source models such as MPT-7B, StableLM, and RedPajama on the OpenLLM Leaderboard, a result of being trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It features an architecture optimized for inference, with FlashAttention and multiquery attention. It is available under an Apache 2.0 licence that allows commercial use without any restrictions or royalties. A hedged generation sketch follows below.
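    A minimal text-generation sketch with Hugging Face transformers. The instruction-tuned variant is used here because the raw Falcon models are meant to be fine-tuned before use; the repository name tiiuae/falcon-7b-instruct and the prompt are assumptions for illustration.

    ```python
    # Hedged sketch: generating text with Falcon-7B-Instruct via transformers.
    import torch
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="tiiuae/falcon-7b-instruct",  # assumed instruct checkpoint
        torch_dtype=torch.bfloat16,
        device_map="auto",
        trust_remote_code=True,  # needed on older transformers versions
    )

    out = generator("Write a haiku about open-source models.", max_new_tokens=60)
    print(out[0]["generated_text"])
    ```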
  • 11
    RedPajama Reviews

    RedPajama

    RedPajama

    Free
    GPT-4 and other foundation models have accelerated AI's development, but the most powerful models are closed commercial models or only partially open. RedPajama aims to create a set of leading, fully open-source models. Today, we're excited to announce that the first phase of this project is complete: the reproduction of LLaMA's training dataset of more than 1.2 trillion tokens. The most capable foundation models are currently closed behind commercial APIs, which limits research, customization, and their use with sensitive data. If the open community can bridge the quality gap between closed and open models, fully open-source models could be the answer to these limitations, and recent progress has been made in this area. In many ways, AI is having its Linux moment. Stable Diffusion demonstrated that open-source software can not only compete with commercial offerings such as DALL-E, but also lead to incredible creative results through community participation.
  • 12
    Vicuna Reviews

    Vicuna

    lmsys.org

    Free
    Vicuna-13B is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. A preliminary evaluation of Vicuna-13B using GPT-4 as a judge shows that it achieves more than 90%* of the quality of OpenAI ChatGPT and Google Bard while outperforming other models such as LLaMA and Stanford Alpaca. Vicuna-13B costs around $300 to train. The online demo, code, and weights are available for non-commercial use. A hedged loading sketch follows below.
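    A hedged sketch of running a Vicuna checkpoint with transformers, using the single-turn prompt template documented for the lmsys releases. Both the model ID (a smaller 7B sibling is used for feasibility) and the template are assumptions drawn from those public releases.

    ```python
    # Hedged sketch: single-turn chat with a Vicuna checkpoint via transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "lmsys/vicuna-7b-v1.5"  # assumed smaller sibling of Vicuna-13B
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Prompt template assumed from the lmsys release notes.
    prompt = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's "
        "questions. USER: What is a mixture-of-experts model? ASSISTANT:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```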
  • 13
    MPT-7B Reviews

    MPT-7B

    MosaicML

    Free
    Introducing MPT-7B, the latest addition to our MosaicML Foundation Series. MPT-7B is a transformer trained from scratch on 1T tokens of text and code. It is open source, available for commercial use, and matches the quality of LLaMA-7B. MPT-7B was trained on the MosaicML platform in 9.5 days, with zero human intervention, at a cost of roughly $200k. You can now train, fine-tune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch. For inspiration, we are also releasing three fine-tuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-StoryWriter-65k+, the last of which uses a context length of 65k tokens!
  • 14
    OpenLLaMA Reviews

    OpenLLaMA

    OpenLLaMA

    Free
    OpenLLaMA is a permissively licensed, open-source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset. Our model weights can serve as a drop-in replacement for LLaMA 7B in existing implementations. We also offer a smaller 3B version of the model.
  • 15
    ChatGLM-6B Reviews

    ChatGLM-6B

    Zhipu AI

    Free
    ChatGLM-6B is a Chinese-English bilingual dialogue model based on the General Language Model (GLM) architecture, with 6.2 billion parameters. With model quantization, users can deploy it locally on consumer-grade graphics cards (as little as 6GB of video memory is required at the INT4 quantization level). ChatGLM-6B is based on technology similar to ChatGPT and optimized for Chinese dialogue and Q&A. After roughly 1T tokens of bilingual Chinese-English training, supplemented by supervised fine-tuning, feedback bootstrapping, and reinforcement learning from human feedback, the 6.2-billion-parameter ChatGLM-6B is able to generate answers that align well with human preferences. A hedged local-deployment sketch follows below.
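    A hedged sketch of local deployment via transformers, following the usage shown on the public THUDM model card. The repository names and the chat helper come from that card's custom remote code, so treat them as assumptions; the pre-quantized INT4 repository is used to target roughly 6GB GPUs.

    ```python
    # Hedged sketch: local ChatGLM-6B inference at INT4 quantization.
    from transformers import AutoModel, AutoTokenizer

    repo = "THUDM/chatglm-6b-int4"  # assumed pre-quantized INT4 repository
    tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
    model = AutoModel.from_pretrained(repo, trust_remote_code=True).half().cuda().eval()

    # The .chat helper is provided by the model's remote code, per its card.
    response, history = model.chat(tokenizer, "用一句话介绍一下 Linux。", history=[])
    print(response)
    ```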
  • 16
    Code Llama Reviews
    Code Llama is a large language model (LLM) that can generate code from text prompts. Code Llama is the most advanced publicly available LLM for code tasks and has the potential to speed up workflows for developers and lower the barrier for people learning to code. Code Llama can be used as a productivity and educational tool to help programmers write more robust, well-documented software. Code Llama is a state-of-the-art LLM capable of generating both code and natural language about code, from both code and natural-language prompts. Code Llama is free for research and commercial use. It is built on top of Llama 2 and is available in three versions: Code Llama, the foundational code model; Code Llama - Python, specialized for Python; and Code Llama - Instruct, which is fine-tuned to follow natural-language instructions.
  • 17
    Jan Reviews

    Jan

    Jan

    Free
    Customizable AI assistants, global hotkeys, and in-line AI help you double your productivity, with elegant features that integrate seamlessly into your workflows. Conversations, preferences, and model usage stay on your computer: secure, exportable, and deletable at any time.
  • 18
    Mixtral 8x7B Reviews

    Mixtral 8x7B

    Mistral AI

    Free
    Mixtral 8x7B is a high-quality sparse mixture-of-experts (SMoE) model with open weights, licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model and the best model overall in terms of cost/performance tradeoffs, matching or exceeding GPT-3.5 on most standard benchmarks.
  • 19
    Codestral Reviews

    Codestral

    Mistral AI

    Free
    We are proud to introduce Codestral, our first-ever code model. Codestral is an open-weight generative AI model explicitly designed for code generation. It allows developers to write and interact with code through a shared API endpoint for instructions and completion. Because it masters both code and English, it can be used to build advanced AI applications for software developers. Codestral was trained on a diverse dataset of 80+ programming languages, including the most popular ones such as Python, Java, C, C++, JavaScript, and Bash. It also performs well on more specialized languages such as Swift and Fortran. This broad language base allows Codestral to assist developers in a variety of coding environments and projects. A hedged API-call sketch follows below.
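    A hedged sketch of calling Codestral through Mistral AI's hosted chat completions endpoint over HTTP. The URL, model identifier, and response shape follow the OpenAI-style layout of Mistral's API and should be treated as assumptions here rather than confirmed documentation.

    ```python
    # Hedged sketch: requesting a code completion from a hosted Codestral endpoint.
    import os
    import requests

    resp = requests.post(
        "https://api.mistral.ai/v1/chat/completions",  # assumed endpoint
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json={
            "model": "codestral-latest",  # assumed model identifier
            "messages": [
                {
                    "role": "user",
                    "content": "Write a Python function that reverses a linked list.",
                }
            ],
        },
    )
    resp.raise_for_status()
    # Assumed OpenAI-style response shape: choices -> message -> content.
    print(resp.json()["choices"][0]["message"]["content"])
    ```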
  • 20
    PaLM Reviews
    The PaLM API allows you to easily and safely build on top of our best language models. We are making an efficient model available today, in terms of both size and capabilities, and we will add more sizes soon. MakerSuite is an intuitive tool for quickly prototyping ideas; over time it will include features for prompt engineering, synthetic data generation, and custom-model tuning, all supported by robust safety tools. Only a select group of developers can access the PaLM API and MakerSuite in private preview today. Stay tuned for our waitlist.
  • 21
    StarCoder Reviews

    StarCoder

    BigCode

    Free
    StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including data from 80+ programming languages, Git commits and issues, and Jupyter notebooks. Similar to LLaMA, we trained a 15B-parameter model on 1 trillion tokens. We then fine-tuned StarCoderBase on 35B Python tokens, resulting in a new model we call StarCoder. StarCoderBase outperforms other open Code LLMs on popular programming benchmarks and matches or exceeds closed models such as code-cushman-001 from OpenAI, the original Codex model that powered early versions of GitHub Copilot. With a context length of over 8,000 tokens, the StarCoder models can process more input than any other open LLM, which enables a variety of interesting applications; for example, by prompting the StarCoder models with a series of dialogues, we enabled them to act as a technical assistant. A hedged completion sketch follows below.
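    A minimal code-completion sketch with transformers. The bigcode/starcoder repository is gated behind an access agreement on the Hugging Face Hub, so this assumes the agreement has been accepted and you are authenticated; the prompt is illustrative.

    ```python
    # Hedged sketch: completing a function signature with StarCoder.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "bigcode/starcoder"  # gated repo; requires accepted access agreement
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = "def fibonacci(n: int) -> int:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=80)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```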
  • 22
    Llama 2 Reviews
    The next generation of the large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models, ranging from 7B to 70B parameters. Llama 2 models were trained on 2 trillion tokens and have double the context length of Llama 1. The fine-tuned Llama 2 models have additionally been trained on over 1 million human annotations. Llama 2 outperforms many other open-source language models on external benchmarks, including tests of reasoning, coding, proficiency, and knowledge. Llama 2 was pretrained on publicly available online data sources. Llama 2 Chat, the fine-tuned version of the model, leverages publicly available instruction datasets and more than 1 million human annotations. We have a broad range of supporters around the world who are committed to our open approach to today's AI. These companies have provided early feedback and expressed excitement to build with Llama 2.
  • 23
    Jurassic-1 Reviews
    Jurassic-1 comes in two sizes; the Jumbo version, with 178B parameters, is the most advanced language model released to developers for general use. AI21 Studio, currently in open beta, allows anyone to sign up for the service and immediately begin querying Jurassic-1 through our API and interactive web environment. AI21 Labs' mission is to fundamentally change the way humans read and compose by introducing machines as partners in thought, and we can only achieve this by working together. We have been researching language models since the Mesozoic Era (that is, 2017). Jurassic-1 is based on this research and is the first generation we are making available for wide use.
  • 24
    PaLM 2 Reviews
    PaLM 2 is Google's next-generation large language model, building on Google's research and development in machine learning. It excels at advanced reasoning tasks, including code and mathematics, classification and question answering, translation and multilingual competency, and natural-language generation, better than previous state-of-the-art LLMs, including PaLM. It can accomplish these tasks because of the way it was built, combining compute-optimal scaling, an improved dataset mixture, and model-architecture improvements. PaLM 2 is grounded in Google's approach to building and deploying AI responsibly. It was rigorously evaluated for potential biases and harms, as well as for its capabilities and downstream uses in research and product applications. It is being used to power generative AI features and tools at Google, such as Bard and the PaLM API, and other state-of-the-art models such as Sec-PaLM and Med-PaLM 2.
  • 25
    DeepSeek LLM Reviews
    Introducing DeepSeek LLM, an advanced language model with 67 billion parameters. It was trained from scratch on a massive dataset of 2 trillion tokens in both English and Chinese. To encourage research, we have made DeepSeek LLM 67B Base and DeepSeek LLM 67B Chat available as open source to the research community.