Best LFM-3B Alternatives in 2024

Find the top alternatives to LFM-3B currently available. Compare ratings, reviews, pricing, and features of LFM-3B alternatives in 2024. Slashdot lists the best LFM-3B alternatives on the market that offer competing products similar to LFM-3B. Sort through the LFM-3B alternatives below to make the best choice for your needs.

  • 1
    Qwen-7B Reviews
    Qwen-7B, also known as Tongyi Qianwen, is the 7B-parameter variant of the Qwen large language model series proposed by Alibaba Cloud. Qwen-7B is a Transformer-based language model pretrained on a large volume of data, including web texts, books, code, and more. Qwen-7B is also used to train Qwen-7B-Chat, an AI assistant built with large models and alignment techniques. The features of Qwen-7B include: Pre-trained with high-quality data. We have pretrained Qwen-7B on a large-scale, high-quality dataset that we constructed ourselves, containing over 2.2 trillion tokens. The dataset includes plain texts and code and covers a wide range of domains, spanning both general and professional data. Strong performance. We outperform competitors on a series of benchmark datasets that evaluate natural language understanding, mathematics, and coding. And more.
  • 2
    Phi-2 Reviews
    Phi-2 is a 2.7-billion-parameter language model that shows outstanding reasoning and language-understanding capabilities, representing state-of-the-art performance among base language models with fewer than 13 billion parameters. Thanks to innovations in model scaling, Phi-2 can match or even outperform models 25x larger on complex benchmarks. Phi-2's compact size makes it an ideal playground for researchers: it can be used to explore mechanistic interpretability, safety improvements, or fine-tuning experiments on a variety of tasks. We have included Phi-2 in the Azure AI Studio model catalog to encourage research and development of language models.
  • 3
    Amazon Titan Reviews
    Amazon Titan models are exclusive to Amazon Bedrock and incorporate Amazon's 25 years of experience in AI and machine learning innovation across its business. Via a fully managed API, Amazon Titan foundation models (FMs) provide customers with an array of high-performing text, image, and multimodal models. Amazon Titan models were created by AWS and pretrained on large datasets. They are powerful, general-purpose models that support a wide range of use cases while also supporting responsible AI. You can use them as-is or customize them privately with your own data. Amazon Titan Text Premier is an advanced model in the Amazon Titan Text family that delivers superior performance for a variety of enterprise applications. This model is optimized for integration with Agents and Knowledge Bases for Amazon Bedrock, making it an ideal option for building interactive generative AI applications.
  • 4
    LLaMA Reviews
    LLaMA (Large Language Model Meta AI) is a state-of-the-art foundational large language model created to help researchers advance their work in this subfield of AI. By providing smaller, more efficient models, LLaMA lets researchers who lack access to large amounts of infrastructure study these models, further democratizing access to this rapidly changing field. Training smaller foundation models like LLaMA is desirable because it takes far less computing power and resources to test new approaches, validate others' work, and explore new use cases. Foundation models are trained on large amounts of unlabeled data, which makes them ideal for fine-tuning on many tasks. We make LLaMA available in several sizes (7B, 13B, 33B, and 65B parameters), and we also share a LLaMA model card that explains how the model was built, in line with our Responsible AI practices.
  • 5
    Chinchilla Reviews
    Chinchilla is a large language model. It has the same compute budget as Gopher, but with 70B parameters and 4x as much data. Chinchilla consistently and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a wide range of downstream evaluation tasks. Chinchilla also uses substantially less compute for fine-tuning and inference, greatly facilitating downstream use. Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, a greater than 7% improvement over Gopher.
  • 6
    OpenAI o1 Reviews
    OpenAI o1 is a new series of AI models developed by OpenAI that focuses on enhanced reasoning abilities. These models, such as o1-preview and o1-mini, are trained with a novel reinforcement-learning approach that allows them to spend more time "thinking through" problems before presenting answers. This allows o1 to excel at complex problem-solving tasks in areas such as coding, mathematics, and science, outperforming previous models like GPT-4o. The o1 series is designed to tackle problems that require deeper thought processes, marking a significant step toward AI systems that can reason more like humans.
  • 7
    Gemma 2 Reviews
    Gemma models are a family of lightweight, open, state-of-the-art models created using the same research and technology as the Gemini models. These models include comprehensive security measures and help ensure responsible and reliable AI through curated datasets. Gemma models achieve exceptional comparative results, even surpassing some larger open models, at their 2B and 7B sizes. Keras 3.0 offers seamless compatibility with JAX, TensorFlow, and PyTorch. Gemma 2 has been redesigned to deliver unmatched performance and efficiency and is optimized for inference on a variety of hardware. The Gemma family comes in a variety of models that can be customized to meet your specific needs. Gemma models are lightweight, text-to-text, decoder-only language models trained on a large set of text, code, and mathematical content.
  • 8
    LFM-40B Reviews
    LFM-40B strikes a new balance between model size and output quality. It activates 12B parameters at inference time; its performance is comparable to models larger than it, and its MoE architecture enables higher throughput on more cost-effective hardware.
  • 9
    Baichuan-13B Reviews

    Baichuan-13B

    Baichuan Intelligent Technology

    Free
    Baichuan-13B is an open-source, commercially available large-scale language model with 13 billion parameters, developed by Baichuan Intelligent following Baichuan-7B. It achieves the best results among models of the same size on authoritative Chinese and English benchmarks. This release includes two versions: pretraining (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). Baichuan-13B uses more data and a larger model size: it expands the parameter count to 13 billion based on Baichuan-7B and trains on 1.4 trillion tokens of high-quality corpus, 40% more than LLaMA-13B, making it the open-source model with the most training data at the 13B size. It supports both Chinese and English, uses ALiBi position encoding, and has a context window of 4,096 tokens.
  • 10
    Claude 3.5 Haiku Reviews
    Our fastest model, delivering advanced coding, tool use, and reasoning at an affordable price. Claude 3.5 Haiku, our next-generation model, is our fastest yet. It is faster than Claude 3 Haiku, has improved across every skill set, and surpasses Claude 3 Opus on many intelligence benchmarks. Claude 3.5 Haiku can be accessed via our first-party API, Amazon Bedrock, and Google Cloud Vertex AI. Initially it is available as a text-only model, with image input coming later.
  • 11
    Jamba Reviews
    Jamba is a powerful and efficient long-context model that is open to builders but built for enterprises. Jamba's latency is superior to all other leading models of comparable size, and its 256K context window is the longest available. Jamba's Mamba-Transformer MoE architecture is designed to increase efficiency and reduce costs. Jamba includes key features out of the box, including function calling, JSON mode output, document objects, and citation mode. Jamba 1.5 models deliver high performance throughout the entire context window and score highly on common quality benchmarks. Secure deployment can be tailored to your enterprise: start using Jamba immediately on our production-grade SaaS platform, or deploy the Jamba model family via our strategic partners. For enterprises that require custom solutions, we offer VPC and on-premise deployments, plus hands-on management and continuous pre-training for organizations with unique, bespoke needs.
  • 12
    CodeQwen Reviews
    CodeQwen, developed by the Qwen Team at Alibaba Cloud, is the code version of Qwen. It is a transformer-based, decoder-only language model pretrained on a large amount of code. It shows strong code generation capabilities and performs well across a series of benchmarks, supporting long-context generation and understanding with a context length of 64K tokens. CodeQwen supports 92 coding languages and provides excellent performance on text-to-SQL, bug fixing, and more. Chatting with CodeQwen takes just a few lines of code with transformers: build the tokenizer and model from pretrained checkpoints and use the generate method to chat, with the chat template provided by the tokenizer. Following our previous practice, we apply the ChatML template for chat models, as shown in the sketch below. The model completes code snippets according to the prompts, without any additional formatting.
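    As a concrete illustration of that recipe, here is a minimal sketch using Hugging Face transformers; the checkpoint name is an assumption, so substitute the CodeQwen chat checkpoint you intend to use:

    ```python
    # Minimal CodeQwen chat sketch via transformers (checkpoint name assumed).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/CodeQwen1.5-7B-Chat"  # assumed chat checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # The tokenizer supplies the ChatML template mentioned above.
    messages = [{"role": "user", "content": "Write a function that checks if a string is a palindrome."}]
    input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

    output_ids = model.generate(input_ids, max_new_tokens=256)
    print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
    ```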
  • 13
    Gemma Reviews
    Gemma is a family of lightweight open models built using the same research and technology as the Gemini models. Gemma was developed by Google DeepMind together with other teams across Google; the name derives from the Latin gemma, meaning "precious stone". Alongside the model weights, we're releasing new tools to support developer innovation, foster collaboration, and guide responsible use of Gemma models. Gemma models share infrastructure and technical components with Gemini, Google's largest and most capable AI model. The Gemma 2B and 7B open models achieve best-in-class performance for their sizes, and Gemma models can run directly on a developer's desktop or laptop. Gemma surpasses significantly larger models on key benchmarks while adhering to our rigorous standards for safe and responsible outputs.
  • 14
    DBRX Reviews
    Databricks has created DBRX, an open, general-purpose LLM that sets a new benchmark for open LLMs. It gives open communities and enterprises building their own LLMs capabilities that were previously available only through closed model APIs. According to our measurements, DBRX surpasses GPT-3.5 and is competitive with Gemini 1.0 Pro. It is more capable at code than specialized models such as CodeLLaMA-70B, while also having the strength of a general-purpose LLM. This state-of-the-art quality comes with marked improvements in both training and inference performance. Thanks to its fine-grained mixture-of-experts (MoE) architecture, DBRX is the most efficient open model: inference is up to 2x faster than LLaMA2-70B, and DBRX is about 40% of the size of Grok-1 in both total and active parameter counts.
  • 15
    Qwen Reviews
    Qwen LLM is a family of large language models (LLMs) developed by Damo Academy, an Alibaba Cloud subsidiary. These models are trained on a large dataset of text and code, enabling them to understand and generate human-like text, translate languages, create different kinds of creative content, and answer questions in an informative way. Key features of Qwen LLMs include: Variety of sizes: the Qwen series ranges from 1.8 billion to 72 billion parameters, offering options for different needs and performance levels. Open source: certain versions of Qwen are open source, available for anyone to use and modify. Multilingual: Qwen supports multiple languages, including English, Chinese, and Japanese. Versatile: Qwen models are capable of a wide range of tasks, including text summarization, code generation, question answering, and translation.
  • 16
    OPT Reviews
    Large language models, often trained for hundreds of thousands of compute days, have shown a remarkable ability to learn in zero- and few-shot settings. These models are expensive to replicate due to their high computational cost, and the few available via APIs do not grant access to the full model weights, making them difficult to study. Open Pre-trained Transformers (OPT) is a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to share fully and responsibly with interested researchers. We show that OPT-175B has a carbon footprint one-seventh that of GPT-3. We are also releasing our logbook, detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
  • 17
    Claude 3.5 Sonnet Reviews
    Claude 3.5 Sonnet sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It excels at writing high-quality content with a natural, relatable tone, and it shows marked improvements in grasping nuance, humor, and complex instructions. Claude 3.5 Sonnet operates at twice the speed of Claude 3 Opus, making it ideal for complex tasks such as context-sensitive customer support and orchestrating multi-step workflows. Claude 3.5 Sonnet is available for free on Claude.ai and the Claude iOS app, with significantly higher rate limits for Claude Pro and Team plan subscribers. It is also accessible via the Anthropic API, Amazon Bedrock, and Google Cloud Vertex AI. The model costs $3 per million input tokens and $15 per million output tokens, with a 200K token context window.
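    For a rough sense of what those rates mean in practice, here is a small illustrative cost calculation (the token counts are made up):

    ```python
    # Illustrative Claude 3.5 Sonnet cost estimate at the listed rates:
    # $3 per 1M input tokens, $15 per 1M output tokens.
    INPUT_RATE = 3.00 / 1_000_000    # dollars per input token
    OUTPUT_RATE = 15.00 / 1_000_000  # dollars per output token

    input_tokens, output_tokens = 50_000, 4_000  # hypothetical request
    cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    print(f"${cost:.4f}")  # -> $0.2100
    ```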
  • 18
    Pixtral 12B Reviews
    Pixtral 12B is a groundbreaking multimodal AI model from Mistral AI, designed to process and understand both text and image data seamlessly. It represents a significant advance in the integration of different data types, allowing more intuitive interaction and enhanced content-creation capabilities. Built on Mistral's NeMo 12B text model, Pixtral 12B incorporates an additional vision adapter that adds 400 million parameters, enabling it to handle visual inputs of up to 1024x1024 pixels. The model supports a wide range of applications, from image analysis to answering questions about visual content, and its versatility has been demonstrated in real-world scenarios. With a large 128K-token context and innovative techniques such as GeLU activation and 2D RoPE for its vision components, Pixtral 12B is a powerful tool for developers.
  • 19
    Qwen2 Reviews
    Qwen2 is an extensive series of large language models developed by the Qwen Team at Alibaba Cloud. It includes both base models and instruction-tuned versions, with parameters ranging from 0.5 to 72 billion, and features both dense models and a Mixture-of-Experts model. The Qwen2 series is designed to surpass previous open-weight models, including its predecessor Qwen1.5, and to compete with proprietary models across a wide spectrum of benchmarks covering language understanding, generation, and multilingual capabilities.
  • 20
    Gemini Reviews
    Gemini was designed from the ground up to be multimodal. It is highly efficient at tool and API integrations and is built to support future innovations like memory and planning. We're seeing multimodal capabilities not present in previous models. Gemini is our most flexible model to date: it can run on anything from data centers to smartphones, and its cutting-edge capabilities will improve the way developers and enterprises build and scale with AI. We've optimized Gemini 1.0 in three sizes: Gemini Ultra, our largest and most capable model, designed for highly complex tasks; Gemini Pro, our best model for scaling across a wide range of tasks; and Gemini Nano, our most efficient model for on-device tasks.
  • 21
    GPT-4o mini Reviews
    A small model with superior textual intelligence and multimodal reasoning. GPT-4o mini's low cost and low latency enable a broad range of tasks, including applications that chain or parallelize multiple model calls (e.g., calling multiple APIs; see the sketch below), pass a large amount of context to the model (e.g., a full code base or conversation history), or interact with users through fast, real-time text responses (e.g., customer support chatbots). GPT-4o mini supports text and vision in the API today, with support for text, image, and video inputs and outputs coming in the future. The model supports up to 16K output tokens per request, has a knowledge cutoff of October 2023, and offers a 128K-token context window. The improved tokenizer shared with GPT-4o makes handling non-English text more cost-effective.
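    Here is a minimal sketch of that parallelization pattern, assuming the official openai Python SDK with an API key in the OPENAI_API_KEY environment variable; the prompts are illustrative:

    ```python
    # Fan out several independent GPT-4o mini calls concurrently with asyncio.
    import asyncio
    from openai import AsyncOpenAI

    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

    async def ask(prompt: str) -> str:
        resp = await client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    async def main() -> None:
        # Run the calls in parallel rather than one after another.
        answers = await asyncio.gather(
            ask("Summarize this support ticket: ..."),
            ask("Classify the sentiment of this review: ..."),
            ask("Extract action items from this transcript: ..."),
        )
        for answer in answers:
            print(answer)

    asyncio.run(main())
    ```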
  • 22
    ERNIE 3.0 Titan Reviews
    Pre-trained language models have achieved state-of-the-art results on various Natural Language Processing (NLP) tasks, and GPT-3 demonstrated that scaling up pre-trained language models can further exploit their immense potential. Recently, a framework named ERNIE 3.0 was proposed for pre-training large, knowledge-enhanced models; it trained a model with 10 billion parameters that outperformed the state of the art on a variety of NLP tasks. To explore the performance of scaling up ERNIE 3.0, we train a hundred-billion-parameter model called ERNIE 3.0 Titan, with up to 260 billion parameters, on the PaddlePaddle platform. We also design a self-supervised adversarial loss and a controllable language modeling loss to make ERNIE 3.0 Titan generate credible text.
  • 23
    StarCoder Reviews
    StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including data from 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to LLaMA, we trained a 15B-parameter model on 1 trillion tokens; we then fine-tuned StarCoderBase on 35B Python tokens, resulting in a new model we call StarCoder. StarCoderBase outperforms other open Code LLMs on popular programming benchmarks and matches or exceeds closed models such as OpenAI's code-cushman-001, the original Codex model that powered early versions of GitHub Copilot. With a context length of over 8,000 tokens, StarCoder models can process more input than any other open LLM, enabling a range of interesting applications; for example, by prompting the StarCoder models with a series of dialogues, we enabled them to act as a technical assistant. A minimal generation sketch appears below.
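    The sketch, using Hugging Face transformers; note the checkpoint is gated on Hugging Face and the prompt is illustrative:

    ```python
    # Minimal StarCoder code-completion sketch via transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "bigcode/starcoder"  # gated: accept the license on Hugging Face first
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

    prompt = "def fibonacci(n):"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
    ```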
  • 24
    Gemini Ultra Reviews
    Gemini Ultra is an advanced language model from Google DeepMind. It is the largest and most capable model in the Gemini family, which also includes Gemini Pro and Gemini Nano. Gemini Ultra was designed to handle highly complex tasks such as natural language processing, machine translation, and code generation. It is the first language model to outperform human experts on the Massive Multitask Language Understanding (MMLU) benchmark, achieving a score of 90%.
  • 25
    Martian Reviews
    Martian outperforms GPT-4 across OpenAI's evals (openai/evals). We transform opaque black boxes into interpretable visual representations. Our router is the first tool built with our model mapping method; model mapping is also being applied in many other areas, including turning transformers from inscrutable matrices into human-readable programs. Automatically reroute requests to other providers when a company has an outage or a high-latency period. Calculate how much money you could save with the Martian Model Router using our interactive cost calculator: enter your number of users and tokens per session, and specify how you want to trade off cost against quality.
  • 26
    Llama 2 Reviews
    The next generation of our large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models, ranging from 7B to 70B parameters. Llama 2 models are trained on 2 trillion tokens and have double the context length of Llama 1; the fine-tuned Llama 2 models have additionally been trained on over 1 million human annotations. Llama 2 outperforms many other open-source language models on external benchmarks, including tests of reasoning, coding, proficiency, and knowledge. Llama 2 was pretrained on publicly available online data sources, and Llama 2-Chat, the fine-tuned version, draws on publicly available instruction datasets and more than 1 million human annotations. We have a broad range of supporters around the world who are committed to our open approach to today's AI; these companies have given early feedback and are excited to build with Llama 2.
  • 27
    Alpa Reviews
    Alpa aims to automate large-scale distributed training. Alpa was originally developed by people in the Sky Lab at UC Berkeley, and its advanced techniques were described in a paper published at OSDI 2022. The Alpa community is growing, with new members from Google. A language model is a probability distribution over sequences of words: it uses all the words it has seen to predict the next word. Language models are useful in a variety of AI applications, from email auto-completion to chatbot services; see the language model Wikipedia page for more information. GPT-3 is a large language model with 175 billion parameters that uses deep learning to produce human-like text. Many researchers and news articles have described GPT-3 as "one of the most important and interesting AI systems ever created," and it is now used as a backbone in the latest NLP research.
  • 28
    OpenELM Reviews
    OpenELM is a family of open-source language models developed by Apple. It uses a layer-wise scaling strategy to allocate parameters efficiently within each layer of the transformer model, leading to improved accuracy compared to other open language models. OpenELM is trained on publicly available datasets and achieves the best performance for its size.
  • 29
    Jurassic-1 Reviews
    Jurassic-1 comes in two sizes; the Jumbo version, at 178B parameters, is the most advanced language model released for general use by developers. AI21 Studio, currently in open beta, lets anyone sign up and immediately begin querying Jurassic-1 through our API and interactive web environment. AI21 Labs' mission is to fundamentally change the way humans read and compose by introducing machines as thought partners, and we can only achieve this together. We have been researching language models since our Mesozoic Era (i.e., 2017). Jurassic-1 builds on this research and is the first generation of models we are making available for widespread use.
  • 30
    RoBERTa Reviews
    RoBERTa builds on BERT's language-masking strategy, in which the system learns to predict intentionally hidden sections of text within otherwise unannotated language examples. Implemented in PyTorch, RoBERTa modifies key hyperparameters of BERT, including removing BERT's next-sentence pretraining objective and training with much larger mini-batches. This allows RoBERTa to improve on the masked language modeling objective compared with BERT and leads to better downstream task performance; this masked-word prediction can be tried directly, as sketched below. We also explore training RoBERTa on far more data than BERT, for a longer amount of time, using both existing unannotated NLP datasets and CC-News, a novel set drawn from public news articles.
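    A minimal sketch of that prediction using the transformers fill-mask pipeline and the public roberta-base weights:

    ```python
    # Demonstrate RoBERTa's masked-language-modeling objective with fill-mask.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="roberta-base")
    # RoBERTa's mask token is <mask>; the model predicts the hidden word.
    for pred in fill("The capital of France is <mask>."):
        print(f"{pred['token_str'].strip()}: {pred['score']:.3f}")
    ```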
  • 31
    RedPajama Reviews
    GPT-4 and other foundation models have accelerated AI's development, but the most powerful models are closed commercial models or only partially open. RedPajama aims to create a set of leading, fully open-source models. Today, we're excited to announce the completion of the first phase of this project: the reproduction of the LLaMA training dataset of more than 1.2 trillion tokens. The most capable foundation models today are closed behind commercial APIs, which limits research, customization, and use with sensitive data. Fully open-source models could remove these limitations, if the open community can bridge the quality gap between open and closed models. Recent progress has been made in this area; in many ways, AI is having its Linux moment. Stable Diffusion showed that open-source software can not only compete with commercial offerings such as DALL-E but also produce incredible creative results from community participation.
  • 32
    ALBERT Reviews
    ALBERT is a self-supervised Transformer model pretrained on a large corpus of English data. It requires no manual labeling; instead, an automated process generates inputs and labels from the raw text. It is trained with two distinct objectives. The first is Masked Language Modeling, which randomly masks 15% of the words in an input sentence and requires the model to predict them. Unlike autoregressive models such as GPT, or RNNs, this allows the model to learn bidirectional sentence representations. The second is Sentence Order Prediction, which involves predicting the order of two consecutive text segments during pretraining.
  • 33
    Llama 3.2 Reviews
    There are now more versions of the open-source AI model that you can fine-tune, distill, and deploy anywhere. Llama 3.2 is a collection of pretrained and instruction-tuned large language models (LLMs). The 1B and 3B sizes are multilingual, text-only models; the 11B and 90B sizes accept both text and images as input and output text. Our latest release lets you build highly efficient, performant applications: use the 1B and 3B models for on-device applications, such as summarizing a conversation from your phone or calling on-device features like the calendar, and use the 11B and 90B models to transform an existing image or extract more information from a picture of your surroundings.
  • 34
    ChatGPT Reviews
    ChatGPT is a language model from OpenAI. Trained on a wide range of internet text, it can generate human-like responses to a variety of prompts. ChatGPT can be used for natural language processing tasks such as conversation, question answering, and text generation. It is a pretrained model that uses deep-learning algorithms to generate text: trained on large amounts of text data, it responds to a wide variety of prompts with human-like fluency. Its transformer architecture has proven effective across many NLP tasks. In addition to generating text, ChatGPT can answer questions, classify text, and translate languages, letting developers build powerful NLP applications that perform specific tasks more accurately. ChatGPT can also process and generate code.
  • 35
    GPT-4o Reviews

    GPT-4o

    OpenAI

    $5.00 / 1M tokens
    GPT-4o (the "o" stands for "omni") is an important step toward more natural interaction between humans and computers. It accepts any combination of text, audio, and image as input and can generate any combination of text, audio, and image outputs. It can respond to audio in as little as 232 milliseconds, with an average of 320 milliseconds, similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code while being faster and cheaper in the API, with significant improvement on text in non-English languages. GPT-4o is especially better than existing models at audio and vision understanding. A sketch of the combined text-and-image input path appears below.
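    The sketch assumes the official openai Python SDK and uses a placeholder image URL; audio input and output are not shown:

    ```python
    # Send mixed text + image input to GPT-4o and read back a text answer.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
            ],
        }],
    )
    print(resp.choices[0].message.content)
    ```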
  • 36
    Mathstral Reviews
    As a tribute to Archimedes, whose 2311th anniversary we celebrate this year, we release our first Mathstral 7B model, designed specifically for math reasoning and scientific discovery. The model has a 32k context window and is published under the Apache 2.0 license. We're contributing Mathstral to the science community to help solve advanced mathematical problems that require complex, multi-step logical reasoning. The Mathstral release is part of our broader effort to support academic projects and was produced in the context of our collaboration with Project Numina. Like Isaac Newton in his time, Mathstral stands on the shoulders of Mistral 7B and specializes in STEM. It achieves the highest level of reasoning in its size category on industry-standard benchmarks, scoring 56.6% on MATH and 63.47% on MMLU, with consistent MMLU gains over Mistral 7B across subjects.
  • 37
    Inflection AI Reviews
    Inflection AI is a leading artificial intelligence research and technology company focused on developing advanced AI systems that interact with humans more naturally and intuitively. Founded in 2022 by entrepreneurs including Mustafa Suleyman, one of the co-founders of DeepMind, and Reid Hoffman, co-founder of LinkedIn, the company's mission is to make powerful AI accessible and aligned with human values. Inflection AI specializes in creating large-scale language models that enhance human-AI interaction, aiming to transform industries from customer service to productivity by designing AI systems that are intelligent, responsive, and ethical. The company emphasizes safety, transparency, and user control, ensuring its innovations contribute positively to society while addressing the potential risks associated with AI.
  • 38
    MPT-7B Reviews
    Introducing MPT-7B, the latest entry in our MosaicML Foundation Series. MPT-7B is a transformer trained from scratch on 1T tokens of text and code. It is open source, available for commercial use, and matches the quality of LLaMA-7B. MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention, at a cost of roughly $200k. You can now train, fine-tune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch. For inspiration, we are also releasing three fine-tuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-StoryWriter-65k+, the last of which uses a context length of 65k tokens!
  • 39
    OpenAI o1-mini Reviews
    OpenAI o1-mini is a new, cost-effective AI model designed for enhanced reasoning, especially in STEM fields such as mathematics and coding. It is part of the o1 series, which focuses on solving problems by spending more time "thinking" through solutions. Although smaller and 80% cheaper than its sibling, o1-mini performs strongly on coding and mathematical reasoning tasks.
  • 40
    Mixtral 8x7B Reviews
    Mixtral 8x7B is a high-quality sparse mixture-of-experts (SMoE) model with open weights, licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall in terms of cost/performance trade-offs. In particular, it matches or exceeds GPT-3.5 on most standard benchmarks.
  • 41
    Claude 3 Opus Reviews
    Opus, our most intelligent model, outperforms its peers on most of the common evaluation benchmarks for AI systems, including undergraduate-level expert knowledge, graduate-level expert reasoning, basic mathematics, and more. It exhibits near-human levels of comprehension and fluency on complex tasks, leading the frontier of general intelligence. All Claude 3 models show increased capabilities in analysis and forecasting, nuanced content creation, code generation, and conversing in non-English languages such as Spanish, Japanese, and French.
  • 42
    Galactica Reviews
    Information overload is a major obstacle to scientific progress. The explosion of scientific literature and data has made it ever harder to find useful insights in a vast mass of information. Today, scientific knowledge is accessed through search engines, but they are unable to organize it. Galactica is a large language model that can store, combine, and reason about scientific knowledge. We trained it on a large corpus of scientific papers, reference material, knowledge bases, and many other sources. We outperform existing models on a range of scientific tasks: on technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3, 68.2% vs. 49.0%. Galactica is also strong at reasoning, outperforming Chinchilla on mathematical MMLU, 41.3% vs. 35.7%, and PaLM 540B on MATH, 20.4% vs. 8.8%.
  • 43
    Falcon-7B Reviews

    Falcon-7B

    Technology Innovation Institute (TII)

    Free
    Falcon-7B is a 7B-parameter causal decoder-only model built by TII, trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license. Why use Falcon-7B? It outperforms comparable open-source models such as MPT-7B, StableLM, and RedPajama, thanks to being trained on 1,500B tokens of RefinedWeb enhanced with curated corpora; see the OpenLLM Leaderboard. It features an architecture optimized for inference, with FlashAttention and multiquery attention. And it is available under the Apache 2.0 license, which allows commercial use without any restrictions or royalties.
  • 44
    Falcon-40B Reviews

    Falcon-40B

    Technology Innovation Institute (TII)

    Free
    Falcon-40B is a 40B-parameter causal decoder-only model built by TII, trained on 1,000B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license. Why use Falcon-40B? It is the best open-source model available: Falcon-40B outperforms LLaMA, StableLM, RedPajama, MPT, and others; see the OpenLLM Leaderboard. It features an architecture optimized for inference, with FlashAttention and multiquery attention, and is available under the Apache 2.0 license, which allows commercial use without any restrictions or royalties. This is a raw, pretrained model that should be fine-tuned for most use cases; if you are looking for a version that can take generic instructions in a chat format, we suggest Falcon-40B-Instruct.
  • 45
    Vicuna Reviews
    Vicuna-13B is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Preliminary evaluation using GPT-4 as a judge shows that Vicuna-13B achieves more than 90%* of the quality of OpenAI ChatGPT and Google Bard while outperforming other models such as LLaMA and Stanford Alpaca. Training Vicuna-13B costs around $300. The online demo, code, and weights are available for non-commercial use.
  • 46
    OpenLLaMA Reviews
    OpenLLaMA is a permissively licensed open-source reproduction of Meta AI's LLaMA 7B, trained on the RedPajama dataset. Our model weights can serve as a drop-in replacement for LLaMA 7B in existing implementations, as sketched below. We also offer a smaller 3B variant of the model.
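    A minimal loading sketch illustrating the drop-in usage, assuming the public openlm-research/open_llama_3b checkpoint and an illustrative prompt:

    ```python
    # Load OpenLLaMA with the standard LLaMA classes in transformers.
    from transformers import LlamaForCausalLM, LlamaTokenizer

    name = "openlm-research/open_llama_3b"
    tokenizer = LlamaTokenizer.from_pretrained(name)
    model = LlamaForCausalLM.from_pretrained(name)

    inputs = tokenizer("Q: What is the largest animal?\nA:", return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
    ```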
  • 47
    ChatGPT Pro Reviews
    As AI becomes more advanced, it will solve increasingly complex problems, and these capabilities require substantially more compute. ChatGPT Pro, a $200/month plan, gives you access to OpenAI's best models and tools. The plan includes unlimited access to OpenAI o1, our smartest model, as well as o1-mini and Advanced Voice. It also includes o1 pro mode, a version of o1 that uses more compute to think harder and provide even better answers to the most difficult problems. We expect to add more powerful, compute-intensive productivity features to this plan in the future. ChatGPT Pro gives you access to our most intelligent model, which thinks longer and more thoroughly for the most reliable answers. In evaluations by external expert testers, o1 pro mode consistently produced more accurate and comprehensive answers, especially in areas such as data science, programming, and case law analysis.
  • 48
    Ferret Reviews
    An MLLM that accepts any form of referring and grounds anything in response. Ferret Model: a hybrid region representation plus a spatial-aware visual sampler enable fine-grained, open-vocabulary referring and grounding. GRIT Dataset: a large-scale, hierarchical, robust ground-and-refer instruction tuning dataset. Ferret-Bench: a multimodal benchmark that requires referring/grounding as well as semantics, knowledge, and reasoning.
  • 49
    LongLLaMA Reviews
    This repository contains a research preview of LongLLaMA, a large language model capable of handling contexts of up to 256k tokens. LongLLaMA is built on the foundation of OpenLLaMA and fine-tuned using the Focused Transformer method; the LongLLaMA code is built on the foundation of Code Llama. We release a smaller base variant of LongLLaMA (not instruction-tuned) under a permissive license (Apache 2.0), along with inference code supporting longer contexts in Hugging Face. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations (for short contexts up to 2048 tokens). We also provide evaluation results and comparisons against the original OpenLLaMA models.
  • 50
    T5 Reviews
    With T5, we propose re-framing all NLP into a unified format where the input and the output are always text strings. This is in contrast to BERT models which can only output a class label, or a span from the input. Our text-totext framework allows us use the same model and loss function on any NLP task. This includes machine translation, document summary, question answering and classification tasks. We can also apply T5 to regression by training it to predict a string representation of a numeric value instead of the actual number.