LFM-40B Description
LFM-40B strikes a new balance between model size and output quality. It activates 12B of its 40B parameters at inference time. Its performance is comparable to larger models, and its Mixture-of-Experts (MoE) architecture enables higher throughput on more cost-effective hardware.
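The idea behind sparse activation can be illustrated with a toy sketch. This is not LFM-40B's actual architecture; the expert count, dimensions, and gating scheme below are all illustrative assumptions, chosen only to show how a Mixture-of-Experts layer activates a fraction of its total parameters per token.

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts = 8        # hypothetical expert count
d_model = 16         # hypothetical hidden dimension
top_k = 2            # experts activated per token

# One weight matrix per expert; total parameters = n_experts * d_model^2
experts = rng.standard_normal((n_experts, d_model, d_model))
gate = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route token vector x to its top-k experts and mix their outputs."""
    scores = x @ gate                      # gating logits, one per expert
    top = np.argsort(scores)[-top_k:]      # indices of the top-k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the selected experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.standard_normal(d_model)
y = moe_forward(x)

total_params = experts.size
active_params = top_k * d_model * d_model
print(f"active fraction per token: {active_params / total_params:.2f}")  # 0.25
```

Because only `top_k` of the `n_experts` expert matrices are multiplied per token, compute per token scales with the active parameters, not the total, which is why MoE models can match larger dense models at lower serving cost.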
LFM-40B Alternatives
Claude 3.5 Sonnet
Claude 3.5 Sonnet sets a new industry benchmark in graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It excels at producing high-quality content with a natural, relatable tone, and shows marked improvements in understanding nuance, humor, and complex instructions. Claude 3.5 Sonnet is twice as fast as Claude 3 Opus, making it ideal for complex tasks such as context-sensitive customer support and workflow orchestration.
Claude 3.5 Sonnet is available for free on Claude.ai and the Claude iOS app, and subscribers to the Claude Pro and Team plans get access with significantly higher rate limits. It is also accessible via the Anthropic API, Amazon Bedrock, and Google Cloud Vertex AI. The model costs $3 per million input tokens and $15 per million output tokens, with a 200K token context window.
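The published pricing makes per-request cost easy to estimate. A minimal sketch of that arithmetic, using the rates quoted above ($3 per million input tokens, $15 per million output tokens):

```python
# Claude 3.5 Sonnet API pricing as quoted: $3/M input, $15/M output tokens.
INPUT_PRICE_PER_M = 3.00
OUTPUT_PRICE_PER_M = 15.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single API request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 10K-token prompt with a 2K-token reply
print(f"${request_cost(10_000, 2_000):.4f}")  # $0.0600
```

Note that output tokens cost five times as much as input tokens, so long generations dominate the bill even for prompt-heavy workloads.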
Learn more
LFM-3B
LFM-3B offers incredible performance for its small size. It ranks first among 3B-parameter transformers, hybrids, and RNN models, and outperforms previous generations of 7B and 13B models. It is also comparable to Phi-3.5-mini on multiple benchmarks while being 18.4% smaller. LFM-3B is well suited to mobile and other text-based edge applications.
Learn more
Mixtral 8x22B
Mixtral 8x22B is our latest open model. It sets new standards for performance and efficiency in the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. It is fluent in English, French, Italian, German, and Spanish, and has strong math and coding skills. It is natively capable of function calling; this, along with the constrained-output mode implemented on La Plateforme, enables application development at scale and modernization of tech stacks. Its 64K context window allows precise information retrieval from large documents. We build models with unmatched cost efficiency for their respective sizes, delivering the best performance-to-cost ratio among community-provided models. Mixtral 8x22B continues our open model family, and its sparse activation patterns make it faster than any dense 70B model.
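Native function calling means the model emits structured arguments for tools the application declares. The sketch below shows the JSON-schema style of tool declaration commonly used for this; the function name, fields, and simulated model reply are all hypothetical, and the actual API call to the model is not shown.

```python
import json

# Hypothetical tool declaration in JSON-schema style (illustrative only).
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",            # hypothetical function name
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# A model with native function calling returns the arguments as JSON,
# which the application parses and dispatches to the real function:
model_reply = '{"city": "Paris"}'          # simulated model output
args = json.loads(model_reply)
print(args["city"])  # Paris
```

Constrained-output modes work similarly: the application supplies a schema, and decoding is restricted so the model's output always parses against it.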
Learn more
Gemma
Gemma is a family of lightweight open models built from the same research and technology as the Gemini models. Gemma was developed by Google DeepMind along with other teams across Google; the name derives from the Latin gemma, meaning "precious stone". Alongside the model weights, we're releasing new tools to encourage developer innovation, foster collaboration, and guide responsible use of Gemma models. Gemma models share infrastructure and technical components with Gemini, Google's largest and most capable AI model. The Gemma 2B and 7B open models achieve best-in-class performance for their sizes and can run directly on a developer's desktop or laptop. Gemma surpasses significantly larger models on key benchmarks while adhering to our rigorous standards for safe and responsible outputs.
Learn more
Integrations
Company Details
Company:
Liquid AI
Headquarters:
United States
Website:
www.liquid.ai/liquid-foundation-models
Product Details
Platforms
SaaS
Type of Training
Documentation
Customer Support
Online
LFM-40B Features and Options
LFM-40B Lists
LFM-40B User Reviews
Write a Review