Description: LLaMA-Factory

LLaMA-Factory is an open-source platform that simplifies fine-tuning for more than 100 Large Language Models (LLMs) and Vision-Language Models (VLMs). It supports a variety of fine-tuning methods, including Low-Rank Adaptation (LoRA), Quantized LoRA (QLoRA), and Prefix-Tuning, letting users adapt models to their own data with minimal setup. The platform has shown notable performance gains; for example, its LoRA tuning reports training speeds up to 3.7 times faster, along with higher Rouge scores on an advertising-text generation task, compared to conventional techniques. Built with flexibility in mind, its architecture supports a wide range of model types and configurations, so users can plug in their own datasets and use the platform's tooling for optimized fine-tuning. Comprehensive documentation and a variety of examples guide users through the fine-tuning process, and the project encourages the community to share techniques, fostering continuous improvement.
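The core idea behind LoRA, the method named above, is to freeze the base weight matrix and train only two small low-rank factors. The following is a minimal pure-Python sketch of that arithmetic; it is an illustrative toy, not LLaMA-Factory's actual implementation, and all function names here are hypothetical.

```python
# Minimal illustration of the LoRA idea: instead of updating a full weight
# matrix W (d x k), train two small factors B (d x r) and A (r x k) with
# r << min(d, k), and apply W_eff = W + (alpha / r) * B @ A at inference.
# Pure-Python matrices (lists of rows); a hypothetical toy example.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha):
    """Return W + (alpha / r) * B @ A, where r is the LoRA rank."""
    r = len(A)              # A is (r x k), B is (d x r)
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: d = 2, k = 2, rank r = 1.
W = [[1.0, 0.0],
     [0.0, 1.0]]            # frozen base weight
B = [[1.0], [2.0]]          # trainable down-projection (d x r)
A = [[0.5, 0.5]]            # trainable up-projection (r x k)

W_eff = lora_effective_weight(W, A, B, alpha=1.0)
# B @ A = [[0.5, 0.5], [1.0, 1.0]]; with scale = 1/1 the result is
# [[1.5, 0.5], [1.0, 2.0]]
print(W_eff)
```

Because only B and A are trained, the number of updated parameters drops from d·k to r·(d + k), which is what makes LoRA-style tuning fast and memory-light.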

Description: Stable Beluga

Stability AI, together with its CarperAI lab, has released Stable Beluga 1 and its more advanced successor, Stable Beluga 2 (previously known as FreeWilly), two open-access Large Language Models (LLMs). Both models exhibit strong reasoning capabilities across a wide range of benchmarks. Stable Beluga 1 builds on the original LLaMA 65B foundation model and was fine-tuned on a synthetically generated dataset using Supervised Fine-Tuning (SFT) in the conventional Alpaca format. Stable Beluga 2 builds on the LLaMA 2 70B foundation model, pushing performance further. Their release marks a significant step forward in open-access AI.
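The "conventional Alpaca format" mentioned above refers to the instruction/input/response prompt template popularized by the Alpaca project. A sketch of rendering one SFT example in that style is below; the template text follows the public Alpaca project, and `format_alpaca` is a hypothetical helper, not Stability AI's exact training pipeline.

```python
# Sketch of the conventional Alpaca instruction format used for supervised
# fine-tuning examples. Illustrative only; the helper name is hypothetical.

ALPACA_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

ALPACA_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def format_alpaca(instruction, input_text=""):
    """Render one SFT example prompt in Alpaca style."""
    if input_text:
        return ALPACA_WITH_INPUT.format(instruction=instruction, input=input_text)
    return ALPACA_NO_INPUT.format(instruction=instruction)

prompt = format_alpaca("Summarize the text.", "LLMs are large neural networks.")
print(prompt)
```

During SFT, the model's target completion is appended after the `### Response:` marker, so the model learns to answer when prompted in this layout.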

API Access

Has API

Integrations

ChatGLM
DeepSeek
Gemma
LLaVA
Llama
Llama 3
MLflow
Mistral AI
Mixtral 8x22B
Mixtral 8x7B
OpenAI
PaliGemma 2
Phi-2
Qwen
TensorBoard
TensorWave
Yi-Large


Pricing Details

Free
Free Trial
Free Version


Deployment

Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook


Customer Support

Business Hours
Live Rep (24/7)
Online Support


Types of Training

Training Docs
Webinars
Live Training (Online)
In Person


Vendor Details: LLaMA-Factory

Company Name

hoshi-hiyouga

Website

github.com/hiyouga/LLaMA-Factory

Vendor Details: Stable Beluga

Company Name

Stability AI

Founded

2021

Country

United Kingdom

Website

stability.ai/news/stable-beluga-large-instruction-fine-tuned-models

Alternatives

Llama 2 (Meta)
Llama (Meta)
Vicuna (lmsys.org)