What Integrates with Unify AI?
Find out what Unify AI integrations exist in 2025. Learn what software and services currently integrate with Unify AI, and sort them by reviews, cost, features, and more. Below is a list of products that Unify AI currently integrates with:
1
Mistral AI
Mistral AI
Free (674 Ratings)
Mistral AI is an advanced artificial intelligence company focused on open-source generative AI solutions. Offering adaptable, enterprise-level AI tools, the company enables deployment across cloud, on-premises, edge, and device-based environments. Key offerings include "Le Chat," a multilingual AI assistant designed for enhanced efficiency in both professional and personal settings, and "La Plateforme," a development platform for building and integrating AI-powered applications. With a strong emphasis on transparency and innovation, Mistral AI continues to drive progress in open-source AI and contribute to shaping AI policy.
2
TensorFlow
TensorFlow
Free (2 Ratings)
An open-source platform for machine learning. TensorFlow is a machine learning platform that is open source and available to all. It offers a flexible, comprehensive ecosystem of tools, libraries, and community resources that lets researchers push the boundaries of machine learning and lets developers easily build and deploy ML-powered applications. Train and develop models easily using high-level APIs such as Keras, which allow for quick model iteration and easy debugging. Whatever language you use, you can train and deploy models in the cloud, in the browser, on-prem, or on-device. Its simple, flexible architecture takes new ideas from concept to code, to state-of-the-art models, and to publication quickly. TensorFlow makes it easy to build, deploy, and experiment.
3
Python
Python
Defining functions is at the heart of extensible programming. Python supports mandatory and optional arguments, keyword arguments, and even arbitrary argument lists. Python is easy to learn whether you are a beginner or an experienced programmer coming from another language. These pages can be a helpful starting point for learning Python programming. The community hosts meetups and conferences to share code and much more. Python's documentation will help you along the way, and the mailing lists will keep you in touch. The Python Package Index (PyPI) hosts thousands of third-party Python modules. Both Python's standard library and the community-contributed modules allow for endless possibilities.
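The argument styles the entry mentions can be sketched in a few lines (the function and its names are illustrative, not from the Python docs):

```python
# Demonstrates Python's argument styles: mandatory, optional (default),
# arbitrary positional lists (*titles), and keyword arguments (**details).
def describe(name, greeting="Hello", *titles, **details):
    parts = [f"{greeting}, {' '.join(titles)} {name}".replace("  ", " ")]
    for key, value in sorted(details.items()):
        parts.append(f"{key}={value}")
    return "; ".join(parts)

print(describe("Ada"))                                # Hello, Ada
print(describe("Ada", "Hi", "Dr.", field="math"))     # Hi, Dr. Ada; field=math
```

Callers can mix and match these styles freely, which is what makes Python functions so flexible to extend.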
4
NumPy
NumPy
NumPy's vectorization and indexing concepts are fast and flexible; they are the de facto standard in array computing today. NumPy provides comprehensive mathematical functions, random number generators, linear algebra routines, and Fourier transforms. NumPy runs on a wide variety of hardware and computing platforms, and works well with distributed, GPU, and sparse array libraries. At NumPy's core is well-optimized C code, so you get the flexibility of Python with the speed of compiled code. NumPy's high-level syntax makes it accessible to programmers of any background or experience level. NumPy brings the computational power of languages like C and Fortran to Python, a language that is much easier to learn and use. This power often comes with simplicity: NumPy solutions are frequently clear and elegant.
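Vectorization and boolean indexing, the two concepts the entry highlights, look like this in practice (a minimal sketch with made-up data):

```python
import numpy as np

# Vectorization: one array expression replaces an explicit Python loop,
# and the element-wise work runs in NumPy's optimized C code.
prices = np.array([10.0, 20.0, 30.0])
quantities = np.array([1, 2, 3])

total = float(np.dot(prices, quantities))   # 10*1 + 20*2 + 30*3 = 140.0

# Boolean indexing selects elements without a loop.
expensive = prices[prices > 15.0]           # array([20., 30.])

print(total, expensive)
```

The same computations written as Python `for` loops would be both longer and slower on large arrays.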
5
PyTorch
PyTorch
TorchScript allows you to seamlessly switch between eager and graph modes, and TorchServe accelerates the path to production. The torch.distributed backend enables distributed training and performance optimization in research and production. PyTorch is supported by a rich ecosystem of tools and libraries covering NLP, computer vision, and other areas, and it is well supported on the major cloud platforms, allowing frictionless development and easy scaling. Select your preferences, then run the install command. Stable is the most recent supported and tested version of PyTorch, suitable for most users. Preview is available for those who want the latest, not fully tested and supported, 1.10 builds that are generated nightly. Please ensure you have met the prerequisites, such as numpy, depending on your package manager. Anaconda is our recommended package manager, as it installs all dependencies.
6
ChatGPT
OpenAI
ChatGPT is a language model from OpenAI. It can generate human-like responses to a variety of prompts and has been trained on a wide range of internet text. ChatGPT can be used for natural language processing tasks such as conversation, question answering, and text generation. As a pretrained language model, it uses deep-learning algorithms and was trained on large amounts of text data, which allows it to respond to a wide variety of prompts with human-like fluency. Its transformer architecture has proven efficient on many NLP tasks. In addition to answering questions, ChatGPT can handle text classification and language translation, letting developers build powerful NLP applications that perform specific tasks more accurately. ChatGPT can also process and generate code.
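As a rough sketch of how a developer might drive a chat-style model from code, the snippet below assembles a request body in the common chat-completions shape. The schema, model name, and helper function are assumptions for illustration, not taken from this page, and no network call is made:

```python
import json

def build_chat_request(model, user_message, temperature=0.7):
    """Assemble a chat-completions-style request body (assumed schema)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }

# Serialize the body as it would be sent in an HTTP POST.
body = build_chat_request("gpt-3.5-turbo", "Summarize vectorization in one line.")
payload = json.dumps(body)
print(payload)
```

In a real application this payload would be POSTed to the provider's chat endpoint with an API key in the request headers.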
7
Mistral 7B
Mistral AI
Free
Mistral 7B is a cutting-edge 7.3-billion-parameter language model designed to deliver superior performance, surpassing larger models like Llama 2 13B on multiple benchmarks. It leverages Grouped-Query Attention (GQA) for optimized inference speed and Sliding Window Attention (SWA) to effectively process longer text sequences. Released under the Apache 2.0 license, Mistral 7B is openly available for deployment across a wide range of environments, from local systems to major cloud platforms. Additionally, its fine-tuned variant, Mistral 7B Instruct, excels in instruction-following tasks, outperforming models such as Llama 2 13B Chat in guided responses and AI-assisted applications.
8
Codestral Mamba
Mistral AI
Free
Codestral Mamba is a Mamba2 model specialized in code generation, available under the Apache 2.0 license. Codestral Mamba is another step in our effort to study and provide new architectures, and we hope it will open new perspectives in architecture research. Mamba models offer the advantage of linear-time inference and the theoretical ability to model sequences of unlimited length, letting users interact with the model extensively and get rapid responses regardless of input length. This efficiency is particularly relevant for code-productivity use cases. We trained this model with advanced code and reasoning capabilities, enabling it to perform on par with SOTA Transformer-based models.
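The practical weight of "linear-time inference" can be seen with a toy cost model: a Mamba-style model does work proportional to the sequence length n, whereas standard Transformer self-attention over the full context grows with n squared. The numbers below are illustrative step counts, not measurements:

```python
# Toy cost model: O(n) work for a linear-time architecture vs. O(n^2)
# for full self-attention over an n-token context. Illustrative only.
def linear_steps(n):
    return n

def quadratic_steps(n):
    return n * n

for n in (1_000, 10_000):
    # The ratio grows with n: longer inputs favor the linear model more.
    print(n, quadratic_steps(n) // linear_steps(n))
```

At 10,000 tokens the quadratic model does 10,000 times the per-token work of the linear one, which is why the entry stresses long inputs and rapid responses.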
9
Ruby
Ruby Language
Free
You may wonder why Ruby is so popular. Its fans call it a beautiful, artful language, and they also say it's handy and practical. Ruby has attracted devoted coders worldwide since its public release in 1995, and it achieved mass acceptance in 2006: active user groups formed in major cities around the globe, and Ruby-related conferences were filled to capacity. Ruby-Talk, the primary mailing list for discussion of the Ruby language, climbed to an average of 200 messages per day in 2006; traffic has since fallen as the community's growth spread discussion across many smaller groups. Ruby ranks among the top 10 on most indices measuring the growth and popularity of programming languages worldwide (such as the TIOBE index). Much of that growth is attributed to the popularity of software written in Ruby, particularly the Ruby on Rails web framework.
10
Replicate
Replicate
Free
Machine learning can do amazing things: understand the world, drive cars, write code, make art. But it's still very difficult to use. Research is usually published as a PDF, with scraps of code on GitHub and weights (if you're fortunate!) on Google Drive. Unless you're an expert, it's hard to take that work and apply it to a real-world problem. We're making machine learning accessible to everyone. People who create machine learning models should be able to share them easily, and people who want to use machine learning shouldn't need a PhD. With great power comes great responsibility. We believe that better tools and safeguards can make this technology safer and easier to understand.
11
Wayfinder
Wayfinder
Wayfinder was created at the Stanford d.school in response to a key design question: how might we reimagine the education system to develop a sense of purpose, meaning, and belonging in students? Our K-12 solutions help students develop belonging and purpose through meaningful connections with their inner selves and their local communities. Wayfinder offers products, curriculum, and professional development that help students build future-ready skills and CASEL-aligned competencies. Our powerful assessment suite tracks skill development over time and identifies areas for targeted skill-building. Our Collections and Activity Library gives educators thousands of research-backed, evidence-based lessons and activities. Wayfinder can also support MTSS, with content for Tier 1 instruction as well as Tier 2 and 3 interventions. Our Partner Success Managers and technical teams provide year-round support to ensure effective implementation.
12
Codestral
Mistral AI
Free
We are proud to introduce Codestral, our first-ever code model. Codestral is an open-weight generative AI model explicitly designed for code generation. It helps developers write and interact with code through a shared API endpoint for instructions and completion. As it masters both code and English, it can be used by software developers to design advanced AI applications. Codestral was trained on a diverse dataset of 80+ programming languages, including the most popular ones such as Python, Java, C, C++, JavaScript, and Bash. It also performs well on more specific ones like Swift and Fortran. This broad language base allows Codestral to assist developers in a wide variety of coding environments and projects.
13
Mistral Large
Mistral AI
Free
Mistral Large is a state-of-the-art language model developed by Mistral AI, designed for advanced text generation, multilingual reasoning, and complex problem-solving. Supporting multiple languages, including English, French, Spanish, German, and Italian, it provides deep linguistic understanding and cultural awareness. With an extensive 32,000-token context window, the model can process and retain information from long documents with exceptional accuracy. Its strong instruction-following capabilities and native function-calling support make it an ideal choice for AI-driven applications and system integrations. Available via Mistral's platform, Azure AI Studio, and Azure Machine Learning, it can also be self-hosted for privacy-sensitive use cases. Benchmark results position Mistral Large as one of the top-performing models accessible through an API, second only to GPT-4.
14
Mistral NeMo
Mistral AI
Free
Mistral NeMo is our new best small model: a state-of-the-art 12B model with a 128k-token context window, released under the Apache 2.0 license. Built in collaboration with NVIDIA, Mistral NeMo offers reasoning, world knowledge, and coding accuracy that are state of the art in its size category. Because it relies on a standard architecture, Mistral NeMo is easy to use and serves as a drop-in replacement for any system using Mistral 7B. We have released Apache 2.0 licensed pre-trained base and instruction-tuned checkpoints to encourage adoption by researchers and enterprises. Mistral NeMo was trained with quantization awareness, enabling FP8 inference without loss of performance. The model is designed for global, multilingual applications: it is trained on function calling, has a large context window, and outperforms Mistral 7B at following instructions, reasoning, and handling multi-turn conversations.
15
Mixtral 8x22B
Mistral AI
Free
Mixtral 8x22B is our latest open model. It sets a new standard for performance and efficiency in the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. It is fluent in English, French, Italian, German, and Spanish, and has strong mathematics and coding capabilities. It is natively capable of function calling; along with the constrained-output mode implemented on La Plateforme, this enables application development and tech-stack modernization at scale. Its 64K-token context window allows precise information recall from large documents. We build models that offer unmatched cost efficiency for their respective sizes, delivering the best performance-to-cost ratio among community-provided models. Mixtral 8x22B continues our open model family; its sparse activation patterns make it faster than any dense 70B model.
16
Mathstral
Mistral AI
Free
As a tribute to Archimedes, whose 2311th anniversary we celebrate this year, we are releasing our first Mathstral model, a 7B model designed specifically for math reasoning and scientific discovery. The model has a 32k context window and is published under the Apache 2.0 license. Mathstral is a tool we're contributing to the science community to help tackle advanced mathematical problems that require complex, multi-step logical reasoning. The Mathstral release is part of our broader effort to support academic projects, and it was produced in the context of our collaboration with Project Numina. Like Isaac Newton in his time, Mathstral stands on the shoulders of Mistral 7B and specializes in STEM. It achieves state-of-the-art reasoning in its size category on industry-standard benchmarks: 56.6% on MATH and 63.47% on MMLU. The following table shows the MMLU performance difference between Mathstral and Mistral 7B.
17
Ministral 3B
Mistral AI
Free
Mistral AI has introduced two state-of-the-art models for on-device computing and edge use cases, dubbed "les Ministraux": Ministral 3B and Ministral 8B. These models set a new frontier in knowledge, commonsense reasoning, function calling, and efficiency within the sub-10B category, and can be used for a variety of applications, from orchestrating agentic workflows to creating specialist task workers. Both models support contexts of up to 128k tokens (currently 32k on vLLM), and Ministral 8B features a sliding-window attention pattern for faster and more memory-efficient inference. The models were designed to provide compute-efficient, low-latency solutions for scenarios such as on-device translation, internet-less smart assistants, local analytics, and autonomous robotics. Used in conjunction with larger language models such as Mistral Large, les Ministraux can also serve as efficient intermediaries for function calling in agentic workflows.
18
Ministral 8B
Mistral AI
Free
Mistral AI has introduced "les Ministraux," two advanced models for on-device computing and edge applications: Ministral 3B and Ministral 8B. These models excel at knowledge, commonsense reasoning, function calling, and efficiency in the sub-10B parameter range. They support contexts of up to 128k tokens and suit a variety of applications, such as on-device translation, offline smart assistants, and local analytics. Ministral 8B features an interleaved sliding-window attention pattern for faster and more memory-efficient inference. Both models can act as intermediaries in multi-step agentic workflows, handling tasks such as input parsing, task routing, and API calls with very low latency. Benchmark evaluations show that les Ministraux consistently outperform comparable models across multiple tasks. Both models are available as of October 16, 2024, with Ministral 8B priced at $0.10 per million tokens.
19
Mistral Small
Mistral AI
Free
On September 17, 2024, Mistral AI announced several key updates to improve the accessibility and performance of its offerings. It introduced a free tier on "La Plateforme," its serverless platform, allowing developers to experiment with and prototype Mistral models at no cost. Mistral AI also reduced prices across its entire model line, including a 50% cut for Mistral Nemo and an 80% cut for Mistral Small and Codestral, making advanced AI more affordable for users. The company also released Mistral Small v24.09, a 22-billion-parameter model that balances efficiency and performance and is suitable for tasks such as translation, summarization, and sentiment analysis. Pixtral 12B, a model with image-understanding capabilities, can analyze and caption images without compromising text performance.
20
Anyscale
Anyscale
A fully managed platform from Ray's creators: the best way to build, scale, deploy, and manage AI applications on Ray. Accelerate the development and deployment of any AI application, at any scale. Get everything you love about Ray without the DevOps burden: let us manage Ray for you, hosted on our cloud infrastructure, so you can focus on what you do best, creating great products. Anyscale automatically scales your infrastructure to meet the dynamic demands of your workloads, whether you need to execute a production workflow on a schedule (for example, retraining and updating a model with new data every week) or run a highly scalable, low-latency production service. Anyscale makes it easy to serve machine learning models in production: it will automatically create a cluster for a job, run the job on it, and monitor it until it succeeds.
21
PHP
PHP
Free
PHP is fast, flexible, and pragmatic, powering everything from your blog to the most popular websites in the world. The PHP development team announces the availability of PHP 8.0.20. You don't even need a search box on the PHP.net website: you can jump straight to pages by typing PHP.net shortcut URLs.
22
Mixtral 8x7B
Mistral AI
Free
Mixtral 8x7B is a high-quality sparse mixture-of-experts (SMoE) model with open weights, licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall in terms of cost/performance trade-offs. In particular, it matches or exceeds GPT-3.5 on most standard benchmarks.
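A sparse mixture-of-experts layer activates only a few experts per token, which is how a model like this keeps its active parameter count far below its total. A minimal, illustrative top-2 router follows; the gate scores and expert functions are made up for the sketch and do not reflect Mixtral's actual weights:

```python
# Toy sparse MoE forward pass: route an input to the top-2 of 8 experts
# and mix their outputs by normalized gate weights. Purely illustrative.
def top2_moe(x, gate_scores, experts):
    # Rank experts by gate score and keep only the two highest.
    ranked = sorted(range(len(gate_scores)), key=lambda i: gate_scores[i], reverse=True)
    top2 = ranked[:2]
    total = sum(gate_scores[i] for i in top2)
    # Weighted sum over only the selected experts; the other 6 do no work.
    return sum(gate_scores[i] / total * experts[i](x) for i in top2)

experts = [lambda x, k=k: x * k for k in range(8)]   # expert i multiplies by i
scores = [0.05, 0.1, 0.02, 0.4, 0.08, 0.15, 0.12, 0.08]

print(top2_moe(11.0, scores, experts))   # experts 3 and 5 win; result is 39.0
```

Only 2 of the 8 experts run per input, mirroring how Mixtral uses a fraction of its total parameters for any given token.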
23
Pixtral Large
Mistral AI
Free
Pixtral Large is Mistral AI's latest open-weight multimodal model, featuring a powerful 124-billion-parameter architecture. It combines a 123-billion-parameter multimodal decoder with a 1-billion-parameter vision encoder, allowing it to excel at interpreting documents, charts, and natural images while maintaining top-tier text comprehension. With a 128,000-token context window, it can process up to 30 high-resolution images simultaneously. The model has achieved cutting-edge results on benchmarks like MathVista, DocVQA, and VQAv2, outperforming competitors such as GPT-4o and Gemini-1.5 Pro. Available under the Mistral Research License for non-commercial use and the Mistral Commercial License for enterprise applications, Pixtral Large is designed for advanced AI-powered understanding.
24
Node.js
Node.js
Node.js is an asynchronous, event-driven JavaScript runtime designed to build scalable network applications. If there is no work to be done, Node.js sleeps. This contrasts with today's more common concurrency model, which employs OS threads; thread-based networking is relatively inefficient and difficult to use. Because there are no locks, users of Node.js are free from worries about deadlocking the process. Almost no function in Node.js performs I/O directly, so the process never blocks unless I/O is performed using the synchronous methods of the Node.js standard library. Because nothing blocks, scalable systems are easy to develop in Node.js. Node.js is similar in design to, and influenced by, systems like Ruby's Event Machine and Python's Twisted. Node.js takes the event model a bit further: it presents an event loop as a runtime construct instead of as a library.
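Since the entry itself compares Node.js to Python's Twisted, the non-blocking style it describes can be illustrated with Python's asyncio. This is an analogue of the event-loop idea, not Node.js code, and the task names and delays are invented:

```python
import asyncio

# Two "requests" run concurrently on a single-threaded event loop:
# while one awaits (simulated) I/O, the loop runs the other.
async def fetch(name, delay):
    await asyncio.sleep(delay)   # non-blocking wait, like async network I/O
    return f"{name} done"

async def main():
    # Total wall time is roughly max(delay), not the sum, because
    # neither coroutine ever blocks the loop.
    return await asyncio.gather(fetch("a", 0.02), fetch("b", 0.01))

print(asyncio.run(main()))   # ['a done', 'b done']
```

Node.js applies the same idea, but the event loop is the runtime itself rather than an opt-in library.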
25
Llama 2
Meta
Free
The next generation of our large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models, ranging from 7B to 70B parameters. Llama 2 was trained on 2 trillion tokens and has double the context length of Llama 1. The fine-tuned Llama 2 models have additionally been trained on over 1 million human annotations. Llama 2 outperforms many other open-source language models on external benchmarks, including tests of reasoning, coding, proficiency, and knowledge. Llama 2 was pretrained on publicly available online data sources. Llama 2 Chat, the fine-tuned version of the model, leverages publicly available instruction datasets and more than 1 million human annotations. We have a broad range of supporters around the world who are committed to our open approach to today's AI: companies that have provided early feedback and are excited to build with Llama 2.
26
OctoAI
OctoML
OctoAI is world-class compute infrastructure for running and tuning models that impress your users. Fast, efficient model endpoints, with the freedom to run any kind of model: use OctoAI's models or bring your own. Create ergonomic model endpoints in minutes with only a few lines of code. Customize your model for any use case that serves your users. Scale from zero users to millions without worrying about hardware, speed, or cost overruns. Use our curated list of the best open-source foundation models, which we have optimized for faster, cheaper performance using our expertise in machine learning compilation and acceleration techniques. OctoAI selects the best hardware target and applies the latest optimization techniques to keep your models running at peak performance.
27
Together AI
Together AI
$0.0001 per 1k tokens
Whether it's prompt engineering, fine-tuning, or training, we are ready to meet your business demands. The Together Inference API makes it easy to integrate your new model into your production application. Together AI's elastic scaling and fastest-in-class performance allow it to grow with you. To increase accuracy and reduce risk, you can examine how models are created and what data was used. You own the model you fine-tune, not your cloud provider, and you can change providers for any reason, including price changes. Maintain complete data privacy by storing data locally or in our secure cloud.
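At the quoted rate of $0.0001 per 1k tokens, estimating a workload's cost is simple arithmetic. The helper below is our own illustration, not part of the Together API:

```python
# Estimate inference cost at a per-1k-token rate.
RATE_PER_1K_TOKENS = 0.0001  # dollars, the rate quoted above

def estimate_cost(num_tokens, rate_per_1k=RATE_PER_1K_TOKENS):
    """Dollar cost for num_tokens at rate_per_1k dollars per 1,000 tokens."""
    return num_tokens / 1000 * rate_per_1k

# For example, a 1M-token workload:
print(f"${estimate_cost(1_000_000):.2f}")   # $0.10
```

So a million tokens at this rate costs about ten cents, which is the kind of back-of-envelope estimate worth doing before scaling a production workload.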
28
Le Chat
Mistral AI
Free
Le Chat is a conversational interface for interacting with Mistral AI's models. It offers a fun, pedagogical way to explore Mistral AI's technology. Le Chat can use Mistral Large, Mistral Small, or a prototype model called Mistral Next, which is designed to be brief and concise. We are constantly improving our models to make them as useful and as unopinionated as possible, but there is still much to improve! Le Chat's system-level moderation lets you choose to be warned when you push the conversation in a direction where the assistant could produce sensitive or controversial content.