What Integrates with ONNX?
Find out what ONNX integrations exist in 2025. Learn what software and services currently integrate with ONNX, and sort them by reviews, cost, features, and more. Below is a list of products that ONNX currently integrates with:
1. OpenVINO (Intel, Free)
   The Intel® Distribution of OpenVINO™ toolkit is an open-source AI development toolkit that accelerates inference across Intel hardware platforms. It streamlines AI workflows, letting developers deploy optimized deep learning models for computer vision, generative AI, and large language models (LLMs). Built-in model optimization tools deliver high throughput and low latency while shrinking model size with minimal loss of accuracy. OpenVINO™ suits developers deploying AI solutions anywhere from edge devices to cloud infrastructure, with scalability and strong performance across Intel architectures.
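As an illustrative sketch of how OpenVINO consumes ONNX models, loading and running one through the OpenVINO Python API might look like this (assumes the `openvino` package, 2023.0 or later, is installed; the model path, inputs, and function name are placeholders, not part of any product documentation):

```python
def run_onnx_with_openvino(model_path, inputs, device="CPU"):
    """Compile an ONNX model with OpenVINO and run a single inference.

    Hedged sketch: assumes the `openvino` package (2023.0+) is installed;
    `model_path` and `inputs` are placeholders supplied by the caller.
    """
    import openvino as ov  # imported lazily so the sketch stays importable

    core = ov.Core()
    model = core.read_model(model_path)           # reads a .onnx file directly
    compiled = core.compile_model(model, device)  # e.g. "CPU", "GPU"
    return compiled(inputs)                       # mapping of output tensors
```

The same compiled model can then be dispatched to different Intel devices by changing only the `device` string, which is the portability point the description makes.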
2. Flyte (Union.ai, Free)
   Flyte is a platform for automating complex, mission-critical data and machine learning workflows at scale. It simplifies building concurrent, scalable, and maintainable workflows for data processing and machine learning. Companies such as Lyft, Spotify, and Freenome run Flyte in production. At Lyft, Flyte has underpinned model training and data processing for more than four years and is the platform of choice for teams including pricing, locations, ETA, mapping, and autonomous vehicles. Flyte manages more than 10,000 unique workflows at Lyft alone, totaling over 1,000,000 executions, 20 million tasks, and 40 million container instances each month. Its reliability has been proven in high-demand environments such as Lyft and Spotify. Flyte is fully open source, licensed under Apache 2.0, backed by the Linux Foundation, and governed by a cross-industry committee. Because YAML configurations can introduce complexity and errors into machine learning and data workflows, Flyte instead lets teams define workflows directly in code.
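To illustrate the code-over-YAML point above, a minimal Flyte pipeline is ordinary, type-annotated Python (a hedged sketch: it assumes the `flytekit` package is installed, and the task and workflow names here are hypothetical examples, not anything from the Flyte docs):

```python
def build_pipeline():
    """Return a minimal Flyte workflow defined in plain Python.

    Hedged sketch: assumes the `flytekit` package is installed; the
    task and workflow below are hypothetical examples.
    """
    from flytekit import task, workflow  # lazy import keeps the sketch importable

    @task
    def clean(rows: int) -> int:
        # Placeholder for a real data-processing step.
        return rows * 2

    @workflow
    def pipeline(rows: int = 10) -> int:
        return clean(rows=rows)

    return pipeline
```

Workflows defined this way can be executed locally in-process for testing and registered to a Flyte cluster for production runs, with no YAML describing the DAG.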
3. Azure SQL Edge (Microsoft, $60 per year)
   Azure SQL Edge is a small-footprint, edge-optimized SQL database engine with built-in AI. Designed for IoT and edge computing, it combines data streaming and time series analysis with in-database machine learning and graph capabilities. By bringing the Microsoft SQL engine to edge devices, it provides consistent performance and security across your data estate, cloud or edge: build applications once and deploy them to edge locations, on-premises data centers, or Azure. Integrated streaming, time series, machine learning, and graph features deliver low-latency analytics, and the engine processes data at the edge in online, offline, or hybrid scenarios to cope with latency and bandwidth constraints. Updates and deployments can be managed through the Azure portal or your organization's portal, keeping security and operations consistent. Built-in machine learning can detect anomalies and apply business logic directly at the edge, improving real-time decision-making and operational efficiency.
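As a hedged sketch of the in-database machine learning described above, an application could invoke a stored ONNX model from Python through T-SQL's PREDICT function (assumes `pyodbc` is installed; the `dbo.models` and `dbo.readings` tables, column names, and connection string are all hypothetical placeholders):

```python
def score_at_edge(conn_str):
    """Score rows with an ONNX model inside Azure SQL Edge via T-SQL PREDICT.

    Hedged sketch: assumes `pyodbc` is installed and that an ONNX model
    binary was previously stored in a hypothetical dbo.models table;
    table and column names are placeholders.
    """
    import pyodbc  # lazy import keeps the sketch importable

    sql = """
    DECLARE @model VARBINARY(MAX) =
        (SELECT model_data FROM dbo.models WHERE name = 'anomaly');
    SELECT d.sensor_id, p.score
    FROM PREDICT(MODEL = @model, DATA = dbo.readings AS d, RUNTIME = ONNX)
    WITH (score FLOAT) AS p;
    """
    with pyodbc.connect(conn_str) as conn:
        return conn.execute(sql).fetchall()
```

Because scoring happens inside the database engine, rows never leave the device, which is how the product addresses the latency and bandwidth constraints mentioned above.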
4. ML.NET (Microsoft, Free)
   ML.NET is a free, open-source, cross-platform machine learning framework that lets .NET developers build custom machine learning models in C# or F# without leaving the .NET ecosystem. It covers tasks such as classification, regression, clustering, anomaly detection, and recommendation. ML.NET also integrates with frameworks like TensorFlow and ONNX, extending it to scenarios such as image classification and object detection. Tools such as Model Builder and the ML.NET CLI apply Automated Machine Learning (AutoML) to simplify building, training, and deploying models, automatically exploring algorithms and parameters to find the most effective model for a given use case, so developers can apply machine learning without deep expertise in the field.
5. Cirrascale (Cirrascale, $2.49 per hour)
   Cirrascale's storage systems efficiently handle millions of small, random files to feed GPU-based training servers, significantly speeding up training. High-bandwidth, low-latency networking connects distributed training servers and moves data smoothly from storage to servers. Unlike cloud providers whose data-retrieval fees quickly add up, Cirrascale works as part of your team: helping set up scheduling services, advising on best practices, and providing support tailored to your needs. Recognizing that workflows differ across organizations, Cirrascale also works with you to customize cloud instances, improving performance, removing bottlenecks, and streamlining workflows, and its cloud solutions accelerate training, simulation, and re-simulation for faster results.
6. Groq (Groq)
   Groq sets out to define the benchmark for GenAI inference speed, making real-time AI applications practical today. Its LPU (Language Processing Unit) inference engine is an end-to-end processing system built for the fastest inference on workloads with a sequential component, particularly AI language models. Designed to address the two main bottlenecks for language models, compute density and memory bandwidth, the LPU outperforms both GPUs and CPUs at language processing. This cuts the time to compute each word and substantially accelerates text generation; by removing external memory bottlenecks, the LPU delivers far higher performance on language models than conventional GPUs. Groq integrates with popular machine learning frameworks such as PyTorch, TensorFlow, and ONNX for inference.
7. Intel Open Edge Platform (Intel)
   The Intel Open Edge Platform streamlines developing, deploying, and scaling AI and edge computing solutions on standard hardware with cloud-like efficiency. It offers a curated set of components and workflows that accelerate building, optimizing, and deploying AI models, from vision models to generative AI and large language models, giving developers the tools for seamless model training and inference. By incorporating Intel's OpenVINO toolkit, it delivers improved performance across Intel CPUs, GPUs, and VPUs, letting organizations deploy AI applications at the edge with minimal friction.
8. LaunchX (Nota AI)
   LaunchX brings optimized AI on-device, deploying AI models directly onto physical hardware. Its automation streamlines model conversion and makes it easy to measure performance on target devices, and the platform can be tailored to specific hardware requirements so AI models integrate cleanly into a custom software stack. Nota's AI products target intelligent transportation systems, facial recognition, and security surveillance, including a driver monitoring system, driver authentication solutions, and smart access control. Nota works across construction, mobility, security, smart home, and healthcare, and partnerships with global firms such as Nvidia, Intel, and ARM have strengthened its reach into international markets.
9. Qualcomm AI Hub (Qualcomm)
   The Qualcomm AI Hub is a resource center for developers building and deploying AI applications optimized for Qualcomm chipsets. It offers a large collection of pre-trained models, development tools, and platform-specific SDKs for efficient, low-power AI processing on devices ranging from smartphones and wearables to edge hardware, plus a collaborative space where developers can share insights and innovations.