Best Lustre Alternatives in 2025
Find the top alternatives to Lustre currently available. Compare ratings, reviews, pricing, and features of Lustre alternatives in 2025. Slashdot lists the best Lustre alternatives on the market that offer competing products similar to Lustre. Sort through the Lustre alternatives below to make the best choice for your needs.
-
1
Simr
Simr
Simr (formerly UberCloud) is revolutionizing the world of simulation operations with our flagship solution, Simulation Operations Automation (SimOps). Designed to streamline and automate complex simulation workflows, Simr enhances productivity, collaboration, and efficiency for engineers and scientists across various industries, including automotive, aerospace, biomedical engineering, defense, and consumer electronics. Our cloud-based infrastructure provides scalable and cost-effective solutions, eliminating the need for significant upfront investments in hardware. This ensures that our clients have access to the computational power they need, exactly when they need it, leading to reduced costs and improved operational efficiency. Simr is trusted by some of the world's leading companies, including three of the seven most successful companies globally. One of our notable success stories is BorgWarner, a Tier 1 automotive supplier that leverages Simr to automate its simulation environments, significantly enhancing its efficiency and driving innovation.
-
2
Rocky Linux
Ctrl IQ, Inc.
CIQ empowers people to do amazing things by providing innovative and stable software infrastructure solutions for all computing needs. From the base operating system, through containers, orchestration, provisioning, computing, and cloud applications, CIQ works with every part of the technology stack to drive solutions for customers and communities with stable, scalable, secure production environments. CIQ is the founding support and services partner of Rocky Linux, and the creator of the next generation federated computing stack. -
3
Amazon EC2 UltraClusters
Amazon
Amazon EC2 UltraClusters allow you to scale up to thousands of GPUs and machine learning accelerators such as AWS Trainium, providing access to supercomputing performance on demand. They make supercomputing accessible for ML, generative AI, and high-performance computing through a simple, pay-as-you-go model, without any setup or maintenance fees. UltraClusters are made up of thousands of accelerated EC2 instances co-located within a specific AWS Availability Zone and interconnected with Elastic Fabric Adapter networking to create a petabit-scale non-blocking network. This architecture provides high-performance networking and access to Amazon FSx for Lustre, fully managed shared storage built on a high-performance parallel file system, allowing rapid processing of large datasets at sub-millisecond latencies. EC2 UltraClusters offer scale-out capabilities to reduce training times for distributed ML workloads and tightly coupled HPC workloads. -
4
MooseFS
Saglabs SA
$/TiB based on scale
MooseFS represents a revolutionary concept in the big data storage industry. It combines data storage and data processing in a single unit using commodity hardware, providing an extremely high ROI. We provide expert advice and professional services for storage solutions, as well as implementations and support for your operations. MooseFS was launched in 2008 as a spinoff from Gemius, a leading European company that measures the internet in more than 20 countries, and has since become some of the world's most sought-after data storage software. It is still used to store large amounts of data in Gemius' core operations, where over 300,000 events are gathered and analyzed every second, 24 hours a day, 7 days a week. Any solution we offer to our clients has been tested in a real-life big data analytics environment. -
5
AWS HPC
Amazon
AWS High Performance Computing (HPC) services enable users to run large-scale simulations and deep learning workloads in the cloud. They provide virtually unlimited compute capacity, high-performance file systems, and high-throughput networking. This suite of services accelerates innovation by providing a wide range of cloud-based capabilities, including machine learning, analytics, and rapid design and testing. On-demand access to computing resources maximizes operational efficiency, allowing users to solve complex problems without the limitations of traditional infrastructure. AWS HPC includes Elastic Fabric Adapter for low-latency, high-bandwidth networking, AWS Batch for scaling computing jobs, AWS ParallelCluster for simplified cluster deployment, and Amazon FSx for high-performance file systems. These services provide a flexible, scalable environment tailored to diverse HPC workloads. -
6
Amazon FSx for Lustre
Amazon
$0.073 per GB per month
Amazon FSx for Lustre is a fully managed service that offers high-performance storage for compute-intensive workloads. It is built on the open-source Lustre file system and offers sub-millisecond latencies, hundreds of gigabytes per second of throughput, and millions of IOPS. This makes it ideal for applications like machine learning, high-performance computing, video processing, financial modeling, and more. FSx for Lustre integrates seamlessly with Amazon S3, allowing users to link file systems and S3 buckets. This integration allows transparent access to and processing of S3 data from a high-performance file system, and allows data to be imported and exported between FSx and S3. The service supports a variety of deployment options, including scratch file systems for temporary storage, persistent file systems for long-term storage, and SSD or HDD storage to optimize cost and performance for workload requirements. -
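As a sketch of how such a file system is typically attached on a Linux client with the Lustre client installed (the file system DNS name and mount name below are placeholders, not real identifiers), an /etc/fstab entry might look like:

```
# <fsx-dns-name>@tcp:/<mount-name>  <local-dir>  lustre  <options>  0 0
fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com@tcp:/mountname /fsx lustre defaults,relatime,flock,_netdev 0 0
```

The `flock` option enables POSIX file locking, and `_netdev` tells the init system to wait for networking before attempting the mount, which is common practice for network file systems.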
7
Azure FXT Edge Filer
Microsoft
Create cloud-integrated hybrid storage that works with your existing network-attached storage and Azure Blob Storage. This appliance optimizes data access in your datacenter, in Azure, or across a wide area network (WAN). Microsoft Azure FXT Edge Filer combines software and hardware to provide the high throughput and low latency needed for hybrid storage infrastructure that supports high-performance computing (HPC). Scale-out clustering allows non-disruptive NAS performance scaling: join up to 24 FXT cluster nodes to scale to millions of IOPS and hundreds of gigabytes per second. Azure FXT Edge Filer is the best choice for file-based workloads that require performance and scale, and it makes data storage easy to manage. To keep your data accessible and available with minimal latency, you can move aging data to Azure Blob Storage, balancing cloud and on-premises storage. -
8
TrinityX
Cluster Vision
Free
TrinityX is an open-source cluster management system created by ClusterVision to provide 24/7 oversight of high-performance computing and artificial intelligence environments. It provides a reliable, SLA-compliant system of support, allowing users the freedom to focus on their research while still managing complex technologies such as Linux, SLURM, CUDA, InfiniBand, Lustre, and Open OnDemand. TrinityX simplifies cluster deployment with an intuitive interface that guides users step-by-step in configuring clusters for diverse purposes such as container orchestration, HPC, and InfiniBand/RDMA. The BitTorrent protocol enables rapid deployment and setup of AI/HPC nodes. The platform offers a dashboard that provides real-time insights into cluster metrics, resource usage, and workload distribution, allowing bottlenecks to be identified and resource allocation optimized. -
9
AWS ParallelCluster
Amazon
AWS ParallelCluster, an open-source cluster management tool, simplifies the deployment of high-performance computing (HPC) clusters on AWS. It automates resource setup, including compute nodes and a shared filesystem, and supports multiple instance types and job submission queues. ParallelCluster can be accessed via a graphical interface, a command-line interface, or an API, allowing flexible cluster management and configuration. The tool integrates with AWS Batch and Slurm to facilitate seamless migration of HPC workloads to the cloud. AWS ParallelCluster comes at no extra cost; users pay only for the AWS resources their applications use. It lets you use a simple text document to model, provision, and dynamically scale resources for your applications in an automated and secure manner. -
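To illustrate the "simple text document" model, here is a minimal sketch of a ParallelCluster v3 cluster configuration; the subnet ID, key name, and instance types are placeholder assumptions, not values from this page:

```yaml
Region: us-east-1
Image:
  Os: alinux2
HeadNode:
  InstanceType: c5.xlarge
  Networking:
    SubnetId: subnet-0123456789abcdef0   # placeholder
  Ssh:
    KeyName: my-ssh-key                  # placeholder
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: compute
      ComputeResources:
        - Name: c5n-nodes
          InstanceType: c5n.18xlarge
          MinCount: 0    # scale down to zero when idle
          MaxCount: 16   # dynamic upper bound for autoscaling
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0     # placeholder
```

A cluster described this way is then created with `pcluster create-cluster --cluster-name demo --cluster-configuration config.yaml`, and ParallelCluster grows or shrinks the compute fleet between MinCount and MaxCount as jobs arrive.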
10
HPE Pointnext
Hewlett Packard
The convergence of HPC and AI created new requirements for HPC storage, because the input/output patterns of the two workloads are very different, and it is happening right now. Intersect360, an independent analyst firm, found that 63% of HPC users already run machine learning programs. Hyperion Research predicts that growth in HPC storage spending by public sector organizations and enterprises over the next three years will be 57% faster than growth in HPC compute spending. Seymour Cray once stated, "Anyone can make a fast CPU. The trick is to create a fast system." When it comes to AI and HPC, anyone can build fast file storage; the trick is to build file storage that is fast, scalable, and cost-effective. HPE makes this possible by embedding the most popular parallel file systems in cost-effective parallel storage products from HPE. -
11
Arm Forge
Arm
You can build reliable and optimized code that achieves the best results on multiple server and HPC architectures, with the latest compilers and C++ standards, on Intel, 64-bit Arm, AMD, OpenPOWER, and Nvidia GPU hardware. Arm Forge combines Arm DDT, the leading debugger for efficient, high-performance application debugging; Arm MAP, the trusted performance profiler that provides invaluable optimization advice across native, Python, and HPC codes; and Arm Performance Reports, which provides advanced reporting capabilities. Arm DDT and Arm MAP can also be purchased as standalone products. Arm experts provide full technical support for efficient application development on Linux server and HPC systems. Arm DDT is the leading debugger for C++, C, and Fortran parallel applications; its intuitive graphical interface makes it easy to detect memory bugs and divergent behavior at all scales, making it the most popular debugger in academia, industry, and research. -
12
AWS Parallel Computing Service
Amazon
$0.5977 per hour
AWS Parallel Computing Service (AWS PCS) is a managed service that simplifies running and scaling high-performance computing workloads and building scientific and engineering models on AWS with Slurm. It allows users to create complete, elastic environments that integrate storage, networking, computing, and visualization tools, letting them focus on research and innovation without worrying about infrastructure management. AWS PCS provides managed updates and integrated observability features to enhance cluster operations and maintenance. Users can deploy scalable, secure, and reliable HPC clusters using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or AWS SDKs. The service supports a variety of use cases, including tightly coupled workloads such as computer-aided design, high-throughput computations like genomics analysis, GPU-accelerated computing, and custom silicon such as AWS Trainium and AWS Inferentia. -
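Since AWS PCS schedules work through Slurm, a job is typically described by a standard Slurm batch script; a minimal sketch (the partition name and resource counts are placeholders) might be:

```bash
#!/bin/bash
#SBATCH --job-name=demo          # job name shown in the queue
#SBATCH --partition=compute      # placeholder queue/partition name
#SBATCH --nodes=2                # number of nodes to allocate
#SBATCH --ntasks-per-node=4      # tasks launched on each node

# Launch one copy of the command per allocated task slot
srun hostname
```

As with any Slurm cluster, such a script is submitted with `sbatch job.sh` and monitored with `squeue`.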
13
Amazon EC2 P4 Instances
Amazon
$11.57 per hour
Amazon EC2 P4d instances deliver high performance for machine learning and high-performance computing applications in the cloud. They offer 400 Gbps networking and are powered by NVIDIA Tensor Core GPUs. P4d instances offer up to 60% lower cost to train ML models, with 2.5x better performance compared to the previous-generation P3 and P3dn instances. P4d instances are deployed in Amazon EC2 UltraClusters, which combine high-performance computing, networking, and storage. Users can scale from a few NVIDIA GPUs to thousands, depending on their project requirements. Researchers, data scientists, and developers can use P4d instances to build ML models for a variety of applications, including natural language processing, object classification and detection, and recommendation engines, as well as HPC applications. -
14
Bright Cluster Manager
NVIDIA
Bright Cluster Manager offers a variety of machine learning frameworks, including Torch and TensorFlow, to simplify your deep learning projects. Bright also offers a selection of the most popular machine learning libraries, including MLPython, the NVIDIA CUDA Deep Neural Network library (cuDNN), the Deep Learning GPU Training System (DIGITS), and CaffeOnSpark (a Spark package for deep learning). Bright makes it easy to find, configure, and deploy all the components needed to run these deep learning libraries and frameworks, with over 400 MB of Python modules supporting machine learning packages. We also include the NVIDIA hardware drivers, CUDA (parallel computing platform API) drivers, CUB (CUDA building blocks), and NCCL (a library of standard collective communication routines). -
15
AWS DataSync
Amazon
AWS DataSync, a secure online service, automates and accelerates the transfer of data between on-premises storage and AWS Storage services. It simplifies migration planning and reduces costly on-premises data movement with a fully managed service that scales seamlessly as data loads increase. DataSync can copy data between Network File System (NFS) shares, Server Message Block (SMB) shares, Hadoop Distributed File System (HDFS), self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, Amazon FSx for Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, and Amazon FSx for NetApp ONTAP file systems. It can also move data between AWS Storage and other public clouds, enabling replication, archiving, or sharing of application data. DataSync offers end-to-end data security, including encryption and data-integrity verification. -
16
TotalView
Perforce
TotalView debugging software gives you the specialized tools to quickly analyze, scale, and debug high-performance computing (HPC) applications, including multicore, parallel, and highly dynamic applications that run on hardware ranging from desktops to supercomputers. TotalView's powerful tools enable faster fault isolation, better memory optimization, and dynamic visualization to improve HPC development efficiency and time-to-market. You can simultaneously debug thousands upon thousands of threads and processes. Designed specifically for parallel and multicore computing, TotalView provides unprecedented control over thread and process execution, as well as deep insight into program data and state. -
17
Amazon S3 Express One Zone
Amazon
Amazon S3 Express One Zone, a high-performance, single-Availability Zone storage class, is designed to deliver consistent millisecond data access for your most frequently accessed data and latency-sensitive applications. It offers data access up to 10x faster and request costs up to 50 percent lower than S3 Standard. S3 Express One Zone allows you to select a specific AWS Availability Zone within an AWS Region for your data, so you can co-locate your compute and storage resources in the same Availability Zone, further optimizing performance and lowering compute costs. Data is stored in an S3 directory bucket that supports hundreds of thousands of requests per second. S3 Express One Zone can be used with services like Amazon SageMaker, Amazon Athena, and Amazon EMR to accelerate machine learning and analytics workloads. -
18
HPE Performance Cluster Manager
Hewlett Packard Enterprise
HPE Performance Cluster Manager (HPCM) is an integrated system management solution for Linux® high-performance computing (HPC) clusters. It offers complete provisioning, management, and monitoring of clusters that scale up to exascale supercomputers. The software enables fast system setup from bare metal, comprehensive hardware monitoring and management, software updates, power management, and cluster health management. It makes scaling HPC clusters faster and more efficient, and integrates with a variety of third-party tools for managing and running workloads. HPE Performance Cluster Manager cuts the time and effort required to administer HPC systems, resulting in lower total cost of ownership, increased productivity, and a higher return on investment. -
19
Warewulf
Warewulf
Free
Warewulf, a cluster management and provisioning tool, has been a pioneer in stateless node management for more than two decades. It provisions containers directly onto bare-metal hardware at scales ranging from tens to tens of thousands of compute systems, while maintaining simplicity and versatility. The platform is extensible, allowing users to modify default functionality and node images to suit different clustering use cases. Warewulf provides stateless provisioning with SELinux support and per-node asset-key-based provisioning, along with access controls to ensure secure deployments. Its minimal requirements and ease of customization, integration, and optimization make it accessible to a wide range of industries. Warewulf is a highly successful HPC cluster platform used across many sectors; it is supported by OpenHPC and has contributors from around the world. -
20
Azure CycleCloud
Microsoft
$0.01 per hour
Manage and optimize HPC and large compute clusters at any scale. Deploy full clusters and other resources, including schedulers, compute VMs, storage, networking, and caching. Advanced policy and governance features let you customize and optimize clusters, with cost controls, Active Directory integration, and monitoring. You can continue using your existing job scheduler and applications, and administrators have complete control over who can run jobs and where. Take advantage of autoscaling and battle-tested reference architectures for a wide variety of HPC workloads. CycleCloud supports every job scheduler and software stack, from proprietary in-house to open-source, third-party, or commercial. Your cluster adapts to your changing resource requirements: scheduler-aware autoscaling matches your resources to your workload. -
21
OCI Storage Gateway
Oracle
With Oracle Cloud Infrastructure (OCI) Storage Gateway, customers can extend their on-premises application data to the cloud. Through integration with OCI Object Storage and Network File System (NFS) compatibility, it is easy to securely transfer files to and from Oracle Cloud. Data is protected at rest and in transit with built-in data integrity checks. Enterprise applications get fast access to frequently used files through local caching. Storage Gateway presents a POSIX-compliant NFS mount point that can be mounted on any host that supports NFSv4 clients, so you can easily store and bridge data generated by traditional applications that use NFSv4 file system protocols. Files added or modified in Object Storage are automatically updated in Storage Gateway.
-
22
AWS Elastic Fabric Adapter (EFA)
Amazon
Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that allows customers to run applications requiring high levels of inter-node communication at scale. Its custom-built operating-system-bypass hardware interface improves the performance of inter-instance communications, which is crucial for scaling these applications. EFA allows high-performance computing (HPC) applications using the Message Passing Interface (MPI) and machine learning (ML) applications using the NVIDIA Collective Communications Library (NCCL) to scale to thousands of CPUs and GPUs. You get the performance of on-premises HPC clusters with the on-demand elasticity and flexibility of AWS. EFA is a free networking feature available on all supported EC2 instances, and it works with the most common interfaces, libraries, and APIs for inter-node communication. -
23
Ansys HPC
Ansys
The Ansys HPC software suite lets you use today's multicore processors to run more simulations in less time. These simulations can be larger, more complex, and more accurate than ever before thanks to high-performance computing (HPC). Ansys HPC licensing options let you scale to whatever computational level you require, from single-user and small-group options for entry-level parallel processing up to virtually unlimited parallel capability. Ansys enables large groups to run highly scalable parallel-processing simulations for even the most challenging projects. Ansys also offers parametric computing alongside parallel computing, allowing you to explore your design parameters (size, weight, shape, material mechanical properties, etc.) early in the product development process. -
24
Nimbix Supercomputing Suite
Atos
The Nimbix Supercomputing Suite offers a range of flexible, secure high-performance computing (HPC) as-a-service solutions. This as-a-service model for HPC, AI, and quantum computing in the cloud gives customers access to one of the largest HPC and supercomputing portfolios, spanning hardware, bare-metal-as-a-service, and the democratization of advanced computing in the cloud across public and privately owned data centers. The HyperHub Application Marketplace is our high-performance marketplace offering over 1,000 applications and workflows. For the best infrastructure with on-demand scalability and convenience, BullSequana HPC servers can be used as bare metal. Federated supercomputing-as-a-service offers a unified service console to manage all compute zones and regions in a public or private HPC, AI, and supercomputing federation.
-
25
Fuzzball
CIQ
Fuzzball speeds up innovation for researchers and scientists by eliminating the burdens associated with infrastructure provisioning and administration. Fuzzball optimizes the design and execution of high-performance computing workloads: a user-friendly GUI for designing, editing, and executing HPC jobs; a CLI for comprehensive control and automation of HPC tasks; automated data ingress and egress with full compliance logging; native integration with GPUs, on-prem storage, and cloud storage; and workflow files that are portable and human-readable. CIQ's Fuzzball modernizes HPC with an API-first, container-optimized architectural approach. It is based on Kubernetes and provides all of the security, performance, and stability found in modern infrastructure and software. Fuzzball abstracts infrastructure and automates complex workflows to drive greater efficiency and collaboration. -
26
Arm MAP
Arm
There is no need to modify your code or the way you build it. Profile applications running across multiple servers and multiple processes, with clear views of bottlenecks in I/O, in compute, in threads, or in multi-process activity. Gain deep insight into the actual processor instruction types that affect your performance, and see memory usage over time to find high watermarks and changes across the entire memory footprint. Arm MAP is a unique, scalable, low-overhead profiler that can be used standalone or as part of the Arm Forge profiling and debugging suite. It helps server and HPC developers speed up their software by revealing the root causes of slow performance, and it runs on everything from multicore Linux workstations to supercomputers. With a typical runtime overhead of 5%, you can profile the test cases you care about most. The interactive user interface was designed for developers and computational scientists. -
27
ScaleCloud
ScaleMatrix
High-end accelerators and processors such as graphics processing units (GPUs) are best for data-intensive AI, IoT, and HPC workloads that require many parallel processes. Businesses and research organizations have had to make compromises when running compute-intensive workloads on cloud-based solutions. Cloud environments can be incompatible with new applications or require high energy consumption, raising environmental concerns. Other times, aspects of cloud solutions are simply too difficult to use, making it hard to create custom cloud environments that meet business needs. -
28
Intel oneAPI HPC Toolkit
Intel
High-performance computing is at the heart of AI, machine learning, and deep learning applications. The Intel® oneAPI HPC Toolkit allows developers to build, analyze, optimize, and scale HPC applications using the latest techniques in vectorization, multithreading, multi-node parallelization, and memory optimization. This toolkit is an extension of the Intel® oneAPI Base Toolkit, which is required for full functionality. It includes access to the Intel® Distribution for Python*, the Intel® oneAPI DPC++/C++ Compiler, powerful data-centric libraries, and advanced analysis tools. You get everything you need to build, test, and optimize your oneAPI projects. An Intel® Developer Cloud account gives you 120 days of access to the latest Intel® hardware (CPUs and GPUs) and Intel oneAPI tools and frameworks. No software downloads, no configuration steps, and no installations. -
29
Moab HPC Suite
Adaptive Computing
Moab® HPC Suite automates the management, monitoring, reporting, and scheduling of large-scale HPC workloads. Its patent-pending intelligence engine uses multi-dimensional policies to optimize workload start and run times across different resources. These policies balance high utilization and throughput goals with competing workload priorities and SLA requirements, accomplishing more work in less time and in a better priority order. Moab HPC Suite maximizes the value and utilization of HPC systems while reducing complexity and management costs. -
30
Qlustar
Qlustar
Free
Qlustar is the ultimate full-stack clustering solution, letting you manage and scale clusters with ease and control. It brings unmatched simplicity and robust capability to your HPC and AI environments, covering everything from bare-metal installation with the Qlustar installer to seamless day-to-day cluster operations. Set up and manage your clusters with unmatched ease. Designed to grow with your needs and handle even the most complex workloads without hassle, Qlustar is built for speed, reliability, and resource efficiency. Upgrade your OS and manage security patches without reinstallation; regular updates protect your clusters from vulnerabilities. Qlustar optimizes computing power to deliver peak efficiency in high-performance computing environments. Our solution provides robust workload management, built-in high availability, and an intuitive user interface for streamlined operations. -
31
NVIDIA HPC SDK
NVIDIA
The NVIDIA HPC Software Development Kit (SDK) includes proven compilers, libraries, and software tools that maximize developer productivity and improve the portability and performance of HPC applications. The NVIDIA HPC SDK C, C++, and Fortran compilers enable GPU acceleration of HPC simulation and modeling applications using standard C++ and Fortran, OpenACC® directives, and CUDA®. GPU-accelerated math libraries maximize performance for common HPC algorithms, and optimized communications libraries enable standards-based multi-GPU programming and scalable systems programming. Debugging and performance-profiling tools make porting and optimizing HPC applications easier, and containerization tools allow easy deployment on-premises and in the cloud. The HPC SDK supports NVIDIA GPUs and Arm, OpenPOWER, or x86-64 CPUs running Linux. -
32
NVIDIA Modulus
NVIDIA
NVIDIA Modulus, a neural network framework, combines the power of physics, in the form of governing partial differential equations (PDEs), with data to build high-fidelity surrogate models with near-real-time latency. NVIDIA Modulus can help you solve complex, nonlinear, multiphysics problems with AI, providing the foundation for building physics machine learning surrogate models that combine physics and data. The framework applies to many domains and use cases, from engineering simulations to life sciences, and to both forward and inverse/data assimilation problems. Its parameterized system representation solves multiple scenarios in near real time, letting you train once offline and infer in real time repeatedly. -
33
Azure HPC
Microsoft
Azure high-performance computing (HPC) powers breakthrough innovation, solves complex problems, and optimizes compute-intensive workloads. A full-stack solution designed for HPC lets you build and run your most demanding workloads in the cloud. Azure Virtual Machines deliver supercomputing power, interoperability, and near-infinite scalability for compute-intensive workloads, while Azure AI and analytics provide industry-leading services that empower decision-making and deliver next-generation AI. Multilayered, built-in security and privacy features help you secure your data and applications while ensuring compliance. -
34
GlusterFS
Gluster
GlusterFS is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. GlusterFS is free, open-source software and can run on common off-the-shelf hardware. Gluster is a distributed filesystem that aggregates disk storage resources from multiple servers into a single global namespace, so enterprises can scale capacity, performance, and availability as needed, without vendor lock-in, on-premises, in public clouds, and in hybrid environments. Gluster is used in production by thousands of organizations spanning media, healthcare, government, education, financial services, and web 2.0. Gluster scales to several petabytes, handles thousands of clients, is POSIX compatible, and can use any on-disk filesystem that supports extended attributes. It is accessible via industry-standard protocols such as NFS and SMB, and provides replication, quotas, geo-replication, snapshots, bitrot detection, and many other features. -
35
Arm Allinea Studio
Arm
Arm Allinea Studio provides a suite of tools for developing server and HPC applications on Arm-based platforms. It includes Arm-specific compilers and libraries, as well as debugging and optimization tools. Arm Performance Libraries are optimized core math libraries for high-performance computing applications on Arm processors, with routines available through both Fortran and C interfaces. Arm Performance Libraries are built with OpenMP across many BLAS, LAPACK, FFT, and sparse routines to maximize your performance in multi-processor environments.
-
36
Kombyne
Kombyne
Kombyne™ is a new SaaS high-performance computing (HPC) workflow tool, initially developed for customers in the aerospace, defense, and automotive industries and now available to academic researchers. It allows users to subscribe to a variety of workflow solutions for HPC jobs, from on-the-fly extract generation and rendering to simulation steering. Interactive monitoring and control are available with minimal simulation disruption and no reliance on VTK. Extract workflows and real-time visualization eliminate the need for large files. In-transit workflows use a separate process that receives data from the solver and performs visualization and analysis without interfering with the running solver. The endpoint, also known as an extract, can output cutting planes and point samples for data science, render images, and serve as a bridge to popular visualization codes. -
37
FieldView
Intelligent Light
Software technologies have improved tremendously over the past 20 years, and HPC computing has grown by orders of magnitude, yet our ability to understand simulation results has remained largely the same. Making movies and plots in the traditional way does not scale to multi-billion-cell meshes or tens of thousands of time steps. Automated solution assessment can be accelerated when features and quantitative properties are produced directly via eigen analysis and machine learning. The powerful VisIt Prime backend is paired with the easy-to-use, industry-standard FieldView desktop. -
38
Google Cloud GPUs
Google
$0.160 per GPU
Accelerate compute jobs such as machine learning and HPC. A range of GPUs is available to suit different price points and performance levels, with flexible pricing and machine customizations to optimize your workload. High-performance GPUs on Google Cloud serve machine learning, scientific computing, and 3D visualization. NVIDIA K80, P100, T4, V100, and A100 GPUs offer a variety of compute options to meet your workload's cost and performance requirements. You can tune the processor, memory, and high-performance disk for your specific workload and attach up to 8 GPUs per instance, all with per-second billing so you only pay for what you use. Run GPU workloads on Google Cloud Platform, which offers industry-leading storage, networking, and data analytics technologies. Compute Engine provides GPUs that can be added to virtual machine instances. Learn more about GPUs and the types of hardware available. -
39
Samadii Multiphysics
Metariver Technology Co.,Ltd
2 Ratings
Metariver Technology Co., Ltd. develops innovative and creative computer-aided engineering (CAE) analysis software based on the latest HPC and software technologies, including CUDA. We are changing the paradigm of CAE with particle-based methods, high-speed GPU computation, and CAE analysis software. An introduction to our products: 1. Samadii-DEM: discrete element method for solid particles. 2. Samadii-SCIV (Statistical Contact In Vacuum): gas-flow simulation for high-vacuum systems. 3. Samadii-EM (Electromagnetics): full-field electromagnetic analysis. 4. Samadii-Plasma: analysis of ion and electron behavior in electromagnetic fields. 5. Vampire (Virtual Additive Manufacturing System): specializes in transient heat transfer analysis. -
40
Intel DevCloud
Intel
Free
Intel® DevCloud provides free access to a variety of Intel® architectures, giving you hands-on experience with Intel® software and letting you execute your edge, AI, high-performance computing (HPC), and rendering workloads. With preinstalled Intel®-optimized frameworks, tools, and libraries, you have everything you need to accelerate learning and project prototyping. Learn, prototype, test, run, and manage your workloads for free on a cluster of the latest Intel® hardware and software. A new collection of curated experiences, including market-vertical samples and Jupyter Notebook tutorials, helps you learn. Build your solution in JupyterLab, test it on bare metal, or create a containerized solution and quickly bring it to Intel DevCloud for testing. Use the Deep Learning Workbench to optimize your solution for a specific target edge device, and take advantage of the new, more capable telemetry dashboard. -
41
Covalent
Agnostiq
Free
Covalent's serverless HPC architecture makes it easy to scale jobs from your laptop to HPC or the cloud. Covalent is a Pythonic workflow tool for computational scientists, AI/ML software engineers, and anyone who needs to run experiments on limited or costly computing resources such as HPC clusters, GPU arrays, cloud services, and quantum computers. With Covalent, researchers can run computation tasks on advanced hardware platforms, such as a serverless HPC cluster or a quantum computer, using just one line of code. Covalent's latest release includes three major enhancements and two new feature sets. Its modular design lets users define custom pre- and post-hooks for electrons, enabling a variety of use cases such as setting up remote environments (using DepsPip) and running custom functions. -
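To illustrate the idea of a task ("electron") with pre- and post-hooks, here is a stdlib-only toy sketch. The decorator and hook names below are hypothetical stand-ins, not Covalent's actual API (in real Covalent, tasks are declared with decorators and composed into workflows):

```python
import functools

def electron(pre=None, post=None):
    """Toy task decorator with optional pre-/post-hooks.

    A conceptual stand-in for hooked tasks: `pre` runs before the
    task (e.g. environment setup) and `post` runs on its result.
    """
    def wrap(fn):
        @functools.wraps(fn)
        def run(*args, **kwargs):
            if pre:
                pre()           # e.g. set up a remote environment
            result = fn(*args, **kwargs)
            if post:
                post(result)    # e.g. log or collect the result
            return result
        return run
    return wrap

log = []

@electron(pre=lambda: log.append("setup"),
          post=lambda r: log.append(f"got {r}"))
def add(a, b):
    return a + b

print(add(2, 3))  # runs pre-hook, the task, then the post-hook
```

The hook pattern is what makes a modular workflow engine extensible: environment preparation and result handling stay separate from the task body itself.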
42
Kao Data
Kao Data
Kao Data is an industry leader, pioneering the development and operation of data centres engineered for AI and advanced computing. Our hyperscale platform gives customers a secure, scalable, and sustainable computing environment. Our Harlow campus is the UK's top choice for high-density, GPU-powered computing, and with rapid on-ramps to all major cloud providers we can help you realize your hybrid AI and HPC goals. -
43
Amazon EC2 P5 Instances
Amazon
Amazon Elastic Compute Cloud (Amazon EC2) P5 instances, powered by NVIDIA H100 Tensor Core GPUs, and P5e and P5en instances, powered by NVIDIA H200 Tensor Core GPUs, deliver the highest performance in Amazon EC2 for deep learning and high-performance computing applications. They can accelerate your time to solution by up to 4x compared with previous-generation GPU-based EC2 instances and reduce the cost of training ML models by up to 40 percent, letting you iterate on solutions faster and get to market sooner. You can use P5, P5e, and P5en instances to train and deploy increasingly complex large language models and diffusion models that power the most demanding generative AI applications, including speech recognition, video and image generation, code generation, and question answering. These instances can also be used to deploy HPC applications at scale for pharmaceutical discovery. -
44
NVIDIA GPU-Optimized AMI
Amazon
$3.06 per hour
The NVIDIA GPU-Optimized AMI is a virtual machine image for accelerating your GPU-accelerated machine learning and deep learning workloads. It lets you spin up a GPU-accelerated EC2 VM in minutes, with a preinstalled Ubuntu OS, GPU driver, Docker, and the NVIDIA Container Toolkit. The AMI also provides access to NVIDIA's NGC Catalog, a hub of GPU-optimized software, for pulling and running performance-tuned Docker containers that have been tested and certified by NVIDIA. The NGC Catalog offers free access to containerized AI and HPC applications, along with pre-trained AI models, AI SDKs, and other resources. The AMI itself is free, with the option to purchase enterprise support from NVIDIA. Scroll down to the 'Support information' section for details on getting support for this AMI. -
45
hBlock
hBlock
Free
hBlock is a POSIX-compliant script that creates a hosts file blocking your system from connecting to domains that serve ads, tracking scripts, and malware. You can download the latest version of the default blocklist from the website, or create your own using the instructions on the project page. Blocking ads, tracking, and malware domains improves your privacy and security. hBlock is available in many package managers, and a system timer can be set to regularly update the hosts file with new additions. Multiple options let you modify hBlock's default behavior. The hBlock website also offers nightly builds of the hosts file in several other formats. If you ever need to temporarily disable hBlock, a quick solution is to generate a hosts file with no blocked domains. -
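The hosts-file mechanism behind this is simple: each blocked domain is pointed at an unroutable address, so connection attempts fail locally instead of reaching the ad or tracking server. A small Python sketch of the idea (the domains below are placeholders; hBlock itself is a shell script with curated blocklists):

```python
def build_hosts(blocked_domains, redirect="0.0.0.0"):
    """Render hosts-file lines that sink each blocked domain.

    Mapping a domain to 0.0.0.0 makes lookups resolve to a
    non-routable address, blocking the connection locally.
    """
    header = ["127.0.0.1 localhost", ""]
    entries = [f"{redirect} {d}" for d in sorted(set(blocked_domains))]
    return "\n".join(header + entries) + "\n"

# Placeholder domains for illustration only
print(build_hosts(["ads.example.com", "tracker.example.net"]))
```

Temporarily disabling such a setup really is as simple as the description says: write out the same header with an empty blocklist.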
46
IBM Storage Scale
IBM
$19.10 per terabyte
IBM Storage Scale is software-defined file and object storage that lets organizations build global data platforms for artificial intelligence (AI), advanced analytics, and high-performance computing (HPC). Unlike traditional applications that work with structured data, today's performance-intensive AI and analytics workloads operate on unstructured data such as documents, audio, images, videos, and other objects. IBM Storage Scale provides global data abstraction services that seamlessly connect data sources in multiple locations, including non-IBM storage environments. It is built on a massively parallel file system and can be deployed across multiple hardware platforms, including x86 and IBM Power, as well as ARM-based POSIX clients, virtual machines, and Kubernetes. -
47
zdaemon
Python Software Foundation
Free
zdaemon is a Python program for Unix-like systems (Unix, Linux, macOS) that wraps commands to make them behave like proper daemons. It provides a script, also called zdaemon, that can run other programs as POSIX (Unix) daemons; it works only on POSIX systems. Using zdaemon means specifying a number of options, which can be given in a configuration file or as command-line options, together with a command telling it what to do: start a program as a daemon, stop a daemon process, restart (stop and then start) a program, check whether the program is still running, send a signal to the daemon process, or reopen the transcript log. Commands can be given on the command line or through an interactive interpreter. The program command can include a program name and options, but the command-line parsing is fairly primitive. -
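A minimal configuration sketch of the kind zdaemon accepts; the program and paths here are hypothetical, and the option names should be checked against the zdaemon documentation for your version:

```
# zdaemon.conf -- hypothetical example
<runner>
  program /usr/local/bin/my-server --port 8080
  transcript /var/log/my-server.log
</runner>
```

With a file like this in place, the daemon would be driven with commands such as `zdaemon -C zdaemon.conf start`, and likewise `status`, `stop`, `restart`, `kill`, and `reopen_transcript`, matching the command set described above.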
48
Pavilion HyperOS
Pavilion
The most efficient, dense, scalable, and flexible storage platform in existence. The Pavilion HyperParallel File System™ scales across unlimited Pavilion HyperParallel Flash Arrays™, providing 1.2 TB/s read and 900 GB/s write bandwidth, with 200M IOPS at a latency of 25 µs per rack. Pavilion HyperOS 3 is unique in providing independent, linear scaling of both capacity and performance, and now offers global namespace support for both NFS and S3, allowing unlimited linear scale across unlimited Pavilion HyperParallel Flash Array systems. The Pavilion HyperParallel Flash Array delivers unparalleled performance and availability, and Pavilion HyperOS uses patent-pending technology to keep your data always accessible, with performant access that legacy arrays cannot match. -
49
Tencent Cloud File Storage
Tencent
CFS is POSIX-compatible and cross-platform accessible, ensuring consistency of files and data. A Cloud Virtual Machine (CVM) instance can access a CFS file system via the standard NFS protocol. CFS offers a simple, easy-to-learn interface that makes it quick to create, configure, and manage a file system, reducing the time needed to deploy and maintain network-attached storage (NAS). CFS storage capacity can be scaled easily without affecting your applications or services, and performance increases with storage size, providing a reliable, high-performance service. CFS standard file storage has three layers of redundancy and is extremely reliable and available. CFS can restrict client permissions through network isolation, user isolation, and access allowlists. -
50
Veritas NetBackup
Veritas Technologies
Optimized for multicloud environments, this product provides extensive workload support and operational resiliency. Ensure data integrity, monitor your environment, and recover at scale to maximize resilience. Resilience, migration, snapshot orchestration, disaster recovery, and unified end-to-end deduplication: all in one solution. The best solution for moving VMs to the cloud. Protect VMware, Microsoft Hyper-V, Nutanix AHV, Red Hat Virtualization, AzureStack, and OpenStack with automated protection. Flexible recovery provides instant access to VM data. Achieve disaster recovery at scale with low RPOs and RTOs. Protect your data with 60+ public cloud storage targets, an automated, SLA-driven resiliency platform, and supported integration with NetBackup. Get scale-out protection for large workloads with hundreds of data nodes using NetBackup Parallel Streaming, a modern, agentless parallel streaming architecture.