Best Nextflow Alternatives in 2024
Find the top alternatives to Nextflow currently available. Compare ratings, reviews, pricing, and features of Nextflow alternatives in 2024. Slashdot lists the best Nextflow alternatives on the market that offer competing products similar to Nextflow. Sort through the Nextflow alternatives below to make the best choice for your needs.
-
1
Rivery
Rivery
$0.75 Per Credit
Rivery's ETL platform consolidates, transforms, and manages all of a company's internal and external data sources in the cloud. Key Features: Pre-built Data Models: Rivery comes with an extensive library of pre-built data models that enable data teams to instantly create powerful data pipelines. Fully Managed: A no-code, auto-scalable, hassle-free platform. Rivery takes care of the back end, allowing teams to spend time on mission-critical priorities rather than maintenance. Multiple Environments: Rivery enables teams to construct and clone custom environments for specific teams or projects. Reverse ETL: Allows companies to automatically send data from cloud warehouses to business applications, marketing clouds, CDPs, and more. -
2
Portainer Business
Portainer
Free 2 Ratings
Portainer Business makes managing containers easy. It is designed to be deployed from the data centre to the edge and works with Docker, Swarm, and Kubernetes. It is trusted by more than 500K users. With its super-simple GUI and comprehensive Kube-compatible API, Portainer Business makes it easy for anyone to deploy and manage container-based applications, triage container-related issues, set up automated Git-based workflows, and build CaaS environments that end users love to use. Portainer Business works with all K8s distros and can be deployed on-prem and/or in the cloud. It is designed for team environments with multiple users and multiple clusters. The product incorporates a range of security features, including RBAC, OAuth integration, and logging, which makes it suitable for large, complex production environments. For platform managers responsible for delivering a self-service CaaS environment, Portainer includes a suite of features that help control what users can and can't do, significantly reducing the risks associated with running containers in production. Portainer Business is fully supported and includes a comprehensive onboarding experience that ensures you get up and running quickly. -
3
Google Cloud Run
Google
2 Ratings
Fully managed compute platform to deploy and scale containerized applications securely and quickly. You can write code in your favorite languages, including Go, Python, Java, Ruby, Node.js, and others. For a simple developer experience, all infrastructure management is abstracted away. Cloud Run is built on the open standard Knative, which allows for portability of your applications. You can write code the way you want by deploying any container that listens for events or requests. You can create applications in your preferred language with your favorite dependencies and tools, and deploy them within seconds. Cloud Run abstracts away all infrastructure management by automatically scaling up and down from zero almost instantaneously, depending on traffic. Cloud Run only charges for the resources you use, making app development and deployment easier and more efficient. It is fully integrated with Cloud Code, Cloud Build, Cloud Monitoring, and Cloud Logging for a better developer experience. -
4
Amazon ECS
Amazon
4 Ratings
Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration and management service. ECS is used by customers such as Duolingo, Samsung, GE, and Cookpad to run their most sensitive and mission-critical applications. It offers security, reliability, and scalability. ECS is a great way to run containers for a variety of reasons. AWS Fargate is serverless compute for containers, and you can run ECS clusters with Fargate. Fargate eliminates the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. ECS is also used extensively within Amazon to power services like Amazon SageMaker, AWS Batch, and Amazon.com's recommendation engine. ECS is extensively tested for reliability, security, and availability. -
5
Kubernetes
Kubernetes
Free 1 Rating
Kubernetes (K8s) is an open-source system that automates the deployment, scaling, and management of containerized applications. It organizes the containers that make up an app into logical units, which makes them easy to manage and discover. Kubernetes builds on 15 years of Google's experience running production workloads, combined with best-of-breed practices and ideas from the community. Kubernetes is built on the same principles that allow Google to run billions of containers a week, and it can scale without increasing your operations team. Kubernetes' flexibility lets you deliver applications consistently and efficiently no matter how complex they are, whether you're testing locally or running a global enterprise. Kubernetes is open source, allowing you to use hybrid, on-premises, or public cloud infrastructure and move workloads to where they matter most. -
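The automation described above rests on a declarative reconciliation loop: you state a desired number of replicas, and a controller repeatedly converges the actual state toward it. A minimal stdlib sketch of that idea, where the function and pod names are illustrative stand-ins, not real Kubernetes APIs:

```python
# Toy reconciliation loop, the core pattern behind Kubernetes controllers.
# `reconcile` is a hypothetical helper, not a real Kubernetes function.

def reconcile(desired_replicas, running):
    """Return the actions needed to converge `running` toward the desired count."""
    actions = []
    while len(running) < desired_replicas:      # scale up: start missing pods
        running.append(f"pod-{len(running)}")
        actions.append(("start", running[-1]))
    while len(running) > desired_replicas:      # scale down: stop surplus pods
        actions.append(("stop", running.pop()))
    return actions

pods = ["pod-0"]
print(reconcile(3, pods))   # two "start" actions to reach 3 replicas
print(reconcile(2, pods))   # one "stop" action to fall back to 2
```

A real controller runs this comparison continuously against the cluster's observed state, which is why a deleted pod gets replaced automatically.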
6
Amazon EKS
Amazon
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service. EKS is trusted by customers such as Intel, Snap, Intuit, GoDaddy, and Autodesk to run their mission-critical applications. EKS is reliable, secure, and scalable, and it is a great place to run Kubernetes for several reasons. AWS Fargate is serverless compute for containers that you can use to run your EKS clusters. Fargate eliminates the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. EKS is also integrated with AWS Identity and Access Management (IAM), Amazon CloudWatch, Auto Scaling Groups, and Amazon Virtual Private Cloud (VPC), allowing you to seamlessly monitor, scale, and load-balance your applications. -
7
harpoon
harpoon
$50 per month
harpoon allows you to deploy any software within seconds using a simple drag-and-drop method. No code is required to deploy production software using our visual Kubernetes tool. harpoon offers all the features needed to deploy and configure your software using Kubernetes, the industry's leading container orchestrator, all without writing any code. You can easily deploy, configure, and autoscale software in the cloud without writing a line of code. Search for any commercial or open-source piece of software anywhere on the planet and deploy it in the cloud instantly with just one click. harpoon will run scripts to secure your cloud account before running any applications or services. Connect harpoon to your source code repository anywhere and set up an automated deployment pipeline. -
8
Apache Airflow
The Apache Software Foundation
Airflow is a community-created platform for programmatically authoring, scheduling, and monitoring workflows. Airflow has a modular architecture and uses a message queue to manage an arbitrary number of workers, so it can scale to meet demand. Airflow pipelines are defined in Python, which allows for dynamic pipeline generation: you can write code that dynamically creates pipelines. You can easily define your own operators and extend libraries to suit your environment. Airflow pipelines are lean and explicit, and parametrization is built into their core using the Jinja templating engine. No more XML or command-line black magic! You can use standard Python features to create your workflows, including datetime formats for scheduling and loops for dynamically generating tasks. This gives you full flexibility when creating your workflows. -
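That "pipelines as Python code" idea, including loop-generated tasks and Airflow's `a >> b` dependency syntax, can be illustrated with a tiny stdlib stand-in. The `Task` class below is a toy model, not the real `airflow` API, but the fan-out pattern is the same one used in actual Airflow DAG files:

```python
# Toy model of dynamic DAG generation: ordinary Python loops create tasks,
# and >> wires up dependencies, mimicking Airflow's operator syntax.

class Task:
    def __init__(self, task_id):
        self.task_id = task_id
        self.upstream = []          # tasks that must run before this one

    def __rshift__(self, other):    # `a >> b` means "a runs before b"
        other.upstream.append(self)
        return other                # returning `other` allows chaining a >> b >> c

extract = Task("extract")
# A plain loop fans out one transform task per table.
transforms = [Task(f"transform_{t}") for t in ("users", "orders", "events")]
load = Task("load")
for t in transforms:
    extract >> t >> load

print(sorted(u.task_id for u in load.upstream))
# ['transform_events', 'transform_orders', 'transform_users']
```

In real Airflow the same loop would build `PythonOperator` or `@task` instances inside a `DAG` context, and the scheduler would execute them in dependency order.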
9
StreamScape
StreamScape
Reactive programming can be used on the back end without complex languages or cumbersome frameworks. Triggers, Actors, and Event Collections make it simple to build data pipelines and work with data streams using simple SQL syntax, sparing users the complexities of distributed systems development. Extensible data modeling is a key feature: it supports rich semantics and schema definition and allows real-world objects to be represented. On-the-fly data shaping rules and validation support a variety of formats, including JSON and XML, so you can easily define and evolve your schema while keeping up with changing business requirements. If you can describe it, we can query it. Are you familiar with JavaScript and SQL? Then you already know how to use the database engine. Whatever format you use, a powerful query engine allows you to instantly test logic expressions and functions, which speeds up development and simplifies deployment for unmatched data agility. -
10
Conductor
Conductor
Conductor is a cloud-based workflow orchestration engine, designed to help Netflix orchestrate microservices-based process flows. It includes the following features: a distributed server ecosystem that stores workflow state information efficiently; creation of business flows in which each task is executed by a different microservice; workflow definitions expressed as DAGs (directed acyclic graphs) and kept separate from service implementations, making process flows traceable and visible; and a simple interface for connecting the workers that execute workflow tasks. Workers are language-agnostic and can be written in whatever language best suits the service. You have full operational control over workflows, including the ability to pause, resume, restart, retry, and terminate them. Conductor also allows greater reuse of existing microservices, making onboarding easier. -
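The separation described above, a workflow defined as a DAG of data, distinct from the workers that execute each task, can be sketched in a few lines. The task names and the tiny engine below are illustrative only; Conductor's real workflow definitions are JSON documents and its workers poll a task queue over HTTP or gRPC:

```python
# Sketch of DAG-as-data orchestration: the workflow definition is plain data,
# workers are looked up by task name, and a Kahn-style topological pass
# executes tasks only after their dependencies complete.

workflow = {                      # task -> list of upstream dependencies
    "fetch": [],
    "resize": ["fetch"],
    "tag": ["fetch"],
    "publish": ["resize", "tag"],
}

# One worker per task type; in Conductor these would be separate services.
workers = {name: (lambda n=name: f"{n} done") for name in workflow}

def run(dag, workers):
    """Execute tasks in dependency order and return their results."""
    done, results = set(), []
    while len(done) < len(dag):
        ready = [t for t, deps in dag.items()
                 if t not in done and all(d in done for d in deps)]
        for t in ready:
            results.append(workers[t]())
            done.add(t)
    return results

print(run(workflow, workers))
```

Because the DAG is pure data, the orchestrator can trace, pause, or retry any node without the workers knowing anything about the overall flow.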
11
Apache Mesos
Apache Software Foundation
Mesos is built on the same principles as the Linux kernel, but at a higher level of abstraction. The Mesos kernel runs on every machine and provides applications (e.g., Hadoop, Spark, Kafka, Elasticsearch) with APIs for resource management and scheduling across entire datacenter and cloud environments. It offers native support for launching containers from Docker and AppC images, and supports legacy and cloud-native applications running in the same cluster with pluggable scheduling policies. -
12
JFrog Pipelines
JFrog
$98/month
JFrog Pipelines allows software teams to ship updates faster by automating DevOps processes in an efficient and secure manner across all their tools and teams. It automates every step of delivery, including continuous integration (CI), continuous delivery (CD), infrastructure, and more. Pipelines is natively integrated with the JFrog Platform and is available with both cloud (software-as-a-service) and on-prem subscriptions. -
13
Nextflow Tower
Seqera Labs
Nextflow Tower is an intuitive, centralized command post that facilitates large-scale collaborative data analysis. Tower makes it easy to launch, manage, and monitor scalable Nextflow data analysis pipelines and compute environments, both on-premises and in the cloud. Researchers can concentrate on the science that matters rather than worrying about infrastructure engineering. Predictable, auditable pipeline execution makes compliance easier, and you can reproduce results with specific data sets or pipeline versions on demand. Nextflow Tower is developed and supported by Seqera Labs, the creators and maintainers of the open-source Nextflow project, so users get high-quality support straight from the source. Tower also integrates Nextflow with third-party frameworks, a significant advantage that helps users take advantage of Nextflow's full range of capabilities. -
14
Nebula Container Orchestrator
Nebula Container Orchestrator
The Nebula container orchestrator is designed to allow developers and operators to treat IoT devices as distributed Dockerized applications. It acts as a Docker orchestrator for IoT devices as well as for distributed services such as CDNs and edge computing, and it is free and open source. Nebula is designed to manage large clusters at scale by scaling each component of the project as needed. Nebula can simultaneously update tens of thousands of IoT devices around the world with a single API call, letting developers and ops treat IoT devices like any other distributed Dockerized application. -
15
Northflank
Northflank
$6 per month
Self-service platform for developers to create apps, databases, and jobs. Scale up from one workload to hundreds of workloads on compute or GPUs. GitOps, self-service workflows, highly configurable templates, and pipelines accelerate every step from push to production. With observability tools, backups, restores, and rollbacks, you can deploy preview, staging, and production environments securely. Northflank integrates seamlessly with your preferred tools and can accommodate any technology stack. You can deploy on Northflank's secure infrastructure or in your own account; either way, you get the same developer experience and total control over your data, deployment regions, security, and cloud expenses. Northflank uses Kubernetes to deliver the best of cloud native without the overhead. Northflank offers a cloud deployment option for maximum simplicity, or you can connect your GKE or EKS cluster to Northflank to get a managed platform in minutes. -
16
Strong Network
Strong Network
$39
Our platform allows you to create distributed coding and data science processes with contractors, freelancers, and developers located anywhere. They work on their own devices while you audit your data and ensure data security. Strong Network has created a multi-cloud platform we call Virtual Workspace Infrastructure (VWI). It allows companies to securely unify access to their global data science and coding processes via a simple web browser. The VWI platform is an integral component of a DevSecOps process and doesn't require integration with existing CI/CD pipelines. Process security is focused on data, code, and other critical resources. The platform automates the principles and implementation of Zero-Trust Architecture, protecting the company's most valuable IP assets. -
17
GlassFlow
GlassFlow
$350 per month
GlassFlow is an event-driven, serverless data pipeline platform for Python developers. It allows users to build real-time data pipelines without complex infrastructure such as Kafka or Flink. Developers define data transformations by writing Python functions, and GlassFlow manages all the infrastructure, including auto-scaling and low latency. Through its Python SDK, the platform integrates with a variety of data sources and destinations, including Google Pub/Sub and AWS Kinesis. GlassFlow offers a low-code interface that allows users to quickly create and deploy pipelines, along with features like serverless function execution, real-time API connections, and alerting and reprocessing capabilities. The platform is designed to make it easier for Python developers to create and manage event-driven data pipelines. -
18
HashiCorp Nomad
HashiCorp
Nomad is a simple and flexible workload orchestrator that deploys and manages containers and non-containerized applications across on-prem and cloud environments. It is a single 35MB binary that integrates into existing infrastructure and is easy to use on-prem and in the cloud with minimal overhead. You can orchestrate any type of application, not just containers, with first-class support for Docker, Windows, Java, VMs, and other workloads. Orchestration benefits can be added to existing services: zero-downtime deployments, greater resilience, and higher resource utilization can all be achieved without containerization. Multi-region, multi-cloud federation with a single command makes Nomad a single control plane for deploying applications to any region worldwide, with one workflow to deploy to cloud or bare-metal environments, enabling multi-cloud applications with ease. Nomad seamlessly integrates with Terraform, Consul, and Vault for provisioning, service networking, and secrets management. -
19
Pliant
Pliant.io
12 Ratings
Pliant's solution for IT process automation streamlines, secures, and simplifies the way teams build and deploy automation. Pliant reduces human error, ensures compliance, and increases your efficiency. Pliant allows you to integrate existing automation and create new automation using single-pane orchestration, and you can ensure compliance with consistent, practical, built-in governance. Pliant has abstracted thousands of vendor APIs into intelligent action blocks, letting users drag and drop blocks rather than writing lines of code. Citizen developers can create consistent and meaningful automation across platforms and services in minutes, maximizing the value of the entire technology stack from one platform. New APIs can be added in as little as 15 business days, an industry-leading timeframe. -
20
Canonical Juju
Canonical
Enterprise apps get better operators thanks to a full application graph and declarative integration for both Kubernetes and legacy estates. Juju operator integration keeps each operator as simple and consistent as possible; operators are then composed into rich topologies that support complex scenarios with less YAML. Large-scale operations code can also follow the UNIX philosophy of doing one thing well, with the same benefits of clarity and reuse: it pays to be small. Juju lets you use the same operator pattern across your entire estate, even for legacy apps. Model-driven operations significantly reduce maintenance and operation costs for traditional workloads without re-platforming to K8s, and once mastered, legacy apps can be made multi-cloud-ready. The Juju Operator Lifecycle Manager (OLM) uniquely supports both machine-based and container-based apps with seamless integration. -
21
AWS Data Pipeline
Amazon
$1 per month
AWS Data Pipeline is a web service that allows you to reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. AWS Data Pipeline lets you access your data wherever it is stored, transform and process it at scale, and transfer the results to AWS services such as Amazon S3, Amazon RDS, and Amazon DynamoDB. AWS Data Pipeline makes it easy to create complex data processing workloads that are fault-tolerant, repeatable, and highly available. You don't need to worry about ensuring resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premises silos. -
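The transient-failure handling the service takes off your hands is, under the hood, the familiar retry-with-exponential-backoff pattern. A generic stdlib sketch (the function and task names are illustrative, not part of any AWS SDK):

```python
import time

# Generic retry-with-backoff: re-run a flaky task, doubling the wait after
# each transient failure, and surface the error only once attempts run out.

def run_with_retries(task, max_attempts=4, base_delay=0.01):
    """Retry `task` on TimeoutError, with exponential backoff between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except TimeoutError:
            if attempt == max_attempts:
                raise                      # exhausted: trigger a failure notification
            time.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}
def flaky_copy():                          # hypothetical task: fails twice, then succeeds
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "copied"

print(run_with_retries(flaky_copy))        # "copied" on the third attempt
```

A managed scheduler layers the same logic with per-task retry counts and alarms, which is exactly the bookkeeping the entry says you no longer write yourself.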
22
Test Kitchen
KitchenCI
Test Kitchen offers a test harness for executing infrastructure code on one or more platforms in isolation. A driver plugin architecture lets you run code against different cloud providers and virtualization technologies such as Vagrant, Amazon EC2, Microsoft Azure, and Google Compute Engine. Many testing frameworks are supported, including Chef InSpec, Serverspec, and Bats. For Chef Infra workflows, if you include a cookbooks/ directory, Kitchen knows what to do. Test Kitchen is the integration testing tool of choice for all Chef-managed community cookbooks. -
23
Kestra
Kestra
Kestra is a free, open-source, event-driven orchestrator that simplifies data operations while improving collaboration between engineers and business users. Kestra brings Infrastructure as Code to data pipelines, allowing you to build reliable workflows with confidence. The declarative YAML interface allows anyone who wants to benefit from analytics to participate in creating the data pipeline. The YAML definition is automatically updated whenever you change a workflow via the UI or an API call, and the orchestration logic stays declaratively defined in code even when individual workflow components are modified. -
24
Dagster+
Dagster Labs
$0
Dagster is the cloud-native, open-source orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. It is the platform of choice for data teams responsible for the development, production, and observation of data assets. With Dagster, you can focus on running tasks, or you can identify the key assets you need to create using a declarative approach. Embrace CI/CD best practices from the get-go: build reusable components, spot data quality issues, and flag bugs early. -
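The declarative, asset-oriented model mentioned above can be sketched with a toy decorator: you declare the data assets you want, dependencies are inferred from parameter names, and a tiny materializer resolves them in order. The real API is Dagster's `@asset` decorator; the `asset`/`materialize` stand-ins below are illustrative only:

```python
import inspect

# Toy asset registry: @asset declares a data asset, and upstream assets are
# inferred from the function's parameter names (Dagster works similarly).

ASSETS = {}

def asset(fn):
    ASSETS[fn.__name__] = fn
    return fn

def materialize(name, cache=None):
    """Compute an asset, materializing its upstream dependencies first."""
    cache = {} if cache is None else cache
    if name not in cache:
        fn = ASSETS[name]
        deps = [materialize(p, cache) for p in inspect.signature(fn).parameters]
        cache[name] = fn(*deps)
    return cache[name]

@asset
def raw_orders():
    return [120, 80, 200]

@asset
def order_total(raw_orders):      # depends on raw_orders by parameter name
    return sum(raw_orders)

print(materialize("order_total"))  # 400
```

Because the graph is declared rather than scripted, lineage and observability fall out naturally: the orchestrator always knows which upstream assets produced a given result.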
25
Google Cloud Composer
Google
$0.074 per vCPU hour
Cloud Composer's managed nature and Apache Airflow compatibility allow you to focus on authoring and scheduling your workflows rather than provisioning resources. It integrates with Google Cloud products including BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, and AI Platform, allowing users to orchestrate their pipeline end to end. You can author, schedule, and monitor all aspects of your workflows using one orchestration tool, regardless of whether your pipeline lives on-premises or across multiple clouds. Workflows that cross between the public cloud and on-premises make it easier to move to the cloud or to maintain a hybrid environment. To create a unified environment, you can build workflows that connect data, processing, and services across cloud platforms. -
26
Yandex Data Proc
Yandex
$0.19 per hour
Yandex Data Proc creates and configures Spark clusters, Hadoop clusters, and other components based on the cluster size, node capacity, and services you select. Zeppelin notebooks and other web applications can be used for collaboration via a UI proxy. You have full control over your cluster, with root permissions on each VM. Install your own libraries and applications on running clusters without having to restart them. Yandex Data Proc automatically increases or decreases the computing resources of compute subclusters based on CPU usage indicators. Data Proc also lets you create managed Hive clusters, which can reduce failures and losses caused by unavailable metadata. Save time when building ETL pipelines, pipelines for developing and training models, and other iterative processes; the Data Proc operator is already included in Apache Airflow. -
27
Chalk
Chalk
Free
Data engineering workflows that are powerful, without the headaches of infrastructure. Define complex streaming, scheduling, and data backfill pipelines in simple, reusable Python. Fetch all your data in real time, no matter how complicated, and make decisions with deep learning and LLMs alongside structured business data. Don't pay vendors for data you won't use; instead, query data right before online predictions. Experiment in Jupyter, then deploy to production. Create new data workflows and prevent train-serve skew in milliseconds. Instantly monitor your data workflows and track usage and data quality. You can see everything you have computed and replay any data. Integrate with your existing tools and deploy to your own infrastructure. Custom hold times and withdrawal limits can be set. -
28
Metrolink
Metrolink.ai
A unified, high-performance platform that can be layered on any existing infrastructure for seamless onboarding. Metrolink's intuitive design allows any organization to manage its data integration, providing advanced manipulations that maximize diverse and complex data and refocus human resources to eliminate overhead. It handles complex, multi-source, constantly changing streaming data use cases, so the focus stays on the core business rather than on data utilities. Metrolink is a unified platform that lets organizations design and manage their data pipelines according to their business needs through an intuitive UI and advanced manipulation of complex data, while also enhancing data privacy and data value. -
29
TrueFoundry
TrueFoundry
$5 per month
TrueFoundry provides data scientists and ML engineers with the fastest framework for the post-model pipeline. Following the best DevOps practices, it enables instantly monitored endpoints for models in just 15 minutes. You can save, version, and monitor ML models and artifacts, and create an endpoint for your ML model with one command. Web apps can be created without any frontend knowledge and exposed to other users as you choose. Our mission is to make machine learning fast and scalable, bringing positive value. TrueFoundry enables this transformation by automating the repetitive parts of the ML pipeline and empowering ML developers to test and launch models quickly and with as much autonomy as possible. Our inspiration comes from the products that platform teams have created at top tech companies such as Facebook, Google, and Netflix, which allow all teams to move faster and to deploy and iterate independently. -
30
Dropbase
Dropbase
$19.97 per user per month
Centralize offline data, import files, clean up data, and process it, then export to a live database with one click to streamline data workflows. Your team can access offline data by centralizing it: Dropbase imports offline files in multiple formats, however you want. Data can be processed and formatted with steps for adding, editing, reordering, and deleting transformations. One-click exports let you export to a database, generate endpoints, or download code. Instant REST API access lets you securely query Dropbase data with REST API access keys, so you can reach your data wherever you need it. Combine and process data into the desired format with no code, using a spreadsheet interface to build your data pipelines, with every step tracked. It's flexible: use a pre-built library of processing functions or create your own. You can also manage databases and credentials. -
31
Apache Brooklyn
Apache Software Foundation
Your applications, any cloud, any container, anywhere. Apache Brooklyn is software for managing cloud applications. It can be used to store blueprints of your application in version control, configure and integrate components across multiple machines automatically, monitor key application metrics, scale to meet demand, and restart and replace failed components. You can view and modify deployments in the web console, or automate them using the REST API. -
32
Centurion
New Relic
A Docker deployment tool. It takes containers from a Docker registry and runs them on a number of hosts with the correct environment variables, host volume mappings, and port mappings. It supports rolling deployments out of the box and makes it easy to ship applications to Docker servers; it is used in New Relic's production infrastructure. Centurion uses a two-part deployment process: the build process ships a container to the registry, and Centurion moves containers from the registry to the Docker fleet. The Docker command-line tools handle registry support directly, so anything they support works via the normal registry mechanism. Before deploying with Centurion, it is a good idea to learn how to use a registry. This code was developed in the open with input from the community via PRs and issues, and New Relic has an active maintainer group. -
33
Azure Container Instances
Microsoft
Run containers without managing servers. Azure Container Instances allows you to focus on designing and building your applications rather than managing the infrastructure. Containers on demand increase agility: with one command, deploy containers to the cloud with unrivalled speed and simplicity. ACI can provision additional compute for your most demanding workloads whenever you need it, for example by elastically bursting your Azure Kubernetes Service (AKS) cluster when traffic spikes. Secure applications with hypervisor isolation: you get the security of virtual machines for your container workloads while keeping the efficiency of lightweight containers. ACI provides hypervisor isolation for each container group, so containers run in isolation without sharing a kernel. -
34
HPE Ezmeral
Hewlett Packard Enterprise
Manage, control, and secure the apps, data, and IT that run your business from edge to cloud. HPE Ezmeral accelerates digital transformation initiatives by shifting resources and time from IT operations to innovation. Modernize your apps, simplify your operations, and harness data to turn insights into impact. Deploy Kubernetes at scale in your data center or at the edge, with integrated persistent data storage for app modernization on bare metal or VMs, accelerating time-to-value. Operationalize the end-to-end process of building data pipelines to harness data faster and gain insights. Bring DevOps agility to the machine learning lifecycle and deliver a unified data layer. Increase efficiency and agility in IT ops with automation and advanced artificial intelligence, and provide security and control to reduce risk and lower costs. The HPE Ezmeral Container Platform is an enterprise-grade platform that deploys Kubernetes at scale for a wide variety of use cases. -
35
azk
Azuki
What is so special about azk? azk is open-source software (Apache 2.0) and will remain so, and it has a very easy learning curve: keep using the same development tools you already use. It takes only a few commands, and minutes rather than hours or days. azk works by creating very brief recipe files (Azkfile.js) that describe the environments to be installed and configured. azk runs very fast, and your computer will barely feel it, because it uses containers rather than virtual machines. Containers are similar to virtual machines but with better performance and lower use of physical resources. azk is built on Docker, an open-source engine for managing containers. An Azkfile.js shared by all programmers ensures complete parity between development environments on different machines and reduces bugs during deployment. Are you unsure whether all programmers on your team are using the latest version of the development environment? -
36
Mirantis Kubernetes Engine
Mirantis
Mirantis Kubernetes Engine (formerly Docker Enterprise) gives you the power to build, run, and scale cloud-native applications the way that works for you: increase developer efficiency and release frequency while reducing cost. Deploy Kubernetes and Swarm clusters out of the box and manage them via API, CLI, or web interface. Kubernetes, Swarm, or both: different apps and different teams have different container orchestration needs, so use Kubernetes, Swarm, or both depending on your specific requirements. Simplified cluster management: get up and running right out of the box, then manage clusters easily and apply updates with zero downtime using a simple web UI, CLI, or API. Integrated role-based access control (RBAC): fine-grained security access control across your platform ensures effective separation of duties and helps drive a security strategy built on the principle of least privilege. Identity management: easily integrate with your existing identity management solution and enable two-factor authentication for peace of mind that only authorized users access your platform. Mirantis Kubernetes Engine works with Mirantis Container Runtime and Mirantis Secure Registry to provide security compliance. -
37
Marathon
D2iQ
Marathon is a production-grade container orchestration platform for Mesosphere's Datacenter Operating System (DC/OS) and Apache Mesos. High availability: Marathon runs as an active/passive cluster with leader election for 100 percent uptime. Multiple container runtimes: Marathon provides first-class support for both Docker and Mesos containers (using cgroups). Stateful apps: Marathon can bind persistent storage volumes directly to your application, so you can run databases such as Postgres and MySQL with storage accounted for by Mesos. Beautiful and powerful UI. Service discovery and load balancing, with several methods available. Health checks: use HTTP or TCP checks to assess the health of your application. Event subscription: provide an HTTP endpoint to receive notifications; this is used, for example, to integrate with external load balancers. Metrics: query them at /metrics in JSON format, push them to DataDog, StatsD, or Graphite, or scrape them with Prometheus. -
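The deployment and health-check features above revolve around Marathon's JSON app definition, which is POSTed to its REST API. Below is a minimal sketch of such a definition; the field names follow Marathon's /v2/apps schema, but the image, paths, and host are illustrative assumptions — check the Marathon version you run.

```python
import json

# Illustrative Marathon app definition: two nginx instances with an
# HTTP health check (Marathon also supports TCP checks, per the blurb).
app = {
    "id": "/webapp",
    "instances": 2,
    "cpus": 0.5,
    "mem": 256,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "nginx:1.25"},
    },
    "healthChecks": [
        {
            "protocol": "HTTP",
            "path": "/health",
            "intervalSeconds": 10,
            "maxConsecutiveFailures": 3,
        }
    ],
}

payload = json.dumps(app)
# POST this payload to http://<marathon-host>:8080/v2/apps to deploy;
# metrics are then readable as JSON from /metrics on the same host.
```

The same JSON shape is what Marathon reports back from its API, which is why tooling around it (dashboards, external load balancer integrations) can stay schema-driven.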
38
Container Engine for Kubernetes
Oracle
Container Engine for Kubernetes is an Oracle-managed container orchestration platform that helps you build modern cloud-native apps faster and at lower cost. Oracle Cloud Infrastructure offers Container Engine for Kubernetes free of charge, running on more efficient, lower-cost compute shapes than most other vendors. DevOps engineers can use open-source Kubernetes for application workload portability and simplify operations with automatic updates. Deploy Kubernetes clusters with a single click, including the underlying virtual cloud networks, internet gateways, and NAT gateways. Automate Kubernetes operations, including cluster creation, scaling, and maintenance, using the web-based REST API or CLI. Cluster management is free with Oracle Container Engine for Kubernetes. You can quickly and easily upgrade container clusters with zero downtime to keep them current with the latest stable version of Kubernetes.
-
39
Helios
Spotify
Helios is a Docker orchestration platform for deploying and managing containers across a large number of servers. Helios offers a command-line client and an HTTP API for interacting with the containers running on your servers, and it keeps track of all events in your cluster, including version changes, deploys, and restarts. Although the binary release of Helios targets Ubuntu 14.04.1 LTS, Helios should work on any platform with Java 8 and Maven 3. Use helios-solo to launch a local environment with a Helios master and agent. Helios is pragmatic: we don't have all the answers, but we do our best to make sure that what we do have is rock-solid. We don't yet have dynamic scheduling or resource limits; for now it is more important to us to get the CI/CD use case and surrounding tooling firmly established. Dynamic scheduling, composite jobs, and resource limits may all come in the future. -
40
DataFactory
RightData
DataFactory has everything you need to integrate data and build efficient data pipelines, transforming raw data into information and insights faster than other tools. No more writing pages of code just to move or transform data: drag data operations directly from a tool palette onto your pipeline canvas, even for the most complex pipelines. Drag and drop data transformations onto a pipeline canvas and build pipelines in minutes that would have taken hours to code. Automate and operationalize with version control and an approval mechanism. Data wrangling used to be one tool, pipeline creation another, and machine learning yet another; DataFactory brings all of these functions together. Drag-and-drop transformations make it easy to perform operations and prepare datasets for advanced analytics, and you can add and operationalize ML features like segmentation and categorization without code. -
41
Ondat
Ondat
Accelerate your development with a storage platform that integrates with Kubernetes: while you focus on running your application, we ensure you have the persistent volumes that give you the stability and scale you require. Integrating stateful storage into Kubernetes simplifies your app modernization process and increases efficiency. Run your database or any other persistent workload in a Kubernetes-based environment without worrying about managing the storage layer. Ondat provides a consistent storage layer across all platforms, with persistent volumes that let you run your own databases without paying for expensive hosted options. Take back control of your Kubernetes data layer with Kubernetes-native storage that supports dynamic provisioning, works exactly as it should, and is API-driven and tightly integrated with your containerized applications. -
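From the application side, "dynamic provisioning" as described above means your workload just declares a standard PersistentVolumeClaim against a storage class. The manifest below (sketched as a Python dict) is plain Kubernetes API; the class name "ondat-replicated" is a hypothetical placeholder for whatever class your storage platform registers.

```python
import json

# Standard Kubernetes PersistentVolumeClaim requesting a dynamically
# provisioned 10Gi volume. Only storageClassName is vendor-specific;
# "ondat-replicated" is an assumed example name, not a documented one.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "postgres-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "ondat-replicated",
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

manifest = json.dumps(pvc, indent=2)
# Applying this (e.g. via kubectl) would have the provisioner create and
# bind a volume automatically -- the database pod never names a disk.
```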
42
IBM Cloud Kubernetes Service
IBM
$0.11 per hour
With over 14,000 managed production clusters, we are leading the charge, and this is just the beginning. Operational visibility into Kubernetes-based services, platforms, and applications, with advanced features to monitor, troubleshoot, create alerts, and build custom dashboards. Cluster-level, 30-day retention and natural language processing are all available. A high-security environment for production workloads. Extend your app's capabilities by integrating with advanced IBM services such as AI, Watson, and Blockchain, through an automated, standardized, and secure architecture, including customer-managed Kubernetes secrets via IBM Cloud™ Key Protect. -
43
Apache Hadoop YARN
Apache Software Foundation
The fundamental idea of YARN is to split up the functionalities of resource management and job scheduling/monitoring into separate daemons: a global ResourceManager (RM) and a per-application ApplicationMaster (AM). An application is either a single job or a DAG (directed acyclic graph) of jobs. The ResourceManager and the NodeManager form the data-computation framework. The ResourceManager is the ultimate authority that arbitrates the allocation of resources among all applications in the system. The NodeManager is the per-machine framework agent responsible for containers, monitoring their resource usage (CPU, memory, disk, network) and reporting it to the ResourceManager/Scheduler. The per-application ApplicationMaster is, in essence, a framework-specific library responsible for negotiating resources from the ResourceManager and working with the NodeManagers to execute and monitor tasks. -
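The division of labor above can be sketched as a toy model — this is not the Hadoop API, just the control flow it describes: the per-application master negotiates containers from the global resource manager, which grants capacity on per-machine node managers that then launch the tasks.

```python
# Toy sketch of YARN's three roles (illustrative only, not Hadoop code).

class NodeManager:
    """Per-machine agent that launches and monitors containers."""
    def __init__(self, host, capacity):
        self.host, self.free = host, capacity

    def launch(self, task):
        return f"{task} on {self.host}"

class ResourceManager:
    """Global authority that arbitrates resources among applications."""
    def __init__(self, node_managers):
        self.node_managers = node_managers

    def allocate(self, num_containers):
        grants = []
        for nm in self.node_managers:
            while nm.free > 0 and len(grants) < num_containers:
                nm.free -= 1          # reserve one container slot
                grants.append(nm)
        return grants

class ApplicationMaster:
    """Per-application library: negotiates with the RM, works with NMs."""
    def run(self, rm, tasks):
        grants = rm.allocate(len(tasks))
        return [nm.launch(t) for nm, t in zip(grants, tasks)]

rm = ResourceManager([NodeManager("node1", 2), NodeManager("node2", 2)])
placements = ApplicationMaster().run(rm, ["map-0", "map-1", "reduce-0"])
```

The point of the split is visible even in the toy: the ResourceManager knows nothing about the application's tasks, and the ApplicationMaster knows nothing about machine capacity — each concern lives in its own daemon.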
44
Container Service for Kubernetes (ACK)
Alibaba Cloud
Alibaba Cloud offers Container Service for Kubernetes (ACK), a fully managed service. ACK integrates with services such as virtualization and storage to provide a highly scalable Kubernetes environment for containerized applications. Alibaba Cloud is a Kubernetes Certified Service Provider (KCSP), and ACK is qualified by the Certified Kubernetes Conformance Program, ensuring a consistent Kubernetes experience and workload portability. Deep and rich cloud-native capabilities for enterprises. Fine-grained access control and application security. Lets you quickly create Kubernetes clusters. Container-based management of applications throughout the application lifecycle.
-
45
k0s
Mirantis
$0
k0s is the only Kubernetes distribution that is simple, solid, and certified. It works on any infrastructure: bare metal, on-premises or private clouds, edge and IoT, or public clouds. It's free and open source. Zero friction: k0s dramatically reduces the complexity of installing and running Kubernetes; bootstrapping a new cluster takes only minutes, and developer friction is zero, making it easy for anyone to get started without special skills. Zero deps: k0s ships as a single binary with no dependencies other than the host OS kernel, so it works on any OS without additional software packages, and any security vulnerabilities or performance issues can be fixed directly in the k0s distribution. Zero cost: k0s is free for commercial or personal use and always will be. The source code is on GitHub under the Apache 2 license. -
46
Apache ODE
Apache Software Foundation
Apache ODE (Orchestration Director Engine) executes business processes written to the WS-BPEL standard. It communicates with web services, sending and receiving messages, manipulating data, and recovering from errors as defined by your process definition, and it supports both long- and short-running process executions to orchestrate all the services in your application. WS-BPEL (Business Process Execution Language) is an XML-based language for describing business processes. It defines basic control structures, such as loops and conditions, as well as elements that invoke web services and receive messages from them, and it relies on WSDL to describe web service interfaces. Message structures can be manipulated by assigning parts or wholes of them to variables, which can in turn be used to send further messages. ODE supports the legacy BPEL4WS 1.1 vendor specification and the WS-BPEL 2.0 OASIS standard side by side. -
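To make the receive/assign/invoke flow concrete, here is a hand-written skeleton of a WS-BPEL 2.0 process, parsed with Python's standard library. It is illustrative only, not a deployable ODE process — it omits the partnerLink/WSDL declarations, correlation sets, and fault handlers a real process needs, and the operation and variable names are invented.

```python
import xml.etree.ElementTree as ET

# Skeleton WS-BPEL 2.0 process: receive a request, copy part of it into
# a variable, invoke another service, reply to the caller.
BPEL = """\
<process name="QuoteProcess"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <sequence>
    <receive partnerLink="client" operation="requestQuote"
             variable="request" createInstance="yes"/>
    <assign>
      <copy>
        <from variable="request" part="item"/>
        <to variable="lookup" part="item"/>
      </copy>
    </assign>
    <invoke partnerLink="pricing" operation="lookupPrice"
            inputVariable="lookup" outputVariable="quote"/>
    <reply partnerLink="client" operation="requestQuote" variable="quote"/>
  </sequence>
</process>
"""

ns = {"bpel": "http://docs.oasis-open.org/wsbpel/2.0/process/executable"}
root = ET.fromstring(BPEL)
# List the activity names inside the top-level <sequence>.
steps = [child.tag.split("}")[1] for child in root.find("bpel:sequence", ns)]
```

Even this skeleton shows the language's character: control flow (`sequence`), data manipulation (`assign`/`copy` over message parts), and web-service interaction (`receive`, `invoke`, `reply`) are all first-class XML elements.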
47
Dataplane
Dataplane
Free
Dataplane's goal is to make it faster and easier to build a data mesh, with robust data pipelines and automated workflows for businesses and teams of any size. Beyond being more user-friendly, Dataplane places a greater emphasis on performance, security, resilience, and scaling. -
48
Lightbend
Lightbend
Lightbend technology allows developers to quickly build data-centric applications that handle the most complex distributed systems and streaming data. Companies around the world use Lightbend to address the problems of distributed, real-time data in support of their most important business initiatives. Akka Platform makes it easy for businesses to build, deploy, and manage large-scale applications that support digitally transformative initiatives. Reactive microservices accelerate time-to-value and reduce infrastructure and cloud costs; they take full advantage of the distributed nature of the cloud and are highly efficient, resilient to failure, and able to operate at any scale. Native support for encryption, data destruction, TLS enforcement, and GDPR compliance. A framework to quickly build, deploy, and manage streaming data pipelines. -
49
Data Taps
Data Taps
Data Taps lets you build your data pipelines like Lego blocks: add new metrics, zoom out, and investigate with real-time streaming SQL. Share and consume data globally with others, and update and refine without hassle. Use multiple models/schemas as your schema evolves. Built on AWS Lambda and S3. -
50
Kubestack
Kubestack
No need to compromise between convenience and the power of infrastructure as code. Kubestack lets you design your Kubernetes platform in an intuitive graphical user interface, then export your custom stack to Terraform code for reliable provisioning and long-term sustainability. Platforms built with Kubestack Cloud are exported as a Terraform root module based on the Kubestack framework. Framework modules are all open source, which reduces long-term maintenance effort and gives easy access to future improvements. To manage changes efficiently with your team, adopt the tried-and-true pull-request and peer-review workflow. Reduce the amount of bespoke infrastructure code you need to maintain and save time in the long run.