Best Data Management Software for Apache Iceberg

Find and compare the best Data Management software for Apache Iceberg in 2024

Use the comparison tool below to compare the top Data Management software for Apache Iceberg on the market. You can filter results by user reviews, pricing, features, platform, region, support options, integrations, and more.

  • 1
    Apache Hive Reviews

    Apache Hive

    Apache Software Foundation

    1 Rating
    Apache Hive™ is data warehouse software that facilitates reading, writing, and managing large datasets stored in distributed storage using SQL. Structure can be projected onto data already in storage. Hive provides a command-line tool and a JDBC driver for connecting users to it. Apache Hive is an open-source project of the Apache Software Foundation; formerly a subproject of Apache® Hadoop®, it is now a top-level project in its own right. We encourage you to learn about the project and contribute your expertise. Without Hive, executing SQL-style queries over distributed data means programming against the low-level MapReduce Java API. Hive supplies the SQL abstraction, HiveQL, that integrates SQL-like queries into the underlying Java, so queries don't have to be implemented by hand in the Java API. A connection sketch follows below.
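    A minimal sketch of querying Hive from Python over HiveServer2, using the third-party PyHive client; the host, port, and table below are illustrative assumptions:

        from pyhive import hive  # third-party client: pip install pyhive

        # Connect to a HiveServer2 endpoint (host and port are assumptions).
        conn = hive.connect(host="hive.example.com", port=10000, username="analyst")
        cursor = conn.cursor()

        # HiveQL reads like ordinary SQL; Hive compiles it down to jobs on
        # the underlying execution engine.
        cursor.execute("SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page")
        for row in cursor.fetchall():
            print(row)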
  • 2
    Trino Reviews
    Trino is a fast, distributed SQL query engine for big data analytics that helps you explore your data universe. It is a highly parallel, distributed query engine built from the ground up for efficient, low-latency analytics. The largest organizations use Trino to query data lakes holding exabytes of data as well as massive data warehouses. It supports a wide range of use cases: interactive ad-hoc analytics, large batch queries that run for hours, and high-volume applications that execute sub-second queries. Trino is an ANSI SQL-compliant query engine that works with BI tools such as R, Tableau, Power BI, and Superset. You can natively query data in Hadoop, S3, Cassandra, MySQL, and many other systems without complex, slow, and error-prone copying processes, and access data from multiple systems within a single query.
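    A minimal sketch using the official Trino Python client; the coordinator host, catalog, schema, and table are assumptions:

        import trino  # pip install trino

        # Connect to a Trino coordinator (connection details are assumptions).
        conn = trino.dbapi.connect(
            host="trino.example.com",
            port=8080,
            user="analyst",
            catalog="iceberg",
            schema="analytics",
        )
        cursor = conn.cursor()

        # One ANSI SQL query; with multiple catalogs configured, a single
        # query can even join tables that live in different systems.
        cursor.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
        print(cursor.fetchall())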
  • 3
    Tabular Reviews

    Tabular

    Tabular

    $100 per month
    Tabular is an open table store from the creators of Apache Iceberg. Connect multiple computing frameworks and engines while cutting query costs and run times by up to 50%. Centralize enforcement of RBAC policies and connect any query engine, framework, or tool, including Athena, BigQuery, Redshift, Snowflake, Databricks, Trino, Spark, and Python. Automated services such as smart compaction and data clustering reduce storage costs and query times by up to 50%. Unify data access at the database or table level with RBAC controls that are easy to manage, consistently enforced, and simple to audit, centralizing your security at the table. Tabular is easy to use, with RBAC, high-powered performance, and high-throughput ingestion under the hood. Because compute is decoupled from storage, you can choose among multiple "best-of-breed" compute engines based on their strengths, and assign privileges at the warehouse, database, or table level.
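    One plausible way to reach a managed Iceberg catalog such as Tabular from Python is PyIceberg's REST catalog support; the URI, credential, and table names below are placeholders, not confirmed values:

        from pyiceberg.catalog import load_catalog  # pip install pyiceberg

        # Connect to an Iceberg REST catalog (URI and credential are placeholders).
        catalog = load_catalog(
            "tabular",
            **{
                "type": "rest",
                "uri": "https://<your-catalog-endpoint>",
                "credential": "<client-id>:<client-secret>",
                "warehouse": "analytics",
            },
        )

        # Browse namespaces and load an Iceberg table through the catalog.
        print(catalog.list_namespaces())
        table = catalog.load_table("analytics.orders")
        print(table.schema())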
  • 4
    Apache Impala Reviews
    Impala offers low latency and high concurrency for analytic queries over a wide range of storage options, including Apache Iceberg and open data formats, and it scales linearly, even in multitenant environments. Impala integrates with native Hadoop security and Kerberos for authentication, and via the Ranger module ensures that the right users and applications are authorized for the right data. It utilizes the same file and data formats, metadata, security, and resource management frameworks as your Hadoop deployment, with no redundant infrastructure or data conversion/duplication. Impala also shares the same metadata store and ODBC driver as Apache Hive and, like Hive, supports SQL, so you don't have to reinvent the wheel. With Impala, more users can interact with more data, whether through SQL queries or BI applications, via a single repository and shared metadata that follows the data from source through analysis.
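    A minimal sketch using the impyla client; host and table are assumptions (21050 is Impala's usual HiveServer2-protocol port):

        from impala.dbapi import connect  # pip install impyla

        # Connect to an Impala daemon (host is an assumption).
        conn = connect(host="impala.example.com", port=21050)
        cursor = conn.cursor()

        cursor.execute("SELECT event_type, COUNT(*) FROM events GROUP BY event_type")
        for row in cursor.fetchall():
            print(row)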
  • 5
    PuppyGraph Reviews
    PuppyGraph lets you query one or more data stores as a unified graph model. Dedicated graph databases can be expensive, take months to set up, and require a dedicated team, and traditional graph databases struggle with data beyond 100GB, sometimes taking hours to run queries with multiple hops. A separate graph database also complicates your architecture with fragile ETLs and increases your total cost of ownership (TCO). PuppyGraph connects to any data source, anywhere, with cross-cloud and cross-region graph analytics, and requires no ETL or data replication: you query data as a graph directly from your data lakes and warehouses, eliminating the time-consuming ETL processes a traditional graph database setup requires. No more data delays or failed ETL jobs. By separating computation from storage, PuppyGraph also eliminates graph scaling issues.
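    PuppyGraph can be queried through standard graph interfaces such as Gremlin; a minimal sketch with the gremlinpython driver, where the endpoint and the person/knows schema are assumptions:

        from gremlin_python.driver import client  # pip install gremlinpython

        # Open a Gremlin connection (endpoint is an assumption).
        g = client.Client("ws://puppygraph.example.com:8182/gremlin", "g")

        # A multi-hop traversal: names reachable within two "knows" edges.
        names = g.submit(
            "g.V().has('person','name','alice').out('knows').out('knows').values('name')"
        ).all().result()
        print(names)
        g.close()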
  • 6
    StarRocks Reviews
    StarRocks delivers at least 300% of the performance of other popular solutions, whether you're querying a single table or many. With a rich set of connectors, you can ingest real-time data into StarRocks for up-to-the-minute insights. Its query engine adapts to your use cases: you can scale your analytics easily without moving data or rewriting SQL, making the journey from data to insight a rapid one. Unmatched in performance, StarRocks offers a unified OLAP system covering the most common data analytics scenarios, and its built-in memory-and-disk caching framework is specifically designed to minimize the I/O overhead of fetching data from external storage, accelerating query performance.
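    StarRocks is MySQL-protocol compatible, so ordinary MySQL clients can query it; a minimal sketch with pymysql, where the host, credentials, and table are assumptions (9030 is the usual frontend query port):

        import pymysql  # pip install pymysql

        # Connect to a StarRocks frontend (connection details are assumptions).
        conn = pymysql.connect(host="starrocks.example.com", port=9030,
                               user="analyst", password="secret", database="analytics")
        with conn.cursor() as cursor:
            cursor.execute("SELECT dt, SUM(revenue) FROM sales GROUP BY dt ORDER BY dt")
            for row in cursor.fetchall():
                print(row)
        conn.close()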
  • 7
    Stackable Reviews
    The Stackable platform was built with flexibility and openness in mind. It offers a curated collection of open source data apps such as Apache Kafka, Apache Druid, Trino, and Apache Spark. Unlike offerings that push proprietary solutions or deepen vendor lock-in, all data apps are seamlessly integrated and can be added or removed at any time. Based on Kubernetes, it runs anywhere, on-prem or in the cloud. All you need to run the Stackable Data Platform are stackablectl and a Kubernetes cluster, and you'll be working with your data within minutes. Similar to kubectl, stackablectl is designed as an easy interface to the Stackable Data Platform: use this command-line utility to deploy and maintain stackable data apps on Kubernetes, creating, deleting, and updating components, as in the sketch below.
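    A sketch of driving stackablectl from Python via subprocess; it assumes stackablectl is installed and a Kubernetes cluster is reachable, and the operator name is illustrative:

        import subprocess

        def stackablectl(*args: str) -> None:
            """Run a stackablectl subcommand and fail loudly on errors."""
            subprocess.run(["stackablectl", *args], check=True)

        stackablectl("operator", "list")              # show available operators
        stackablectl("operator", "install", "trino")  # deploy the Trino operator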
  • 8
    Amazon Data Firehose Reviews

    Amazon Data Firehose

    Amazon

    $0.075 per month
    Easily capture, transform, and load streaming data. Create a delivery stream, select your destination, and start streaming real-time data with just a few clicks. Provisioning and scaling of compute, memory, and network resources is automatic, with no ongoing administration. Transform streaming data into formats such as Apache Parquet and dynamically partition it without building your own processing pipelines. Amazon Data Firehose is the fastest way to acquire data streams, transform them, and deliver them to data lakes, warehouses, and analytics services. To use it, create a stream with a source, a destination, and any required transformations. Amazon Data Firehose continuously processes the stream, scales automatically based on the amount of data available, and delivers results within seconds. Select a streaming source for your data, or write data directly using the Firehose Direct PUT API, as sketched below.
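    A minimal Direct PUT sketch with boto3; the delivery stream name is an assumption, and the stream must already exist with a configured destination:

        import json
        import boto3  # pip install boto3

        firehose = boto3.client("firehose", region_name="us-east-1")

        # Write one record; Firehose buffers it, applies any configured
        # transformations, and delivers it to the stream's destination.
        firehose.put_record(
            DeliveryStreamName="clickstream-to-s3",  # assumed stream name
            Record={"Data": (json.dumps({"user": "u42", "event": "click"}) + "\n").encode("utf-8")},
        )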
  • 9
    Onehouse Reviews
    The only fully managed cloud data lakehouse that can ingest data from all of your sources in minutes and support all of your query engines at scale, for a fraction of the cost. Ingest data from databases and event streams in near-real-time with the ease of fully managed pipelines, and query your data with any engine to support all of your use cases, including BI, real-time analytics, and AI/ML. Simple usage-based pricing cuts your costs by up to 50% compared with cloud data warehouses and ETL software, and the fully managed, highly optimized cloud service deploys in minutes with no engineering overhead. Unify all your data into a single source of truth and eliminate copying data between data lakes and warehouses. Omnidirectional interoperability across Apache Hudi, Apache Iceberg, and Delta Lake lets you choose the best table format for each need, and managed pipelines for database CDC and stream ingestion can be configured quickly.
  • 10
    Presto Reviews

    Presto

    Presto Foundation

    Presto is an open-source distributed SQL query engine for running interactive analytic queries against data sources of all sizes, ranging from gigabytes to petabytes.
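    A minimal sketch with the presto-python-client; coordinator host, catalog, schema, and table are assumptions:

        import prestodb  # pip install presto-python-client

        conn = prestodb.dbapi.connect(
            host="presto.example.com",
            port=8080,
            user="analyst",
            catalog="hive",
            schema="default",
        )
        cursor = conn.cursor()
        cursor.execute("SELECT COUNT(*) FROM page_views")
        print(cursor.fetchone())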
  • 11
    Apache Spark Reviews

    Apache Spark

    Apache Software Foundation

    Apache Spark™ is a unified analytics engine for large-scale data processing. It delivers high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine. Spark offers over 80 high-level operators that make it easy to build parallel apps, and you can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries, including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming; these libraries can be combined seamlessly in one application. Spark runs on Hadoop, Apache Mesos, and Kubernetes, standalone or in the cloud, and can access a variety of data sources. Run it in standalone cluster mode, on EC2, on Hadoop YARN, or on Mesos, and access data in HDFS, Alluxio, and more.
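    A minimal local PySpark sketch (pip install pyspark); the input path and columns are assumptions, and the same DataFrame code runs unchanged on a cluster:

        from pyspark.sql import SparkSession, functions as F

        # Start a local session; swap the master URL to target a cluster.
        spark = SparkSession.builder.appName("example").master("local[*]").getOrCreate()

        # Read a CSV (path is an assumption) and run a DataFrame aggregation.
        df = spark.read.csv("data/events.csv", header=True, inferSchema=True)
        df.groupBy("event_type").agg(F.count("*").alias("n")).show()

        spark.stop()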
  • 12
    SQL Reviews
    SQL is a domain-specific programming language used to access, manage, and manipulate data in relational databases and relational database management systems.
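    A small, self-contained illustration of the core SQL verbs using Python's built-in sqlite3 module:

        import sqlite3

        conn = sqlite3.connect(":memory:")  # throwaway in-memory database
        cur = conn.cursor()

        # Define, populate, and query a relation with plain SQL.
        cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        cur.executemany("INSERT INTO users (name) VALUES (?)", [("Ada",), ("Grace",)])
        for row in cur.execute("SELECT id, name FROM users ORDER BY name"):
            print(row)
        conn.close()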
  • 13
    Daft Reviews
    Daft is a framework for ETL, analytics, and ML/AI at scale. Its familiar Python DataFrame API is designed to outperform Spark in both performance and ease of use. Daft integrates directly with your ML/AI stack through zero-copy integrations with essential Python libraries such as PyTorch and Ray, and it can even request GPUs as a resource when running models. Daft runs locally on a lightweight, multithreaded backend, and when your local machine is no longer sufficient, it scales seamlessly to run on a distributed cluster. Daft also supports User-Defined Functions (UDFs) on columns, letting you apply complex operations and expressions to Python objects with the flexibility that ML/AI requires.
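    A minimal sketch of Daft's DataFrame API (pip install getdaft); the data is illustrative:

        import daft

        # Build a DataFrame from in-memory Python data.
        df = daft.from_pydict({"name": ["ada", "grace"], "score": [9.0, 9.5]})

        # Transformations are lazy expressions, evaluated on show()/collect().
        df = df.with_column("score_x2", daft.col("score") * 2)
        df = df.where(daft.col("score") > 9.0)
        df.show()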
  • 14
    Salesforce Data Cloud Reviews
    Salesforce Data Cloud is a real-time data platform that lets businesses collect, harmonize, and analyze data as it arrives, creating a 360-degree customer profile that can be used across Salesforce's various applications, such as Marketing Cloud, Sales Cloud, and Service Cloud. By integrating data from online and offline channels, including CRM data, transactional records, and third-party sources, the platform enables faster, more personalized customer interactions. Salesforce Data Cloud offers advanced AI and analytics capabilities that help organizations gain deeper insight into customer behavior and predict future needs. By centralizing and refining data, it supports improved customer experiences, targeted marketing, and data-driven decision making across departments.
  • 15
    Apache Flink Reviews

    Apache Flink

    Apache Software Foundation

    Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink is designed to run in all common cluster environments and to perform computations at in-memory speed and at any scale. Any kind of data is produced as a stream of events: credit card transactions, machine logs, sensor measurements, and user interactions on websites or mobile apps are all generated as streams. Apache Flink excels at processing both unbounded and bounded data sets. Precise control of time and state lets Flink's runtime execute any kind of application on unbounded streams, while bounded streams are processed internally by algorithms and data structures specifically designed for fixed-size data sets, yielding excellent performance. Flink integrates with all common cluster resource managers, such as Hadoop YARN and Kubernetes, and can also run as a standalone cluster.
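    A minimal PyFlink DataStream sketch (pip install apache-flink); the in-memory collection stands in for a real unbounded source such as Kafka:

        from pyflink.datastream import StreamExecutionEnvironment

        env = StreamExecutionEnvironment.get_execution_environment()

        # A bounded collection as the source; the same pipeline shape applies
        # to unbounded streams.
        ds = env.from_collection(["error", "info", "error", "warn"])
        counts = (
            ds.map(lambda level: (level, 1))
              .key_by(lambda pair: pair[0])
              .reduce(lambda a, b: (a[0], a[1] + b[1]))
        )
        counts.print()

        env.execute("log_level_counts")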
  • 16
    Dremio Reviews
    Dremio provides lightning-fast queries and a self-service semantic layer directly on your data lake storage. No moving data into proprietary data warehouses, and no cubes, aggregation tables, or extracts. Data architects keep flexibility and control, while data consumers get self-service. Apache Arrow and Dremio technologies such as Data Reflections, Columnar Cloud Cache (C3), and Predictive Pipelining work together to make queries against your data lake storage fast. An abstraction layer lets IT apply security and business meaning, while analysts and data scientists explore the data and derive new virtual datasets. Dremio's semantic layer is an integrated, searchable catalog that indexes all of your metadata so business users can easily make sense of the data; the layer is made up of virtual datasets and spaces, all of which are indexed and searchable.
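    Dremio exposes an Apache Arrow Flight endpoint for high-speed client access; a minimal pyarrow sketch, where the host, credentials, and table are assumptions (32010 is Dremio's default Flight port):

        from pyarrow import flight  # pip install pyarrow

        # Connect and authenticate (host and credentials are assumptions).
        client = flight.connect("grpc+tcp://dremio.example.com:32010")
        token = client.authenticate_basic_token("analyst", "secret")
        options = flight.FlightCallOptions(headers=[token])

        # Describe the query, then fetch its results as Arrow record batches.
        info = client.get_flight_info(
            flight.FlightDescriptor.for_command("SELECT * FROM sales.orders LIMIT 5"),
            options,
        )
        table = client.do_get(info.endpoints[0].ticket, options).read_all()
        print(table)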