Best etcd Alternatives in 2024
Find the top alternatives to etcd currently available. Compare ratings, reviews, pricing, and features of etcd alternatives in 2024. Slashdot lists the best etcd alternatives on the market that offer competing products similar to etcd. Sort through the etcd alternatives below to make the best choice for your needs.
1
HashiCorp Consul
HashiCorp
A multi-cloud service networking platform that connects and secures services across any runtime platform and any public or private cloud. Real-time location and health information is available for all services, enabling progressive delivery and zero-trust security with less overhead. You can rest assured that all HCP connections are secured right out of the box.
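Since Consul also exposes a key/value store commonly used for dynamic configuration (one of etcd's core use cases), here is a minimal sketch of writing and reading a key with the official Go client; the key name and value are illustrative assumptions, and a local Consul agent on the default address is presumed.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to a local Consul agent (default: 127.0.0.1:8500).
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	kv := client.KV()

	// Write a configuration value; "service/web/max_conns" is a made-up key.
	pair := &api.KVPair{Key: "service/web/max_conns", Value: []byte("100")}
	if _, err := kv.Put(pair, nil); err != nil {
		log.Fatal(err)
	}

	// Read it back.
	got, _, err := kv.Get("service/web/max_conns", nil)
	if err != nil {
		log.Fatal(err)
	}
	if got == nil {
		log.Fatal("key not found")
	}
	fmt.Printf("%s = %s\n", got.Key, got.Value)
}
```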
2
Amazon DynamoDB
Amazon
Amazon DynamoDB, a key-value and document database, delivers single-digit-millisecond performance at any scale. It is a fully managed, multi-region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale apps. DynamoDB can process more than 10 trillion requests per day and can handle peaks of more than 20 million requests per second. Many of the fastest-growing businesses in the world, such as Lyft, Redfin, and Airbnb, as well as enterprises like Samsung, Toyota, and Capital One, rely on DynamoDB's scale and performance to support mission-critical workloads.
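As a rough illustration of DynamoDB's key-value model, here is a sketch using the AWS SDK for Go (v1); the table name Config and its Key/Value attributes are assumptions, and the table is presumed to already exist with Key as its partition key.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	// Credentials and region come from the environment or shared AWS config.
	sess := session.Must(session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
	}))
	db := dynamodb.New(sess)

	// Put a single item keyed by "feature-flags" (hypothetical table and attributes).
	_, err := db.PutItem(&dynamodb.PutItemInput{
		TableName: aws.String("Config"),
		Item: map[string]*dynamodb.AttributeValue{
			"Key":   {S: aws.String("feature-flags")},
			"Value": {S: aws.String(`{"dark_mode":true}`)},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Read it back by key.
	out, err := db.GetItem(&dynamodb.GetItemInput{
		TableName: aws.String("Config"),
		Key: map[string]*dynamodb.AttributeValue{
			"Key": {S: aws.String("feature-flags")},
		},
	})
	if err != nil || out.Item == nil {
		log.Fatal("item not found or request failed")
	}
	fmt.Println(aws.StringValue(out.Item["Value"].S))
}
```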
3
LeanXcale
LeanXcale
Pricing: $0.127 per GB per month. LeanXcale is a fast, scalable database that combines SQL and NoSQL. It can ingest large batches of data and make them available via SQL or GIS for any purpose, including operational applications, analytics, and dashboarding. Whichever stack you use, LeanXcale offers both SQL and NoSQL interfaces. The KiVi storage engine can be used as a relational key-value data store, accessed either through the standard SQL API or through a direct ACID key-value interface. The key-value interface lets users ingest data at extremely high rates and efficiently, avoiding SQL processing overhead. The highly scalable, efficient, distributed storage engine distributes data across a cluster to improve performance and increase reliability.
4
FoundationDB
FoundationDB
FoundationDB supports multiple models, so you can store different types of data in one database. All data is safely stored, distributed, and replicated in the Key-Value Store. FoundationDB is easy to use, grow, and maintain. It uses a distributed architecture that scales out gracefully and handles faults while acting as a single ACID database. FoundationDB is extremely fast on commodity hardware and can support very heavy loads at low cost. It has been in production for many years and has been hardened by the lessons learned there, backed by an unmatched testing system built on a deterministic simulation engine.
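To make the "single ACID database" behavior concrete, here is a minimal sketch using FoundationDB's official Go bindings, assuming a locally running cluster, the default cluster file, and a 7.1-series client (API version 710); the key and value are illustrative.

```go
package main

import (
	"fmt"
	"log"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

func main() {
	// Select the API version before any other fdb call (assumes a 7.1 client).
	fdb.MustAPIVersion(710)

	// Open the database described by the default cluster file.
	db := fdb.MustOpenDefault()

	// Every read/write runs inside a transaction; Transact retries on conflict.
	_, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
		tr.Set(fdb.Key("hello"), []byte("world"))
		return nil, nil
	})
	if err != nil {
		log.Fatal(err)
	}

	val, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
		return tr.Get(fdb.Key("hello")).MustGet(), nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("hello = %s\n", val.([]byte))
}
```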
5
Infinispan
Infinispan
Infinispan, an open-source, in-memory data grid, offers flexible deployment options and robust capabilities for managing, storing, and processing data. Infinispan is a key/value store that can hold all types of data, from Java objects to plain text. It distributes your data across elastically scalable clusters, ensuring high availability and fault tolerance. Infinispan boosts applications by keeping data close to the processing logic, which reduces latency and increases throughput. Infinispan is available as a Java library: simply add it as a dependency to your Java application and you are ready to store data in the same memory space as the executing code.
6
Apache Accumulo
Apache Software Foundation
Apache Accumulo allows users to store and manage large data sets across a cluster. Accumulo uses Apache Hadoop HDFS to store its data and Apache ZooKeeper to reach consensus. Accumulo has many direct users, and there are also open-source projects that use it as their underlying store. Take the Accumulo tour to learn more, then run the Accumulo sample code; if you have any questions, don't hesitate to contact us. Accumulo offers a programming mechanism called Iterators that lets you modify key/value pairs at different points in the data management process. Each Accumulo key/value pair is assigned a security label that limits query results based on user authorizations. Accumulo runs on a cluster that uses one or more HDFS instances, and nodes can be added or removed as Accumulo's data grows.
7
LevelDB
Google
LevelDB is a fast key/value storage library written at Google. It provides an ordered mapping from string keys to string values. Keys and values are arbitrary byte arrays, and data is stored sorted by key; callers can provide a custom comparison function to override the sort order. Multiple changes can be made in one atomic batch. Users can create a transient snapshot to get a consistent view of the data, and forward and backward iteration over the data is supported. Data is automatically compressed with Snappy. External activity (file system operations, etc.) is relayed through a virtual interface so users can customize their operating system interactions. The published benchmark uses a database with over a million entries, each with a 16-byte key and a 100-byte value; compression shrinks the values to roughly half their original size. The benchmark reports the performance of sequential reads in the forward and reverse directions, as well as the performance of random lookups.
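LevelDB's official library is C++, but the same concepts described above (ordered keys, atomic batches, iteration) carry over to ports such as the third-party goleveldb package; a minimal sketch under that assumption, with an illustrative database path and keys:

```go
package main

import (
	"fmt"
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	// Open (or create) a database directory.
	db, err := leveldb.OpenFile("example.db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Apply several writes as one atomic batch.
	batch := new(leveldb.Batch)
	batch.Put([]byte("user:1"), []byte("alice"))
	batch.Put([]byte("user:2"), []byte("bob"))
	if err := db.Write(batch, nil); err != nil {
		log.Fatal(err)
	}

	// Keys come back in sorted (byte-wise) order.
	iter := db.NewIterator(nil, nil)
	for iter.Next() {
		fmt.Printf("%s = %s\n", iter.Key(), iter.Value())
	}
	iter.Release()
	if err := iter.Error(); err != nil {
		log.Fatal(err)
	}
}
```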
8
VMware Tanzu GemFire
Broadcom
VMware Tanzu GemFire is a distributed, in-memory key-value store that performs read and write operations at lightning-fast speeds. It offers highly available parallel message queues, continuous availability, and an event-driven architecture that can scale dynamically with no downtime. Tanzu GemFire scales linearly as your data requirements grow to support high-performance, real-time apps. Traditional databases are often too fragile or unreliable for microservices, and every modern distributed architecture needs a cache. With Tanzu GemFire, applications get fast responses to data access requests and always receive fresh data. Applications can subscribe to real-time events to respond to changes instantly, and continuous queries notify your application whenever new data becomes available, reducing the load on your SQL database.
9
Lucid KV
Lucid KV
Lucid is still in development, but the goal is a fast, secure, distributed key-value store accessible through an HTTP API, with persistence, encryption, WebSocket streaming, replication, and many other features. Intended use cases include private key storage, IoT (collecting and saving statistics data), distributed caching, service discovery, distributed configuration, and blob storage.
10
Oracle Coherence
Oracle
Oracle Coherence, the industry's leading in-memory data grid solution, enables organizations to scale mission-critical applications by providing fast access to frequently used data. Because of the internet of things, mobile, cloud, and social, data volumes and customer expectations keep increasing, which means organizations need to handle more data in real time, offload redundant shared data services, and ensure availability. The 14.1.1 release of Oracle Coherence adds a patented scalable messaging implementation, support for polyglot grid-side programming on GraalVM, and distributed tracing in the grid. Coherence stores each piece of data in multiple members (one primary and one or more backup copies), so a mutating operation is not considered complete until the backup(s) are confirmed. This ensures that your data grid is resilient to failure at every level, from a single JVM to an entire data center.
11
Voldemort
Voldemort
Voldemort is not a relational database; it does not attempt to satisfy arbitrary relations while also satisfying ACID properties. Nor is it an object database that attempts to transparently map object reference graphs, and it does not introduce a new abstraction such as document orientation. It is essentially a big, distributed, persistent, fault-tolerant hash table. For applications that can use an O/R mapper like ActiveRecord or Hibernate, this provides horizontal scalability and much higher availability, but at a great loss of convenience. For large applications under internet-type scalability pressure, a system may consist of many functionally partitioned services or APIs that manage storage resources across multiple data centers, using storage systems that are themselves horizontally partitioned. Because all data is not in one database, arbitrary in-database joins are not possible for applications in this space.
12
BoltDB
BoltDB
Bolt is a pure-Go key/value store inspired by Howard Chu's LMDB project. The project's goal is to provide a simple, reliable, fast database for projects that do not require a full database server such as Postgres or MySQL. Bolt is intended to be used as a low-level building block: the API is small and focuses only on getting and setting values, and that's it. Bolt's original purpose was to provide a pure-Go key/value store without adding unnecessary features, and in that respect the project has been a great success. The project's scope is limited, but it is complete. Maintaining an open-source database takes a lot of time and energy; code changes can have unintended and sometimes catastrophic consequences, and even simple changes require hours of testing and validation.
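To show how small that get/set-focused API is in practice, here is a minimal sketch using the maintained bbolt fork (go.etcd.io/bbolt); the file, bucket, and key names are illustrative.

```go
package main

import (
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	// Bolt stores everything in a single memory-mapped file.
	db, err := bolt.Open("app.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Writes happen inside a read-write transaction.
	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("settings"))
		if err != nil {
			return err
		}
		return b.Put([]byte("theme"), []byte("dark"))
	})
	if err != nil {
		log.Fatal(err)
	}

	// Reads use a read-only transaction.
	err = db.View(func(tx *bolt.Tx) error {
		v := tx.Bucket([]byte("settings")).Get([]byte("theme"))
		fmt.Printf("theme = %s\n", v)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```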
13
BergDB
BergDB
BergDB is a simple, efficient Java/.NET database designed for developers who want to concentrate on their own task rather than worry about database issues. BergDB features include simple key-value storage, ACID transactions, historical queries, efficient concurrency control, secondary indexes, fast append-only storage, replication, transparent object serialization, and more. BergDB is an open-source, embedded, schemaless, document-oriented NoSQL database. It was built from the ground up to execute transactions extremely quickly, with no compromises: all writes are done in ACID transactions at the highest level of consistency (in SQL-speak, serializable isolation). Historical queries are useful whenever previous data states are relevant, and they also serve as a quick way to manage concurrency: BergDB does not lock anything when a read operation is performed.
14
Kyoto Tycoon
Altice Labs
Kyoto Tycoon is a lightweight network server built on top of the Kyoto Cabinet key-value database, designed for high performance and concurrency. It comes with a full-featured protocol based on HTTP as well as a binary protocol that offers even better performance. Client libraries implement these protocols in multiple languages; we have one for Python. You can configure it with simultaneous support for the memcached protocol, with some limitations on data update commands, which is useful if you wish to replace memcached in larger-than-memory or persistence scenarios. You will find updated packages of the most recent upstream releases, intended to be used together in real-world production environments; the changes include bug fixes, minor improvements, and packaging for a few Linux distributions.
15
Speedb
Speedb
Pricing: Free. Speedb is RocksDB-compatible and enhances stability, efficiency, and overall performance. Join the Hive, Speedb's community of open-source users, to share knowledge, improve the project, and interact with other users. Speedb is an alternative to LevelDB and RocksDB for users who want to take their applications to the next level. Consider Speedb when using event-streaming platforms such as Kafka or Spark. Many applications are experiencing performance issues due to the growth of metadata in modern data sets; Speedb helps keep costs down and ensures that your applications run smoothly, even under heavy load. Speedb can help whether you decide to upgrade your platform or deploy a new key-value store, and you'll feel immediate relief by integrating Speedb's advanced key-value engine into your projects.
16
InterSystems Caché
InterSystems
InterSystems Caché® is a high-performance database that powers transaction processing applications around the world. It is used for everything from mapping a million stars in the Milky Way to processing a trillion equity trades per day to managing smart energy grids. Caché is a multi-model (object, relational, key-value) DBMS and application server developed by InterSystems. It offers multiple APIs for working with the same data simultaneously: key-value, relational, object, document, and multidimensional. Data can be managed using SQL, Java, Node.js, .NET, C++, and Python. Caché also provides an application server that hosts web apps (CSP), REST, SOAP, and other types of TCP access to Caché data.
17
InterSystems IRIS
InterSystems
InterSystems IRIS, a cloud-first data platform, is a multi-model transactional database management engine, application development platform, interoperability engine, and open analytics platform. InterSystems IRIS offers a variety of APIs for working with transactional persistent data simultaneously: key-value, relational, object, document, and multidimensional. Data can be managed by SQL, Java, Node.js, .NET, C++, Python, and the native server-side ObjectScript language. InterSystems IRIS includes an interoperability engine as well as modules for building AI solutions. It features horizontal scalability (sharding and ECP), high availability, business intelligence, transaction support, and backup.
18
Oracle Berkeley DB
Oracle
Berkeley DB is a set of embedded key-value database libraries that provide high-performance data management services to applications.
19
Riak KV
Riak
Pricing: $0. Riak is a distributed-systems expert that works with application teams to overcome distributed-system challenges. Riak®, a distributed NoSQL database, delivers unmatched resilience beyond typical "high availability" offerings, innovative technology that ensures data accuracy and never loses a write, massive scale on commodity hardware, and a common code foundation with true multi-model support. Riak® offers all of this while still focusing on ease of use. Choose the flexible Riak® KV key-value data model for web-scale profile management, session management, real-time big data, catalog, content management, customer 360, digital messaging, and other use cases. Choose Riak® TS for IoT, time series, and other use cases.
20
SwayDB
SwayDB
SwayDB is an embedded persistent and in-memory key-value storage engine that delivers high performance and resource efficiency. It is designed to manage bytes on disk and in memory efficiently by recognising recurring patterns in serialised bytes. The core implementation can be specialised to any data model (SQL or NoSQL) and storage type (disk or RAM). Although the core has many configurations that can be tuned for specific use-cases, we plan to implement automatic runtime tuning once we are able to collect and analyse runtime machine statistics and read-write patterns. You can manage data by creating familiar data structures such as Map, Set, Queue, and SetMap, which can easily be converted to native Java or Scala collections. Conditional updates and data modifications can be done with Java, Scala, or any native JVM code; there is no query language.
21
GridGain
GridGain Systems
GridGain, built on Apache Ignite, is an enterprise-grade platform that offers in-memory speed, massive scalability, and real-time access across data stores. You can upgrade from Ignite to GridGain without any code changes and deploy your clusters securely on a global scale with zero downtime. Rolling upgrades can be performed on production clusters without affecting application availability. Replicate across globally distributed data centres to load-balance workloads and prevent regional outages. Protect your data in motion and at rest, comply with security and privacy standards, integrate with your organization's authentication and authorization systems, and enable full data and user activity auditing. Automated schedules can be created for incremental and full backups, and with snapshots and point-in-time recovery you can restore your cluster to its last stable state.
22
InfinityDB
InfinityDB
InfinityDB Embedded is a NoSQL Java database structured as a hierarchical sorted key-value store. It is flexible, high-performance, multi-core, and maintenance-free. InfinityDB Client/Server and InfinityDB Encrypted databases are now also available. According to our customers and the provided performance tests, InfinityDB offers the best performance. Multi-core overlapping operations scale almost linearly with thread count, and threads use fair scheduling with very low inter-thread interference. Random I/O scales logarithmically with file size. Caches grow only as they are used and are packed efficiently. Database open is instantaneous, even after an abrupt exit.
23
TerarkDB
Terark
TerarkDB is Terark's core product: a RocksDB distribution powered by Terark algorithms. TerarkDB can store more data and access it much faster than official RocksDB (3+X more data and 10+X faster on the same hardware). TerarkDB is fully (binary) compatible with official RocksDB. We forked RocksDB, made some changes for our algorithms, and include it here as the submodule rocksdb. Our changes do not alter any RocksDB API and add no extra dependencies; for example, the Terark-modified RocksDB does not depend on TerarkZipTable (without TerarkZipTable, Terark's RocksDB works exactly the same as official RocksDB).
24
GridDB
GridDB
GridDB uses multicast communication to form a cluster. To enable multicast communication, configure the network: first, verify the host name and IP address by running the "hostname -i" command to check the IP address settings on the host. If the IP address of your machine is identical to the one shown below, you don't need to adjust the network settings and can skip to the next section. GridDB is a database that manages data as groups (known as rows) composed of a key and multiple values. It can run as an in-memory database that keeps all data in memory, or it can use a hybrid composition that uses both disk (including SSD) and memory.
25
ArcadeDB
ArcadeDB
Pricing: Free. ArcadeDB allows you to manage complex models without compromises. Polyglot persistence is gone: there is no need to run multiple databases. An ArcadeDB multi-model database can store graphs, documents, key values, and time series. Each model is native to the database engine, so you don't need to worry about translations slowing things down. ArcadeDB's engine was developed with Alien Technology and can crunch millions of records per second. ArcadeDB's traversal speed does not depend on the size of the database; it doesn't matter whether your database contains a few records or a billion. ArcadeDB can be used as an embedded database on a single server and can scale up by using Kubernetes to connect multiple servers. With its small footprint, it is flexible enough to run on any platform. Your data is protected: the unbreakable, fully transactional engine ensures durability for mission-critical production databases, and ArcadeDB uses the Raft consensus algorithm to maintain consistency across multiple servers.
26
Terracotta
Software AG
Terracotta DB, a distributed in-memory database management solution, is a comprehensive and flexible data management tool that caters to both operational storage and caching, and it enables transactional processing and analysis. Ultra-fast RAM plus Big Data equals business power. BigMemory gives you: real-time access to and control over terabytes of in-memory data; high throughput with predictable latency; support for Java®, Microsoft® .NET/C#, and C++ applications; 99.999 percent uptime; linear scalability; data consistency guarantees across multiple servers; optimized data storage across SSD and RAM; SQL support for querying in-memory data; lower infrastructure costs through maximal hardware utilization; high-performance persistent storage for durability and fast restart; advanced monitoring, management, and control; ultra-fast in-memory data storage that automatically moves data to the right place; support for data replication across multiple data centers for disaster recovery; and real-time management of fast-moving data.
27
Google Cloud Bigtable
Google
Google Cloud Bigtable provides a fully managed, scalable NoSQL database service that can handle large operational and analytical workloads. Cloud Bigtable is fast and performant: it's the storage engine that grows with your data, from your first gigabyte up to petabyte scale, for low-latency applications as well as high-throughput data analysis. Seamless scaling and replication: you can start with a single cluster node and scale to hundreds of nodes to support peak demand, while replication adds high availability and workload isolation for live-serving apps. Integrated and simple: a fully managed service that integrates easily with big data tools such as Dataflow, Hadoop, and Dataproc, and support for the open-source HBase API standard makes it easy for development teams to get started.
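As a rough sketch of what the official Go client looks like against Bigtable's wide-column model, the following writes and reads one row; the project, instance, table, column-family, and row-key names are assumptions, and the table and family are presumed to already exist.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/bigtable"
)

func main() {
	ctx := context.Background()

	// Hypothetical project and instance identifiers.
	client, err := bigtable.NewClient(ctx, "my-project", "my-instance")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	tbl := client.Open("metrics")

	// Write one cell into the (assumed) "stats" column family.
	mut := bigtable.NewMutation()
	mut.Set("stats", "requests", bigtable.Now(), []byte("42"))
	if err := tbl.Apply(ctx, "host#web-1", mut); err != nil {
		log.Fatal(err)
	}

	// Read the row back and print its cells.
	row, err := tbl.ReadRow(ctx, "host#web-1")
	if err != nil {
		log.Fatal(err)
	}
	for _, item := range row["stats"] {
		fmt.Printf("%s = %s\n", item.Column, item.Value)
	}
}
```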
28
upscaledb
upscaledb
upscaledb is a fast key-value database that optimizes storage and algorithms for specific data types. Optional compression further reduces file size and I/O, and keeps more data in memory to improve performance and scalability for the full-table scans used to query and analyze the data. upscaledb can recreate the functions of a typical SQL database, customized to your application's needs, and can be linked directly into your program. Its database cursors and fast analytical functions make it an ideal choice for processing data where a SQL database would be too slow. Applications built on upscaledb are deployed on millions of desktops as well as on cloud instances and embedded devices. The benchmark runs a full-table scan over 50 million records configured with uint32 values and retrieves the maximum value.
29
Amazon ElastiCache
Amazon
Amazon ElastiCache makes it easy to create, manage, and scale popular open-source-compatible in-memory data stores in the cloud. You can build data-intensive apps or improve the performance of existing databases by retrieving data from high-throughput, low-latency, in-memory data stores. Amazon ElastiCache is popular for real-time use cases such as caching, session stores, gaming, geospatial services, real-time analytics, and queuing. It provides fully managed Redis and Memcached for the most demanding applications that require sub-millisecond response times, delivering secure, lightning-fast performance with an optimized stack that runs on customer-dedicated nodes.
30
KeyDB
KeyDB
KeyDB is fully compatible with the Redis module API and protocol, so you can drop KeyDB in while maintaining compatibility with existing clients, scripts, and configurations. Multi-master mode distributes a single replicated dataset across multiple nodes to support both read and write operations on every node, and nodes can be replicated across regions to provide sub-millisecond latency to local clients. Cluster mode allows unlimited read/write scaling by splitting the data across multiple shards, with high availability provided via replica nodes. KeyDB provides new community-driven commands that let you do more with your data. The ModJS module lets you write your own JavaScript commands and functionality: a JavaScript function loaded with the module can then be called directly from your client.
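Because KeyDB speaks the Redis protocol, any Redis client should work unchanged; here is a minimal sketch with the go-redis client, assuming KeyDB is listening on the default Redis port and using an illustrative key.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// KeyDB is protocol-compatible with Redis, so a standard Redis client connects as-is.
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	defer rdb.Close()

	// Plain SET/GET round trip; key and value are illustrative.
	if err := rdb.Set(ctx, "session:42", "active", 0).Err(); err != nil {
		log.Fatal(err)
	}

	val, err := rdb.Get(ctx, "session:42").Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("session:42 =", val)
}
```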
31
Dragonfly
Dragonfly
Pricing: Free. Dragonfly is a plug-and-play Redis replacement that cuts costs and improves performance. Designed to take full advantage of modern cloud hardware and meet the data needs of modern applications, it frees developers from the limits of traditional in-memory data stores. Legacy software cannot exploit the power of modern cloud hardware; Dragonfly is optimized for it, delivering 25x more throughput and 12x lower snapshotting latency than traditional in-memory stores like Redis, which makes it easy to provide the real-time experiences your customers expect. Scaling Redis workloads is expensive due to Redis' inefficient single-threaded design. Dragonfly is far more memory- and compute-efficient, resulting in infrastructure costs that are up to 80% lower. Dragonfly scales vertically first and only requires clustering at extremely high scale, which results in a simpler, more reliable operational model.
32
XAP
GigaSpaces
GigaSpaces XAP, an event-driven, distributed development platform, delivers extreme processing for mission-critical applications. XAP provides high availability, resilience and boundless scale under any load. With XAP, the application and the data co-locate in the same memory space, reducing data movement over the network and providing both data and application scalability. XAP Skyline, an in-memory distributed technology for mission-critical applications running in cloud-native environments, unites data and business logic within the Kubernetes cluster. With XAP Skyline, developers can ensure that data-driven applications achieve the highest levels of performance and serve hundreds of thousands of concurrent users while delivering sub-second response times. XAP Skyline delivers the low latency, scalability and resilience that are vital for businesses running time-sensitive apps in distributed Kubernetes clusters. XAP Skyline is used in financial services, retail, and other industries where speed and scalability are critical. -
33
Apache Ignite
Apache Ignite
You can use Ignite as a traditional SQL database by leveraging JDBC or ODBC drivers, or by using the native SQL APIs for Java, C#, C++, Python, and other programming languages. You can easily join, group, aggregate, and order your distributed on-disk and in-memory data. You can accelerate your existing applications by up to 100x by using Ignite as an in-memory cache or in-memory data grid deployed over one or several external databases, and you can query, transact, and compute on this cache. Ignite is a database that scales beyond your memory capacity to support modern transactional and analytical workloads: Ignite keeps your hot data in memory and reads from disk when applications query cold records. Execute kilobyte-size custom code over petabytes of data, transforming your Ignite database into a distributed supercomputer for low-latency calculations, complex analytics, and machine learning.
34
Hazelcast
Hazelcast
In-memory computing platform. The digital world is different: microseconds matter. The world's most important organizations rely on us to power their most sensitive applications at scale. New data-enabled applications that meet today's requirement for immediate access can transform your business. Hazelcast solutions complement any database and deliver results much faster than traditional systems of record. Hazelcast's distributed architecture provides redundancy and continuous cluster up-time, keeping data always available to serve the most demanding applications. Capacity grows with demand without compromising performance or availability. The cloud delivers the fastest in-memory data grid along with third-generation high-speed event processing.
35
Azure Cache for Redis
Microsoft
Pricing: $1.11 per month. Scale performance easily and cost-effectively as your app's traffic and demands increase. Add a caching layer to your application architecture to handle thousands of simultaneous users, all while enjoying the benefits of a fully managed service. It delivers the throughput and performance to handle millions of requests per second with sub-millisecond latency. As a fully managed service providing automatic patching, updates, and scaling, it lets you focus on development. RedisBloom, RediSearch, and RedisTimeSeries module integrations support data analysis, search, and streaming. You get powerful capabilities such as clustering, Redis on Flash, built-in replication, availability of up to 99.99%, and more. Use it alongside Azure Cosmos DB and Azure SQL Database to scale throughput at a lower cost than expanding database instances.
36
Azure Cosmos DB
Microsoft
Azure Cosmos DB, a fully managed NoSQL database service, is designed for modern app development. It offers guaranteed single-digit-millisecond response times and 99.999 percent availability, backed by SLAs, along with instant scalability and open-source APIs for MongoDB and Cassandra. With turnkey multi-master global distribution, you can enjoy fast writes and reads from anywhere in the world.
37
IBM Cloud Databases
IBM
IBM Cloud® purpose-built databases deliver high availability, enhanced security, and scalable performance. You can choose from a range of database engines, including relational and NoSQL options such as document, key-value, graph, and in-memory databases. Support for multiple data models lets you build modern, highly scalable distributed applications. There is no one size fits all: choosing the right database for the job speeds up development and meets your business needs. IBM Cloud DBaaS solutions include hosting, auto-provisioning, and 24x7 management with automated backup and restore, version updates, security, and more.
38
Aerospike
Aerospike
Aerospike is the global leader in next-generation, real-time NoSQL data solutions at any scale. Aerospike helps enterprises overcome seemingly impossible data bottlenecks and compete at a fraction of the cost and complexity of legacy NoSQL databases. Aerospike's patented Hybrid Memory Architecture™ unlocks the full potential of modern hardware, delivering previously unimaginable value from huge amounts of data at the edge, in the core, and in the cloud. Aerospike empowers customers to instantly combat fraud, dramatically increase shopping cart size, deploy global digital payment networks, and deliver instant, one-to-one personalization for millions of users. Aerospike customers include Airtel, Banca d'Italia, Snap, Verizon Media, Wayfair, PayPal, and Nielsen. The company is headquartered in Mountain View, California, with additional offices in London, Bengaluru (India), and Tel Aviv (Israel).
39
ArangoDB
ArangoDB
Natively store data for graph, document, and search needs, and access it with one feature-rich query language. You can map data directly to the database and access it with the best patterns for the job: traversals, joins, search, ranking, geospatial queries, aggregations, you name it. Polyglot persistence without the cost: you can easily design, scale, and adapt your architectures to changing needs with far less effort. Combine the flexibility of JSON with the power of graph technology to extract next-generation features, even from large datasets.
40
memcached
memcached
Think of it as short-term memory for your applications. memcached lets you take memory from parts of your system where you have more than you need and make it available to parts where you have less. In the classic deployment strategy, each node keeps its own isolated cache; you'll see that this is wasteful, because the total cache size is only a fraction of what your web farm actually has, and it also takes a lot of effort to keep the caches consistent across all nodes. With memcached, all servers look into the same virtual pool of memory. And as your application's demands increase, so does the amount of data that must be accessed regularly; these two aspects of the system are best scaled together in your deployment strategy.
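A minimal sketch of that shared-pool pattern using the widely used gomemcache Go client; the server addresses, key, and expiration are illustrative assumptions.

```go
package main

import (
	"fmt"
	"log"

	"github.com/bradfitz/gomemcache/memcache"
)

func main() {
	// The client shards keys across every listed server,
	// so all nodes form one logical memory pool.
	mc := memcache.New("cache-1:11211", "cache-2:11211")

	// Cache a rendered fragment for 60 seconds (hypothetical key).
	err := mc.Set(&memcache.Item{
		Key:        "page:/home",
		Value:      []byte("<html>...</html>"),
		Expiration: 60,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Later lookups hit whichever server owns the key.
	item, err := mc.Get("page:/home")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s (%d bytes)\n", item.Key, len(item.Value))
}
```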
41
Secure and manage the data lifecycle, from the Edge to AI, in any cloud or data centre. It operates on all major public clouds as well as the private cloud, with a consistent public cloud experience everywhere, and it integrates data management and analytics experiences across the entire data lifecycle. Security, compliance, migration, and metadata management cover all environments. It is open source, extensible, and open to multiple data stores. Self-service analytics become faster, safer, and easier to use, with self-service access to integrated, multi-function analytics on centrally managed business data and a consistent experience anywhere, whether in the cloud or hybrid. You get consistent data security, governance, and lineage while deploying the cloud analytics services business users need, eliminating the need for shadow IT solutions.
42
Azure Table Storage
Microsoft
Azure Table storage stores petabytes of semi-structured data while keeping costs down. Unlike many cloud-based or on-premises data stores, Table storage scales up easily, and availability is not a concern: with geo-redundant storage, data is replicated three times within one region and three more times in another region hundreds of miles away. Flexible data sets, such as web app user data, address books, device information, and other metadata, can be stored in Table storage, and you can build cloud applications without locking the data model into specific schemas. Different rows in the same table can have different structures, so you can evolve your application and table schema without taking it offline. Table storage embraces a strong consistency model.
43
Alibaba Cloud Tablestore
Alibaba Cloud
Pricing: $0.00010 per GB. Tablestore allows seamless expansion of data size and access concurrency through data sharding technology, providing storage and real-time access to massive amounts of structured data. Data is kept in three copies with strong consistency, delivering fully managed hosting, high data reliability, and high service availability. It provides full and incremental data tunnels that connect seamlessly with other products for big data analysis and real-time stream computing. It offers a distributed architecture, single-table auto scaling, support for 10-PB-level data, and 10-million-level access concurrency. Multi-level security protection and resource access management are available to ensure data security. The service's low latency, high concurrency, elastic resources, and pay-as-you-go billing support risk-control systems that help you control transaction risks.
44
JaguarDB
JaguarDB
JaguarDB allows for fast ingestion of time-series and location-based data, and it can index in both time and space. It is also quick to back-fill time series data (inserting large amounts of data with past timestamps). A time series is usually a sequence of data points indexed in time order. JaguarDB uses the term time series to mean both a sequence of data points and a set of tick tables that hold aggregated data values over specified time spans. JaguarDB's time series tables can contain a base table that stores data points in time order, plus tick tables, such as daily, weekly, and monthly tables, that store aggregated information for those periods. The RETENTION format is identical to the TICK format, but it can have any number of retention periods; the RETENTION indicates how long data points in the base table should be kept.
45
ScyllaDB
ScyllaDB
The fastest NoSQL database in the world, capable of millions of IOPS per node at less than 1 millisecond latency, ScyllaDB will accelerate your application performance. Scylla, a drop-in alternative to Apache Cassandra and Amazon DynamoDB, powers your applications with extreme throughput and ultra-low latency. To power modern, high-performance applications, we took the best features of high-availability databases and created a NoSQL database that is significantly more performant, fault-tolerant, and resource-efficient. This high-availability database is built from scratch in C++ for Linux. Scylla unleashes your infrastructure's true potential for running high-throughput, low-latency workloads.
46
Macrometa
Macrometa
We provide a geo-distributed real-time database, stream processing, and a compute runtime for event-driven applications across up to 175 global edge data centers. API and app developers love our platform because it solves their most difficult problem, sharing mutable state across hundreds of locations around the world, with high consistency and low latency. Macrometa allows you to surgically expand your existing infrastructure to bring your application closer to your users, improving performance and user experience while complying with global data governance laws. Macrometa is a serverless, streaming NoSQL database with stream data processing, pub/sub, and a compute engine. You can create stateful data infrastructure and stateful functions and containers for long-running workloads, and you can process data streams in real time. We do the ops and orchestration; you write the code.
47
Apache Cassandra
Apache Software Foundation
The Apache Cassandra database provides high availability and scalability without compromising performance. Linear scalability and proven fault tolerance on commodity hardware or cloud infrastructure make it the ideal platform for mission-critical data. Cassandra's support for replicating across multiple datacenters is best-in-class, providing lower latency for your users and the peace of mind of knowing that you can survive regional outages.
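A minimal sketch of reading and writing through the community gocql driver, assuming a single local node and a pre-created keyspace and table (the names below, and the CQL schema in the comment, are illustrative assumptions).

```go
package main

import (
	"fmt"
	"log"

	"github.com/gocql/gocql"
)

func main() {
	// Point the driver at one or more contact points; it discovers the rest of the ring.
	cluster := gocql.NewCluster("127.0.0.1")
	cluster.Keyspace = "demo" // assumed to exist
	cluster.Consistency = gocql.Quorum

	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Assumes: CREATE TABLE demo.kv (key text PRIMARY KEY, value text);
	if err := session.Query(
		`INSERT INTO kv (key, value) VALUES (?, ?)`,
		"greeting", "hello").Exec(); err != nil {
		log.Fatal(err)
	}

	var value string
	if err := session.Query(
		`SELECT value FROM kv WHERE key = ?`,
		"greeting").Scan(&value); err != nil {
		log.Fatal(err)
	}
	fmt.Println("greeting =", value)
}
```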
48
OrigoDB
Origo
Pricing: €200 per GB RAM per server. OrigoDB allows you to build high-quality, mission-critical systems in a fraction of the time and cost. This isn't marketing gibberish; read on for a detailed description of our features, contact us if you have any questions, or just download the software and start right away! In-memory operations are a lot faster than disk operations. A single OrigoDB engine can execute millions of read transactions per minute and thousands of write transactions per second, with asynchronous command journaling to local SSDs also available. This is why OrigoDB was built. A single object-oriented domain model is much simpler than a full stack comprising a relational model, object/relational mapping, data access code, views, and stored procedures; that is a lot of waste that can easily be eliminated. The OrigoDB engine is 100% ACID right out of the box: each command executes one at a time, transitioning the in-memory model from one consistent state to the next.
49
LedisDB
LedisDB
LedisDB is a high-performance NoSQL database server and library written in Go. It is similar to Redis but stores data on disk. It supports many data structures, including kv, list, hash, set, and zset. LedisDB supports multiple databases as storage backends.
50
Apache HBase
The Apache Software Foundation
Apache HBase™ is used when you need random, real-time read/write access to your Big Data. This project's goal is to host very large tables, billions of rows by millions of columns, on top of clusters of commodity hardware.