Valkey vs KeyDB vs Dragonfly: Redis Alternatives 2026

PkgPulse Team

TL;DR

Redis's license change from BSD to SSPL in March 2024 triggered a fork explosion. Valkey is the Linux Foundation-backed fork — the most direct Redis replacement, already in production at AWS (ElastiCache Serverless), Google Cloud Memorystore, and Akamai. KeyDB (acquired by Snap) is the multi-threaded variant: it speaks the Redis protocol but runs on all CPU cores, making it 2–5x faster than Redis for throughput-heavy workloads. Dragonfly is the full rewrite — built in C++ with a modern architecture (fibers, SIMD, shared-nothing), targeting 25x Redis's throughput on the same hardware. For a drop-in Redis replacement: Valkey. For maximum throughput on existing hardware: Dragonfly. For multi-threaded Redis on Kubernetes: KeyDB.

Key Takeaways

  • Valkey is now the default on major cloud providers — AWS ElastiCache, Google Memorystore, and Akamai all switched in 2024–2025
  • Dragonfly claims 25x Redis throughput — 3.8M ops/sec vs ~150k for Redis on the same 8-core machine
  • All three are 100% Redis protocol compatible: ioredis, node-redis, and all Redis clients work without changes
  • Valkey GitHub stars: ~20k — the fastest-growing Redis fork (donated to Linux Foundation)
  • KeyDB supports FLASH storage — hot/warm/cold tiering for datasets larger than RAM
  • Dragonfly uses 80% less memory than Redis for the same dataset (compression + modern memory management)
  • Migration is a swap — change the Redis URL in your connection string; no code changes needed

The Redis License Crisis

In March 2024, Redis Ltd. changed Redis's license from BSD to Server Side Public License (SSPL) — meaning cloud providers can no longer offer Redis as a managed service without a commercial agreement. This triggered:

  1. Valkey — Linux Foundation fork, backed by AWS, Google, Oracle, Snap, and others
  2. KeyDB — Already existed as a Redis fork by Snap, gained renewed attention
  3. Dragonfly — Already existed as a Redis rewrite, accelerated adoption

The major cloud providers have now defaulted to Valkey:

  • AWS ElastiCache Serverless → Valkey (automatic migration)
  • Google Cloud Memorystore → Valkey available
  • Akamai Linode Managed Databases → Valkey

Valkey: The Official Linux Foundation Fork

Valkey is Redis 7.2.4 forked at the exact point before the license change. The Linux Foundation maintains it with contributions from AWS, Google, Oracle, Snap, Ericsson, and others. If you used Redis, Valkey is identical in behavior.

The Linux Foundation's involvement isn't just symbolic. The LF has a proven track record of governing open-source projects that multiple competitors need to be equally reliable: Node.js, Kubernetes, OpenJS, and Linux itself all operate under its umbrella. The governance model distributes technical decision-making among contributors from different companies, preventing any single organization from steering the project in a direction that benefits only them. For Valkey, this means AWS can't make changes that disadvantage Azure's Valkey implementation, and vice versa.

Valkey's release cadence in 2025-2026 has been active, shipping security patches, performance improvements, and new data type features on a quarterly schedule. The project is not merely maintaining Redis 7.2.4 — it's actively developing new capabilities. Valkey 8.0 introduced module API improvements and a rewritten cluster bus. For teams concerned that a fork would stagnate, Valkey's commit history is reassuring.

Why Valkey is the Default Choice for Most Redis Migrations

The economics of Valkey adoption are compelling for any team using managed Redis on AWS, Google Cloud, or Azure. All three major cloud providers have either already migrated their Redis-compatible managed services to Valkey or announced plans to do so. AWS ElastiCache, Google Cloud Memorystore, and Azure Cache for Redis all support Valkey as a drop-in replacement. Teams on managed Redis who upgrade their cloud service instance type or version will likely receive Valkey automatically.

The operational knowledge transfer is complete: everything a Redis operator knows about configuration, monitoring, replication, clustering, and failover applies to Valkey. The valkey-cli tool is functionally identical to redis-cli. The Sentinel and Cluster modes work the same way. The only change is the project name.

Docker Setup

# Valkey — drop-in Redis replacement
docker run -d --name valkey \
  -p 6379:6379 \
  valkey/valkey:latest

# With persistence
docker run -d --name valkey \
  -p 6379:6379 \
  -v valkey-data:/data \
  valkey/valkey:latest \
  valkey-server --save 60 1 --loglevel warning

Node.js Client (Zero Changes from Redis)

// ioredis — works with Valkey, zero changes
import Redis from "ioredis";

const client = new Redis({
  host: "localhost",
  port: 6379,
  // That's it — same connection string as Redis
});

// All Redis commands work identically
await client.set("user:1:name", "Alice", "EX", 3600);
const name = await client.get("user:1:name");

// Hash operations
await client.hset("user:1", { email: "alice@example.com", plan: "pro" });
const user = await client.hgetall("user:1");

// Sorted sets
await client.zadd("leaderboard", 1500, "alice", 1200, "bob", 900, "charlie");
const top3 = await client.zrevrange("leaderboard", 0, 2, "WITHSCORES");

// Pub/Sub
const sub = new Redis();
sub.subscribe("notifications");
sub.on("message", (channel, message) => {
  console.log(`[${channel}] ${message}`);
});

const pub = new Redis();
pub.publish("notifications", JSON.stringify({ event: "order_created", orderId: 123 }));

Valkey-Specific: Multi-Exec Improvements

// Valkey 8.0+ added improvements to MULTI/EXEC reliability
const pipeline = client.multi();
pipeline.set("key1", "value1");
pipeline.incr("counter");
pipeline.expire("key1", 60);

const results = await pipeline.exec();
// results: [[null, 'OK'], [null, 1], [null, 1]]

High Availability with Valkey Sentinel

# docker-compose.yml — Valkey with Sentinel HA
version: "3.8"

services:
  valkey-primary:
    image: valkey/valkey:latest
    ports:
      - "6379:6379"
    command: valkey-server --save 60 1

  valkey-replica:
    image: valkey/valkey:latest
    command: valkey-server --replicaof valkey-primary 6379

  sentinel:
    image: valkey/valkey:latest
    command: >
      valkey-sentinel /etc/valkey/sentinel.conf
    volumes:
      - ./sentinel.conf:/etc/valkey/sentinel.conf

// Connect to a Sentinel cluster from Node.js
const client = new Redis({
  sentinels: [
    { host: "sentinel-1", port: 26379 },
    { host: "sentinel-2", port: 26379 },
    { host: "sentinel-3", port: 26379 },
  ],
  name: "mymaster",  // Sentinel master name
});

KeyDB: Multi-Threaded Redis

KeyDB runs Redis protocol but uses a multi-threaded event loop instead of Redis's single-threaded design. On 8 cores, KeyDB processes requests on all cores simultaneously — 2-5x throughput improvement for CPU-bound workloads.

Docker Setup

docker run -d --name keydb \
  -p 6379:6379 \
  eqalpha/keydb:latest \
  keydb-server --server-threads 4  # Use 4 threads

# Check thread usage
docker exec -it keydb keydb-cli INFO server | grep threads

Multi-Threading Configuration

# keydb.conf — key performance settings
server-threads 4        # Number of threads (match CPU cores)
server-thread-affinity yes  # Pin threads to CPU cores

# FLASH tiering (unique to KeyDB)
storage-provider flash /path/to/flash
db-s3-object mybucket   # S3-backed cold storage

# Active replication (unique to KeyDB)
active-replica yes      # Active-active multi-master
replica-read-only no    # Replicas can accept writes

Active Replication (Multi-Master)

// KeyDB's unique active replication — write to any node
const primary = new Redis({ host: "keydb-1", port: 6379 });
const replica = new Redis({ host: "keydb-2", port: 6379 });

// Both accept writes and sync bidirectionally
await primary.set("key", "from-primary");
await replica.set("key2", "from-replica");

// Both nodes have both keys after sync
const val1 = await replica.get("key");    // "from-primary"
const val2 = await primary.get("key2");  // "from-replica"

FLASH Tiering for Large Datasets

// KeyDB FLASH — hot data in RAM, cold data on NVMe/SSD
// No code changes needed — transparent to client

// Configure in keydb.conf:
// storage-provider flash /mnt/nvme/keydb
// db-s3-object s3-bucket-name  (optional cold tier)

// Data is automatically tiered based on access patterns
// Hot keys → RAM
// Warm keys → NVMe FLASH
// Cold keys → S3 (optional)

// From Node.js perspective, it's identical to Redis
await client.set("hot-data", "frequently accessed");   // Stays in RAM
await client.set("cold-data", "rarely accessed data"); // May move to FLASH

Dragonfly: The Full Rewrite

Dragonfly is a ground-up rewrite of Redis in C++ using modern concurrency primitives (fibers/coroutines) and a shared-nothing architecture. Each CPU core has its own memory shard, eliminating lock contention. The result is claimed 25x better throughput and 80% less memory than Redis.

The architectural difference between Dragonfly and Redis is fundamental. Redis is single-threaded for command processing — this simplicity is one of Redis's strengths (no lock contention, predictable latency) but limits throughput on multi-core hardware. Dragonfly uses a shared-nothing design where each thread owns a subset of key space. Commands that touch a single key execute within one thread; multi-key commands use a coordination protocol to remain atomic across threads. This architecture scales horizontally across cores, allowing a single Dragonfly process to fully utilize a 64-core server.
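To make the shared-nothing idea concrete, here is an illustrative routing model. The hash function and thread mapping are invented for this sketch; Dragonfly's internal scheme is not part of its public API:

```javascript
// Illustrative model of shared-nothing routing: each "thread" owns the keys
// that hash into its slot. This is NOT Dragonfly's real hash function; it
// only demonstrates why single-key commands never contend across threads.
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV-1a multiply, kept unsigned
  }
  return h;
}

function ownerThread(key, threadCount) {
  return fnv1a(key) % threadCount;
}

// A single-key command routes to exactly one owner thread...
const owner = ownerThread("user:1:name", 8);

// ...while a multi-key command like MSET may span several owners and
// needs the cross-thread coordination protocol to stay atomic.
const owners = new Set(["k1", "k2", "k3"].map((k) => ownerThread(k, 8)));
```

The key property is that the mapping is deterministic: the same key always lands on the same thread, so no locks are needed on the single-key fast path.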

The memory efficiency improvement (claimed 80% reduction vs Redis) comes from a combination of factors: a custom memory allocator optimized for the key-value access pattern, more efficient data structure implementations, and lazy defragmentation. In practice, teams report that workloads requiring 10GB of Redis memory run comfortably in 2-3GB of Dragonfly memory. At sufficient scale, this memory efficiency can justify Dragonfly's operational overhead and BSL license restrictions.

Docker Setup

docker run -d --name dragonfly \
  -p 6379:6379 \
  -v dragonfly-data:/data \
  docker.dragonflydb.io/dragonflydb/dragonfly:latest

# With memory limits and threads
docker run -d --name dragonfly \
  -p 6379:6379 \
  --ulimit memlock=-1 \
  docker.dragonflydb.io/dragonflydb/dragonfly:latest \
  --maxmemory 4gb \
  --threads 8

Node.js Client — Identical API

// Dragonfly is 100% Redis-compatible — same client code
import Redis from "ioredis";

const client = new Redis({
  host: "localhost",
  port: 6379,
});

// All Redis data structures work
await client.set("session:abc", JSON.stringify({ userId: 123 }), "EX", 1800);

// Sorted sets — one of Dragonfly's fastest operations
await client.zadd("scores", 9500, "alice", 8200, "bob");
const scores = await client.zrange("scores", 0, -1, "WITHSCORES");

// Streams (Dragonfly implements Redis Streams)
await client.xadd("events", "*", "type", "click", "page", "/home");
const events = await client.xrange("events", "-", "+", "COUNT", 10);

Dragonfly-Specific: Lua Scripts Performance

// Dragonfly executes Lua scripts across shards efficiently
// Useful for atomic multi-key operations

const luaScript = `
  local current = redis.call('GET', KEYS[1])
  if current == false then
    redis.call('SET', KEYS[1], ARGV[1])
    return ARGV[1]
  end
  return current
`;

// defineCommand with evalsha pattern
client.defineCommand("getOrSet", {
  numberOfKeys: 1,
  lua: luaScript,
});

// @ts-ignore — custom command
const value = await client.getOrSet("mykey", "default-value");

Performance Benchmarks

Benchmark: 100% GET operations, 8-byte key, 64-byte value
Hardware: 8-core AWS c6g.2xlarge, 16 GB RAM

Operations per second (ops/sec):
Redis 7.2:           ~150,000 ops/sec   (single-threaded)
Valkey 8.0:          ~160,000 ops/sec   (+7% over Redis)
KeyDB 6.3:           ~450,000 ops/sec   (4 threads, 3x Redis)
Dragonfly 1.x:       ~3,800,000 ops/sec (25x Redis claim)

Memory for 10M small keys (5-byte values):
Redis:               ~700 MB
Valkey:              ~700 MB  (same codebase)
KeyDB:               ~700 MB  (same codebase)
Dragonfly:           ~140 MB  (80% reduction)

P99 latency at 100k req/sec:
Redis:               ~0.5ms
Valkey:              ~0.5ms
KeyDB:               ~0.3ms
Dragonfly:           ~0.1ms

Note: Benchmarks vary significantly by workload type.
Dragonfly's advantage is most pronounced at high concurrency.
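Numbers like these are workload-sensitive, so it is worth producing your own. Below is a rough, hypothetical probe (not a replacement for redis-benchmark or memtier_benchmark) that works against any ioredis-compatible client you pass in:

```javascript
// Rough throughput probe: run `concurrency` concurrent GET loops for
// `durationMs` and count completed commands. `client` can be ioredis
// pointed at Valkey, KeyDB, or Dragonfly; treat the result as indicative
// only, since it measures the client event loop as much as the server.
async function roughOpsPerSec(client, { durationMs = 5000, concurrency = 50 } = {}) {
  let done = 0;
  const deadline = Date.now() + durationMs;
  async function worker() {
    while (Date.now() < deadline) {
      await client.get("bench:key");
      done++;
    }
  }
  await Promise.all(Array.from({ length: concurrency }, worker));
  return Math.round(done / (durationMs / 1000));
}
```

Raising `concurrency` matters: single-connection, one-at-a-time benchmarks understate multi-threaded servers, which is exactly where Dragonfly and KeyDB pull ahead.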

Feature Comparison

Feature                     Valkey              KeyDB               Dragonfly
Redis protocol              ✅ 100%             ✅ 100%             ✅ 100%
Redis clients compatibility ✅                  ✅                  ✅
Multi-threading             ✅ (v8.0+)          ✅ Native           ✅ Sharded
Active replication          ❌                  ✅ Multi-master     ❌
FLASH tiering               ❌                  ✅                  ❌
Memory efficiency           Baseline            Baseline            ✅ ~80% less
Redis Modules API           Partial             ✅                  ❌
Cluster support             ✅                  ✅                  ✅ (emulated)
Sentinel support            ✅                  ✅                  ❌
Persistence (RDB/AOF)       ✅                  ✅                  Snapshots
License                     BSD-3               BSD-3               BSL 1.1
Backed by                   Linux Foundation    Snap                Dragonfly DB Inc.
Cloud managed               AWS, GCP, Akamai    Self-hosted         Dragonfly Cloud
GitHub stars                ~20k                ~25k                ~29k

When to Use Each

Choose Valkey if:

  • You're running on AWS ElastiCache, Google Cloud Memorystore, or Akamai — it's already the default
  • You want the most conservative Redis replacement with no surprises
  • Redis Modules (RedisSearch, RedisJSON, RedisGraph) compatibility matters
  • Your primary goal is license compliance, not performance

Choose KeyDB if:

  • You need active-active multi-master replication (unique feature)
  • Your dataset exceeds RAM and NVMe tiering via FLASH is attractive
  • CPU is your bottleneck on a multi-core machine and multi-threading helps
  • You're on a single server and want maximum throughput without cluster complexity

Choose Dragonfly if:

  • Memory cost is the bottleneck (80% reduction can save significant cloud spend)
  • You need extreme throughput at low latency on a single node
  • You're doing a greenfield deployment and want modern architecture
  • Redis Modules are not required (Dragonfly doesn't support the Modules API)

Ecosystem & Community

The Redis alternative ecosystem moved remarkably fast after the license change. Within months of the SSPL announcement, Valkey had 20+ maintainers from five major tech companies and a release cadence matching Redis's own. The Linux Foundation's governance model, established through similar projects like Node.js and Kubernetes, provides the institutional stability that ensures no single company can drive the fork in a direction that doesn't serve the broader community.

KeyDB's community is smaller but more specialized. Snap's acquisition brought dedicated engineering resources, and the FLASH storage feature has attracted a loyal user base of teams running datasets too large for pure in-memory Redis. The KeyDB Discord has several thousand active members, and the GitHub repository is responsive to issues and pull requests.

Dragonfly has attracted significant venture capital funding — over $100M as of 2025 — and is building a commercial managed service (Dragonfly Cloud) alongside the open-source project. The commercial backing means active development and good documentation, but the BSL 1.1 license (which restricts use for competing managed database services) means it's not fully free for all use cases.

Redis Alternatives in the Node.js Ecosystem

The choice of Redis-compatible in-memory data store interacts with the broader Node.js data layer decisions. For job queues built on Redis, BullMQ — the most widely used Redis-based job queue for Node.js — is compatible with all three alternatives. The ioredis and node-redis clients that BullMQ uses work against Valkey, KeyDB, and Dragonfly without modification. For teams using Redis as a session store, caching layer, or pub/sub broker in Express, Fastify, or Hono applications, the same compatibility applies.

The caching use case is the most common deployment of Redis in Node.js applications: store expensive database query results, API responses, or rendered templates with TTL-based expiration. Valkey, KeyDB, and Dragonfly all support the full set of Redis commands used for this pattern (SET, GET, SETEX, MSET, MGET, DEL, SCAN). Any caching library built on Redis — cache-manager, cacheable, custom implementations using ioredis — works unchanged.
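That cache-aside pattern looks identical against all three servers. A minimal sketch, assuming an ioredis-compatible client and a hypothetical loader function:

```javascript
// Cache-aside with TTL: try the cache, fall back to the loader, then
// populate the cache so the next call is a hit. `client` can be ioredis
// pointed at Valkey, KeyDB, or Dragonfly; the behavior is the same.
async function cached(client, key, ttlSeconds, loader) {
  const hit = await client.get(key);
  if (hit !== null) return JSON.parse(hit);
  const value = await loader();               // e.g. an expensive DB query
  await client.set(key, JSON.stringify(value), "EX", ttlSeconds);
  return value;
}
```

A production version would also guard against cache stampedes (for example with SET NX locks), which all three servers support identically.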

For teams using Redis as a message broker with pub/sub or streams, the compatibility is similarly complete. Valkey and KeyDB implement Redis Streams (XADD, XREAD, XGROUP commands) identically. Dragonfly supports Streams as well, though some edge cases in high-throughput stream scenarios have required workarounds documented in the Dragonfly migration guide.

For teams choosing the client library, the ioredis vs node-redis vs Upstash comparison is the companion to this article; the server choice (Valkey/KeyDB/Dragonfly) and client choice (ioredis/node-redis/Upstash) are independent decisions that can be made separately.


Real-World Adoption

Valkey's adoption story is dominated by managed cloud services. If you use AWS ElastiCache today and haven't specified Redis explicitly, you're probably already running Valkey without knowing it. AWS migrated ElastiCache Serverless to Valkey and reported the migration was transparent — no application code changes required for the 99% of workloads using standard Redis commands.

Dragonfly has attracted attention from companies with extreme throughput or memory pressure. Teams running Redis clusters to distribute load across multiple nodes have found that a single Dragonfly node on modern hardware can replace a three to five node Redis cluster — dramatically simplifying their operational footprint. Several high-traffic e-commerce platforms and social media companies have publicized their Dragonfly migrations.

KeyDB is less commonly discussed publicly but has a dedicated following in specific use cases: financial data systems that benefit from active-active replication for geographic redundancy, and analytics platforms that store datasets too large to fit in RAM but need Redis's data structure capabilities.


Developer Experience Deep Dive

All three alternatives present identical developer experiences for Node.js developers — the ioredis and node-redis clients connect, authenticate, and issue commands exactly as they would against a Redis server. The protocol compatibility is not a marketing claim; it's a fundamental design constraint that all three projects take seriously. Any Redis client library, any ORM with Redis support, and any caching layer built on the Redis protocol works unchanged.

The operational experience differs. Valkey's configuration file is a direct copy of Redis's — operators who know Redis's redis.conf syntax can configure Valkey without reading new documentation. Dragonfly's configuration uses a slightly different format and has additional parameters for thread count and memory management that don't exist in Redis. KeyDB adds multi-threading and FLASH configuration that require new understanding.


Migration Guide

The migration from Redis to any of these alternatives is a connection string change:

// All three are drop-in replacements — change the connection URL

// Before (Redis)
const redis = new Redis({ host: "redis.internal", port: 6379 });

// After (Valkey/KeyDB/Dragonfly — same config)
const redis = new Redis({ host: "valkey.internal", port: 6379 });

// Environment variable approach (recommended)
const redis = new Redis({ host: process.env.REDIS_HOST, port: 6379 });
// → Change REDIS_HOST from "redis.internal" to "valkey.internal"

The main migration risk is not the client code — it's Redis Modules. If your application uses RedisSearch for full-text search, RedisJSON for native JSON storage, or RedisTimeSeries for time-series data, check module compatibility before migrating. Valkey has partial module support through its module API compatibility layer. KeyDB supports the full Redis Modules API. Dragonfly does not support Redis Modules at all and recommends using its built-in equivalents (JSON support, search) instead.
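One way to de-risk the module question is to probe the migration target for the commands your code actually issues before cutover. A sketch using the standard COMMAND INFO introspection (the command list shown is a hypothetical example):

```javascript
// Pre-flight probe: ask the target server whether it implements the
// module commands this app depends on. COMMAND INFO returns a null entry
// for any command the server does not know. `client` is an
// ioredis-compatible connection to the migration target.
async function missingCommands(client, commands) {
  const info = await client.call("COMMAND", "INFO", ...commands);
  return commands.filter((_, i) => info[i] == null);
}

// Hypothetical usage for a RedisJSON + RedisSearch app:
// const missing = await missingCommands(client, ["JSON.SET", "JSON.GET", "FT.SEARCH"]);
// if (missing.length) throw new Error(`Target lacks: ${missing.join(", ")}`);
```

Running this against Valkey, KeyDB, and Dragonfly in a staging environment answers the compatibility question empirically instead of from documentation.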

For production migrations, the recommended approach is to stand up the new server, use DUMP/RESTORE or a replication-based migration to copy data, then do a cutover during low-traffic hours. All three alternatives support Redis replication protocol, meaning you can replicate live data from Redis to Valkey/Dragonfly/KeyDB before the cutover.
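The DUMP/RESTORE path can be scripted with two connections. A simplified sketch, assuming ioredis clients for `src` and `dst` (no cluster awareness, no retries; large keyspaces deserve batching and error handling):

```javascript
// Copy keys from a source Redis to a target (Valkey/KeyDB/Dragonfly) via
// SCAN + DUMP + RESTORE, preserving each key's remaining TTL. DUMP returns
// binary data, so the Buffer variant of the command is used.
async function migrateKeys(src, dst, pattern = "*") {
  let cursor = "0";
  let copied = 0;
  do {
    const [next, keys] = await src.scan(cursor, "MATCH", pattern, "COUNT", 500);
    cursor = next;
    for (const key of keys) {
      const payload = await src.dumpBuffer(key);   // serialized value (binary)
      if (payload === null) continue;              // key expired mid-scan
      const ttl = await src.pttl(key);             // -1 means no expiry
      await dst.restore(key, ttl > 0 ? ttl : 0, payload, "REPLACE");
      copied++;
    }
  } while (cursor !== "0");
  return copied;
}
```

RESTORE with a TTL of 0 creates the key without expiry, so keys that had no TTL on the source stay persistent on the target.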


Choosing a Node.js Client

All three Redis alternatives work with both major Node.js Redis clients. The choice of client is independent of which server you run — both ioredis and node-redis speak the Redis Serialization Protocol (RESP) and are fully compatible with Valkey, KeyDB, and Dragonfly.

ioredis is the more feature-rich option: it supports Sentinel, cluster, pipelining, scripting, and has a more extensive plugin system. The API is promise-based throughout, TypeScript types are maintained in the package itself, and the documentation covers edge cases thoroughly. Most enterprise Node.js applications that have been running Redis for years are on ioredis.

node-redis (the official client, redis on npm) received a major rewrite in v4 that improved its TypeScript support significantly. It's simpler than ioredis but covers the vast majority of use cases. The createClient API with TypeScript generics for custom commands is well-designed. For new projects, node-redis v4+ is a reasonable default.

Upstash's @upstash/redis client is worth mentioning as a third option — it's an HTTP-based client designed for serverless environments where persistent TCP connections are problematic. If you're running serverless functions and using Valkey or Dragonfly Cloud as a managed service, Upstash's client eliminates connection management overhead entirely.


Operational Considerations

The operational story for each alternative differs meaningfully from Redis itself. Valkey operates identically to Redis from an operational standpoint — the same tooling (redis-cli works as valkey-cli), the same monitoring approach, the same failover patterns. Teams running Redis on Kubernetes can replace the container image with valkey/valkey:latest and continue operating exactly as before.

Dragonfly requires some operational adjustment. The memory management model is different enough that Redis monitoring metrics (fragmentation ratio, eviction rates) require reinterpretation. Dragonfly's own monitoring endpoints provide equivalent information, but teams with existing Redis monitoring dashboards (Grafana panels, Datadog monitors) will need to update them. The Dragonfly team provides migration guides for common monitoring setups.

KeyDB's operational considerations center on thread management. In a multi-threaded setup, CPU core affinity, thread count tuning, and the interaction between threading and persistence can affect performance in ways that Redis operators haven't had to think about. The KeyDB configuration documentation is detailed, but teams without experience tuning multi-threaded systems should plan for a learning period.


Final Verdict 2026

For most teams migrating away from Redis due to the license change, Valkey is the clear choice — it is Redis, with a better license and the backing of the major cloud providers who are already running it in production at massive scale. The Linux Foundation governance means its future is secure regardless of any single company's commercial interests.

Teams with specific performance or scalability constraints should evaluate Dragonfly seriously. The memory efficiency claims are not marketing hyperbole — the architectural differences produce real savings, and at sufficient scale, those savings justify the operational change. For multi-master active-active replication, KeyDB remains the only option in this comparison.


Methodology

Data sourced from GitHub repositories (star counts as of February 2026), official benchmarks (Dragonfly DB benchmark suite, KeyDB benchmarks), cloud provider announcements (AWS ElastiCache, Google Cloud Memorystore), and community performance reports on Hacker News and r/redis. Performance numbers are from official vendor benchmarks and should be verified for your specific workload.


Related: ioredis vs node-redis vs Upstash for Redis client comparison, or BullMQ vs Bee-Queue vs pg-boss for job queue implementations on top of Redis.

See also: Best real-time libraries for Node.js · Turso vs PlanetScale vs Neon serverless databases
