Valkey 9.0: When Open Source Outpaced Its Corporate Fork

Redis's relicensing triggered one of the fastest open-source migrations in recent memory. Valkey 9.0 launched with major performance improvements just seven months after the fork, backed by competing cloud giants who rarely align on anything. The technical wins are real, but scaling challenges and latency controversies reveal the trade-offs of community-driven development versus commercial focus.


When Redis switched to proprietary licensing in March 2024, the open-source community forked the codebase and shipped a faster alternative in seven months.

Valkey 9.0 launched in October 2024 with 40% higher throughput than its predecessor, support for clusters handling over 1 billion requests per second, and backing from AWS, Google Cloud, Alibaba, and Oracle.

The Seven-Month Sprint: From Fork to Production

Redis moved to dual licensing under the RSALv2 and SSPLv1 in March 2024. Former maintainers forked the last BSD-licensed release (7.2.4) that same month and brought the project under the Linux Foundation. By October, Valkey had shipped production-ready releases with architectural improvements Redis hadn't implemented.

Major cloud providers—each with their own managed Redis offerings—coordinated under the Linux Foundation's neutral governance. Engineers at competing companies contributed to a shared codebase.

Getting Alibaba and AWS to Collaborate

Alibaba Cloud, AWS, Google Cloud, Oracle, and others don't typically collaborate on infrastructure projects. Each has commercial incentives to differentiate their managed services, yet all committed engineering resources to Valkey's development.

The alternative—fragmented Redis forks maintained independently—would create worse outcomes for everyone. The Linux Foundation provided governance structure, but the velocity came from shared urgency: cloud providers needed a credible open-source answer immediately.

What Changed Under the Hood: IO Multithreading and Cluster Gossip

Redis's single-threaded command execution was a known bottleneck. Redis 6 added limited I/O threading, but Valkey rearchitected it: asynchronous I/O threads now handle reads, writes, and command parsing concurrently with the main thread, with the project claiming roughly 3x throughput improvements for I/O-bound workloads. This required reworking core components that Redis had avoided touching for backward-compatibility reasons.
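
Turning the feature on is a configuration change rather than a code change. A minimal sketch of the relevant directive, with an illustrative thread count (check your version's defaults before tuning):

    # valkey.conf (illustrative value; 1 keeps I/O on the main thread)
    io-threads 8

    # verify at runtime
    valkey-cli config get io-threads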

The cluster architecture also changed. Valkey improved its gossip protocol for larger deployments, though current implementations still break down beyond roughly 500 nodes. Other limitations persist: replicas don't get a vote in failover elections (only primaries do), and cascading failovers risk promoting replicas that lack complete data, potentially causing data loss.
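
Until those gaps close, operators mostly manage the risk through configuration. A hedged sketch of the relevant knobs, with illustrative values (the directive names are inherited from Redis; consult current documentation before relying on them):

    # valkey.conf (illustrative values)
    cluster-enabled yes
    cluster-node-timeout 15000            # ms of unreachability before a node is flagged as failing
    cluster-replica-validity-factor 10    # bars replicas with overly stale data from failover; 0 disables the check
    min-replicas-to-write 1               # primaries refuse writes with no healthy replica...
    min-replicas-max-lag 10               # ...or when replication lag exceeds this many seconds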

The Performance Controversy: Throughput vs Latency Trade-offs

Benchmarks tell competing stories. Valkey's throughput gains are real, but SET performance degrades significantly with io-threads when pipelining commands due to ineffective prefetching. Independent tests show Valkey's p99.9 latencies reaching ~24ms compared to Redis's 16ms.
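
Because the regressions are workload-specific, it is worth reproducing the comparison against your own traffic shape. A sketch using the bundled benchmark tool, with illustrative flag values, exercising the pipelined SET case where io-threads reportedly struggles:

    # pipelined SETs (16 commands per round trip), 64 clients, 128-byte values
    valkey-benchmark -t set -d 128 -n 1000000 -c 64 -P 16 --threads 4

    # same workload without pipelining, for comparison; the summary reports
    # throughput plus a latency-by-percentile distribution
    valkey-benchmark -t set -d 128 -n 1000000 -c 64 --threads 4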

These are architectural choices. Workloads that need maximum throughput on large datasets benefit from Valkey's multithreading; latency-sensitive applications with strict SLAs may see regressions. Momento deployed Valkey 9.0 specifically for larger working sets and higher-throughput scenarios, suggesting production users are finding an appropriate fit.

Community Governance vs Commercial Focus: The Long Game

The precedents here—MySQL versus MariaDB, Elasticsearch versus OpenSearch—suggest community forks can sustain themselves but rarely out-innovate well-funded commercial entities long-term. Valkey's advantage is scale: companies like Twitter, Airbnb, OpenAI, and Adobe already run Redis in production and have migration paths to a compatible fork.
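
Because Valkey keeps the Redis wire protocol, a migration path can be as small as pointing an existing client library at a new endpoint. A minimal sketch using redis-py against a hypothetical Valkey host (the hostname is illustrative):

    import redis  # unmodified Redis client libraries work against Valkey

    # hypothetical endpoint; only the host differs from the old Redis deployment
    client = redis.Redis(host="valkey.internal.example", port=6379, decode_responses=True)

    client.set("greeting", "hello from valkey")
    print(client.get("greeting"))  # -> hello from valkey

    # recent Valkey releases report a valkey_version field in INFO, alongside
    # redis_version, which is kept for compatibility
    print(client.info("server").get("valkey_version", "n/a"))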

The question isn't whether Valkey survives; Linux Foundation backing and multi-vendor support ensure that. It's whether distributed governance can match Redis's commercial focus on feature velocity. Seven months in, the community moved faster. Whether that pace can be sustained is still an open question.


Featured repository: valkey-io/valkey (24.4k stars, 995 forks). A flexible distributed key-value database that is optimized for caching and other realtime workloads. Topics: cache, database, key-value, key-value-store, nosql.