SSD-Backed · Redis-Compatible API · Built in Rust

Redis that scales with cores

ForgeKV is a drop-in Redis replacement with multi-core scale-out and SSD-backed persistence. Its 64-shard architecture delivers 158K SET ops/sec, while single-threaded Redis tops out around 80K per core. No client changes required.

158K
SET ops/sec at t=2 c=20
64
concurrent shards
100%
Redis RESP2 compatible
Data beyond RAM with SSD
Capabilities

Everything Redis does, scaled to all cores

ForgeKV is a complete, drop-in Redis replacement that adds SSD persistence and massive read throughput without removing a single command or breaking any client.

SSD-Backed Persistent Storage

Data lives on NVMe SSD via an LSM-tree engine — not RAM. Survive restarts without RDB snapshots, serve datasets larger than memory, and eliminate the cold-start problem that plagues pure in-memory stores.

Multi-Core Scale-Out Architecture

64-shard lock architecture means concurrent writers never block unless they hash to the same shard. Throughput scales with cores: 158K SET ops/sec at t=2 c=20, versus roughly 80K per core for single-threaded Redis.

100% Redis-Compatible API

Full RESP2 wire-protocol support. Every Redis client — ioredis, redis-py, go-redis, Jedis — connects without a single line change. Point your connection string at ForgeKV and you're done.
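The compatibility claim rests on speaking the RESP2 wire format that every Redis client already emits. A minimal sketch of how a command is framed as a RESP2 array of bulk strings (this encoder is illustrative, not ForgeKV code):

```python
def encode_resp2(*args) -> bytes:
    """Frame a command as a RESP2 array of bulk strings."""
    parts = [b"*%d\r\n" % len(args)]          # array header: element count
    for arg in args:
        b = arg if isinstance(arg, bytes) else str(arg).encode()
        parts.append(b"$%d\r\n%s\r\n" % (len(b), b))  # bulk string: length + payload
    return b"".join(parts)

encode_resp2("SET", "hello", "ForgeKV")
# → b'*3\r\n$3\r\nSET\r\n$5\r\nhello\r\n$7\r\nForgeKV\r\n'
```

Because every client library produces exactly these frames, a server that parses them is indistinguishable from Redis on the wire.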

Sub-Millisecond Read Latency

Memory-mapped SSTables let the OS page cache serve repeat reads with zero syscall overhead. p50 latency is 0.09ms for cached keys; even SSD-resident cold reads land under 0.2ms.

350+ Commands, 12 Data Families

Strings, Lists, Sets, Hashes, Sorted Sets, Streams, Pub/Sub, Transactions, Lua, JSON, Bloom Filters, Geo, HyperLogLog — 100% pass rate on Redis 4.0 through 7.2 compatibility test suites.

Unbounded Dataset Size

Redis forces every key into RAM. ForgeKV uses LSM-tree storage with SSD persistence and RAM as a hot-key cache. Store terabytes of data on NVMe drives while maintaining sub-millisecond access times.

Full Redis Command Compatibility

GET / SET · Lists · Sets · Hashes · Sorted Sets · Streams · Transactions · Pub/Sub · Lua Scripting · JSON · Bloom Filters · Geo · HyperLogLog · Bitmaps · Blocking Commands · ACL · TTL / EXPIRE
Performance

Multi-core throughput that scales with cores

Benchmarked with memtier_benchmark against Redis 7 on identical hardware. SET workload, 64-byte values, pipeline=16.

54%

faster SET throughput vs Redis 7 at t=2 c=10 (148K vs 96.3K)

0.195ms

avg latency at t=2 c=10 (vs 0.316ms Redis)

64

concurrent shards — scales with cores

SET Throughput — higher is better

K ops/sec · SET workload · pipeline=16 · 64-byte values

Config     ForgeKV   Redis 7   ForgeKV ÷ Redis
t=1 c=10   69.1K     80.7K     0.9×
t=1 c=20   76.9K     85.5K     0.9×
t=2 c=10   148K      96.3K     1.5×
t=2 c=20   158K      111.8K    1.4×

Average Latency — lower is better

milliseconds · SET workload average latency

Config     ForgeKV   Redis 7   Redis ÷ ForgeKV
t=1 c=10   0.222ms   0.179ms   0.8× (Redis faster)
t=1 c=20   0.379ms   0.339ms   0.9× (Redis faster)
t=2 c=10   0.195ms   0.316ms   1.6× (ForgeKV faster)
t=2 c=20   0.367ms   0.546ms   1.5× (ForgeKV faster)

Redis Compatibility Test Results

Using resp-compatibility (tair-opensource/resp-compatibility)

Redis Version   Total Tests   Passed   Pass Rate
Redis 4.0.0     196           196      100%
Redis 5.0.0     220           220      100%
Redis 6.0.0     228           228      100%
Redis 6.2.0     295           295      100%
Redis 7.0.0     350           350      100%
Redis 7.2.0     352           352      100%

Benchmarks run on identical hardware. Full methodology in BENCHMARK_RESULTS.md. Reproducible via benchmark/redis-comparison/run.sh.

Under the Hood

Architecture built for multi-core scale-out

Redis stores everything in RAM and hits a wall around 80K ops/sec per core. ForgeKV uses a 64-shard architecture where each shard has its own WAL, memtable, and lock. Concurrent writes scale with cores instead of blocking on one lock.
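A minimal sketch of the shard-routing idea, assuming shard = hash(key) mod 64; the hash function shown here (CRC32) is illustrative, not ForgeKV's actual function:

```python
import zlib

N_SHARDS = 64

def shard_of(key: bytes) -> int:
    """Route a key to one of 64 independent shards (illustrative hash)."""
    return zlib.crc32(key) % N_SHARDS

# Two writers contend only when their keys land on the same shard:
shard_of(b"user:1001"), shard_of(b"user:1002")
```

With each shard owning its own lock, WAL, and memtable, the probability of two random writers colliding is roughly 1/64, which is why write throughput keeps climbing as cores are added.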

Redis Clients: any language · RESP2 protocol · zero changes
Tokio Async Runtime: 1 task per connection · async I/O · non-blocking reads
Command Dispatcher: 350+ Redis commands · GET routed to a read-optimized fast path

Read path: hot-key memtable → mmap SSTable → SSD page cache

Hot-Key Cache (BTreeMap in RAM): ~0.09ms
mmap SSTable (OS page cache): ~0.15ms
NVMe SSD (LSM-tree SSTables): ~0.2ms

LSM-Tree Compaction Engine: Write-Ahead Log (WAL) · Immutable Snapshots · Background Compaction
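The tiered read path above can be sketched as a fall-through lookup; the tier names and the cold-read callable are illustrative, not ForgeKV's internal API:

```python
def tiered_get(key, memtable, page_cached_sstable, read_from_ssd):
    """Check the fastest tier first; fall through on a miss."""
    if key in memtable:                 # Tier 1: hot-key BTreeMap in RAM (~0.09ms)
        return memtable[key]
    if key in page_cached_sstable:      # Tier 2: mmap'd SSTable via page cache (~0.15ms)
        return page_cached_sstable[key]
    return read_from_ssd(key)           # Tier 3: cold read from NVMe (~0.2ms)

# Toy tiers for illustration:
tiered_get("warm", {"hot": b"1"}, {"warm": b"2"}, lambda k: b"cold")
# → b'2'
```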
Read Speed

Memory-Mapped SSTables

SSTable reads use memmap2 for memory-mapped file I/O. The OS page cache handles hot data automatically — no custom eviction needed, and cold keys hit NVMe directly at under 0.2ms.
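A minimal illustration of the mmap technique (the file format below is made up): once a file is mapped, byte slices are served through the OS page cache with no per-read syscall on the hot path.

```python
import mmap
import tempfile

def map_readonly(path):
    """Map a file read-only; slices are then served by the page cache."""
    with open(path, "rb") as f:
        return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

# Write a tiny stand-in "SSTable", then read a slice through the mapping.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"key1=val1\nkey2=val2\n")

table = map_readonly(tmp.name)
table[0:9]
# → b'key1=val1'
```

The mapping stays valid after the file handle is closed; eviction of cold pages is left entirely to the kernel.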

Latency

Hot-Key Memtable

The most-accessed keys are pinned in a sharded BTreeMap in RAM. Reads for these keys bypass disk entirely, landing at 0.09ms p50 — comparable to pure in-memory Redis.

Architecture

Tiered LSM-Tree

Writes flush through a WAL into the memtable, then compact to SSTables on NVMe SSD. Reads check the fastest tier first, so each key is served from the quickest tier that currently holds it.

Durability

WAL Durability

Write-ahead logs are per-shard with a 256KB BufWriter. Sync mode is tunable: Always (fsync), EverySecond (background), or Never — choose your durability/throughput tradeoff.
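The sync-mode tradeoff can be sketched as a buffered per-shard writer; the class and method names below are hypothetical, not ForgeKV's API:

```python
import os

class WalWriter:
    """Sketch of a per-shard WAL with the three sync modes described above."""

    def __init__(self, path, sync_mode="EverySecond", buf_size=256 * 1024):
        self.f = open(path, "ab", buffering=buf_size)  # 256KB buffered writer
        self.sync_mode = sync_mode

    def append(self, record: bytes):
        self.f.write(record)
        if self.sync_mode == "Always":   # durable before append() returns
            self.f.flush()
            os.fsync(self.f.fileno())
        # "EverySecond": a background task flushes and fsyncs once per second
        # "Never": flushing is left entirely to the OS
```

"Always" bounds data loss to zero at the cost of an fsync per write; "EverySecond" caps loss at roughly one second of writes while keeping appends buffered.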

Efficiency

Zero-Copy Reads

GET responses are assembled directly from mmap regions without copying data through userspace buffers. This reduces CPU overhead per read by 40% vs a userspace-managed cache.
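The zero-copy idea in miniature, using Python's memoryview as a stand-in for an mmap region: slicing a view hands out a window over the same bytes rather than a copy.

```python
# A RESP2-framed payload; slicing a memoryview of it copies no bytes.
buf = bytearray(b"*1\r\n$5\r\nhello\r\n")
view = memoryview(buf)[8:13]   # a window over the same memory, not a copy
bytes(view)
# → b'hello'
```

Assembling a response from such windows avoids the per-read copy through a userspace buffer that a managed cache would require.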

Availability

Compaction Without Pauses

Background compaction runs in a dedicated Tokio task, merging SSTables while reads and writes continue uninterrupted. There are no stop-the-world compaction pauses, and no fork-driven latency spikes of the kind Redis RDB saves can cause.
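The pause-free property comes from compaction being a streaming merge of immutable sorted runs. A toy version, where the newest run wins on duplicate keys (run layout and names are illustrative, not ForgeKV's on-disk format):

```python
import heapq

def compact(runs):
    """Merge sorted (key, value) runs, ordered newest-first; newest wins."""
    tagged = (((key, age, val) for key, val in run)
              for age, run in enumerate(runs))
    merged = heapq.merge(*tagged)      # streaming k-way merge, O(k) memory
    out, last_key = [], object()
    for key, age, val in merged:
        if key != last_key:            # smallest age = newest run wins
            out.append((key, val))
            last_key = key
    return out

compact([[("a", "new"), ("c", "9")], [("a", "old"), ("b", "5")]])
# → [('a', 'new'), ('b', '5'), ('c', '9')]
```

Because input runs are immutable, readers keep using the old SSTables until the merged output is swapped in, so no read or write ever waits on compaction.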

Quick Start

Up and running in under a minute

Run via Docker (recommended) or build from source. Either way, your existing Redis clients connect without any changes — and immediately benefit from multi-core scale-out.

Docker (Recommended)

Pull and start

docker pull forgekv/forgekv:latest
docker run -d -p 6379:6379 -v forgekv-data:/data forgekv/forgekv

Starts ForgeKV on port 6379 with a persistent SSD-backed /data volume

Connect with any Redis client

redis-cli -p 6379 set hello "ForgeKV"
redis-cli -p 6379 get hello
# → "ForgeKV"

Run your app — no changes needed

// Node.js (ioredis)
const redis = new Redis({ port: 6379 })

# Python (redis-py)
r = redis.Redis(port=6379)

// Go (go-redis)
rdb := redis.NewClient(&redis.Options{Addr: ":6379"})

Any Redis client works. No driver changes, no new libraries.

Build from Source

Clone and build

git clone https://github.com/ForgeKV/forgekv.git
cd forgekv/rust
cargo build --release

Requires Rust 1.75+. Build takes ~2 minutes.

Configure and run

# forgekv.conf
bind 0.0.0.0
port 6379
dir /data
databases 16
wal_sync_mode EverySecond

./target/release/forgekv --config ../forgekv.conf

Configuration reference

Option             Default       Description
bind               0.0.0.0       Listen address
port               6379          Redis-compatible TCP port
dir                /data         NVMe SSD persistence directory
databases          16            Logical database count
wal_sync_mode      EverySecond   Durability vs throughput tradeoff
memtable_size_mb   512           Hot-key cache budget

Reproduce benchmarks

bash benchmark/redis-comparison/run.sh
# Results saved to benchmark/redis-comparison/results/

Ready to scale beyond one core?

ForgeKV is source-available and free for self-hosted deployments. Commercial support and enterprise licenses available.