
Partitioning & Sharding

LoomCache distributes large datasets automatically using a consistent hash ring, spreading keys evenly across the cluster so that no single node becomes a bottleneck.

Consistent Hash Ring

Keys are hashed to 16,384 slots mapped cleanly across virtual nodes, decoupling data from physical servers.

[Diagram: Hash(Key) follows an active slot route to one of the virtual nodes N-1, N-2, N-3, each backed by a physical node]

Rather than hashing keys directly to nodes, LoomCache hashes them to one of 16,384 virtual slots. Slots are evenly distributed among the physical nodes in the cluster.
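The key-to-slot step can be sketched as follows. LoomCache's actual hash function is not specified here, so this sketch uses CRC32 purely as a stand-in; the slot count of 16,384 and the even slot-to-node distribution are from the text, while `key_to_slot`, `build_slot_table`, and the node names are illustrative.

```python
import zlib

NUM_SLOTS = 16_384  # fixed slot count from the text


def key_to_slot(key: str) -> int:
    """Map a key to one of 16,384 slots.

    CRC32 is a stand-in; the production hash function is unspecified.
    """
    return zlib.crc32(key.encode("utf-8")) % NUM_SLOTS


def build_slot_table(nodes: list[str]) -> dict[int, str]:
    """Evenly distribute all slots among the physical nodes."""
    return {slot: nodes[slot % len(nodes)] for slot in range(NUM_SLOTS)}


nodes = ["node-1", "node-2", "node-3"]
table = build_slot_table(nodes)

# Routing a request: hash the key to a slot, then look up the slot's owner.
owner = table[key_to_slot("user:1001")]
```

Because the table is keyed by slot rather than by node count, reassigning a slot changes one table entry without disturbing any other key's route.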

[!NOTE] Because we map to slots instead of nodes, adding or removing a node from the cluster only requires migrating the data for specific slots, drastically minimizing network impact compared to a traditional mod-N hash ring.
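The difference the note describes can be measured directly. In this sketch (hash function and rebalancing rule are illustrative assumptions, not LoomCache internals), growing a cluster from 3 to 4 nodes under mod-N remaps most keys, while the slot-based layout moves only the keys in the slots handed to the new node.

```python
import zlib

NUM_SLOTS = 16_384


def slot_of(key: str) -> int:
    # CRC32 stands in for the unspecified production hash.
    return zlib.crc32(key.encode("utf-8")) % NUM_SLOTS


keys = [f"user:{i}" for i in range(10_000)]

# Traditional mod-N: keys hash straight to a node index, so changing N
# changes the route for most keys.
before = {k: zlib.crc32(k.encode()) % 3 for k in keys}
after = {k: zlib.crc32(k.encode()) % 4 for k in keys}
moved_modn = sum(before[k] != after[k] for k in keys)

# Slot-based: the new node takes over roughly a quarter of the slots
# (here, every 4th slot); all other slots keep their owner, so only
# keys in the reassigned slots migrate.
owner = {s: s % 3 for s in range(NUM_SLOTS)}           # 3-node layout
new_owner = {s: 3 if s % 4 == 0 else owner[s] for s in range(NUM_SLOTS)}
moved_slot_keys = sum(owner[slot_of(k)] != new_owner[slot_of(k)] for k in keys)
```

With mod-N, roughly three quarters of the keys change owner; with the slot table, only about a quarter do, and each migration is a whole-slot transfer rather than a per-key rehash.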

Partition Migration (Dynamic Scaling)

Non-blocking state transfer when topologies change

[Diagram: CLUSTER STABLE — Node 1 owns Slots 1–2, Node 2 owns Slots 3–4, Node 3 owns Slots 5–6]

When a cluster scales out, the PartitionMigrationManager negotiates a new slot layout via Raft and streams the affected slot data to its new owner in the background, pausing writes to a given slot only for the brief moment when ownership is transferred.
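The per-slot handover described above can be sketched as a two-phase transfer: bulk-stream a snapshot while writes continue, then pause writes just long enough to drain the remaining delta and flip ownership. Everything here (`Node`, `SlotMigration`, and all method names) is a hypothetical stand-in, not LoomCache's actual API, and the Raft layout negotiation is omitted.

```python
import threading


class Node:
    """Minimal in-memory stand-in for a LoomCache node (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.slots = {}   # slot -> {key: value} for slots this node owns
        self.dirty = {}   # slot -> writes that arrived during streaming

    def put(self, slot, key, value):
        self.slots.setdefault(slot, {})[key] = value

    def entries(self, slot):
        return list(self.slots.get(slot, {}).items())

    def dirty_entries(self, slot):
        return list(self.dirty.pop(slot, {}).items())

    def release(self, slot):
        self.slots.pop(slot, None)       # source drops ownership

    def accept(self, slot):
        self.slots.setdefault(slot, {})  # target now serves the slot


class SlotMigration:
    """Two-phase handover for one slot: background stream, then a brief
    write pause while the delta is drained and ownership flips."""

    def __init__(self, slot, source, target):
        self.slot, self.source, self.target = slot, source, target
        self.write_lock = threading.Lock()  # per-slot write pause

    def run(self):
        # Phase 1: stream a snapshot in the background; writes continue
        # and land in the source's dirty buffer for this slot.
        for key, value in self.source.entries(self.slot):
            self.target.put(self.slot, key, value)
        # Phase 2: the "split-second" pause — drain the delta and flip
        # ownership while writes to this one slot are held.
        with self.write_lock:
            for key, value in self.source.dirty_entries(self.slot):
                self.target.put(self.slot, key, value)
            self.source.release(self.slot)
            self.target.accept(self.slot)


source, target = Node("node-1"), Node("node-4")
source.put(3, "user:42", "alice")
source.dirty.setdefault(3, {})["user:99"] = "bob"  # write during streaming
SlotMigration(3, source, target).run()
```

Because the write pause covers only the final delta for a single slot, the rest of the keyspace stays fully writable throughout the migration.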