Partitioning & Sharding
LoomCache distributes large datasets across the cluster automatically using a consistent hash ring, spreading keys evenly so that no single node becomes a bottleneck.
Consistent Hash Ring
Keys are hashed to one of 16,384 slots, and slots are mapped to virtual nodes, decoupling data placement from physical servers.
Slot Management
Rather than hashing keys directly to nodes, LoomCache hashes them to one of 16,384 virtual slots. Slots are evenly distributed among the physical nodes in the cluster.
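As a concrete sketch, slot routing can be modeled in a few lines. The hash function (CRC32 here) and the contiguous-range slot assignment are illustrative assumptions, not LoomCache's actual internals:

```python
import zlib

NUM_SLOTS = 16_384

def key_to_slot(key: str) -> int:
    # Any stable, well-distributed hash works; CRC32 is a stand-in here,
    # not necessarily the hash LoomCache uses internally.
    return zlib.crc32(key.encode("utf-8")) % NUM_SLOTS

def slot_owner(slot: int, nodes: list[str]) -> str:
    # Assume slots are split into contiguous, near-equal ranges per node.
    range_size = -(-NUM_SLOTS // len(nodes))  # ceiling division
    return nodes[slot // range_size]

nodes = ["node-a", "node-b", "node-c"]
slot = key_to_slot("user:1001")
owner = slot_owner(slot, nodes)
```

The same key always hashes to the same slot, so routing a request is a pure function of the key and the current slot table.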
[!NOTE] Because LoomCache maps keys to slots instead of directly to nodes, adding or removing a node only requires migrating the data for the slots that change owners. This drastically reduces data movement compared to naive mod-N hashing, where changing the node count remaps almost every key.
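To see the difference, count how many slots change owners when a fourth node joins a three-node cluster. The rebalancing routine below is a hypothetical sketch (LoomCache negotiates its real layout cluster-side), but the arithmetic holds: only about 1/N of the slots migrate, versus roughly 75% of keys under mod-N hashing when N goes from 3 to 4:

```python
NUM_SLOTS = 16_384

def add_node(assignment: dict[int, str], new_node: str) -> dict[int, str]:
    # Hypothetical rebalance: hand the newcomer an even share of slots,
    # taking a few from each existing node. Only those slots migrate.
    nodes = sorted(set(assignment.values()))
    target = NUM_SLOTS // (len(nodes) + 1)
    per_node = target // len(nodes) + 1
    result = dict(assignment)
    taken = 0
    for node in nodes:
        owned = sorted(s for s, n in result.items() if n == node)
        for slot in owned[:per_node]:
            if taken == target:
                break
            result[slot] = new_node
            taken += 1
    return result

before = {s: f"node-{s % 3}" for s in range(NUM_SLOTS)}
after = add_node(before, "node-3")
moved = sum(before[s] != after[s] for s in range(NUM_SLOTS))
# moved == 4096: 25% of the slot space migrates, while every other
# key stays exactly where it was.
```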
Partition Migration (Dynamic Scaling)
Non-blocking state transfer when topologies change
When the cluster scales out, the PartitionMigrationManager negotiates a new slot layout via Raft and streams slot data to the new owners in the background, pausing writes to each slot only for the brief moment when ownership is transferred.
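The per-slot handoff can be sketched as a two-phase copy: a background bulk transfer while writes continue on the source, then a brief pause to drain the remaining delta and flip ownership. The `Node` class and method names below are illustrative stand-ins, not LoomCache's API, and the phases run sequentially here rather than concurrently:

```python
class Node:
    """Toy in-memory node; stands in for a LoomCache cluster member."""

    def __init__(self, name: str):
        self.name = name
        self.slots: dict[int, dict[str, str]] = {}
        self.delta: dict[int, dict[str, str]] = {}  # writes arriving mid-copy
        self.paused: set[int] = set()

    def write(self, slot: int, key: str, value: str) -> None:
        if slot in self.paused:
            raise RuntimeError(f"slot {slot} paused for ownership transfer")
        self.slots.setdefault(slot, {})[key] = value
        if slot in self.delta:          # a migration is in progress
            self.delta[slot][key] = value

    def dump_slot(self, slot: int) -> dict[str, str]:
        self.delta[slot] = {}           # start tracking the delta
        return dict(self.slots.get(slot, {}))


def migrate_slot(slot: int, source: Node, target: Node,
                 routing: dict[int, Node]) -> None:
    # Phase 1: bulk-copy in the background; writes keep landing on source.
    target.slots[slot] = source.dump_slot(slot)

    # Phase 2: pause writes to this slot only, drain the delta,
    # then flip ownership in the routing table.
    source.paused.add(slot)
    target.slots[slot].update(source.delta.pop(slot))
    routing[slot] = target
    source.slots.pop(slot, None)
    source.paused.discard(slot)         # source no longer owns the slot


a, b = Node("node-a"), Node("node-b")
routing = {7: a}
a.write(7, "user:1001", "alice")
migrate_slot(7, a, b, routing)
```

Only the one slot being handed off ever rejects writes, and only during phase 2; every other slot on both nodes stays fully available throughout.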