Raft Clustering

LoomCache is fundamentally a distributed system. A single node dying should never result in lost data. To govern this distributed state, LoomCache implements the Raft Consensus Algorithm.

Raft Consensus Flow (interactive diagram: a Client sends a request to the Leader Node, which records it in its WAL for the current term and replicates it to Follower 1 and Follower 2)

At any given moment, exactly one node in the LoomCache cluster is the Leader. The Leader is responsible for receiving write requests (e.g., PUT, DELETE). It appends each request to its local log and immediately broadcasts an AppendEntries RPC to the Followers.
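The flow above can be sketched in a few lines. This is a minimal, single-threaded illustration, not LoomCache's actual implementation: the `Leader`, `LogEntry`, and `Follower` names and the `append_entries` signature are assumptions made for the example, and real Raft also tracks `prevLogIndex`/`prevLogTerm` and retries failed followers.

```python
from dataclasses import dataclass, field

@dataclass
class LogEntry:
    term: int
    command: str  # e.g. "PUT key value" or "DELETE key"

class Follower:
    """Stub follower that acknowledges every AppendEntries call."""
    def append_entries(self, term: int, index: int, entry: LogEntry) -> bool:
        return True

@dataclass
class Leader:
    term: int
    followers: list = field(default_factory=list)
    log: list = field(default_factory=list)

    def handle_write(self, command: str) -> int:
        # 1. Append the request to the leader's local log first.
        entry = LogEntry(self.term, command)
        self.log.append(entry)
        index = len(self.log) - 1
        # 2. Broadcast AppendEntries to all followers and count acks;
        #    the leader counts as one acknowledgement itself.
        acks = 1 + sum(
            1 for f in self.followers
            if f.append_entries(self.term, index, entry)
        )
        # 3. The entry is committed only with a strict majority of acks.
        cluster_size = len(self.followers) + 1
        if acks >= cluster_size // 2 + 1:
            return index
        raise RuntimeError("no quorum; write not committed")
```

A write that reaches a majority returns its log index; one that does not is never reported as committed.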

A write is not considered “committed” until the Leader receives acknowledging responses from a strict majority of the cluster (the Quorum).

> [!CAUTION]
> If a 5-node cluster suffers a 2-node failure, the remaining 3 nodes maintain quorum and continue serving writes. If 3 nodes fail, the remaining 2 lose quorum and will block writes to prevent "Split-Brain" data corruption.

LoomCache provides strong consistency (linearizability). This means that once a write is acknowledged to a client, all subsequent reads across the cluster—regardless of leader elections or network partitions—are guaranteed to reflect that write. We actively enforce and test this using custom chaos testing harnesses in our CI pipelines.
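A toy version of what such a consistency check verifies, for a single client's ordered history, is that every read observes the most recently acknowledged write. This is a deliberately simplified sketch: real linearizability checking over concurrent clients (Jepsen-style) is far harder, and this function is an illustration, not LoomCache's harness.

```python
def check_read_after_write(history):
    """history: ordered (op, key, value) tuples from one client.
    Returns True if every read saw the latest acknowledged write."""
    latest = {}
    for op, key, value in history:
        if op == "write":
            latest[key] = value
        elif op == "read" and latest.get(key) != value:
            return False  # stale or lost read: consistency violated
    return True
```

Feeding in a history where a read returns a stale value makes the check fail, which is exactly the signal a chaos test looks for after killing leaders mid-write.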