
Kubernetes & Docker Deployment

LoomCache is fully cloud-native and designed to run inside Linux containers. Because nodes must persist state to a Write-Ahead Log (WAL) and maintain stable network identities for Raft consensus, it is highly recommended to deploy using Kubernetes StatefulSets.
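As a minimal sketch, a three-node StatefulSet might look like the following (the image name, port, and labels are illustrative assumptions, not official values; only the WAL mount path comes from this guide):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: loomcache
spec:
  serviceName: loomcache-headless   # headless Service gives each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: loomcache
  template:
    metadata:
      labels:
        app: loomcache
    spec:
      containers:
        - name: loomcache
          image: loomcache:latest    # hypothetical image name
          ports:
            - containerPort: 7600    # hypothetical Raft/client port
          volumeMounts:
            - name: wal
              mountPath: /var/lib/loomcache/wal
  volumeClaimTemplates:
    - metadata:
        name: wal
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

With `serviceName` set, the pods are created as `loomcache-0`, `loomcache-1`, and `loomcache-2`, and each keeps its identity and its PVC across restarts.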

Kubernetes StatefulSet

Pod Auto-Discovery via Headless Service

[Diagram: Pod-0, Pod-1, and Pod-2, each with a stable pod IP (10.0.1.5, 10.0.2.5, 10.0.3.5) and a dedicated PVC mounted for the WAL]

Unlike stateless microservices, LoomCache nodes form a cohesive, consistent hash ring. They require stable network identities so that clients can reliably route requests to the correct partition leaders without constant DNS cache misses.

When deploying to Kubernetes, nodes dynamically discover each other using a headless service. The DnsDiscovery or EnvironmentDiscovery strategies let new pods resolve their peers through cluster DNS or injected environment variables:

  1. Pod-0 starts up and checks loomcache-headless.default.svc.cluster.local.
  2. As Pod-1 and Pod-2 spin up, they see the seed nodes and initiate Raft Pre-Vote and Leader Election.
  3. Once a quorum is reached (2 of 3 nodes), the cluster turns green and begins accepting writes.
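The discovery step above relies on a headless Service, which could be defined roughly as follows (the Service name and port are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: loomcache-headless
spec:
  clusterIP: None          # headless: DNS returns individual pod A records
  selector:
    app: loomcache
  ports:
    - name: raft
      port: 7600           # hypothetical Raft/client port
```

Because `clusterIP` is `None`, a DNS query for `loomcache-headless.default.svc.cluster.local` returns one A record per ready pod, and each pod is individually addressable as, e.g., `loomcache-0.loomcache-headless.default.svc.cluster.local`.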

Every pod requires a Persistent Volume Claim (PVC) mounted at /var/lib/loomcache/wal. This directory holds the active .dat files for the Write-Ahead Log. Even if a pod crashes, Kubernetes remounts the exact same PVC to the replacement pod, allowing it to replay the WAL from the last index and rejoin the cluster.

Do not use emptyDir or ephemeral storage unless you are testing locally.
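Per-pod durable storage comes from `volumeClaimTemplates` in the StatefulSet spec; Kubernetes creates one PVC per pod and reattaches it whenever that pod is rescheduled. A sketch (the storage class and size are assumptions you should tune for your cluster):

```yaml
# Inside the StatefulSet spec — one PVC per pod, named wal-loomcache-0, wal-loomcache-1, ...
volumeClaimTemplates:
  - metadata:
      name: wal
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast-ssd    # hypothetical storage class; use SSD-backed storage for WAL fsync latency
      resources:
        requests:
          storage: 10Gi
```

Note that deleting the StatefulSet does not delete these PVCs by default, which is exactly the behavior you want for WAL recovery.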

By default, LoomCache exposes an HTTP endpoint (/health) for Kubernetes Readiness and Liveness probes. However, intra-cluster detection is much faster: LoomCache natively implements Akka-style Phi-Accrual failure detection via the DiscoveryHealthChecker. If a node suffers a sudden JVM crash or network partition, the remaining peers' Circuit Breakers trip to the OPEN state, immediately re-routing traffic instead of waiting on hanging TCP connections.
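Wiring the /health endpoint into the pod spec might look like this (the port and timing values are illustrative assumptions; only the /health path comes from this guide):

```yaml
# Inside the container spec of the StatefulSet template
readinessProbe:
  httpGet:
    path: /health
    port: 8080             # hypothetical HTTP admin port
  initialDelaySeconds: 10
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30  # allow time for WAL replay before liveness kicks in
  periodSeconds: 10
```

Keeping the liveness `initialDelaySeconds` generous matters here: a node replaying a large WAL should not be killed mid-recovery.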