
System Architecture & Design

LoomCache is built from the ground up to be a high-performance distributed cache engine with entirely custom architecture containing zero external cache dependencies. This page explains the inner mechanics.

LoomCache Architecture Stack (diagram): a zero-dependency, pure Java execution path through five layers: 1. Client Layer, 2. Network Layer, 3. Protocol & Auth, 4. Consensus Layer, 5. State & Storage (backed by WAL.DAT on disk).

1. Client Layer

LoomClient routes each request to the correct partition leader by hashing the key with MurmurHash3.

Latency Budget: < 0.5 ms
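The routing step can be sketched as hash-then-modulo. The class and method names below are illustrative, and the partition count and `floorMod` mapping are assumptions; only the MurmurHash3 x86 32-bit algorithm itself is standard.

```java
import java.nio.charset.StandardCharsets;

// Sketch of key-to-partition routing: hash the key, map it to a bucket,
// then send the request to that partition's leader.
public final class PartitionRouterSketch {

    // Canonical MurmurHash3 x86 32-bit.
    static int murmur3_32(byte[] data, int seed) {
        final int c1 = 0xcc9e2d51, c2 = 0x1b873593;
        int h = seed, i = 0;
        while (data.length - i >= 4) {
            int k = (data[i] & 0xff) | ((data[i + 1] & 0xff) << 8)
                  | ((data[i + 2] & 0xff) << 16) | ((data[i + 3] & 0xff) << 24);
            i += 4;
            k *= c1; k = Integer.rotateLeft(k, 15); k *= c2;
            h ^= k; h = Integer.rotateLeft(h, 13); h = h * 5 + 0xe6546b64;
        }
        int k = 0;
        switch (data.length - i) {          // remaining tail bytes (fallthrough intended)
            case 3: k ^= (data[i + 2] & 0xff) << 16;
            case 2: k ^= (data[i + 1] & 0xff) << 8;
            case 1: k ^= data[i] & 0xff;
                    k *= c1; k = Integer.rotateLeft(k, 15); k *= c2; h ^= k;
        }
        h ^= data.length;                   // finalization mix
        h ^= h >>> 16; h *= 0x85ebca6b;
        h ^= h >>> 13; h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    // Map a key to one of `partitions` buckets.
    static int partitionFor(String key, int partitions) {
        int hash = murmur3_32(key.getBytes(StandardCharsets.UTF_8), 0);
        return Math.floorMod(hash, partitions);
    }
}
```

Because the hash is deterministic, every client routes the same key to the same partition leader without any coordination.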

LoomCache implements a CP-by-default (Consistent and Partition-tolerant) architecture under the CAP theorem, similar to etcd and ZooKeeper.

Every single write operation goes through Raft consensus:

  1. Received from a client and routed to the Raft leader.
  2. Appended to the leader’s Raft log.
  3. Replicated to a majority of followers.
  4. Once acknowledged, the leader advances the commitIndex.
  5. Applied to the state machine and an acknowledgment is returned to the client.

This strict path ensures that acknowledged writes are never lost during network partitions and that all modifications are strictly linearizable.
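The majority-replication rule behind steps 3 and 4 can be sketched as follows. The method name and the array layout (leader's own index included in `matchIndex`) are assumptions; note that real Raft additionally requires the entry at the new commit index to be from the leader's current term.

```java
import java.util.Arrays;

// Sketch of the Raft commit rule: commitIndex advances to the highest
// log index that a majority of the cluster has durably stored.
final class CommitRuleSketch {
    static int newCommitIndex(int[] matchIndex, int currentCommit) {
        int[] sorted = matchIndex.clone();
        Arrays.sort(sorted);
        // With n servers sorted ascending, position (n-1)/2 holds the
        // highest index that at least a majority have replicated.
        int majorityIndex = sorted[(sorted.length - 1) / 2];
        return Math.max(currentCommit, majorityIndex); // never move backwards
    }
}
```

For a five-node cluster with match indexes [1, 2, 5, 7, 9], three nodes have stored index 5 or higher, so the leader can commit up to 5.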

LoomCache relies heavily on Java 25 Virtual Threads (Project Loom). The TcpServer spins up a lightweight virtual thread per incoming connection. When that thread performs a blocking I/O operation to read the custom binary protocol bytes, the JVM unmounts the virtual thread from its carrier OS thread, allowing a single node to handle hundreds of thousands of concurrent connections.
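A minimal sketch of that accept loop, assuming hypothetical class and handler names; only `Executors.newVirtualThreadPerTaskExecutor()` is the standard JDK API:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: one virtual thread per connection. Blocking reads inside
// handle() unmount the virtual thread from its carrier OS thread,
// so idle connections cost almost nothing.
final class TcpServerSketch {
    void serve(int port) throws IOException {
        try (ServerSocket listener = new ServerSocket(port);
             ExecutorService perConn = Executors.newVirtualThreadPerTaskExecutor()) {
            while (true) {
                Socket conn = listener.accept();    // blocks the accept loop only
                perConn.submit(() -> handle(conn)); // cheap: no OS thread per task
            }
        }
    }

    private void handle(Socket conn) {
        try (conn) {
            // read protocol bytes, dispatch, write response ... (elided)
        } catch (IOException ignored) { }
    }
}
```

Contrast with a platform-thread pool: here there is no bounded pool to exhaust, because each task's footprint is a small heap-allocated stack rather than an OS thread.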

Data is transmitted using a highly optimized, custom Kryo-based binary protocol supporting 85 distinct message types. As bytes are parsed into Message objects, the AuthenticationHandler verifies the requested operation against the user’s role-based access control lists (RBAC), instantly dropping unauthorized payloads.
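The RBAC gate can be sketched as a role-to-permissions lookup. All names here are hypothetical, not LoomCache's actual API; the real handler maps each of the 85 message types to a required permission.

```java
import java.util.Map;
import java.util.Set;

// Hypothetical RBAC gate: roles grant permission sets, and each parsed
// Message carries the permission its operation requires.
final class AuthenticationHandlerSketch {
    private final Map<String, Set<String>> permissionsByRole;

    AuthenticationHandlerSketch(Map<String, Set<String>> permissionsByRole) {
        this.permissionsByRole = permissionsByRole;
    }

    /** Drop the message unless the role grants the required permission. */
    boolean authorize(String role, String requiredPermission) {
        return permissionsByRole.getOrDefault(role, Set.of())
                                .contains(requiredPermission);
    }
}
```

Checking authorization immediately after parsing means unauthorized payloads are rejected before they touch the consensus or storage layers.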

Our custom RaftNode implementation hardens standard Raft with features like:

  • Pre-vote: Prevents partitioned nodes from randomly incrementing election terms and disrupting stable clusters.
  • Leader Leases: Lets the Raft leader serve strongly consistent local reads rapidly without needing a quorum validation on every GET.
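The leader-lease check reduces to a clock comparison. The names and the explicit clock-drift bound below are assumptions; shrinking the lease by a drift margin is the standard safety trick, so a new leader cannot be elected while the old one still believes its lease is valid.

```java
// Sketch: a leader may serve a read locally, without a quorum round-trip,
// only while its lease (conservatively shrunk by an assumed clock-drift
// bound) has not expired.
final class LeaderLeaseSketch {
    static boolean canServeLocalRead(long leaseStartNanos,
                                     long leaseDurationNanos,
                                     long clockDriftBoundNanos,
                                     long nowNanos) {
        return nowNanos - leaseStartNanos
                < leaseDurationNanos - clockDriftBoundNanos;
    }
}
```

If the check fails, the leader falls back to a quorum read (or renews the lease via a heartbeat round) rather than risking a stale response.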

Memory alone isn’t durable. Before any cluster ACK is returned to the client, the WalWriter appends the state change to disk (fsync enabled by default), ensuring durability even through a complete cluster power loss. Snapshots are taken every 10,000 operations to compact the recovery log.
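The append-then-fsync step can be sketched with NIO. The length-prefixed record framing and class name are assumptions; `FileChannel.force(true)` is the JDK's fsync equivalent.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of a length-prefixed, fsync-on-append write-ahead log.
final class WalWriterSketch implements AutoCloseable {
    private final FileChannel channel;

    WalWriterSketch(Path file) throws IOException {
        channel = FileChannel.open(file,
                StandardOpenOption.CREATE,
                StandardOpenOption.WRITE,
                StandardOpenOption.APPEND);
    }

    /** Append one record and block until it reaches stable storage. */
    void append(byte[] record) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(Integer.BYTES + record.length);
        buf.putInt(record.length).put(record).flip();
        while (buf.hasRemaining()) {
            channel.write(buf);
        }
        channel.force(true); // fsync data + metadata before the cluster ACK
    }

    @Override
    public void close() throws IOException {
        channel.close();
    }
}
```

Forcing every append is the durability-maximizing default; deployments that prefer throughput over strict durability typically batch several records per `force` call.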