Technology Stack

LoomCache is built to be blisteringly fast and massively concurrent. To achieve this, it relies on modern Java capabilities rather than pulling in external bloat.

The LoomCache core deliberately avoids monolithic frameworks like Spring and heavyweight third-party libraries. The core datastore and clustering engine are written entirely in vanilla Java.

Tip: By eliminating dependencies, we radically reduce our security attack surface and ensure a negligible cold-start time.

The backbone of LoomCache’s concurrency is Java 21/25 Virtual Threads.

Java Virtual Threads (Project Loom)

Millions of lightweight virtual threads multiplexed over a few OS Carrier Threads.

  [VT] [VT] [VT] ...        [VT] [VT] [VT] ...
        |                         |
  Carrier (OS Thread 1)     Carrier (OS Thread 2)

Traditional OS threads are heavy: each one reserves a sizeable stack (typically around 1 MB on the JVM) and incurs kernel context-switching overhead. Thread-per-request scaling has historically hit a wall at a few thousand concurrent connections.
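To see the difference in practice, here is a minimal standalone sketch (not LoomCache code; the class and method names are illustrative) that runs 10,000 blocking tasks on virtual threads. The same count of platform threads would typically exhaust memory or take far longer to spawn:

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {
    // Runs n blocking tasks, one virtual thread each, and returns
    // how many completed.
    static int runTasks(int n) {
        AtomicInteger completed = new AtomicInteger();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    try {
                        // A blocking call: the virtual thread parks here,
                        // freeing its carrier OS thread for other work.
                        Thread.sleep(Duration.ofMillis(10));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10_000)); // 10,000 tasks, each blocking 10 ms
    }
}
```

Because virtual threads cost only a small heap-allocated stack, spawning tens of thousands of them is routine rather than a tuning exercise.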

LoomCache uses the Executors.newVirtualThreadPerTaskExecutor() model:

  • Every incoming network connection gets its own extremely lightweight virtual thread.
  • Blocking I/O operations (like writing to a TCP socket or reading from the WAL) do not block OS threads; the virtual thread simply parks, and its carrier picks up other work.
  • This allows LoomCache to cleanly handle millions of concurrent connections on standard hardware without complex reactive programming models.
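The accept loop behind this model can be sketched as follows. This is an illustrative echo server, not LoomCache's actual server code; the class name, the handle() method, and port 9090 are assumptions for the example:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9090);
             ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            while (true) {
                Socket socket = server.accept();
                // One cheap virtual thread per connection. Blocking reads and
                // writes inside handle() park the virtual thread, not the
                // carrier OS thread.
                executor.submit(() -> handle(socket));
            }
        }
    }

    static void handle(Socket socket) {
        try (socket; var in = socket.getInputStream(); var out = socket.getOutputStream()) {
            in.transferTo(out); // placeholder logic: echo bytes back to the client
        } catch (IOException e) {
            // Connection closed or reset; the failure stays isolated
            // to this one virtual thread.
        }
    }
}
```

The code reads as straightforward blocking, thread-per-connection Java, yet scales like an event loop, which is exactly the point of the virtual-thread model.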