Technology Stack
LoomCache is built to be blisteringly fast and massively concurrent. To achieve this, it relies on modern Java capabilities rather than pulling in external bloat.
Zero Dependency Philosophy
LoomCache's core explicitly avoids monolithic frameworks such as Spring and heavyweight utility libraries. The core datastore and clustering engine are written entirely in vanilla Java.
[!TIP] By eliminating dependencies, we radically reduce our security attack surface and ensure a negligible cold-start time.
Virtual Threads (Project Loom)
The backbone of LoomCache's concurrency is Java 21/25 Virtual Threads.
[Diagram: Java Virtual Threads (Project Loom) — millions of lightweight virtual threads multiplexed over a few OS carrier threads]
Traditional OS threads are heavy: each one reserves its own stack (typically around a megabyte) and incurs kernel context-switching overhead. Thread-per-request scaling has historically hit a wall at a few thousand concurrent connections.
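To make the contrast concrete, here is a minimal sketch (the class name and task counts are illustrative, not LoomCache code) that starts 100,000 blocking tasks on virtual threads — a count that would exhaust memory with platform threads on default JVM settings:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: 100,000 concurrent blocking tasks on virtual threads.
public class ManyThreads {
    static int run() throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 100_000; i++) {
            threads.add(Thread.ofVirtual().start(() -> {
                try {
                    // Blocking sleep parks the virtual thread; no OS thread is held.
                    Thread.sleep(Duration.ofMillis(10));
                } catch (InterruptedException ignored) { }
                completed.incrementAndGet();
            }));
        }
        for (Thread t : threads) t.join();
        return completed.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // 100000
    }
}
```

Each virtual thread costs only a few hundred bytes of heap until it runs, which is why this loop completes comfortably where a platform-thread version would fail.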
LoomCache uses the Executors.newVirtualThreadPerTaskExecutor() model:
- Every incoming network connection gets its own extremely lightweight virtual thread.
- Blocking I/O operations (such as writing to a TCP socket or reading from the WAL) do not block OS threads; the virtual thread simply parks, freeing its carrier.
- This allows LoomCache to cleanly handle millions of concurrent connections on standard hardware without complex reactive programming models.
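The pattern above can be sketched as follows. This is not LoomCache's actual networking code — the class name, echo handler, and self-test round trip are hypothetical — but it shows the thread-per-connection model on a virtual-thread executor:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: an echo server that hands every accepted
// connection its own virtual thread.
public class VirtualThreadEchoServer {

    // Start the server on an ephemeral port, then round-trip one
    // message as a client and return the echoed line.
    static String roundTrip() throws Exception {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
             ServerSocket server = new ServerSocket(0)) {

            // The accept loop itself runs on a virtual thread.
            executor.submit(() -> {
                while (!server.isClosed()) {
                    Socket conn = server.accept();
                    executor.submit(() -> echo(conn)); // one virtual thread per connection
                }
                return null;
            });

            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                client.getOutputStream().write("PING\n".getBytes(StandardCharsets.UTF_8));
                byte[] buf = new byte[5];
                int n = client.getInputStream().readNBytes(buf, 0, 5);
                return new String(buf, 0, n, StandardCharsets.UTF_8).trim();
            }
        }
    }

    static void echo(Socket conn) {
        try (conn) {
            // The blocking read parks the virtual thread, not the OS carrier thread.
            conn.getInputStream().transferTo(conn.getOutputStream());
        } catch (IOException ignored) { }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```

Note the code reads like ordinary blocking I/O — no callbacks, futures, or reactive operators — which is exactly the simplicity the thread-per-connection model buys.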