A lightweight, Go-based distributed in-memory cache with consistent-hashing sharding, singleflight deduplication, size-bounded LRU, protobuf communication, and read-through replication
Client → Group.Add("🍺", "Hymeis")
  └─> Cache.Add("🍺", "Hymeis")
        ├─ insert into in-memory LRU
        └─ async fan-out to R-1 successors:
             └─ for each replica in GetReplicas("🍺", R)[1:]:
                  HTTP POST /dcache/<group>/🍺 (SetRequest)
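The fan-out above depends on the consistent-hash ring returning an ordered list of distinct nodes for a key: the primary first, then its clockwise successors. A minimal sketch of that lookup, assuming hypothetical `Ring`/`GetReplicas` names (the project's actual API may differ):

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
	"strconv"
)

// Ring is a minimal consistent-hash ring (hypothetical type; the real
// project's implementation may differ). Each physical node is placed at
// several virtual points so keys spread evenly across nodes.
type Ring struct {
	virtualPoints int
	points        []int          // sorted hash points on the ring
	owner         map[int]string // hash point -> node name
}

func NewRing(virtualPoints int) *Ring {
	return &Ring{virtualPoints: virtualPoints, owner: map[int]string{}}
}

func (r *Ring) Add(nodes ...string) {
	for _, n := range nodes {
		for i := 0; i < r.virtualPoints; i++ {
			h := int(crc32.ChecksumIEEE([]byte(strconv.Itoa(i) + n)))
			r.points = append(r.points, h)
			r.owner[h] = n
		}
	}
	sort.Ints(r.points)
}

// GetReplicas walks clockwise from the key's hash and returns up to R
// distinct nodes: the primary first, then its successors.
func (r *Ring) GetReplicas(key string, R int) []string {
	if len(r.points) == 0 {
		return nil
	}
	h := int(crc32.ChecksumIEEE([]byte(key)))
	start := sort.SearchInts(r.points, h) % len(r.points)
	seen := map[string]bool{}
	var out []string
	for i := 0; len(out) < R && i < len(r.points); i++ {
		n := r.owner[r.points[(start+i)%len(r.points)]]
		if !seen[n] {
			seen[n] = true
			out = append(out, n)
		}
	}
	return out
}

func main() {
	ring := NewRing(50)
	ring.Add("cache-a:8001", "cache-b:8002", "cache-c:8003")
	replicas := ring.GetReplicas("🍺", 2)
	fmt.Println("primary:", replicas[0])
	fmt.Println("fan-out targets:", replicas[1:]) // the [1:] slice from the diagram
}
```

Skipping `replicas[0]` in the fan-out matches the diagram: the primary already holds the value locally, so only the R-1 successors receive the async `SetRequest`.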
Client → Group.Get("🍺")
  ├─ LRU hit?  ──▶ return "Hymeis"
  └─ cache miss:
       └─ singleflight.Do("🍺", fn):
            ├─ pickPeer("🍺") via consistent-hash
            ├─ peer?  ──▶ peerLoad (HTTP+Protobuf) ──▶ return "Hymeis"
            └─ local? ──▶ localLoad:
                 ├─ GetterFunc → origin data
                 ├─ Replication() (see Add flow above)
                 └─ return "Hymeis"
Sustained a constant 15 000 QPS workload with wrk2, observing:
- Throughput: 14 982 req/sec (≈15 000 target)
- Mean latency: 0.868 ms
- P50 / P75 / P90: 0.86 ms / 1.15 ms / 1.44 ms
- P99: 1.94 ms (well under the 10 ms SLO)
- P99.9 / P99.99: 2.40 ms / 2.86 ms
- Max observed: 4.00 ms
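For reference, a wrk2 run at a fixed arrival rate looks like the following; `-R` pins the constant throughput and `--latency` prints the percentile table. The host, port, thread/connection counts, and endpoint path here are assumptions, not the exact benchmark configuration:

```shell
# wrk2: 4 threads, 64 connections, 60 s at a constant 15 000 req/s
# (port and the "scores/somekey" path are placeholders)
wrk -t4 -c64 -d60s -R15000 --latency \
  http://127.0.0.1:8080/dcache/scores/somekey
```

Note that wrk2's coordinated-omission-corrected percentiles are only meaningful at a fixed `-R` rate, which is why the workload is described as "constant 15 000 QPS" rather than best-effort.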
- Application side: a possible next step is building a LeetCode-style Top-K ranking service on top of the cache.
Try `bash run.sh` and check the output it prints.