Show HN: Agent-cache – Multi-tier LLM/tool/session caching for Valkey and Redis
kkaliades about 7 hours ago 3 comments
Multi-tier exact-match cache for AI agents backed by Valkey or Redis. LLM responses, tool results, and session state behind one connection. Framework adapters for LangChain, LangGraph, and Vercel AI SDK. OpenTelemetry and Prometheus built in. No modules required - works on vanilla Valkey 7+ and Redis 6.2+.
Shipped v0.1.0 yesterday, v0.2.0 today with cluster mode. Streaming support coming next.
Existing options lock you into one tier (LangChain caches LLM responses only, LangGraph persists state only) or into one framework. This solves both.
npm: https://www.npmjs.com/package/@betterdb/agent-cache
Docs: https://docs.betterdb.com/packages/agent-cache.html
Examples: https://valkeyforai.com/cookbooks/betterdb/
GitHub: https://github.com/BetterDB-inc/monitor/tree/master/packages...
Happy to answer questions.

Discussion (3 comments)
Three tiers: if your agent calls gpt-4o with the same prompt twice, the second call returns from Valkey in under 1ms instead of hitting the API. Same for tool calls - if your agent calls get_weather("Sofia") twice with the same arguments, the cached result comes back instantly. And session state (what step the agent is on, user intent, LangGraph checkpoints) persists across requests with per-field TTL.
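The exact-match idea behind the tool tier can be sketched in a few lines of TypeScript. Everything here is illustrative, not agent-cache's actual API: a `Map` stands in for the Valkey connection, and `cacheKey`/`cached` are hypothetical names. The point is only the keying scheme, where identical inputs produce identical keys:

```typescript
// Stand-in for a Valkey/Redis connection: an in-process Map.
// A real cache would store entries remotely with a TTL; the keying
// idea is the same - serialize the exact inputs, look up before calling.
const store = new Map<string, string>();

// Build a deterministic key from the tool name and its arguments.
function cacheKey(tool: string, args: Record<string, unknown>): string {
  // Sort keys so {a: 1, b: 2} and {b: 2, a: 1} produce the same key.
  const sorted = Object.keys(args).sort().map((k) => [k, args[k]]);
  return `tool:${tool}:${JSON.stringify(sorted)}`;
}

// Wrap any async tool call with exact-match caching.
async function cached(
  tool: string,
  args: Record<string, unknown>,
  fn: () => Promise<string>
): Promise<string> {
  const key = cacheKey(tool, args);
  const hit = store.get(key);
  if (hit !== undefined) return hit; // second identical call: no API hit
  const result = await fn();
  store.set(key, result);
  return result;
}
```

So `cached("get_weather", { city: "Sofia" }, callWeatherApi)` would invoke the underlying API once; a repeat call with the same arguments returns the stored result.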
The main difference from existing options: LangChain's cache only handles LLM responses; LangGraph's checkpoint-redis only handles state (and requires Redis 8 plus modules); and neither ships OpenTelemetry or Prometheus instrumentation at the cache layer. agent-cache puts all three tiers behind one Valkey connection with observability built in.
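Observability at the cache layer mostly comes down to counting hits and misses per tier, which is what a Prometheus exporter would scrape. A minimal sketch, with illustrative tier and function names rather than the package's actual metrics:

```typescript
// Per-tier hit/miss counters - the raw numbers a Prometheus
// exporter would expose as labeled counters. Names are illustrative.
type Tier = "llm" | "tool" | "session";

const metrics: Record<Tier, { hits: number; misses: number }> = {
  llm: { hits: 0, misses: 0 },
  tool: { hits: 0, misses: 0 },
  session: { hits: 0, misses: 0 },
};

// Call this on every cache lookup with the outcome.
function record(tier: Tier, hit: boolean): void {
  if (hit) metrics[tier].hits += 1;
  else metrics[tier].misses += 1;
}

// Hit ratio is the headline number for sizing TTLs and judging savings.
function hitRatio(tier: Tier): number {
  const { hits, misses } = metrics[tier];
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}
```

A 0.9 hit ratio on the LLM tier, for example, means nine out of ten identical prompts never reached the provider's API.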