Large language models (LLMs) now sit on the critical path of search, support, and agentic workflows, making semantic caching essential for reducing inference cost and latency. Production deployments typically use a tiered static-dynamic design: a static cache of curated, offline-vetted responses mined from logs, backed by a dynamic cache populated online. In practice, both tiers are commonly governed by a single embedding-similarity threshold, which induces a hard tradeoff: conservative thresholds miss safe reuse opportunities, while aggressive thresholds risk serving semantically incorrect responses. We introduce Krites, an asynchronous, LLM-judged caching policy that expands static coverage without altering serving decisions. On the critical path, Krites behaves exactly like a standard static-threshold policy. When a prompt's nearest static neighbor falls just below the static threshold, Krites asynchronously invokes an LLM judge to verify whether the static response is appropriate for the new prompt. Approved matches are promoted into the dynamic cache, allowing future repeats and paraphrases to reuse curated static answers and expanding static reach over time. In trace-driven simulations on conversational and search workloads, Krites increases the fraction of requests served with curated static answers (direct static hits plus verified promotions) by up to 3.9 times relative to tuned baselines for both conversational traffic and search-style queries, with unchanged critical-path latency.
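To make the control flow concrete, the following is a minimal Python sketch of the lookup policy described above. The class and method names, the threshold values, the near-miss band, and the judge interface are all illustrative assumptions, not the paper's implementation.

```python
import asyncio
from dataclasses import dataclass

# Assumed tuned static similarity threshold and the near-miss band
# just below it that is routed to the asynchronous LLM judge.
STATIC_THRESHOLD = 0.92
JUDGE_BAND = 0.05


@dataclass
class Entry:
    prompt: str
    response: str


class Krites:
    """Hypothetical sketch of the Krites tiered-cache lookup policy."""

    def __init__(self, static_cache, dynamic_cache, model, judge, embed):
        self.static = static_cache    # curated, offline-vetted entries
        self.dynamic = dynamic_cache  # online-populated entries
        self.model = model            # backing LLM
        self.judge = judge            # asynchronous LLM judge
        self.embed = embed            # prompt embedding function

    async def serve(self, prompt: str) -> str:
        vec = self.embed(prompt)

        # Critical path: identical to a standard static-threshold policy.
        entry, sim = self.static.nearest(vec)
        if entry is not None and sim >= STATIC_THRESHOLD:
            return entry.response  # direct static hit

        cached = self.dynamic.lookup(vec)
        if cached is not None:
            return cached  # dynamic hit

        response = await self.model.generate(prompt)  # miss: call the model
        self.dynamic.insert(vec, response)

        # Off the critical path: a near-miss static neighbor is verified
        # asynchronously; approved matches are promoted into the dynamic
        # cache so future repeats and paraphrases reuse the curated answer.
        if entry is not None and STATIC_THRESHOLD - JUDGE_BAND <= sim:
            asyncio.create_task(self._verify_and_promote(prompt, vec, entry))

        return response

    async def _verify_and_promote(self, prompt: str, vec, entry: Entry):
        if await self.judge.approves(prompt, entry.response):
            self.dynamic.insert(vec, entry.response)  # verified promotion
```

Note that the judge call sits entirely off the serving path: the missed request is answered by the model as usual, and only future requests benefit from the promotion.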

