Zenodo Preprint


CrystalCache: Cross-Domain Transfer from Cognitive Memory Crystallization to KV Cache Eviction in Long-Context LLMs

Po-Ting Lin 1

  1. Independent Researcher
DOI
10.5281/zenodo.19815088
License
CC BY 4.0
Categories
Machine Learning · Long-Context LLMs · KV Cache

We present CrystalCache, a KV-cache eviction algorithm for long-context large language models, derived by porting a cognitive theory of memory crystallization into the cache-management domain. CrystalCache addresses memory bottlenecks by organizing past context into semantic units ("trunks") and scoring each unit along two independent dimensions: associative crystallization (D), which captures how strongly a trunk has been re-engaged across queries, and encoding impact (M_i), which captures how distinctively that trunk has shaped past attention. Eviction combines these signals with an explicit rarity term and replaces binary keep/drop decisions with progressive retention, so high-value content degrades gracefully rather than being abruptly discarded. Across Llama-3.1-8B, Mistral-7B, and Qwen3-8B, CrystalCache yields consistent improvements on long-context retrieval benchmarks, with the most pronounced gains on Needle-in-a-Haystack tasks, where losing rare-but-critical evidence is precisely the failure mode that standard eviction policies handle worst.
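To make the scoring scheme concrete, here is a minimal Python sketch of the idea described above: trunks carry a re-engagement count (feeding D), an attention mass (feeding M_i), and a rarity signal, and retention is a score-proportional fraction of each trunk rather than a binary keep/drop. All names, the weighting, and the exact combination rule are illustrative assumptions, not the paper's formulas.

```python
import math
from dataclasses import dataclass

@dataclass
class Trunk:
    tokens: list                  # KV entries grouped into one semantic unit
    reengagements: int = 0        # queries that re-attended this trunk (feeds D)
    attention_mass: float = 0.0   # cumulative attention received (feeds M_i)
    similar_trunks: int = 1       # trunks with overlapping content (feeds rarity)

def trunk_score(t: Trunk, alpha=1.0, beta=1.0, gamma=1.0) -> float:
    """Hypothetical combination of the three signals named in the abstract."""
    D = math.log1p(t.reengagements)   # associative crystallization
    M = t.attention_mass              # encoding impact
    rarity = 1.0 / t.similar_trunks   # explicit rarity bonus for unique content
    return alpha * D + beta * M + gamma * rarity

def progressive_retain(t: Trunk, score: float, max_score: float) -> list:
    """Progressive retention: keep a score-proportional fraction of the
    trunk's tokens instead of making a binary keep/drop decision."""
    frac = score / max_score if max_score > 0 else 0.0
    n = max(1, round(len(t.tokens) * frac))
    return t.tokens[:n]
```

Under this sketch, a rarely re-engaged trunk still retains a small stub of its tokens, which is the "graceful degradation" the abstract contrasts with abrupt discarding.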

  • KV cache eviction
  • long-context LLM
  • memory crystallization
  • cross-domain transfer
  • associative scoring
  • progressive retention
  • Needle-in-a-Haystack
Cite as (BibTeX)
@misc{lin2026crystalcache,
  title = {CrystalCache: Cross-Domain Transfer from Cognitive Memory Crystallization to KV Cache Eviction in Long-Context LLMs},
  author = {Po-Ting Lin},
  year = {2026},
  howpublished = {Zenodo},
  doi = {10.5281/zenodo.19815088}
}
