MIT researchers developed Attention Matching, a KV cache compaction technique that compresses an LLM's key-value cache memory by 50x in seconds — ...
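The blurb gives no details of how Attention Matching works, so as background only, here is a minimal sketch of a *generic* attention-score-based KV cache pruning scheme — not the MIT method. The function name, tensor shapes, scoring heuristic, and the `keep_ratio` parameter are all assumptions chosen to illustrate what "compacting a KV cache by ~50x" could mean mechanically:

```python
# Generic sketch of KV cache compaction via attention-score pruning.
# NOTE: this is NOT the MIT "Attention Matching" algorithm (details not
# given in the source); all names and the heuristic are assumptions.

import torch

def compact_kv_cache(keys, values, attn_scores, keep_ratio=0.02):
    """Keep only the most-attended cache positions per head.

    keys, values: [num_heads, seq_len, head_dim] cached tensors
    attn_scores:  [num_heads, seq_len] accumulated attention mass each
                  cached position has received from subsequent queries
    keep_ratio:   fraction of positions retained (0.02 ~= 50x smaller)
    """
    num_heads, seq_len, head_dim = keys.shape
    k = max(1, int(seq_len * keep_ratio))

    # Top-k highest-scoring positions for each head, restored to
    # temporal order so positional semantics are preserved.
    top_idx = attn_scores.topk(k, dim=-1).indices      # [num_heads, k]
    top_idx, _ = top_idx.sort(dim=-1)

    gather_idx = top_idx.unsqueeze(-1).expand(-1, -1, head_dim)
    return keys.gather(1, gather_idx), values.gather(1, gather_idx)

if __name__ == "__main__":
    H, T, D = 8, 4096, 64
    keys, values = torch.randn(H, T, D), torch.randn(H, T, D)
    scores = torch.rand(H, T)  # stand-in for accumulated attention mass
    k2, v2 = compact_kv_cache(keys, values, scores)
    print(k2.shape)  # torch.Size([8, 81, 64]) -> roughly 50x fewer entries
```

With `keep_ratio=0.02`, 4096 cached positions shrink to 81, a ~50x reduction in cache memory; the open question any real method must answer is how to score positions so that generation quality survives the pruning.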
Aerospace engineering and materials science researchers at Texas A&M University and the DEVCOM Army Research Laboratory have ...