Large Language Models (LLMs) face a significant bottleneck during autoregressive inference due to the massive memory footprint of the Key-Value (KV) cache. Existing compression techniques, such as token eviction, quantization, and low-rank projection, often risk information loss, are constrained to fixed compression ratios, or introduce considerable computational overhead from explicit decompression steps. In this work, we introduce SWAN, a novel, fine-tuning-free framework that eliminates this overhead. Our method uses an offline-computed orthogonal matrix to rotate and prune the KV cache, which is then used directly in the attention computation without any reconstruction. Our extensive experiments demonstrate that SWAN, augmented with a small dense buffer, offers a robust trade-off, maintaining performance close to the uncompressed baseline even at an aggressive 50-60% reduction in per-token KV-cache memory. A key advantage is its runtime-tunable compression level, which allows operators to adjust the memory footprint dynamically, a flexibility absent from methods that require fixed offline configurations. This combination of a decompression-free design, strong performance under compression, and adaptability makes SWAN a practical and efficient solution for serving LLMs with long contexts.
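To make the rotate-and-prune mechanism concrete, the following is a minimal sketch, not the authors' implementation: the function names (`rotate_and_prune`, `swan_attention`), the kept rank `r`, and the random orthogonal matrices `R_k`, `R_v` are illustrative assumptions; in practice the rotations would be derived offline from calibration statistics rather than random projections.

```python
# Hedged sketch of rotate-and-prune KV-cache compression (PyTorch).
import torch

torch.manual_seed(0)
d, r, T = 64, 32, 128          # head dim, kept rank (50% compression), cached tokens

# Offline step (assumed): build orthogonal rotations, e.g. from an SVD of
# calibration K/V statistics. Random orthogonal matrices stand in here.
R_k = torch.linalg.qr(torch.randn(d, d)).Q
R_v = torch.linalg.qr(torch.randn(d, d)).Q

def rotate_and_prune(x, R, r):
    """Rotate states into the offline basis and keep only the first r coordinates."""
    return (x @ R)[..., :r]

# The cache stores compressed entries only (r dims instead of d per token).
K, V = torch.randn(T, d), torch.randn(T, d)
K_c, V_c = rotate_and_prune(K, R_k, r), rotate_and_prune(V, R_v, r)

def swan_attention(q, K_c, V_c):
    """Attend directly over the compressed cache; no per-token decompression.
    Because R_k is orthogonal, (q R_k)(k R_k)^T == q k^T, and truncating to r
    coordinates only drops the directions the offline basis ranks as low-energy."""
    q_c = rotate_and_prune(q, R_k, r)            # query rotated on the fly
    scores = (q_c @ K_c.T) / (d ** 0.5)
    attn = torch.softmax(scores, dim=-1)
    out_c = attn @ V_c                           # output in the pruned value basis
    # R_v[:, :r] could be folded into the output projection W_o once, offline,
    # so this final matmul is a weight merge rather than runtime decompression.
    return out_c @ R_v[:, :r].T

out = swan_attention(torch.randn(1, d), K_c, V_c)
print(out.shape)  # torch.Size([1, 64])
```

Lowering `r` at runtime only changes how many rotated coordinates are stored and read, which is the sense in which the compression level is tunable without any offline reconfiguration.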