Long-context dialogue systems suffer from State Inertia, where static constraints prevent models from resolving conflicts between evolving user intents and established historical context. To address this, we propose DZ-TDPO, a non-destructive alignment framework that combines conflict-aware dynamic KL constraints with a learnable temporal attention bias. Experiments on the Multi-Session Chat (MSC) dataset demonstrate that DZ-TDPO achieves state-of-the-art win rates (86.2% on Phi-3.5) while maintaining robust zero-shot generalization. Crucially, our scaling analysis reveals a "Capacity-Stability Trade-off": while smaller models incur an "alignment tax" (a perplexity surge) to overcome historical inertia, the larger Qwen2.5-7B model achieves near-perfect alignment (99.4% win rate) with negligible perplexity overhead. This confirms that temporal attention inertia (TAI) can be alleviated via precise attention regulation rather than destructive weight updates, preserving general capabilities (MMLU) across model scales. Code and data are available at: https://github.com/lyj20071013/DZ-TDPO
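The two mechanisms named in the abstract can be illustrated with a minimal sketch. This is a hypothetical rendering, not the paper's implementation: the function names (`dpo_loss`, `dynamic_beta`, `temporal_bias`) and the specific forms of the conflict-aware schedule and the bias are assumptions made for illustration. The first function is the standard DPO objective for one preference pair; the second relaxes the KL weight when a conflict between the new intent and history is detected; the third adds a learnable, distance-dependent penalty to attention scores so recent turns can dominate stale context.

```python
import math

def dpo_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l, beta):
    """Standard DPO loss for one (chosen, rejected) pair:
    -log sigmoid(beta * (log-ratio margin vs. the reference model))."""
    margin = beta * ((pi_logp_w - ref_logp_w) - (pi_logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def dynamic_beta(base_beta, conflict_score, gamma=1.0):
    """Hypothetical conflict-aware KL constraint: shrink beta (loosen the
    tether to the reference policy) as the detected conflict between the
    current intent and the historical context grows. conflict_score >= 0."""
    return base_beta / (1.0 + gamma * conflict_score)

def temporal_bias(query_pos, key_pos, slope):
    """Hypothetical learnable temporal attention bias (ALiBi-style):
    a per-head slope penalizes attention to distant history, letting
    training down-weight stale context instead of rewriting weights."""
    return -slope * (query_pos - key_pos)
```

Under this sketch, a turn with no detected conflict keeps the full KL constraint (`dynamic_beta(b, 0) == b`), while a strongly conflicting turn is allowed to drift further from the reference policy.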