

DZ-TDPO: Non-Destructive Temporal Alignment for Mutable State Tracking in Long-Context Dialogue

December 3, 2025
Author: Yijun Liao
cs.AI

Abstract

Long-context dialogue systems suffer from State Inertia, where static constraints prevent models from resolving conflicts between evolving user intents and established historical context. To address this, we propose DZ-TDPO, a non-destructive alignment framework that synergizes conflict-aware dynamic KL constraints with a calibrated temporal attention bias. Experiments on the Multi-Session Chat (MSC) dataset demonstrate that DZ-TDPO achieves a state-of-the-art win rate of 55.4% on Phi-3.5 while maintaining robust zero-shot generalization. Our scaling analysis reveals a "Capacity-Stability Trade-off": while smaller models incur an "alignment tax" (a perplexity surge) to overcome historical inertia, the larger Qwen2.5-7B model achieves a 50.8% win rate with negligible perplexity overhead. This confirms that State Inertia can be alleviated via precise attention regulation rather than destructive weight updates, preserving general capabilities (MMLU) across model scales. Code and data are available: https://github.com/lyj20071013/DZ-TDPO
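To make the two mechanisms named in the abstract concrete, here is a minimal PyTorch sketch of (a) a DPO-style preference loss whose KL strength is modulated per example by a conflict score, and (b) an additive temporal attention bias favoring recent turns. The linear beta schedule, the per-token turn indexing, and all parameter names are illustrative assumptions; the paper's exact formulation is in the linked repository.

```python
# Sketch of conflict-aware dynamic KL + temporal attention bias
# (assumed forms; not the paper's exact implementation).
import torch
import torch.nn.functional as F

def dz_tdpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps,
                 conflict_scores, beta_min=0.05, beta_max=0.5):
    """DPO-style preference loss with a per-example KL strength (beta).

    conflict_scores: (batch,) tensor in [0, 1]; 1 means the new user
    intent strongly conflicts with established history. High conflict
    relaxes the KL pull toward the reference model so the policy can
    override stale context (hypothetical linear schedule).
    """
    beta = beta_max - (beta_max - beta_min) * conflict_scores
    logits = (policy_chosen_logps - ref_chosen_logps) \
           - (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(beta * logits).mean()

def temporal_attention_bias(turn_ids, decay=0.1):
    """Additive pre-softmax attention bias favoring recent turns.

    turn_ids: (seq_len,) tensor mapping each token to its dialogue turn.
    Returns a (seq_len, seq_len) bias: attending from a token in turn i
    back to a token in an older turn j is penalized by decay * (i - j).
    """
    dist = (turn_ids.unsqueeze(1) - turn_ids.unsqueeze(0)).clamp(min=0)
    return -decay * dist.float()
```

Under these assumptions, the bias is simply added to the attention logits before the softmax, so older sessions are downweighted without modifying model weights, which is consistent with the abstract's claim that the method regulates attention rather than performing destructive weight updates.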