

LSRIF: Logic-Structured Reinforcement Learning for Instruction Following

January 10, 2026
Authors: Qingyu Ren, Qianyu He, Jingwen Chang, Jie Zeng, Jiaqing Liang, Yanghua Xiao, Han Xia, Zeye Sun, Fei Yu
cs.AI

Abstract

Instruction-following is critical for large language models, but real-world instructions often contain logical structures such as sequential dependencies and conditional branching. Existing methods typically construct datasets with parallel constraints and optimize average rewards, ignoring logical dependencies and yielding noisy signals. We propose LSRIF, a logic-structured training framework that explicitly models instruction logic. We first construct LSRInstruct, a dataset with constraint structures of parallel, sequential, and conditional types, and then design a structure-aware reward method comprising average aggregation for parallel structures, failure-penalty propagation for sequential structures, and selective rewards for conditional branches. Experiments show that LSRIF brings significant improvements in instruction-following (in-domain and out-of-domain) and general reasoning. Analysis reveals that learning with explicit logic structures induces parameter updates in attention layers and sharpens token-level attention to constraints and logical operators.
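To make the three aggregation rules concrete, here is a minimal sketch of structure-aware reward aggregation over a constraint tree. This is an illustrative reconstruction from the abstract only: the node representation, function name `aggregate_reward`, the zeroing-out form of failure-penalty propagation, and the `taken_branch` field are all assumptions, not the paper's actual implementation.

```python
def aggregate_reward(node):
    """Recursively score a constraint-structure tree.

    `node` is a dict: a leaf carries a per-constraint `score` in [0, 1];
    an internal node has a `type` ('parallel', 'sequential', 'conditional')
    and a list of `children`.
    """
    if "score" in node:                      # leaf: a single constraint
        return node["score"]

    scores = [aggregate_reward(c) for c in node["children"]]

    if node["type"] == "parallel":
        # Parallel constraints: plain averaging, as in flat-constraint training.
        return sum(scores) / len(scores)

    if node["type"] == "sequential":
        # Failure-penalty propagation (one plausible form): once a step
        # fails, all downstream steps contribute zero reward.
        total, failed = 0.0, False
        for s in scores:
            if failed:
                continue                     # downstream steps get no credit
            total += s
            if s < 1.0:                      # treat an imperfect step as a failure
                failed = True
        return total / len(scores)

    if node["type"] == "conditional":
        # Selective reward: only the branch whose condition held is scored.
        return scores[node["taken_branch"]]

    raise ValueError(f"unknown node type: {node['type']}")
```

For example, a sequential chain scored `[1.0, 0.0, 1.0]` yields `1/3` rather than the naive average `2/3`, since the third step earns no credit after the second fails; a conditional node returns only the score of the branch that actually applied.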