LSRIF: Logic-Structured Reinforcement Learning for Instruction Following
January 10, 2026
Authors: Qingyu Ren, Qianyu He, Jingwen Chang, Jie Zeng, Jiaqing Liang, Yanghua Xiao, Han Xia, Zeye Sun, Fei Yu
cs.AI
Abstract
Instruction following is critical for large language models, but real-world instructions often contain logical structures such as sequential dependencies and conditional branching. Existing methods typically construct datasets with parallel constraints and optimize average rewards, ignoring logical dependencies and yielding noisy training signals. We propose LSRIF, a logic-structured training framework that explicitly models instruction logic. We first construct LSRInstruct, a dataset covering parallel, sequential, and conditional constraint structures, and then design a structure-aware reward method: average aggregation for parallel structures, failure-penalty propagation for sequential structures, and selective rewards for conditional branches. Experiments show that LSRIF yields significant improvements in instruction following (both in-domain and out-of-domain) and in general reasoning. Analysis reveals that learning with explicit logic structures induces parameter updates in attention layers and sharpens token-level attention to constraints and logical operators.
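The abstract names three aggregation rules but gives no formulas, so the sketch below is an illustrative reading of them, not the paper's actual implementation: parallel constraints average their per-constraint scores, sequential constraints stop crediting constraints after the first failure (one plausible form of failure-penalty propagation), and conditional branches score only the branch whose condition is active.

```python
# Hedged sketch of the structure-aware rewards described in the abstract.
# The exact rules are not specified there; these are assumed forms.

def parallel_reward(scores):
    # Parallel constraints: average aggregation over per-constraint scores.
    return sum(scores) / len(scores)

def sequential_reward(scores):
    # Sequential constraints: failure-penalty propagation (assumed rule).
    # Once a constraint fails (score 0), later constraints earn no credit,
    # so an early failure drags down the whole chain's reward.
    total = 0.0
    for s in scores:
        if s == 0:
            break
        total += s
    return total / len(scores)

def conditional_reward(condition_holds, then_score, else_score):
    # Conditional branches: selective reward. Only the branch whose
    # condition holds contributes; the untaken branch is ignored.
    return then_score if condition_holds else else_score
```

For example, `sequential_reward([1, 1, 0, 1])` returns 0.5 because the fourth constraint's success is not credited after the third one fails, whereas a plain average would return 0.75 and mask the broken dependency.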