Complex Logical Instruction Generation
August 12, 2025
Authors: Mian Zhang, Shujian Liu, Sixun Dong, Ming Yin, Yebowen Hu, Xun Wang, Steven Ma, Song Wang, Sathish Reddy Indurthi, Haoyun Deng, Zhiyu Zoey Chen, Kaiqiang Song
cs.AI
Abstract
Instruction following has catalyzed the recent era of Large Language Models
(LLMs) and is the foundational skill underpinning more advanced capabilities
such as reasoning and agentic behaviors. As tasks grow more challenging, the
logical structures embedded in natural language instructions become increasingly
intricate. However, how well LLMs perform on such logic-rich instructions
remains under-explored. We propose LogicIFGen and LogicIFEval. LogicIFGen is a
scalable, automated framework for generating verifiable instructions from code
functions, which naturally express rich logic such as conditionals,
nesting, recursion, and function calls. We further curate a collection of
complex code functions and use LogicIFGen to construct LogicIFEval, a benchmark
comprising 426 verifiable logic-rich instructions. Our experiments demonstrate
that current state-of-the-art LLMs still struggle to correctly follow the
instructions in LogicIFEval. Most LLMs correctly follow fewer than 60% of the
instructions, revealing significant deficiencies in their instruction-following
ability. Code and Benchmark: https://github.com/mianzhang/LogicIF
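The abstract describes LogicIFGen only at a high level. As a rough illustration
of the core idea, deriving a verifiable instruction from a code function whose
correct output can be checked by execution, the sketch below pairs a small seed
function (conditionals, a helper call, recursion) with a checker that compares
a model's claimed answer against the function's actual result. All names here
(`is_even`, `seed_function`, `verify_response`) are hypothetical and are not
taken from the LogicIF codebase.

```python
# Minimal sketch of the "verifiable instruction from a code function" idea
# described in the abstract. Hypothetical example, not from the LogicIF repo.

def is_even(n: int) -> bool:
    """Helper referenced by the seed function (a function call in the logic)."""
    return n % 2 == 0

def seed_function(n: int) -> int:
    """Logic-rich seed: conditionals, a function call, and recursion.
    Counts the steps the Collatz process takes to reach 1."""
    if n <= 1:                              # base case (conditional)
        return 0
    if is_even(n):                          # function call inside a conditional
        return 1 + seed_function(n // 2)    # recursive call
    return 1 + seed_function(3 * n + 1)     # recursive call on the odd branch

# A generated natural-language instruction would verbalize the logic above, e.g.:
INSTRUCTION = (
    "Start from the number 27. While the number is greater than 1, halve it "
    "if it is even; otherwise replace it with three times itself plus one. "
    "Report how many replacement steps you perform."
)

def verify_response(model_answer: str, test_input: int) -> bool:
    """Check the model's claimed step count against real execution."""
    expected = seed_function(test_input)
    try:
        return int(model_answer.strip()) == expected
    except ValueError:
        return False    # unparsable answers count as failures

if __name__ == "__main__":
    print(seed_function(27))            # ground truth: 27 takes 111 steps
    print(verify_response("111", 27))   # -> True
```

Because the ground truth comes from executing the function rather than from
human annotation, a model's response can be scored automatically; an
execution-based check of this kind is presumably what makes the benchmark's
426 instructions "verifiable".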