
Reasoning over Boundaries: Enhancing Specification Alignment via Test-time Deliberation

September 18, 2025
作者: Haoran Zhang, Yafu Li, Xuyang Hu, Dongrui Liu, Zhilin Wang, Bo Li, Yu Cheng
cs.AI

Abstract

Large language models (LLMs) are increasingly applied in diverse real-world scenarios, each governed by bespoke behavioral and safety specifications (specs) custom-tailored by users or organizations. These specs, categorized into safety-specs and behavioral-specs, vary across scenarios and evolve with changing preferences and requirements. We formalize this challenge as specification alignment, focusing on LLMs' ability to follow dynamic, scenario-specific specs from both behavioral and safety perspectives. To address this challenge, we propose Align3, a lightweight method that employs Test-Time Deliberation (TTD) with hierarchical reflection and revision to reason over the specification boundaries. We further present SpecBench, a unified benchmark for measuring specification alignment, covering 5 scenarios, 103 specs, and 1,500 prompts. Experiments on 15 reasoning and 18 instruct models with several TTD methods, including Self-Refine, TPO, and MoreThink, yield three key findings: (i) test-time deliberation enhances specification alignment; (ii) Align3 advances the safety-helpfulness trade-off frontier with minimal overhead; (iii) SpecBench effectively reveals alignment gaps. These results highlight the potential of test-time deliberation as an effective strategy for reasoning over real-world specification boundaries.