ORION: Teaching Language Models to Reason Efficiently in the Language of Thought

November 28, 2025
Authors: Kumar Tanmay, Kriti Aggarwal, Paul Pu Liang, Subhabrata Mukherjee
cs.AI

Abstract

Large Reasoning Models (LRMs) achieve strong performance in mathematics, code generation, and task planning, but their reliance on long chains of verbose "thinking" tokens leads to high latency, redundancy, and incoherent reasoning paths. Inspired by the Language of Thought Hypothesis, which posits that human reasoning operates over a symbolic, compositional mental language called Mentalese, we introduce a framework that trains models to reason in a similarly compact style. Mentalese encodes abstract reasoning as ultra-compressed, structured tokens, enabling models to solve complex problems with far fewer steps. To improve both efficiency and accuracy, we propose Shorter Length Preference Optimization (SLPO), a reinforcement learning method that rewards concise solutions that stay correct, while still allowing longer reasoning when needed. Applied to Mentalese-aligned models, SLPO yields significantly higher compression rates by enabling concise reasoning that preserves the benefits of detailed thinking without the computational overhead. Across benchmarks including AIME 2024 and 2025, MinervaMath, OlympiadBench, Math500, and AMC, our ORION models produce reasoning traces with 4-16x fewer tokens, achieve up to 5x lower inference latency, and reduce training costs by 7-9x relative to the DeepSeek R1 Distilled model, while maintaining 90-98% of its accuracy. ORION also surpasses Claude and ChatGPT-4o by up to 5% in accuracy while maintaining 2x compression. These results show that Mentalese-style compressed reasoning offers a step toward human-like cognitive efficiency, enabling real-time, cost-effective reasoning without sacrificing accuracy.
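The abstract does not spell out SLPO's reward formulation, but its stated behavior (reward correct answers, add a bonus for shorter traces, never punish a correct answer for being long) can be sketched as a scalar reward for a policy-gradient loop. The snippet below is a minimal illustration under those assumptions only; the function name `slpo_reward`, the parameters `base` and `alpha`, and the linear length-savings form are hypothetical, not the authors' definition.

```python
# Hypothetical sketch of a shorter-length preference reward, inferred
# from the abstract's description of SLPO (not the paper's formulation).
# Correct answers earn a base reward plus a bonus that grows as the
# trace shortens relative to a reference length; incorrect answers get
# nothing, so the model keeps an incentive to reason longer whenever
# brevity would cost correctness.

def slpo_reward(is_correct: bool, trace_len: int, ref_len: int,
                base: float = 1.0, alpha: float = 0.5) -> float:
    """Scalar reward for one sampled reasoning trace.

    is_correct: whether the final answer matches the reference answer.
    trace_len:  token count of the sampled reasoning trace.
    ref_len:    reference length, e.g. the mean trace length for this
                problem across the sampled group (assumed choice).
    base:       reward for correctness regardless of length.
    alpha:      weight of the length-preference bonus (assumed).
    """
    if not is_correct:
        return 0.0  # no reward, and no incentive to shorten wrong answers
    # Relative savings, clipped at 0 so correct-but-long traces are
    # never pushed below the base correctness reward.
    savings = max(0.0, 1.0 - trace_len / max(ref_len, 1))
    return base + alpha * savings
```

Taking `ref_len` per problem rather than as a global constant would let harder problems retain room for longer reasoning, matching the abstract's claim that SLPO "still allows longer reasoning when needed"; how the paper actually sets this is not stated here.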