

Latent Thoughts Tuning: Bridging Context and Reasoning with Fused Information in Latent Tokens

February 10, 2026
Authors: Weihao Liu, Dehai Min, Lu Cheng
cs.AI

Abstract

While explicit Chain-of-Thought (CoT) equips Large Language Models (LLMs) with strong reasoning capabilities, it requires models to verbalize every intermediate step in text tokens, constraining the model's thoughts to the discrete vocabulary space. Recently, reasoning in continuous latent space has emerged as a promising alternative, enabling more robust inference and flexible computation beyond discrete token constraints. However, current latent paradigms often suffer from feature collapse and instability, stemming from distribution mismatches when hidden states are recurrently reused as input embeddings, or from alignment issues when relying on assistant models. To address this, we propose Latent Thoughts Tuning (LT-Tuning), a framework that redefines how latent thoughts are constructed and deployed. Instead of relying solely on raw hidden states, our method introduces a Context-Prediction-Fusion mechanism that jointly leverages contextual hidden states and predictive semantic guidance from the vocabulary embedding space. Combined with a progressive three-stage curriculum learning pipeline, LT-Tuning also enables dynamic switching between latent and explicit thinking modes. Experiments demonstrate that our method outperforms existing latent reasoning baselines, effectively mitigating feature collapse and achieving robust reasoning accuracy.
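
As a rough illustration of the Context-Prediction-Fusion idea described in the abstract (not the authors' implementation, which is detailed in the paper itself), one could imagine forming a latent thought by combining the current hidden state with a probability-weighted mixture of vocabulary embeddings, so the latent token stays anchored to the embedding space instead of drifting as a raw recycled hidden state. All function and parameter names below are hypothetical.

```python
import torch
import torch.nn.functional as F

def fuse_latent_thought(hidden_state, lm_head_weight, embedding_weight,
                        fusion_proj, temperature=1.0):
    """Hypothetical sketch of a context-prediction fusion step.

    hidden_state:     (batch, d_model) last-layer hidden state at the current position
    lm_head_weight:   (vocab, d_model) output projection used to score the vocabulary
    embedding_weight: (vocab, d_model) input token-embedding table
    fusion_proj:      torch.nn.Linear(2 * d_model, d_model) combining the two signals
    """
    # Predictive semantic guidance: the expected next-token embedding under the
    # model's current distribution, which lies in the vocabulary embedding space.
    logits = hidden_state @ lm_head_weight.T / temperature        # (batch, vocab)
    probs = F.softmax(logits, dim=-1)
    predicted_embedding = probs @ embedding_weight                # (batch, d_model)

    # Contextual signal + predictive signal -> fused latent thought, which would
    # be fed back as the next input "token" instead of the raw hidden state.
    fused = fusion_proj(torch.cat([hidden_state, predicted_embedding], dim=-1))
    return fused
```

In this sketch, the softmax-weighted embedding keeps the latent input close to the distribution the model was trained on, which is one plausible way to address the distribution-mismatch and feature-collapse issues the abstract attributes to reusing raw hidden states.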