Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach
February 7, 2025
Authors: Jonas Geiping, Sean McLeish, Neel Jain, John Kirchenbauer, Siddharth Singh, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Tom Goldstein
cs.AI
Abstract
We study a novel language model architecture that is capable of scaling
test-time computation by implicitly reasoning in latent space. Our model works
by iterating a recurrent block, thereby unrolling to arbitrary depth at
test-time. This stands in contrast to mainstream reasoning models that scale up
compute by producing more tokens. Unlike approaches based on chain-of-thought,
our approach does not require any specialized training data, can work with
small context windows, and can capture types of reasoning that are not easily
represented in words. We scale a proof-of-concept model to 3.5 billion
parameters and 800 billion tokens. We show that the resulting model can improve
its performance on reasoning benchmarks, sometimes dramatically, up to a
computation load equivalent to 50 billion parameters.
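
The core mechanism described in the abstract, iterating a shared recurrent block over a latent state with the number of iterations chosen at inference time, can be sketched in a few lines. The following is a minimal PyTorch illustration under simplifying assumptions, not the authors' implementation: the class name `RecurrentDepthLM`, the single `TransformerEncoderLayer` core, the random latent initialization, and the way the input embedding is re-injected at each step are all illustrative choices.

```python
# Minimal sketch of a recurrent-depth language model (illustrative, not the
# paper's architecture): a prelude embeds tokens into latent space, one shared
# block is iterated num_steps times, and a head projects back to logits.
import torch
import torch.nn as nn

class RecurrentDepthLM(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)   # prelude: tokens -> latent
        self.core = nn.TransformerEncoderLayer(          # shared recurrent block
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.inject = nn.Linear(2 * d_model, d_model)    # mix state with input embedding
        self.head = nn.Linear(d_model, vocab_size)       # coda: latent -> logits

    def forward(self, tokens, num_steps: int):
        e = self.embed(tokens)                           # (batch, seq, d_model)
        s = torch.randn_like(e)                          # assumed random initial latent state
        for _ in range(num_steps):                       # unroll to arbitrary depth at test time
            s = self.core(self.inject(torch.cat([s, e], dim=-1)))
        return self.head(s)

model = RecurrentDepthLM()
tokens = torch.randint(0, 32000, (1, 16))
cheap = model(tokens, num_steps=4)    # less test-time compute
deep = model(tokens, num_steps=32)    # same weights, more compute
```

Because the same weights are reused at every iteration, the parameter count stays fixed while the effective depth, and hence the test-time compute, grows with `num_steps`; this is how a 3.5B-parameter model can spend a compute budget comparable to a much deeper feed-forward network.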