

Deciphering Trajectory-Aided LLM Reasoning: An Optimization Perspective

May 26, 2025
Authors: Junnan Liu, Hongwei Liu, Linchen Xiao, Shudong Liu, Taolin Zhang, Zihan Ma, Songyang Zhang, Kai Chen
cs.AI

Abstract

We propose a novel framework for comprehending the reasoning capabilities of large language models (LLMs) through the perspective of meta-learning. By conceptualizing reasoning trajectories as pseudo-gradient descent updates to the LLM's parameters, we identify parallels between LLM reasoning and various meta-learning paradigms. We formalize the training process for reasoning tasks as a meta-learning setup, with each question treated as an individual task, and reasoning trajectories serving as the inner loop optimization for adapting model parameters. Once trained on a diverse set of questions, the LLM develops fundamental reasoning capabilities that can generalize to previously unseen questions. Extensive empirical evaluations substantiate the strong connection between LLM reasoning and meta-learning, exploring several issues of significant interest from a meta-learning standpoint. Our work not only enhances the understanding of LLM reasoning but also provides practical insights for improving these models through established meta-learning techniques.
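To make the analogy concrete, the following is a minimal sketch of how the abstract's description could be written as a MAML-style bilevel objective. The step sizes \alpha and \beta, the pseudo-gradient map g(\cdot), and the trajectory notation \tau_i are illustrative assumptions, not the paper's own notation.

```latex
% Illustrative MAML-style formalization of the analogy described in the abstract.
% NOT the paper's notation: \alpha, \beta, g(\cdot), and \tau_i are assumptions.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Inner loop (per question $q_i$): the reasoning trajectory
$\tau_i = (\tau_i^{(1)}, \dots, \tau_i^{(T)})$ acts as $T$ pseudo-gradient
updates to the base parameters $\theta$,
\begin{equation}
  \theta_i^{(t)} = \theta_i^{(t-1)} - \alpha \, g\!\left(\tau_i^{(t)}\right),
  \qquad \theta_i^{(0)} = \theta .
\end{equation}
Outer loop (meta-update): training over a diverse set of questions adjusts
$\theta$ so that the adapted parameters $\theta_i^{(T)}$ answer each question
well,
\begin{equation}
  \theta \leftarrow \theta - \beta \, \nabla_{\theta}
  \sum_{i} \mathcal{L}\!\left(\theta_i^{(T)}; q_i\right).
\end{equation}
\end{document}
```

In this reading, each question plays the role of a meta-learning task, the reasoning trajectory is the inner-loop adaptation, and ordinary training across many questions is the outer loop that yields reasoning ability generalizing to unseen questions.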

