AgentGL: Towards Agentic Graph Learning with LLMs via Reinforcement Learning
April 7, 2026
Authors: Yuanfu Sun, Kang Li, Dongzhe Fan, Jiajin Liu, Qiaoyu Tan
cs.AI
Abstract
Large Language Models (LLMs) increasingly rely on agentic capabilities (iterative retrieval, tool use, and decision-making) to overcome the limits of static, parametric knowledge. Yet existing agentic frameworks treat external information as unstructured text and fail to leverage the topological dependencies inherent in real-world data. To bridge this gap, we introduce Agentic Graph Learning (AGL), a paradigm that reframes graph learning as an interleaved process of topology-aware navigation and LLM-based inference. Specifically, we propose AgentGL, the first reinforcement learning (RL)-driven framework for AGL. AgentGL equips an LLM agent with graph-native tools for multi-scale exploration, regulates tool usage via search-constrained thinking to balance accuracy and efficiency, and employs a graph-conditioned curriculum RL strategy to stabilize long-horizon policy learning without step-wise supervision. Across diverse Text-Attributed Graph (TAG) benchmarks and multiple LLM backbones, AgentGL substantially outperforms strong GraphLLMs and GraphRAG baselines, achieving absolute improvements of up to 17.5% in node classification and 28.4% in link prediction. These results demonstrate that AGL is a promising frontier for enabling LLMs to autonomously navigate and reason over complex relational environments. The code is publicly available at https://github.com/sunyuanfu/AgentGL.
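To make the abstract's workflow concrete, here is a minimal, hypothetical sketch of the agentic loop it describes: an agent interleaves graph-native tool calls (neighbor lookup, node-text retrieval) with inference over a text-attributed graph, under a tool-call budget standing in for "search-constrained thinking". The toy graph, the tool names (`get_neighbors`, `get_node_text`), and the keyword-based stand-in for LLM inference are illustrative assumptions, not AgentGL's actual interface.

```python
# Hypothetical toy text-attributed graph: node -> (text attribute, neighbor list).
GRAPH = {
    0: ("Paper on graph neural networks", [1, 2]),
    1: ("Survey of reinforcement learning", [0]),
    2: ("Node classification benchmark study", [0]),
}

def get_neighbors(node):
    """Graph-native tool: topology lookup (assumed interface)."""
    return GRAPH[node][1]

def get_node_text(node):
    """Graph-native tool: node-text retrieval (assumed interface)."""
    return GRAPH[node][0]

def classify_node(target, max_tool_calls=4):
    """Agent loop: explore the neighborhood under a tool-call budget,
    then infer a label from the gathered textual evidence."""
    evidence = [get_node_text(target)]
    frontier = [target]
    calls = 0
    while frontier and calls < max_tool_calls:
        node = frontier.pop(0)
        for nbr in get_neighbors(node):
            calls += 1
            evidence.append(get_node_text(nbr))
            frontier.append(nbr)
            if calls >= max_tool_calls:
                break
    # Stand-in for the LLM inference step: a trivial keyword rule.
    label = "graph" if any("graph" in t.lower() for t in evidence) else "other"
    return label, calls

label, calls = classify_node(0)
print(label, calls)
```

In AgentGL the exploration policy and the final inference are produced by the LLM and trained with curriculum RL rather than hand-coded as above; this sketch only illustrates the interleaving of navigation and inference under a search budget.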