Don't Retrieve, Navigate: Distilling Enterprise Knowledge into Navigable Agent Skills for QA and RAG
April 16, 2026
Authors: Yiqun Sun, Pengfei Wei, Lawrence B. Hsieh
cs.AI
Abstract
Retrieval-Augmented Generation (RAG) grounds LLM responses in external evidence but treats the model as a passive consumer of search results: it never sees how the corpus is organized or what it has not yet retrieved, limiting its ability to backtrack or combine scattered evidence. We present Corpus2Skill, which distills a document corpus into a hierarchical skill directory offline and lets an LLM agent navigate it at serve time. The compilation pipeline iteratively clusters documents, generates LLM-written summaries at each level, and materializes the result as a tree of navigable skill files. At serve time, the agent receives a bird's-eye view of the corpus, drills into topic branches via progressively finer summaries, and retrieves full documents by ID. Because the hierarchy is explicitly visible, the agent can reason about where to look, backtrack from unproductive paths, and combine evidence across branches. On WixQA, an enterprise customer-support benchmark for RAG, Corpus2Skill outperforms dense retrieval, RAPTOR, and agentic RAG baselines across all quality metrics.
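The compile-then-navigate loop the abstract describes can be sketched as follows. This is an illustrative toy, not the paper's implementation: the corpus, the `compile_skill_tree` and `navigate` helpers, and the two-level tree are all hypothetical, and the keyword-based clustering and template summaries stand in for the embedding-based clustering and LLM-written summaries the paper uses.

```python
# Toy corpus of customer-support documents, keyed by ID.
corpus = {
    "doc1": "How to connect a custom domain to your site.",
    "doc2": "Troubleshooting DNS propagation delays for domains.",
    "doc3": "Accepting credit card payments in your online store.",
    "doc4": "Setting up shipping rules for store orders.",
}

def compile_skill_tree(corpus):
    """Offline 'compilation': group documents into topic clusters
    (stubbed here with a keyword test) and attach a summary at each
    level, yielding a navigable tree instead of a flat index."""
    clusters = {"domains": [], "store": []}
    for doc_id, text in corpus.items():
        topic = "domains" if "domain" in text.lower() else "store"
        clusters[topic].append(doc_id)
    tree = {
        "summary": "Corpus root; topics: " + ", ".join(clusters),
        "children": {},
    }
    for topic, doc_ids in clusters.items():
        tree["children"][topic] = {
            "summary": f"{len(doc_ids)} documents about {topic}",
            "doc_ids": doc_ids,
        }
    return tree

def navigate(tree, topic, corpus):
    """Serve time: the agent reads the root summary, drills into one
    branch, and fetches full documents by ID -- no similarity search."""
    branch = tree["children"][topic]
    return [corpus[doc_id] for doc_id in branch["doc_ids"]]

tree = compile_skill_tree(corpus)
print(tree["summary"])
print(navigate(tree, "domains", corpus))
```

Because the tree and its per-branch summaries are explicit, an agent that drills into the wrong branch can simply return to the root summary and try another child, which is the backtracking behavior the abstract contrasts with single-shot dense retrieval.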