K-Search: LLM Kernel Generation via Co-Evolving Intrinsic World Model
February 22, 2026
Authors: Shiyi Cao, Ziming Mao, Joseph E. Gonzalez, Ion Stoica
cs.AI
Abstract
Optimizing GPU kernels is critical for efficient modern machine learning systems yet remains challenging due to the complex interplay of design factors and rapid hardware evolution. Existing automated approaches typically treat Large Language Models (LLMs) merely as stochastic code generators within heuristic-guided evolutionary loops. These methods often struggle with complex kernels requiring coordinated, multi-step structural transformations, as they lack explicit planning capabilities and frequently discard promising strategies due to inefficient or incorrect intermediate implementations. To address this, we propose Search via Co-Evolving World Model and build K-Search on this method. By replacing static search heuristics with a co-evolving world model, our framework leverages LLMs' prior domain knowledge to guide the search, actively exploring the optimization space. This approach explicitly decouples high-level algorithmic planning from low-level program instantiation, enabling the system to navigate non-monotonic optimization paths while remaining resilient to temporary implementation defects. We evaluate K-Search on diverse, complex kernels from FlashInfer, including GQA, MLA, and MoE kernels. Our results show that K-Search significantly outperforms state-of-the-art evolutionary search methods, achieving an average 2.10x improvement and up to a 14.3x gain on complex MoE kernels. On the GPUMode TriMul task, K-Search achieves state-of-the-art performance on H100, reaching 1030 μs and surpassing both prior evolutionary-search and human-designed solutions.
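The core idea the abstract describes — a world model that proposes high-level optimization plans, has them instantiated into code, and updates its beliefs from measured outcomes rather than discarding a plan after one failed implementation — can be illustrated with a toy sketch. Everything below (the `WorldModel` class, the `instantiate` stub, the plan names, the update rule) is hypothetical and not from the paper; it only mirrors the plan/instantiation decoupling and failure-resilience the abstract claims, with random stubs standing in for the LLM and the benchmark.

```python
import random
from dataclasses import dataclass


@dataclass(frozen=True)
class Plan:
    """A high-level optimization plan: a sequence of abstract steps
    plus the world model's initial belief about its speedup."""
    steps: tuple
    est_speedup: float


class WorldModel:
    """Toy stand-in for the co-evolving world model: it tracks one
    belief (estimated speedup) per plan and refines it from measured
    outcomes. A failed instantiation only mildly penalizes a plan,
    so one bad intermediate implementation does not eliminate it."""

    def __init__(self, plans):
        self.beliefs = {p.steps: p.est_speedup for p in plans}

    def propose(self):
        # Greedy for the sketch: pick the plan currently believed best.
        return max(self.beliefs, key=self.beliefs.get)

    def update(self, steps, measured):
        if measured is None:          # compile/validation failure
            self.beliefs[steps] *= 0.9
        else:                         # blend belief with measurement
            self.beliefs[steps] = 0.5 * self.beliefs[steps] + 0.5 * measured


def instantiate(steps, rng):
    """Stand-in for LLM code generation + benchmarking: returns a
    measured speedup, or None when the generated kernel 'fails'."""
    if rng.random() < 0.3:            # 30% of instantiations fail
        return None
    base = {"tile": 1.4, "fuse": 1.8, "pipeline": 2.2}
    return sum(base.get(s, 1.0) for s in steps) / len(steps)


def search(plans, iters=20, seed=0):
    rng = random.Random(seed)
    wm = WorldModel(plans)
    best = 1.0
    for _ in range(iters):
        steps = wm.propose()                  # high-level planning
        measured = instantiate(steps, rng)    # low-level instantiation
        wm.update(steps, measured)            # co-evolve the model
        if measured is not None:
            best = max(best, measured)
    return best


plans = [Plan(("tile",), 1.2), Plan(("fuse", "pipeline"), 1.5)]
print(search(plans))
```

Note the separation of concerns: `propose` never sees code, and `instantiate` never sees beliefs; the multiplicative penalty on failure (rather than deletion) is what lets a promising multi-step plan survive a few broken intermediate kernels.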