Executing as You Generate: Hiding Execution Latency in LLM Code Generation
April 1, 2026
Authors: Zhensu Sun, Zhihao Lin, Zhi Chen, Chengran Yang, Mingyi Zhou, Li Li, David Lo
cs.AI
Abstract
Current LLM-based coding agents follow a serial execution paradigm: the model first generates the complete code, then invokes an interpreter to execute it. This sequential workflow leaves the executor idle during generation and the generator idle during execution, resulting in unnecessary end-to-end latency. We observe that, unlike human developers, LLMs produce code tokens sequentially without revision, making it possible to execute code as it is being generated. We formalize this parallel execution paradigm, modeling it as a three-stage pipeline of generation, detection, and execution, and derive closed-form latency bounds that characterize its speedup potential and operating regimes. We then present Eager, a concrete implementation featuring AST-based chunking, dynamic batching with gated execution, and early error interruption. We evaluate Eager across four benchmarks, seven LLMs, and three execution environments. Results show that Eager reduces non-overlapped execution latency by up to 99.9% and end-to-end latency by up to 55%.
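The core idea, detecting syntactically complete chunks in the token stream and executing them while later tokens are still arriving, can be sketched with Python's standard `ast` module. This is a minimal illustrative approximation, not the paper's implementation: the function name `eager_execute`, the sequential (non-pipelined) driver loop, and the conservative gating policy of holding back the last parsed statement (since later tokens might still extend it) are all our assumptions.

```python
import ast

def eager_execute(token_stream, env=None):
    """Run top-level statements as soon as they are syntactically
    complete, instead of waiting for the full program.

    token_stream: an iterable of text fragments, standing in for the
    incremental output of an LLM decoder.
    """
    env = env if env is not None else {}
    buffer = ""
    executed = 0  # number of top-level statements already run
    for piece in token_stream:
        buffer += piece
        try:
            tree = ast.parse(buffer)
        except SyntaxError:
            # Buffer ends mid-statement; wait for more tokens.
            continue
        # Gated execution: run newly completed statements, but hold
        # back the final one, which future tokens may still extend
        # (e.g. a function body gaining more lines).
        for node in tree.body[executed:-1]:
            code = compile(ast.Module([node], type_ignores=[]),
                           "<chunk>", "exec")
            exec(code, env)
            executed += 1
    # End of stream: flush whatever remains.
    tree = ast.parse(buffer)
    for node in tree.body[executed:]:
        code = compile(ast.Module([node], type_ignores=[]),
                       "<chunk>", "exec")
        exec(code, env)
    return env
```

A real system would run the parser and interpreter concurrently with generation (the three-stage pipeline from the abstract) and add early error interruption, i.e. cancel decoding as soon as an executed chunk raises; this sketch only shows the chunking and gating logic.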