
CL4SE: A Context Learning Benchmark For Software Engineering Tasks

February 26, 2026
作者: Haichuan Hu, Ye Shang, Guoqing Xie, Congqing He, Quanjun Zhang
cs.AI

Abstract

Context engineering has emerged as a pivotal paradigm for unlocking the potential of Large Language Models (LLMs) in Software Engineering (SE) tasks, enabling performance gains at test time without model fine-tuning. Despite its success, existing research lacks a systematic taxonomy of SE-specific context types and a dedicated benchmark to quantify the heterogeneous effects of different contexts across core SE workflows. To address this gap, we propose CL4SE (Context Learning for Software Engineering), a comprehensive benchmark featuring a fine-grained taxonomy of four SE-oriented context types (interpretable examples, project-specific context, procedural decision-making context, and positive & negative context), each mapped to a representative task (code generation, code summarization, code review, and patch correctness assessment). We construct high-quality datasets comprising over 13,000 samples from more than 30 open-source projects and evaluate five mainstream LLMs across nine metrics. Extensive experiments demonstrate that context learning yields an average performance improvement of 24.7% across all tasks. Specifically, procedural context boosts code review performance by up to 33% (Qwen3-Max), mixed positive-negative context improves patch assessment by 30% (DeepSeek-V3), project-specific context increases code summarization BLEU by 14.78% (GPT-Oss-120B), and interpretable examples enhance code generation PASS@1 by 5.72% (DeepSeek-V3). CL4SE establishes the first standardized evaluation framework for SE context learning, provides actionable empirical insights into task-specific context design, and releases a large-scale dataset to facilitate reproducible research in this domain.