RaguTeam at SemEval-2026 Task 8: Meno and Friends in a Judge-Orchestrated LLM Ensemble for Faithful Multi-Turn Response Generation
May 6, 2026
Authors: Ivan Bondarenko, Roman Derunets, Oleg Sedukhin, Mikhail Komarov, Ivan Chernov, Mikhail Kulakov
cs.AI
Abstract
We present our winning system for Task B (generation with reference passages) in SemEval-2026 Task 8: MTRAGEval. Our method is a heterogeneous ensemble of seven LLMs with two prompting variants, where a GPT-4o-mini judge selects the best candidate per instance. We ranked 1st out of 26 teams, achieving a conditioned harmonic mean of 0.7827 and outperforming the strongest baseline (gpt-oss-120b, 0.6390). Ablations show that diversity in model families, scales, and prompting strategies is essential, with the ensemble consistently beating any single model. We also introduce Meno-Lite-0.1, a 7B domain-adapted model with a strong cost-performance trade-off, and analyse the MTRAGEval dataset, highlighting annotation limitations and directions for improvement. Our code is publicly available: https://github.com/RaguTeam/ragu_mtrag_semeval
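To make the per-instance selection loop concrete, below is a minimal Python sketch of a judge-orchestrated ensemble of the kind the abstract describes. Every identifier here (MODELS, PROMPT_VARIANTS, call_llm, judge_best, answer) and all stub bodies are hypothetical placeholders, not the authors' implementation; the sketch only assumes what the abstract states: seven LLMs, two prompt variants, and a judge (GPT-4o-mini in the paper) that picks the best candidate for each instance.

# Minimal sketch, assuming 7 models x 2 prompt variants and a judge.
# All names and stub bodies are illustrative, not the authors' code.

from itertools import product

MODELS = [f"llm_{i}" for i in range(1, 8)]    # seven heterogeneous LLMs (placeholder names)
PROMPT_VARIANTS = ["variant_1", "variant_2"]  # two prompting strategies (placeholder names)

def call_llm(model: str, variant: str, dialogue: list[str], passages: list[str]) -> str:
    """Stub for one generation call: the real system would prompt `model`
    with the multi-turn dialogue and the reference passages."""
    return f"[{model}/{variant}] answer grounded in {len(passages)} passages"

def judge_best(dialogue: list[str], passages: list[str], candidates: list[str]) -> int:
    """Stub for the judge (GPT-4o-mini in the paper): compare candidates
    for this instance and return the index of the best one."""
    return 0  # a real judge would score faithfulness against the passages

def answer(dialogue: list[str], passages: list[str]) -> str:
    # Fan out: each (model, prompt variant) pair contributes one candidate,
    # giving 7 x 2 = 14 candidates per instance.
    candidates = [call_llm(m, v, dialogue, passages)
                  for m, v in product(MODELS, PROMPT_VARIANTS)]
    # Per-instance selection: the judge picks a single response to return.
    return candidates[judge_best(dialogue, passages, candidates)]

if __name__ == "__main__":
    print(answer(["user: question", "assistant: reply", "user: follow-up?"],
                 ["reference passage 1", "reference passage 2"]))

One consequence of this fan-out/judge split is modularity: adding a model family or prompt variant only enlarges the candidate pool, which is consistent with the ablation finding that heterogeneity across families, scales, and prompting strategies drives the ensemble's gains.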