RaguTeam at SemEval-2026 Task 8: Meno and Friends in a Judge-Orchestrated LLM Ensemble for Faithful Multi-Turn Response Generation
May 6, 2026
Authors: Ivan Bondarenko, Roman Derunets, Oleg Sedukhin, Mikhail Komarov, Ivan Chernov, Mikhail Kulakov
cs.AI
Abstract
We present our winning system for Task B (generation with reference passages) in SemEval-2026 Task 8: MTRAGEval. Our method is a heterogeneous ensemble of seven LLMs with two prompting variants, where a GPT-4o-mini judge selects the best candidate per instance. We ranked 1st out of 26 teams, achieving a conditioned harmonic mean of 0.7827 and outperforming the strongest baseline (gpt-oss-120b, 0.6390). Ablations show that diversity in model families, scales, and prompting strategies is essential, with the ensemble consistently beating any single model. We also introduce Meno-Lite-0.1, a 7B domain-adapted model with a strong cost-performance trade-off, and analyse MTRAGEval, highlighting annotation limitations and directions for improvement. Our code is publicly available: https://github.com/RaguTeam/ragu_mtrag_semeval