CS-Sum: A Benchmark for Code-Switching Dialogue Summarization and the Limits of Large Language Models
May 19, 2025
Authors: Sathya Krishnan Suresh, Tanmay Surana, Lim Zhi Hao, Eng Siong Chng
cs.AI
Abstract
Code-switching (CS) poses a significant challenge for Large Language Models (LLMs), yet how well LLMs comprehend CS remains underexplored. We introduce CS-Sum to evaluate LLM comprehension of CS through the summarization of CS dialogue into English. CS-Sum is the first benchmark for CS dialogue summarization across Mandarin-English (EN-ZH), Tamil-English (EN-TA), and Malay-English (EN-MS), with 900 to 1,300 human-annotated dialogues per language pair. Evaluating ten LLMs, including open- and closed-source models, we analyze performance across few-shot, translate-summarize, and fine-tuning (LoRA and QLoRA on synthetic data) approaches. Our findings show that although scores on automated metrics are high, LLMs make subtle mistakes that completely alter the meaning of the dialogue. To this end, we identify the three most common types of errors that LLMs make when handling CS input. Error rates vary across CS pairs and LLMs, with some LLMs showing more frequent errors on certain language pairs, underscoring the need for specialized training on code-switched data.
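The abstract names three evaluation approaches but gives no implementation details. Below is a minimal sketch of what the two prompting-based setups (few-shot and translate-summarize) could look like; the `generate` wrapper and the prompt wording are illustrative assumptions, not the paper's actual code.

```python
# Minimal sketch of the two prompting-based approaches described in the
# abstract. The `generate` wrapper and prompt wording are assumptions for
# illustration; the paper's actual prompts and model clients may differ.

def generate(prompt: str) -> str:
    """Hypothetical wrapper around any LLM API; plug in a real client here."""
    raise NotImplementedError

def few_shot_summarize(dialogue: str, examples: list[tuple[str, str]]) -> str:
    """Direct few-shot: show (CS dialogue, English summary) pairs, then the target."""
    shots = "\n\n".join(
        f"Dialogue:\n{d}\nEnglish summary:\n{s}" for d, s in examples
    )
    return generate(f"{shots}\n\nDialogue:\n{dialogue}\nEnglish summary:\n")

def translate_then_summarize(dialogue: str) -> str:
    """Two-stage pipeline: first translate the CS dialogue to English, then summarize."""
    english = generate(
        f"Translate this code-switched dialogue into English:\n{dialogue}"
    )
    return generate(f"Summarize this dialogue in English:\n{english}")
```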
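The central finding is a gap between high automated-metric scores and meaning-altering errors. A common way to compute such scores is ROUGE; the snippet below uses the `rouge_score` package as an assumed metric implementation (the paper may use different metrics) and shows how a summary with a subtle but meaning-changing error can still score highly.

```python
# Scoring a generated summary against a human reference with ROUGE
# (pip install rouge-score). The choice of rouge_score is an assumption;
# note that high n-gram overlap can coexist with a meaning-altering error,
# which is the failure mode the abstract highlights.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "Alice agrees to meet Bob at the hawker centre on Friday."
generated = "Alice agrees to meet Bob at the hawker centre on Monday."  # subtle error
scores = scorer.score(reference, generated)
print({k: round(v.fmeasure, 3) for k, v in scores.items()})  # high despite the wrong day
```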