Language Specific Knowledge: Do Models Know Better in X than in English?
May 21, 2025
Authors: Ishika Agarwal, Nimet Beyza Bozdag, Dilek Hakkani-Tür
cs.AI
Abstract
Code-switching is a common phenomenon of alternating between different
languages in the same utterance, thought, or conversation. We posit that humans
code-switch because they feel more comfortable talking about certain topics and
domains in one language than another. With the rise of knowledge-intensive
language models, we ask ourselves the next, natural question: Could models hold
more knowledge on some topics in some language X? More importantly, could we
improve reasoning by changing the language that reasoning is performed in? We
coin the term Language Specific Knowledge (LSK) to represent this phenomenon.
As ethnic cultures tend to develop alongside different languages, we employ
culture-specific datasets (that contain knowledge about cultural and social
behavioral norms). We find that language models can perform better when using
chain-of-thought reasoning in some languages other than English, sometimes even
better in low-resource languages. Paired with previous works showing that
semantic similarity does not equate to representational similarity, we
hypothesize that culturally specific texts occur more abundantly in
corresponding languages, enabling specific knowledge to occur only in specific
"expert" languages. Motivated by our initial results, we design a simple
methodology called LSKExtractor to benchmark the language-specific knowledge
present in a language model and then exploit it during inference. We report
results on various models and datasets, showing an average relative improvement
of 10% in accuracy. Our research contributes to the open-source development of
language models that are inclusive and more aligned with the cultural and
linguistic contexts in which they are deployed.
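The abstract describes benchmarking language-specific knowledge by running chain-of-thought reasoning in different languages and comparing accuracy per language. The snippet below is a minimal illustrative sketch of that kind of evaluation loop; it is not the authors' LSKExtractor, and `query_model`, `COT_TEMPLATES`, and `answer_matches` are hypothetical placeholders for whatever model API, prompt templates, and scoring function one uses.

```python
# Minimal sketch (not the authors' LSKExtractor): probe whether chain-of-thought
# prompts in different languages change accuracy on a culture-specific QA set.
from collections import defaultdict

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call; plug in any LLM API."""
    raise NotImplementedError

# Hypothetical prompt templates asking the model to reason in a given language.
COT_TEMPLATES = {
    "en": "Answer the question. Think step by step in English.\n{question}",
    "hi": "Answer the question. Think step by step in Hindi.\n{question}",
    "sw": "Answer the question. Think step by step in Swahili.\n{question}",
}

def evaluate(dataset, answer_matches):
    """dataset: iterable of (question, gold) pairs; answer_matches: scoring fn."""
    scores = defaultdict(list)
    for question, gold in dataset:
        for lang, template in COT_TEMPLATES.items():
            response = query_model(template.format(question=question))
            scores[lang].append(answer_matches(response, gold))
    # Per-language accuracy; the best-scoring language acts as the "expert"
    # language for this dataset or topic.
    return {lang: sum(v) / len(v) for lang, v in scores.items()}
```

In this sketch, choosing the arg-max language per topic at inference time mirrors the high-level idea the abstract attributes to LSKExtractor, but the actual method and datasets are described in the paper itself.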