When Models Reason in Your Language: Controlling Thinking Trace Language Comes at the Cost of Accuracy
May 28, 2025
Authors: Jirui Qi, Shan Chen, Zidi Xiong, Raquel Fernández, Danielle S. Bitterman, Arianna Bisazza
cs.AI
Abstract
Recent Large Reasoning Models (LRMs) with thinking traces have shown strong performance on English reasoning tasks. However, their ability to think in other languages is less studied. This capability is as important as answer accuracy for real-world applications, because users may find the reasoning trace useful for oversight only when it is expressed in their own language. We comprehensively evaluate two leading families of LRMs on our XReasoning benchmark and find that even the most advanced models often revert to English or produce fragmented reasoning in other languages, revealing a substantial gap in multilingual reasoning. Prompt-based interventions that force models to reason in the user's language improve readability and oversight but reduce answer accuracy, exposing an important trade-off. We further show that targeted post-training on just 100 examples mitigates this mismatch, though some accuracy loss remains. Our results highlight the limited multilingual reasoning capabilities of current LRMs and outline directions for future work. Code and data are available at https://github.com/Betswish/mCoT-XReasoning.
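The prompt-based intervention described above can be as simple as an explicit instruction to keep the thinking trace in the user's language. Below is a minimal sketch of that idea; the model checkpoint and instruction wording are illustrative assumptions, not the paper's actual setup (its prompts live in the linked repository):

```python
# Sketch of a prompt-based language-forcing intervention for an LRM.
# Assumptions: the DeepSeek-R1 distilled checkpoint below is only one
# plausible open LRM, and the instruction text is a hypothetical example.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed model choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def force_reasoning_language(question: str, language: str) -> str:
    """Prompt the model to keep its step-by-step reasoning in `language`."""
    messages = [{
        "role": "user",
        "content": f"Think step by step strictly in {language}, "
                   f"then give the final answer.\n\n{question}",
    }]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=1024)
    # Decode only the newly generated tokens (trace + answer).
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )

print(force_reasoning_language("27 × 14 = ?", "Japanese"))
```

Per the abstract, such forcing improves readability and oversight for non-English users but tends to lower answer accuracy relative to letting the model reason in English.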