When Models Reason in Your Language: Controlling Thinking Trace Language Comes at the Cost of Accuracy
May 28, 2025
Authors: Jirui Qi, Shan Chen, Zidi Xiong, Raquel Fernández, Danielle S. Bitterman, Arianna Bisazza
cs.AI
Abstract
Recent Large Reasoning Models (LRMs) with thinking traces have shown strong performance on English reasoning tasks. However, their ability to think in other languages is less studied. This capability is as important as answer accuracy for real-world applications, because users may find the reasoning trace useful for oversight only when it is expressed in their own language. We comprehensively evaluate two leading families of LRMs on our XReasoning benchmark and find that even the most advanced models often revert to English or produce fragmented reasoning in other languages, revealing a substantial gap in multilingual reasoning. Prompt-based interventions that force models to reason in the user's language improve readability and oversight but reduce answer accuracy, exposing an important trade-off. We further show that targeted post-training on just 100 examples mitigates this mismatch, though some accuracy loss remains. Our results highlight the limited multilingual reasoning capabilities of current LRMs and outline directions for future work. Code and data are available at https://github.com/Betswish/mCoT-XReasoning.
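
To make the prompt-based intervention concrete: the idea is to prepend an explicit instruction that the thinking trace be written in the user's language. The snippet below is only a minimal sketch under assumed wording; the instruction text and the example question are illustrative, not the actual prompts from the paper (see the linked repository for those).

```python
# Minimal sketch of a prompt-based language-forcing intervention.
# The instruction template here is a hypothetical assumption, not the
# paper's exact prompt (refer to the mCoT-XReasoning repository).

def build_prompt(question: str, lang: str) -> str:
    """Prepend an explicit instruction to reason in the target language."""
    instruction = (
        f"Please reason step by step, writing your entire thinking "
        f"trace in {lang}, then state the final answer."
    )
    return f"{instruction}\n\nQuestion: {question}"

# Example usage: force the thinking trace into Japanese.
print(build_prompt("What is 17 * 24?", "Japanese"))
```

As the abstract notes, interventions of this kind improve readability and oversight for non-English users, but at a measurable cost in answer accuracy.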