

Benchmarking Mental State Representations in Language Models

June 25, 2024
Authors: Matteo Bortoletto, Constantin Ruhdorfer, Lei Shi, Andreas Bulling
cs.AI

Abstract

While numerous works have assessed the generative performance of language models (LMs) on tasks requiring Theory of Mind reasoning, research into the models' internal representation of mental states remains limited. Recent work has used probing to demonstrate that LMs can represent beliefs of themselves and others. However, these claims are accompanied by limited evaluation, making it difficult to assess how mental state representations are affected by model design and training choices. We report an extensive benchmark with various LM types with different model sizes, fine-tuning approaches, and prompt designs to study the robustness of mental state representations and memorisation issues within the probes. Our results show that the quality of models' internal representations of the beliefs of others increases with model size and, more crucially, with fine-tuning. We are the first to study how prompt variations impact probing performance on theory of mind tasks. We demonstrate that models' representations are sensitive to prompt variations, even when such variations should be beneficial. Finally, we complement previous activation editing experiments on Theory of Mind tasks and show that it is possible to improve models' reasoning performance by steering their activations without the need to train any probe.
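To make the two techniques mentioned in the abstract more concrete, the sketch below illustrates (1) training a linear probe on a transformer's hidden states to predict a belief label and (2) steering activations at inference time without a trained probe. This is a minimal illustration, not the authors' code: the model name, probe layer, toy prompts, labels, and the difference-of-means steering direction are all assumptions for demonstration purposes.

```python
# Minimal sketch (not the paper's implementation): probe hidden states for a
# belief label, then steer activations with a direction vector at inference.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # placeholder; the paper benchmarks several LM families and sizes
LAYER = 6             # hypothetical probe/steering layer

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def last_token_activation(prompt: str) -> torch.Tensor:
    """Hidden state of the final token at the chosen layer."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[LAYER][0, -1]  # shape: (hidden_dim,)

# --- Probing: fit a linear classifier on activations vs. belief labels ---
# `prompts` and `labels` (e.g. 1 = protagonist holds a true belief, 0 = false
# belief) would come from a Theory-of-Mind dataset; these are toy stand-ins.
prompts = ["Sally puts the ball in the basket and watches.",
           "Anne moves the ball to the box while Sally is away."]
labels = [1, 0]
X = torch.stack([last_token_activation(p) for p in prompts]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("probe train accuracy:", probe.score(X, labels))

# --- Steering: shift activations along a direction, no probe training needed ---
# Here the direction is a difference of example activations; the paper's exact
# steering recipe may differ.
direction = torch.tensor(X[0] - X[1])
alpha = 4.0  # steering strength (hypothetical)

def steering_hook(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden[:, -1, :] += alpha * direction.to(hidden.dtype)
    return output

handle = model.transformer.h[LAYER].register_forward_hook(steering_hook)
# ... run generation as usual; activations at LAYER are now steered ...
handle.remove()
```

In this toy setup, probe accuracy on held-out prompts would serve as the measure of how well belief information is linearly decodable from the representations, while the hook shows how a steering vector can be injected without fitting any classifier.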
