
Benchmarking Mental State Representations in Language Models

June 25, 2024
Authors: Matteo Bortoletto, Constantin Ruhdorfer, Lei Shi, Andreas Bulling
cs.AI

Abstract

While numerous works have assessed the generative performance of language models (LMs) on tasks requiring Theory of Mind reasoning, research into the models' internal representation of mental states remains limited. Recent work has used probing to demonstrate that LMs can represent beliefs of themselves and others. However, these claims are accompanied by limited evaluation, making it difficult to assess how mental state representations are affected by model design and training choices. We report an extensive benchmark with various LM types with different model sizes, fine-tuning approaches, and prompt designs to study the robustness of mental state representations and memorisation issues within the probes. Our results show that the quality of models' internal representations of the beliefs of others increases with model size and, more crucially, with fine-tuning. We are the first to study how prompt variations impact probing performance on Theory of Mind tasks. We demonstrate that models' representations are sensitive to prompt variations, even when such variations should be beneficial. Finally, we complement previous activation editing experiments on Theory of Mind tasks and show that it is possible to improve models' reasoning performance by steering their activations without the need to train any probe.
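For readers unfamiliar with the probing setup the abstract refers to, the sketch below shows the general idea: extract hidden states from a language model for a set of belief-labelled prompts and fit a linear classifier on them. This is a minimal illustration, not the authors' code; the model name, prompts, labels, and layer choice are hypothetical placeholders.

```python
# A minimal sketch of belief probing (not the authors' code): fit a linear probe
# on a language model's hidden states to predict a belief label for each prompt.
# The model name, prompts, labels, and layer choice are illustrative placeholders.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the paper benchmarks several LM families and sizes
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Hypothetical probing data: short stories paired with a protagonist-belief label.
prompts = [
    "Anna puts the ball in the box and leaves. Bob moves the ball to the basket.",
    "Anna puts the ball in the box and stays. Bob moves the ball to the basket.",
    "Tom hides the key in the drawer and leaves. Mia puts the key in her bag.",
    "Tom hides the key in the drawer and stays. Mia puts the key in her bag.",
]
labels = [0, 1, 0, 1]  # e.g. 0 = protagonist holds a false belief, 1 = true belief

LAYER = -1  # probe the final layer's hidden states (an arbitrary choice here)

def last_token_state(text: str) -> torch.Tensor:
    """Hidden state of the last token at the chosen layer."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden_states = model(**inputs).hidden_states
    return hidden_states[LAYER][0, -1]

X = torch.stack([last_token_state(p) for p in prompts]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)
# In practice the probe is evaluated on held-out stories; this prints training accuracy.
print("probe accuracy (train):", probe.score(X, labels))
```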
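The final claim, improving reasoning by steering activations without training a probe, is commonly implemented by adding a direction to a layer's output at inference time. The sketch below illustrates one such scheme under stated assumptions: the steering direction is taken as the difference of mean activations over two contrastive prompt sets, and the layer index, scale, and prompts are hypothetical; this is not the paper's implementation.

```python
# A minimal activation-steering sketch (illustrative; not the paper's implementation):
# derive a direction from the difference of mean activations over two contrastive
# prompt sets, then add it to one block's output at generation time via a forward
# hook. MODEL_NAME, LAYER, SCALE, and the prompt sets are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

LAYER, SCALE = 6, 4.0  # hypothetical layer index and steering strength

def mean_last_token_state(texts, layer):
    """Mean last-token hidden state after block `layer` over a set of prompts."""
    states = []
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            hs = model(**inputs, output_hidden_states=True).hidden_states
        states.append(hs[layer + 1][0, -1])  # hs[0] is the embedding output
    return torch.stack(states).mean(dim=0)

# Hypothetical contrastive prompts defining the steering direction.
positive = ["Sally thinks the marble is in the basket."]
negative = ["Sally thinks the marble is in the box."]
steering_vector = mean_last_token_state(positive, LAYER) - mean_last_token_state(negative, LAYER)

def steer(module, inputs, output):
    # GPT-2 blocks return a tuple; shift the hidden states along the steering direction.
    return (output[0] + SCALE * steering_vector,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steer)
prompt = tokenizer("Where will Sally look for the marble?", return_tensors="pt")
generated = model.generate(**prompt, max_new_tokens=20)
handle.remove()
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

The layer path (`model.transformer.h`) is specific to GPT-2-style models; other architectures expose their decoder blocks under different attribute names.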
