
Evaluating Language Models' Evaluations of Games

October 13, 2025
Authors: Katherine M. Collins, Cedegao E. Zhang, Graham Todd, Lance Ying, Mauricio Barba da Costa, Ryan Liu, Prafull Sharma, Adrian Weller, Ionatan Kuperwajs, Lionel Wong, Joshua B. Tenenbaum, Thomas L. Griffiths
cs.AI

Abstract

Reasoning is not just about solving problems -- it is also about evaluating which problems are worth solving at all. Evaluations of artificial intelligence (AI) systems have primarily focused on problem solving, historically by studying how models play games such as chess and Go. In this paper, we advocate for a new paradigm that assesses AI systems' evaluation of games. First, we introduce a formalism for evaluating such evaluations. We then leverage a large-scale dataset of over 100 novel board games and over 450 human judgments to compare evaluations produced by modern language and reasoning models against those of people and symbolic computational agents. We consider two kinds of evaluative queries: assessing the payoff (or fairness) and the funness of games. These queries span two dimensions relevant to the design of evaluations of AI evaluations: how complex a query is to compute and how difficult a query is to quantify. Our results show that reasoning models are generally more aligned with people in their evaluations of games than non-reasoning language models. However, we observe a non-monotonic relationship: as models get closer to game-theoretic optimality, their fit to human data weakens. We also observe more "jaggedness" across models when assessing funness, in line with the greater difficulty of quantifying this query. Across queries and games, reasoning models show highly variable and unpredictable resource usage when assessing queries, pointing to the importance of imbuing language and reasoning models with more resource-rational meta-reasoning.
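
As a rough, illustrative sketch of the kind of comparison described in the abstract (not the paper's actual formalism or data), the snippet below computes a rank correlation between hypothetical per-game scores produced by a model and averaged human ratings for one evaluative query; all variable names and values are assumptions made for illustration.

```python
# Illustrative only: hypothetical per-game scores for one evaluative query
# (e.g., predicted payoff/fairness or funness); not data from the paper.
from scipy.stats import spearmanr

human_scores = [0.82, 0.35, 0.61, 0.48, 0.90, 0.15]  # averaged human judgments per game
model_scores = [0.75, 0.40, 0.58, 0.52, 0.95, 0.22]  # one model's judgments for the same games

# One simple notion of "fit to human data": rank correlation between the two
# sets of per-game judgments; higher rho indicates closer alignment with people.
rho, p_value = spearmanr(human_scores, model_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```

Under a metric like this, the non-monotonic relationship reported above would appear as correlation with human judgments first rising and then falling as models approach game-theoretic optimality.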