

Humans expect rationality and cooperation from LLM opponents in strategic games

May 16, 2025
Authors: Darija Barak, Miguel Costa-Gomes
cs.AI

Abstract

As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLM opponents in strategic settings. We present the results of the first controlled, monetarily incentivised laboratory experiment examining differences in human behaviour in a multi-player p-beauty contest played against other humans and against LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than against humans, a shift mainly driven by the increased prevalence of `zero' Nash-equilibrium choices. This shift is concentrated among subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to the LLMs' perceived reasoning ability and, unexpectedly, their perceived propensity towards cooperation. Our findings provide foundational insights into multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneity in both subjects' behaviour and their beliefs about LLMs' play, and suggest important implications for mechanism design in mixed human-LLM systems.
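
To make the game's strategic logic concrete, here is a minimal, illustrative Python sketch of a p-beauty contest: players submit numbers, the winning target is p times the mean of all choices, and iterated best-responding drives choices down to the zero Nash equilibrium mentioned in the abstract. The value p = 2/3 and the choice range [0, 100] are common conventions assumed here for illustration; the abstract does not state the parameters used in the experiment.

```python
# Illustrative sketch of a p-beauty contest (assumed p = 2/3, choices in [0, 100]).
# Not the authors' code; parameters are placeholders for exposition.

def winning_target(choices, p=2/3):
    """Target number: p times the mean of all submitted choices."""
    return p * sum(choices) / len(choices)

def iterate_best_responses(start=100.0, p=2/3, rounds=10):
    """Show how repeatedly best-responding to a common guess shrinks choices
    toward 0, the game's unique Nash equilibrium."""
    guess = start
    for k in range(1, rounds + 1):
        guess = p * guess  # best response if everyone else also chooses `guess`
        print(f"level-{k} reasoning: choose ~{guess:.2f}")

if __name__ == "__main__":
    print("target for choices [50, 30, 20]:", round(winning_target([50, 30, 20]), 2))
    iterate_best_responses()
```

Each round of the loop corresponds to one additional level of strategic reasoning; in the limit the only mutually consistent choice is zero, which is why the prevalence of zero choices is used as a marker of high strategic sophistication.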

