

PhysicsAgentABM: Physics-Guided Generative Agent-Based Modeling

February 5, 2026
Authors: Kavana Venkatesh, Yinhan He, Jundong Li, Jiaming Cui
cs.AI

Abstract

Large language model (LLM)-based multi-agent systems enable expressive agent reasoning but are expensive to scale and poorly calibrated for timestep-aligned state-transition simulation, while classical agent-based models (ABMs) offer interpretability but struggle to integrate rich individual-level signals and non-stationary behaviors. We propose PhysicsAgentABM, which shifts inference to behaviorally coherent agent clusters: state-specialized symbolic agents encode mechanistic transition priors, a multimodal neural transition model captures temporal and interaction dynamics, and uncertainty-aware epistemic fusion yields calibrated cluster-level transition distributions. Individual agents then stochastically realize transitions under local constraints, decoupling population inference from entity-level variability. We further introduce ANCHOR, an LLM agent-driven clustering strategy based on cross-contextual behavioral responses and a novel contrastive loss, reducing LLM calls by up to 6-8 times. Experiments across public health, finance, and social sciences show consistent gains in event-time accuracy and calibration over mechanistic, neural, and LLM baselines. By re-architecting generative ABM around population-level inference with uncertainty-aware neuro-symbolic fusion, PhysicsAgentABM establishes a new paradigm for scalable and calibrated simulation with LLMs.
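The core mechanism described above, fusing a mechanistic (symbolic) transition prior with a neural transition prediction into a calibrated cluster-level distribution, then letting individual agents stochastically realize transitions under local constraints, can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: the precision-weighted fusion rule, the variance values, and the SIR-style toy states are all assumptions.

```python
# Illustrative sketch (not PhysicsAgentABM's actual code): fuse a
# mechanistic transition prior with a neural transition distribution
# per cluster, weighting each source by its inverse uncertainty, then
# realize per-agent transitions stochastically under local constraints.
import numpy as np

rng = np.random.default_rng(0)

def fuse_transitions(p_sym, p_neural, var_sym, var_neural):
    """Precision-weighted fusion of two categorical transition
    distributions; the less certain (higher-variance) source gets
    lower weight. Inputs are arrays over next-state indices."""
    w_sym = 1.0 / (var_sym + 1e-8)
    w_neu = 1.0 / (var_neural + 1e-8)
    fused = (w_sym * p_sym + w_neu * p_neural) / (w_sym + w_neu)
    return fused / fused.sum()  # renormalize to a valid distribution

def realize_agents(p_cluster, n_agents, blocked_states=()):
    """Each agent samples its next state from the cluster-level
    distribution, with locally infeasible states masked out."""
    p = p_cluster.copy()
    for s in blocked_states:
        p[s] = 0.0  # local constraint: this agent cluster can't enter s
    p /= p.sum()
    return rng.choice(len(p), size=n_agents, p=p)

# Toy example: 3 health states (S, I, R) for one agent cluster.
p_sym = np.array([0.7, 0.25, 0.05])   # mechanistic prior (e.g., SIR rates)
p_neu = np.array([0.5, 0.4, 0.1])     # neural transition model's output
fused = fuse_transitions(p_sym, p_neu, var_sym=0.02, var_neural=0.08)
next_states = realize_agents(fused, n_agents=100)
```

The key design point from the abstract is the decoupling: inference (fusion) happens once per cluster, while entity-level variability comes only from the cheap per-agent sampling step.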
PDF03February 7, 2026
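The claimed 6-8x reduction in LLM calls from ANCHOR follows directly from querying once per behaviorally coherent cluster rather than once per agent. The sketch below illustrates only that accounting with a minimal k-means over synthetic behavioral-response vectors; the actual ANCHOR method additionally uses a contrastive loss and LLM-derived cross-contextual responses, neither of which is reproduced here.

```python
# Illustrative sketch (assumed setup, not the ANCHOR algorithm itself):
# group agents by their response vectors across a few probe contexts,
# then issue one LLM call per cluster instead of one per agent.
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    """Minimal k-means: returns a cluster label for each row of X."""
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# 240 agents, each summarized by responses to 5 probe contexts
# (synthetic stand-ins for LLM-scored behavioral responses).
X = rng.normal(size=(240, 5))
labels = kmeans(X, k=30)              # 30 behaviorally coherent clusters
calls_per_step = len(set(labels))     # one LLM call per non-empty cluster
reduction = 240 / calls_per_step      # at least 240/30 = 8x fewer calls
```

Under this per-cluster querying scheme the call count scales with the number of clusters, not the population size, which is what makes the approach's cost roughly independent of how many agents are simulated.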