
PhysicsAgentABM: Physics-Guided Generative Agent-Based Modeling

February 5, 2026
Authors: Kavana Venkatesh, Yinhan He, Jundong Li, Jiaming Cui
cs.AI

Abstract

Large language model (LLM)-based multi-agent systems enable expressive agent reasoning but are expensive to scale and poorly calibrated for timestep-aligned state-transition simulation, while classical agent-based models (ABMs) offer interpretability but struggle to integrate rich individual-level signals and non-stationary behaviors. We propose PhysicsAgentABM, which shifts inference to behaviorally coherent agent clusters: state-specialized symbolic agents encode mechanistic transition priors, a multimodal neural transition model captures temporal and interaction dynamics, and uncertainty-aware epistemic fusion yields calibrated cluster-level transition distributions. Individual agents then stochastically realize transitions under local constraints, decoupling population inference from entity-level variability. We further introduce ANCHOR, an LLM agent-driven clustering strategy based on cross-contextual behavioral responses and a novel contrastive loss, reducing LLM calls by up to 6-8 times. Experiments across public health, finance, and social sciences show consistent gains in event-time accuracy and calibration over mechanistic, neural, and LLM baselines. By re-architecting generative ABM around population-level inference with uncertainty-aware neuro-symbolic fusion, PhysicsAgentABM establishes a new paradigm for scalable and calibrated simulation with LLMs.
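The core pipeline the abstract describes — fusing a mechanistic (symbolic) transition prior with a neural transition distribution in an uncertainty-aware way, then letting individual agents stochastically realize transitions from the fused cluster-level distribution — can be sketched as below. This is a minimal illustration under assumed simplifications (inverse-entropy weighting as a stand-in for the paper's epistemic fusion; all function names are hypothetical), not the authors' actual implementation:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a categorical distribution (natural log)."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def fuse_transition_dists(p_symbolic, p_neural):
    """Uncertainty-aware fusion of two cluster-level transition
    distributions: each source is weighted by the inverse of its
    entropy, so the more confident (lower-entropy) source dominates.
    A simplified stand-in for the paper's epistemic fusion step."""
    w_sym = 1.0 / (entropy(p_symbolic) + 1e-6)
    w_neu = 1.0 / (entropy(p_neural) + 1e-6)
    fused = w_sym * p_symbolic + w_neu * p_neural
    return fused / fused.sum()

def realize_transitions(fused, n_agents, rng):
    """Each agent in the cluster independently samples its next state
    from the fused distribution, decoupling population-level inference
    from entity-level variability."""
    return rng.choice(len(fused), size=n_agents, p=fused)

rng = np.random.default_rng(0)
p_sym = np.array([0.7, 0.2, 0.1])   # mechanistic prior (e.g., epidemic S/I/R rates)
p_neu = np.array([0.5, 0.4, 0.1])   # neural model's predicted transition distribution
fused = fuse_transition_dists(p_sym, p_neu)
next_states = realize_transitions(fused, n_agents=100, rng=rng)
```

Because the symbolic prior here has lower entropy than the neural prediction, the fused distribution leans toward it; swapping in the paper's calibrated fusion would change the weights but not the overall structure of the pipeline.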