

Shared Nature, Unique Nurture: PRISM for Pluralistic Reasoning via In-context Structure Modeling

February 24, 2026
Authors: Guancheng Tu, Shiyang Zhang, Tianyu Zhang, Yi Zhang, Diji Yang
cs.AI

Abstract

Large Language Models (LLMs) are converging toward a singular Artificial Hivemind: their shared Nature (pre-training priors) results in a profound collapse of distributional diversity, limiting the distinct perspectives necessary for creative exploration and scientific discovery. To address this, we propose to equip models with inference-time Nurture (individualized epistemic trajectories) using an Epistemic Evolution paradigm that progresses through three stages: explore, internalize, and express. We instantiate this via PRISM (Pluralistic Reasoning via In-context Structure Modeling), a model-agnostic system that augments LLMs with dynamic On-the-fly Epistemic Graphs. On three creativity benchmarks, PRISM achieves state-of-the-art novelty and significantly expands distributional diversity. Moreover, we evaluate its real-world utility on a challenging rare-disease diagnosis benchmark. Results demonstrate that PRISM successfully uncovers correct long-tail diagnoses that standard LLMs miss, confirming that its divergence stems from meaningful exploration rather than incoherent noise. Overall, this work establishes a new paradigm for Pluralistic AI, moving beyond monolithic consensus toward a diverse ecosystem of unique cognitive individuals capable of collective, multi-perspective discovery.
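The abstract describes augmenting an LLM with a dynamic in-context graph built through an explore, internalize, and express cycle. The following is a minimal hypothetical sketch of that loop; the class name `EpistemicGraph`, its methods, and the toy diagnosis concepts are all illustrative assumptions, not the paper's actual API or data.

```python
# Hypothetical sketch of an explore -> internalize -> express loop over an
# in-context graph. Names and structure are assumed for illustration only.

class EpistemicGraph:
    """A tiny epistemic graph: concept nodes plus labeled edges."""

    def __init__(self):
        self.nodes = set()
        self.edges = []  # list of (source, relation, target) triples

    def explore(self, concepts):
        # Explore: collect candidate concepts along an individualized trajectory.
        self.nodes.update(concepts)

    def internalize(self, source, relation, target):
        # Internalize: commit a structural link between already-explored concepts.
        if source in self.nodes and target in self.nodes:
            self.edges.append((source, relation, target))

    def express(self):
        # Express: serialize the graph as extra context to prepend to a prompt.
        return "\n".join(f"{s} --{r}--> {t}" for s, r, t in self.edges)


g = EpistemicGraph()
g.explore(["fatigue", "enzyme_deficiency", "rare_disease_X"])
g.internalize("fatigue", "suggests", "enzyme_deficiency")
g.internalize("enzyme_deficiency", "characteristic_of", "rare_disease_X")
print(g.express())
```

Prepending such a serialized structure to each model instance's prompt would give otherwise identical models distinct reasoning contexts, which is one plausible reading of how shared Nature can be paired with unique Nurture.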
PDF · March 7, 2026