Theoretical Foundations of Latent Posterior Factors: Formal Guarantees for Multi-Evidence Reasoning
March 13, 2026
Author: Aliyu Agboola Alege
cs.AI
Abstract
We present a complete theoretical characterization of Latent Posterior Factors (LPF), a principled framework for aggregating multiple heterogeneous evidence items in probabilistic prediction tasks. Multi-evidence reasoning arises pervasively in high-stakes domains including healthcare diagnosis, financial risk assessment, legal case analysis, and regulatory compliance, yet existing approaches either lack formal guarantees or are architecturally unable to handle multi-evidence scenarios. LPF encodes each evidence item into a Gaussian latent posterior via a variational autoencoder, converts the posteriors to soft factors through Monte Carlo marginalization, and aggregates the factors via exact Sum-Product Network inference (LPF-SPN) or a learned neural aggregator (LPF-Learned).
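To make the three-stage pipeline concrete, below is a minimal NumPy sketch of stages two and three, assuming a trained VAE encoder has already produced a Gaussian latent posterior (μ_k, σ_k) for each evidence item. The names `evidence_to_soft_factor`, `aggregate_factors`, and `predict_from_latent` are hypothetical placeholders, not the paper's API, and the normalized product shown stands in for the exact SPN inference.

```python
# A minimal sketch of the LPF pipeline described above, assuming a trained VAE.
# `predict_from_latent` is a hypothetical callable mapping latent samples
# (M, latent_dim) to label probabilities (M, num_labels).
import numpy as np

def evidence_to_soft_factor(mu, sigma, predict_from_latent, num_samples=64, rng=None):
    """Stage 2: turn one Gaussian latent posterior q_k(z) = N(mu, diag(sigma^2))
    into a soft factor over labels via Monte Carlo marginalization,
        f_k(y) ≈ (1/M) Σ_m p(y | z_m),  z_m ~ q_k(z),
    whose error decays as O(1/√M)."""
    rng = rng or np.random.default_rng(0)
    z = mu + sigma * rng.standard_normal((num_samples, mu.shape[-1]))
    return predict_from_latent(z).mean(axis=0)  # shape: (num_labels,)

def aggregate_factors(factors):
    """Stage 3: combine the K soft factors. A normalized product is shown here;
    LPF-SPN performs this product/normalization by exact SPN inference, while
    LPF-Learned would replace it with a trained neural aggregator."""
    log_f = np.log(np.stack(factors) + 1e-12).sum(axis=0)
    unnorm = np.exp(log_f - log_f.max())        # stabilized exponentiation
    return unnorm / unnorm.sum()                # final predictive distribution
```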
We prove seven formal guarantees spanning the key desiderata for trustworthy AI: Calibration Preservation (ECE ≤ ε + C/√K_eff); Monte Carlo error decaying as O(1/√M); a non-vacuous PAC-Bayes bound with a train-test gap of 0.0085 at N = 4200; operation within 1.12× of the information-theoretic lower bound; graceful degradation as O(εδ√K) under corruption, maintaining 88% of performance with half of the evidence adversarially replaced; O(1/√K) calibration decay with R² = 0.849; and exact epistemic-aleatoric uncertainty decomposition with error below 0.002%. All theorems are empirically validated on controlled datasets spanning up to 4,200 training examples. Our theoretical framework establishes LPF as a foundation for trustworthy multi-evidence AI in safety-critical applications.
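The final guarantee, exact epistemic-aleatoric decomposition, is most naturally read as the law of total variance applied to the latent posterior; the form below is our assumed reading of that statement, not a quotation from the paper.

```latex
\operatorname{Var}[y \mid x]
  \;=\; \underbrace{\mathbb{E}_{q(z \mid x)}\!\bigl[\operatorname{Var}(y \mid z)\bigr]}_{\text{aleatoric}}
  \;+\; \underbrace{\operatorname{Var}_{q(z \mid x)}\!\bigl[\mathbb{E}(y \mid z)\bigr]}_{\text{epistemic}}
```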