Estimating the Hallucination Rate of Generative AI
June 11, 2024
Authors: Andrew Jesson, Nicolas Beltran-Velez, Quentin Chu, Sweta Karlekar, Jannik Kossen, Yarin Gal, John P. Cunningham, David Blei
cs.AI
Abstract
This work is about estimating the hallucination rate for in-context learning
(ICL) with Generative AI. In ICL, a conditional generative model (CGM) is
prompted with a dataset and asked to make a prediction based on that dataset.
The Bayesian interpretation of ICL assumes that the CGM is calculating a
posterior predictive distribution over an unknown Bayesian model of a latent
parameter and data. With this perspective, we define a hallucination
as a generated prediction that has low probability under the true latent
parameter. We develop a new method that takes an ICL problem -- that is, a CGM,
a dataset, and a prediction question -- and estimates the probability that a
CGM will generate a hallucination. Our method only requires generating queries
and responses from the model and evaluating its response log probability. We
empirically evaluate our method on synthetic regression and natural language
ICL tasks using large language models.
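The hallucination event defined above can be illustrated with a toy Monte Carlo sketch: sample responses from a model's predictive distribution and count how often a response has log probability below a cutoff under the true latent parameter. The Gaussian model, the 2-sigma cutoff, and all names here are illustrative assumptions for exposition, not the paper's exact estimator (which does not assume access to the true parameter).

```python
import math
import random

random.seed(0)

def log_normal_pdf(y, mu, sigma):
    """Log density of a Gaussian N(mu, sigma^2) at y."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (y - mu) ** 2 / (2 * sigma ** 2)

# True latent parameter of the data-generating process (known only in this toy setup).
true_mu, true_sigma = 0.0, 1.0

# The model's predictive distribution after conditioning on the in-context dataset;
# the mismatch with the true parameter is what produces hallucinations here.
model_mu, model_sigma = 0.5, 1.5

# Call a response "low probability" if its log density under the true parameter
# falls below that of a point two true standard deviations from the true mean.
threshold = log_normal_pdf(true_mu + 2.0 * true_sigma, true_mu, true_sigma)

# Monte Carlo estimate: generate responses from the model and check each one
# against the hallucination criterion under the true latent parameter.
n_samples = 10_000
hallucinations = 0
for _ in range(n_samples):
    y = random.gauss(model_mu, model_sigma)  # a generated response
    if log_normal_pdf(y, true_mu, true_sigma) < threshold:
        hallucinations += 1

rate = hallucinations / n_samples
print(f"estimated hallucination rate: {rate:.3f}")
```

With the mismatched predictive distribution above, the estimated rate is well above the roughly 5% that a well-calibrated model would incur under this 2-sigma cutoff.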