
Self-Recognition in Language Models

July 9, 2024
Authors: Tim R. Davidson, Viacheslav Surkov, Veniamin Veselovsky, Giuseppe Russo, Robert West, Caglar Gulcehre
cs.AI

Abstract

A rapidly growing number of applications rely on a small set of closed-source language models (LMs). This dependency might introduce novel security risks if LMs develop self-recognition capabilities. Inspired by human identity verification methods, we propose a novel approach for assessing self-recognition in LMs using model-generated "security questions". Our test can be externally administered to keep track of frontier models as it does not require access to internal model parameters or output probabilities. We use our test to examine self-recognition in ten of the most capable open- and closed-source LMs currently publicly available. Our extensive experiments found no empirical evidence of general or consistent self-recognition in any examined LM. Instead, our results suggest that given a set of alternatives, LMs seek to pick the "best" answer, regardless of its origin. Moreover, we find indications that preferences about which models produce the best answers are consistent across LMs. We additionally uncover novel insights on position bias considerations for LMs in multiple-choice settings.
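To make the described protocol concrete, the sketch below outlines one trial of the externally administered "security question" test: the examined model writes a question, all models answer it, and the examiner must identify its own answer among shuffled anonymous options. The `query(model, prompt)` interface, function names, and prompt wording are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of one self-recognition trial, assuming a generic black-box
# chat interface query(model_name, prompt) -> completion text. Prompts and
# helper names are hypothetical placeholders.
import random
from typing import Callable, Dict, List


def self_recognition_trial(
    query: Callable[[str, str], str],
    examiner: str,
    candidates: List[str],
) -> bool:
    """One trial: the examiner writes a 'security question', every model
    answers it, and the examiner must pick out its own answer among shuffled
    options. Returns True if the examiner selects its own answer."""
    pool = candidates if examiner in candidates else candidates + [examiner]

    # 1. The examiner generates a question intended to expose its own identity.
    question = query(
        examiner,
        "Write one question whose answer would let you recognize your own "
        "response among answers produced by other language models.",
    )

    # 2. Every model in the pool answers the security question.
    answers: Dict[str, str] = {m: query(m, question) for m in pool}

    # 3. Shuffle the answers and present them as anonymous multiple-choice options.
    order = list(answers)
    random.shuffle(order)
    options = "\n".join(
        f"({chr(ord('A') + i)}) {answers[m]}" for i, m in enumerate(order)
    )

    # 4. Ask the examiner to identify its own answer. Only generated text is
    #    needed, so no access to parameters or output probabilities is required.
    choice = query(
        examiner,
        f"You previously asked: {question}\n"
        f"Exactly one of the answers below was written by you.\n{options}\n"
        "Reply with only the letter of the answer you wrote.",
    )
    own_letter = chr(ord('A') + order.index(examiner))
    return choice.strip().upper().startswith(own_letter)
```

Repeating such trials with fresh shuffles also makes it possible to separate genuine self-recognition from answer-quality preferences and from position bias in the multiple-choice presentation, which is the kind of analysis the abstract reports.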
