
Personality Traits in Large Language Models

July 1, 2023
Authors: Mustafa Safdari, Greg Serapio-García, Clément Crepy, Stephen Fitz, Peter Romero, Luning Sun, Marwa Abdulhai, Aleksandra Faust, Maja Matarić
cs.AI

Abstract

The advent of large language models (LLMs) has revolutionized natural language processing, enabling the generation of coherent and contextually relevant text. As LLMs increasingly power conversational agents, the synthesized personality embedded in these models by virtue of their training on large amounts of human-generated data draws attention. Since personality is an important factor determining the effectiveness of communication, we present a comprehensive method for administering validated psychometric tests and quantifying, analyzing, and shaping personality traits exhibited in text generated from widely-used LLMs. We find that: 1) personality simulated in the outputs of some LLMs (under specific prompting configurations) is reliable and valid; 2) evidence of reliability and validity of LLM-simulated personality is stronger for larger and instruction fine-tuned models; and 3) personality in LLM outputs can be shaped along desired dimensions to mimic specific personality profiles. We also discuss potential applications and ethical implications of our measurement and shaping framework, especially regarding responsible use of LLMs.
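
The abstract describes administering validated psychometric tests to LLMs via prompting and shaping the expressed personality by varying the prompt. As a rough illustration of that general workflow (not the paper's actual instrument, prompts, or scoring), the sketch below administers a few made-up Likert-scale items under a persona prompt and averages the ratings into per-trait scores; `query_llm`, the item wording, and the parsing are all placeholder assumptions.

```python
# Hypothetical sketch: administer Likert-scale personality items to an LLM
# under a persona prompt, then aggregate ratings into per-trait scores.
# `query_llm`, the items, and the scoring are illustrative placeholders,
# not the validated psychometric instrument used in the paper.

from statistics import mean

# Illustrative Big Five-style items: (statement, trait, reverse-keyed?)
ITEMS = [
    ("I am the life of the party.", "extraversion", False),
    ("I don't talk a lot.", "extraversion", True),
    ("I feel comfortable around people.", "extraversion", False),
]

PROMPT_TEMPLATE = (
    "{persona}\n"
    "Rate how accurately the following statement describes you on a scale "
    "from 1 (very inaccurate) to 5 (very accurate). "
    "Respond with a single number.\n"
    "Statement: {statement}\n"
    "Rating:"
)


def query_llm(prompt: str) -> str:
    """Stub for an LLM call; replace with your model or API of choice."""
    return "3"  # placeholder rating so the sketch runs end to end


def administer(persona: str) -> dict:
    """Score each trait by averaging item ratings obtained under `persona`."""
    ratings: dict[str, list[int]] = {}
    for statement, trait, reverse_keyed in ITEMS:
        reply = query_llm(PROMPT_TEMPLATE.format(persona=persona, statement=statement))
        rating = int(reply.strip()[0])      # naive parse of a "1".."5" reply
        if reverse_keyed:                   # reverse-keyed items flip the scale
            rating = 6 - rating
        ratings.setdefault(trait, []).append(rating)
    return {trait: mean(vals) for trait, vals in ratings.items()}


if __name__ == "__main__":
    # "Shaping" in the abstract's sense: vary the persona prompt and compare scores.
    print(administer("You are a helpful assistant."))
    print(administer("You are an extremely outgoing, talkative assistant."))
```

Restricting replies to a single number keeps parsing trivial in this toy version; the reliability and validity analyses reported in the paper would additionally check that such scores behave consistently across items and correlate with external criteria.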