

Competing Visions of Ethical AI: A Case Study of OpenAI

January 23, 2026
作者: Melissa Wilfley, Mengting Ai, Madelyn Rose Sanfilippo
cs.AI

Abstract

Introduction. AI ethics is framed differently across actors and stakeholder groups. We report results from a case study of OpenAI analysing its ethical AI discourse.

Method. The research addressed the question: how has OpenAI's public discourse leveraged 'ethics', 'safety', 'alignment', and related concepts over time, and what does that discourse signal about its framing in practice? A structured corpus, differentiating between communication for a general audience and communication with an academic audience, was assembled from public documentation.

Analysis. Qualitative content analysis of ethical themes combined inductively derived and deductively applied codes. Quantitative analysis used computational content analysis methods via NLP to model topics and quantify changes in rhetoric over time; visualizations report aggregate results. To support reproducibility, we have released our analysis code at https://github.com/famous-blue-raincoat/AI_Ethics_Discourse.

Results. Safety and risk discourse dominates OpenAI's public communication and documentation, without adopting the ethics frameworks or vocabularies of academia and advocacy groups.

Conclusions. Implications for governance are presented, along with a discussion of ethics-washing practices in industry.
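One component of the quantitative analysis, tracking how often terms such as 'ethics', 'safety', and 'alignment' appear in public communications over time, can be sketched with a minimal term-frequency pass. This is an illustrative sketch only: the tiny corpus below is invented for demonstration, and the paper's actual pipeline (topic modeling, full corpus) lives in the linked repository.

```python
from collections import Counter, defaultdict
import re

# Hypothetical mini-corpus of (year, text) pairs standing in for public
# communications; the real corpus and code are in the paper's repository.
CORPUS = [
    (2019, "Our mission emphasizes safety and broadly shared benefit."),
    (2021, "Alignment research reduces risk; safety reviews gate releases."),
    (2023, "Safety, safety evaluations, and risk mitigations guide deployment."),
]

# Terms drawn from the paper's research question.
TERMS = ("ethics", "safety", "alignment", "risk")

def term_frequencies_by_year(corpus, terms):
    """Count occurrences of each tracked term per year (case-insensitive)."""
    counts = defaultdict(Counter)
    for year, text in corpus:
        tokens = re.findall(r"[a-z]+", text.lower())
        for term in terms:
            counts[year][term] += tokens.count(term)
    return counts

freqs = term_frequencies_by_year(CORPUS, TERMS)
for year in sorted(freqs):
    print(year, {t: freqs[year][t] for t in TERMS})
```

Aggregating such per-year counts (or per-document topic proportions from a topic model) is what makes it possible to visualize shifts in rhetoric, e.g. safety/risk vocabulary rising while ethics vocabulary stays flat.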
PDF · March 12, 2026