
Wikipedia in the Era of LLMs: Evolution and Risks

March 4, 2025
Authors: Siming Huang, Yuliang Xu, Mingmeng Geng, Yao Wan, Dongping Chen
cs.AI

Abstract

In this paper, we present a thorough analysis of the impact of Large Language Models (LLMs) on Wikipedia, examining the evolution of Wikipedia through existing data and using simulations to explore potential risks. We begin by analyzing page views and article content to study Wikipedia's recent changes and assess the impact of LLMs. Subsequently, we evaluate how LLMs affect various Natural Language Processing (NLP) tasks related to Wikipedia, including machine translation and retrieval-augmented generation (RAG). Our findings and simulation results reveal that Wikipedia articles have been influenced by LLMs, with an impact of approximately 1%-2% in certain categories. If machine translation benchmarks based on Wikipedia are influenced by LLMs, model scores may become inflated and comparative rankings among models may shift. Moreover, the effectiveness of RAG might decrease if the knowledge base becomes polluted by LLM-generated content. While LLMs have not yet fully changed Wikipedia's language and knowledge structures, we believe that our empirical findings signal the need for careful consideration of potential future risks.
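The abstract mentions simulating a knowledge base polluted by LLM-generated content and observing its effect on RAG. The snippet below is a minimal, self-contained sketch of that general idea, not the authors' actual experimental code: it mixes hypothetical LLM-style rewrites into a toy corpus and shows how a simple token-overlap retriever's top result can change. The documents, query, rewrites, and pollution ratio are all illustrative assumptions.

```python
import random
from collections import Counter

def tokenize(text):
    return text.lower().split()

def overlap_score(query, doc):
    # Simple token-overlap scorer standing in for a real RAG retriever.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum((q & d).values())

# Toy "original" knowledge base (illustrative, not the paper's data).
original_corpus = [
    "Wikipedia is a free online encyclopedia edited by volunteers.",
    "Retrieval-augmented generation grounds model answers in a knowledge base.",
    "Machine translation benchmarks often draw sentences from Wikipedia.",
]

# Hypothetical LLM-generated rewrites that would "pollute" the knowledge base.
llm_rewrites = [
    "Wikipedia, a collaboratively curated online encyclopedia, is maintained by volunteer editors.",
    "Retrieval-augmented generation is a paradigm in which outputs are grounded in an external knowledge base.",
]

def build_corpus(pollution_ratio, seed=0):
    # Mix LLM-generated rewrites into the original corpus at a given ratio.
    random.seed(seed)
    n_polluted = min(round(pollution_ratio * len(original_corpus)), len(llm_rewrites))
    return original_corpus + random.sample(llm_rewrites, n_polluted)

def retrieve(query, corpus, k=1):
    # Return the k highest-scoring documents for the query.
    return sorted(corpus, key=lambda doc: overlap_score(query, doc), reverse=True)[:k]

if __name__ == "__main__":
    query = "How does retrieval-augmented generation use a knowledge base?"
    for ratio in (0.0, 0.5):
        corpus = build_corpus(ratio)
        print(f"pollution={ratio:.0%} ->", retrieve(query, corpus))
```

At a 0% pollution ratio the retriever returns only human-written documents; as the ratio grows, LLM-style rewrites can displace the originals in the top-k results, which is the kind of shift the paper's simulations examine at corpus scale.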
