Automatic detection of Gen-AI texts: A comparative framework of neural models
March 19, 2026
Authors: Cristian Buttaro, Irene Amerini
cs.AI
Abstract
The rapid proliferation of Large Language Models has significantly increased the difficulty of distinguishing between human-written and AI-generated texts, raising critical issues across academic, editorial, and social domains. This paper investigates the problem of AI-generated text detection through the design, implementation, and comparative evaluation of multiple machine-learning-based detectors. Four neural architectures are developed and analyzed: a Multilayer Perceptron, a one-dimensional Convolutional Neural Network, a MobileNet-based CNN, and a Transformer model. The proposed models are benchmarked against widely used online detectors, including ZeroGPT, GPTZero, QuillBot, Originality.AI, Sapling, IsGen, Rephrase, and Writer. Experiments are conducted on the COLING Multilingual Dataset, considering both English and Italian configurations, as well as on an original thematic dataset focused on Art and Mental Health. Results show that supervised detectors achieve more stable and robust performance than commercial tools across different languages and domains, highlighting key strengths and limitations of current detection strategies.
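To illustrate the supervised-detector setup the abstract describes, the following is a minimal sketch of a binary human-vs-AI text classifier. It uses a generic TF-IDF + MLP pipeline as a stand-in; the paper's actual features, architectures, and training data are not specified in this abstract, and all texts and labels below are illustrative toy examples only.

```python
# Minimal sketch of a supervised AI-text detector (label 1 = AI-generated,
# label 0 = human-written). Features and model are illustrative assumptions,
# not the paper's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy stand-in training data.
texts = [
    "The results demonstrate a significant improvement in overall performance.",
    "honestly i just scribbled this down between meetings, sorry for typos",
    "In conclusion, the proposed framework offers a robust and scalable solution.",
    "we argued about it over coffee and never really settled anything",
]
labels = [1, 0, 1, 0]

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),                       # word + bigram features
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
detector.fit(texts, labels)

# Score an unseen sentence (output is a 0/1 label).
pred = detector.predict(["This paper presents a comprehensive evaluation framework."])
print(pred[0])
```

A real evaluation along the paper's lines would train on a labeled corpus such as the COLING Multilingual Dataset and compare held-out accuracy against the commercial detectors listed above.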