BOE-XSUM: Extreme Summarization in Clear Language of Spanish Legal Decrees and Notifications
September 29, 2025
Authors: Andrés Fernández García, Javier de la Rosa, Julio Gonzalo, Roser Morante, Enrique Amigó, Alejandro Benito-Santos, Jorge Carrillo-de-Albornoz, Víctor Fresno, Adrian Ghajari, Guillermo Marco, Laura Plaza, Eva Sánchez Salido
cs.AI
Abstract
The ability to summarize long documents succinctly is increasingly important in daily life due to information overload, yet there is a notable lack of such summaries for Spanish documents in general, and in the legal domain in particular. In this work, we present BOE-XSUM, a curated dataset comprising 3,648 concise, plain-language summaries of documents sourced from Spain's "Boletín Oficial del Estado" (BOE), the State Official Gazette. Each entry in the dataset includes a short summary, the original text, and its document type label. We evaluate the performance of medium-sized large language models (LLMs) fine-tuned on BOE-XSUM, comparing them to general-purpose generative models in a zero-shot setting. Results show that fine-tuned models significantly outperform their non-specialized counterparts. Notably, the best-performing model, BERTIN GPT-J 6B (32-bit precision), achieves a 24% performance gain over the top zero-shot model, DeepSeek-R1 (accuracies of 41.6% vs. 33.5%).
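
A note on the headline numbers: the 24% figure is consistent with reading the gain as a relative improvement in accuracy rather than an absolute difference in percentage points. A minimal sketch of that arithmetic, assuming only the two reported accuracies as inputs:

    # Sketch (not from the paper): check that the reported 24% gain matches
    # a relative accuracy improvement over the best zero-shot model.
    best_finetuned = 0.416   # BERTIN GPT-J 6B (32-bit precision)
    best_zero_shot = 0.335   # DeepSeek-R1

    absolute_gain = best_finetuned - best_zero_shot   # 0.081, i.e. 8.1 points
    relative_gain = absolute_gain / best_zero_shot    # ~0.242, i.e. ~24%

    print(f"absolute gain: {absolute_gain * 100:.1f} points")
    print(f"relative gain: {relative_gain:.1%}")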