Wasm: A Pipeline for Constructing Structured Arabic Interleaved Multimodal Corpora
November 10, 2025
Authors: Khalil Hennara, Ahmad Bastati, Muhammad Hreden, Mohamed Motasim Hamed, Zeina Aldallal, Sara Chrouf, Safwan AlModhayan
cs.AI
Abstract
The performance of large language models (LLMs) and large multimodal models
(LMMs) depends heavily on the quality and scale of their pre-training datasets.
Recent research shows that large multimodal models trained on natural documents
in which images and text are interleaved outperform those trained only on
image-text pairs across a wide range of benchmarks; such models can leverage
advanced pre-training techniques to enforce semantic alignment, image-sequence
consistency, and textual coherence. For Arabic, however, the lack of high-quality multimodal
datasets that preserve document structure has limited progress. In this paper,
we present Wasm, a pipeline that processes the Common Crawl dataset to create a
new Arabic multimodal dataset and uniquely provides Markdown output. Unlike
existing Arabic corpora that focus solely on text extraction, our approach
preserves the structural integrity of web content while maintaining flexibility
for both text-only and multimodal pre-training scenarios. We provide a
comprehensive comparative analysis of our data processing pipeline against
those used for major existing datasets, highlighting the convergences in
filtering strategies and justifying our specific design choices. To support
future research, we publicly release a representative dataset dump along with
the multimodal processing pipeline for Arabic.
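The abstract describes converting crawled web pages into structure-preserving Markdown with interleaved images, plus Arabic-focused filtering. The sketch below illustrates what such a conversion step might look like, assuming a BeautifulSoup-based extractor and a simple Arabic-character-ratio filter; the function names, tag selection, and 0.5 threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an HTML -> interleaved-Markdown step with an
# Arabic-ratio page filter; not the Wasm pipeline itself.
import re
from bs4 import BeautifulSoup

ARABIC_CHAR = re.compile(r"[\u0600-\u06FF]")

def arabic_ratio(text: str) -> float:
    """Fraction of alphabetic characters in the basic Arabic Unicode block."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return sum(bool(ARABIC_CHAR.match(c)) for c in letters) / len(letters)

def html_to_interleaved_markdown(html: str, min_arabic: float = 0.5) -> str | None:
    """Convert a page to Markdown, keeping images in reading order.

    Returns None when the page does not look predominantly Arabic.
    """
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()  # drop obvious boilerplate elements

    blocks: list[str] = []
    for el in soup.find_all(["h1", "h2", "h3", "p", "img"]):
        if el.name == "img" and el.get("src"):
            alt = el.get("alt", "")
            blocks.append(f"![{alt}]({el['src']})")  # interleave image at its position
        elif el.name.startswith("h"):
            text = el.get_text(" ", strip=True)
            if text:
                blocks.append("#" * int(el.name[1]) + " " + text)  # preserve heading level
        else:
            text = el.get_text(" ", strip=True)
            if text:
                blocks.append(text)

    markdown = "\n\n".join(blocks)
    return markdown if arabic_ratio(markdown) >= min_arabic else None
```

Keeping headings, paragraphs, and image references in document order is what distinguishes this kind of output from text-only extraction: the same record can serve text-only pre-training (by dropping image lines) or interleaved multimodal pre-training.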