SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion
March 14, 2025
作者: Ahmed Nassar, Andres Marafioti, Matteo Omenetti, Maksym Lysak, Nikolaos Livathinos, Christoph Auer, Lucas Morin, Rafael Teixeira de Lima, Yusik Kim, A. Said Gurbuz, Michele Dolfi, Miquel Farré, Peter W. J. Staar
cs.AI
Abstract
We introduce SmolDocling, an ultra-compact vision-language model targeting
end-to-end document conversion. Our model comprehensively processes entire
pages by generating DocTags, a new universal markup format that captures all
page elements in their full context with location. Unlike existing approaches
that rely on large foundation models or on ensemble solutions built from
handcrafted pipelines of multiple specialized models, SmolDocling offers
end-to-end conversion, accurately capturing the content, structure, and
spatial location of document elements with a 256M-parameter vision-language model.
SmolDocling exhibits robust performance in correctly reproducing document
features such as code listings, tables, equations, charts, lists, and more
across a diverse range of document types including business documents, academic
papers, technical reports, patents, and forms -- significantly extending beyond
the commonly observed focus on scientific papers. Additionally, we contribute
novel publicly sourced datasets for charts, tables, equations, and code
recognition. Experimental results demonstrate that SmolDocling competes with
other Vision Language Models that are up to 27 times larger in size, while
reducing computational requirements substantially. The model is currently
available; the datasets will be publicly available soon.
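Since the abstract describes a single-pass conversion from a page image to DocTags markup, the sketch below illustrates how such a compact vision-language model can be invoked with Hugging Face transformers. The checkpoint name "ds4sd/SmolDocling-256M-preview" and the prompt string follow the publicly released model and are assumptions relative to the abstract itself.

```python
# A minimal sketch, not from the paper, of running a compact document-conversion
# VLM with Hugging Face transformers. The checkpoint name and prompt string are
# assumptions taken from the public model release, not from this abstract.
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained("ds4sd/SmolDocling-256M-preview")
model = AutoModelForVision2Seq.from_pretrained(
    "ds4sd/SmolDocling-256M-preview", torch_dtype=torch.bfloat16
).to(device)

# One rendered document page as input.
image = load_image("page.png")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Convert this page to docling."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(device)

# The model emits DocTags: one tag per page element, with location tokens
# encoding its position on the page.
generated = model.generate(**inputs, max_new_tokens=8192)
doctags = processor.batch_decode(
    generated[:, inputs.input_ids.shape[1]:], skip_special_tokens=False
)[0].lstrip()
print(doctags)
```

Because the model is end-to-end, a single generate call covers layout analysis, OCR, and structure recognition at once; the DocTags string can then be parsed downstream into whatever document representation a pipeline needs.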