MinerU2.5: A Decoupled Vision-Language Model for Efficient High-Resolution Document Parsing
September 26, 2025
Authors: Junbo Niu, Zheng Liu, Zhuangcheng Gu, Bin Wang, Linke Ouyang, Zhiyuan Zhao, Tao Chu, Tianyao He, Fan Wu, Qintong Zhang, Zhenjiang Jin, Guang Liang, Rui Zhang, Wenzheng Zhang, Yuan Qu, Zhifei Ren, Yuefeng Sun, Yuanhong Zheng, Dongsheng Ma, Zirui Tang, Boyu Niu, Ziyang Miao, Hejun Dong, Siyi Qian, Junyuan Zhang, Jingzhou Chen, Fangdong Wang, Xiaomeng Zhao, Liqun Wei, Wei Li, Shasha Wang, Ruiliang Xu, Yuanyuan Cao, Lu Chen, Qianqian Wu, Huaiyu Gu, Lindong Lu, Keming Wang, Dechen Lin, Guanlin Shen, Xuanhe Zhou, Linfeng Zhang, Yuhang Zang, Xiaoyi Dong, Jiaqi Wang, Bo Zhang, Lei Bai, Pei Chu, Weijia Li, Jiang Wu, Lijun Wu, Zhenxiang Li, Guangyu Wang, Zhongying Tu, Chao Xu, Kai Chen, Yu Qiao, Bowen Zhou, Dahua Lin, Wentao Zhang, Conghui He
cs.AI
Abstract
We introduce MinerU2.5, a 1.2B-parameter document parsing vision-language model that achieves state-of-the-art recognition accuracy while maintaining exceptional computational efficiency. Our approach employs a coarse-to-fine, two-stage parsing strategy that decouples global layout analysis from local content recognition. In the first stage, the model performs efficient layout analysis on downsampled images to identify structural elements, circumventing the computational overhead of processing high-resolution inputs. In the second stage, guided by the global layout, it performs targeted content recognition on native-resolution crops extracted from the original image, preserving fine-grained details in dense text, complex formulas, and tables. To support this strategy, we develop a comprehensive data engine that generates diverse, large-scale training corpora for both pretraining and fine-tuning. Ultimately, MinerU2.5 demonstrates strong document parsing ability, achieving state-of-the-art performance on multiple benchmarks, surpassing both general-purpose and domain-specific models across various recognition tasks, while maintaining significantly lower computational overhead.
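The abstract describes a coarse-to-fine, two-stage pipeline: layout analysis on a downsampled page, then content recognition on native-resolution crops guided by that layout. The following is a minimal sketch of how such a flow could be wired up; the `run_layout_analysis` and `run_content_recognition` methods, the `model` object, and the downsampling budget are illustrative assumptions, not MinerU2.5's actual interface.

```python
# Hypothetical sketch of the coarse-to-fine, two-stage parsing strategy.
# The model interface and the 1036-px layout budget are assumptions for
# illustration only.
from PIL import Image

LAYOUT_MAX_SIDE = 1036  # assumed downsampling budget for stage 1


def parse_page(page: Image.Image, model) -> list[dict]:
    # Stage 1: global layout analysis on a downsampled copy of the page,
    # avoiding the cost of running the full high-resolution input.
    scale = min(1.0, LAYOUT_MAX_SIDE / max(page.size))
    thumb = page.resize((int(page.width * scale), int(page.height * scale)))
    layout = model.run_layout_analysis(thumb)  # -> [{"bbox": ..., "type": ...}]

    # Stage 2: targeted recognition on native-resolution crops, with the
    # stage-1 boxes mapped back to original-image coordinates.
    results = []
    for block in layout:
        x0, y0, x1, y1 = (int(v / scale) for v in block["bbox"])
        crop = page.crop((x0, y0, x1, y1))
        content = model.run_content_recognition(crop, block["type"])
        results.append({
            "bbox": (x0, y0, x1, y1),
            "type": block["type"],
            "content": content,
        })
    return results
```

The key design point conveyed by the abstract is that only small, element-level crops are ever processed at native resolution, so fine detail in dense text, formulas, and tables is preserved without paying the cost of a full high-resolution forward pass.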