

CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark

January 22, 2024
Authors: Ge Zhang, Xinrun Du, Bei Chen, Yiming Liang, Tongxu Luo, Tianyu Zheng, Kang Zhu, Yuyang Cheng, Chunpu Xu, Shuyue Guo, Haoran Zhang, Xingwei Qu, Junjie Wang, Ruibin Yuan, Yizhi Li, Zekun Wang, Yudong Liu, Yu-Hsuan Tsai, Fengji Zhang, Chenghua Lin, Wenhao Huang, Wenhu Chen, Jie Fu
cs.AI

Abstract

As the capabilities of large multimodal models (LMMs) continue to advance, evaluating their performance has become an increasingly pressing need. Moreover, an even larger gap exists in evaluating the advanced knowledge and reasoning abilities of LMMs in non-English contexts such as Chinese. We introduce CMMMU, a new Chinese Massive Multi-discipline Multimodal Understanding benchmark designed to evaluate LMMs on tasks that demand college-level subject knowledge and deliberate reasoning in a Chinese context. CMMMU is inspired by, and strictly follows, the annotation and analysis pattern of MMMU. CMMMU includes 12k manually collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering, like its companion MMMU. These questions span 30 subjects and comprise 39 highly heterogeneous image types, such as charts, diagrams, maps, tables, music sheets, and chemical structures. CMMMU focuses on complex perception and reasoning with domain-specific knowledge in the Chinese context. We evaluate 11 open-source LMMs and the proprietary GPT-4V(ision). Even GPT-4V achieves only 42% accuracy, indicating large room for improvement. CMMMU will help the community build next-generation LMMs toward expert artificial intelligence and promote the democratization of LMMs by providing diverse language contexts.
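
To make the reported 42% figure concrete, below is a minimal sketch of how accuracy-based scoring on a CMMMU-style multiple-choice split might look in Python. The Hugging Face dataset ID ("m-a-p/CMMMU"), split name, and field names ("answer") are assumptions for illustration only and may differ from the released data; `my_lmm_predict` stands in for whatever model wrapper is being evaluated.

```python
# Minimal sketch: accuracy-based scoring on a CMMMU-style multiple-choice split.
# Dataset ID, split name, and field names are assumptions for illustration;
# consult the official release for the actual schema.
from datasets import load_dataset


def evaluate(predict_fn, dataset_id="m-a-p/CMMMU", split="val"):
    """predict_fn maps one example dict to an option letter such as 'A'."""
    ds = load_dataset(dataset_id, split=split)  # hypothetical ID and split
    correct = sum(
        1 for example in ds
        if predict_fn(example) == example["answer"]  # gold answer assumed to be a letter
    )
    return correct / len(ds)


# Usage (my_lmm_predict is a hypothetical model wrapper):
# accuracy = evaluate(my_lmm_predict)
```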