ViExam: Are Vision Language Models Better than Humans on Vietnamese Multimodal Exam Questions?
August 19, 2025
Authors: Vy Tuong Dang, An Vo, Quang Tau, Duc Dm, Daeyoung Kim
cs.AI
Abstract
Vision language models (VLMs) demonstrate remarkable capabilities on English
multimodal tasks, but their performance on low-resource languages with
genuinely multimodal educational content remains largely unexplored. In this
work, we test how VLMs perform on Vietnamese educational assessments,
investigating whether VLMs trained predominantly on English data can handle
real-world cross-lingual multimodal reasoning. Our work presents the first
comprehensive evaluation of VLM capabilities on multimodal Vietnamese exams
by proposing ViExam, a benchmark containing 2,548 multimodal questions. We
find that state-of-the-art VLMs achieve only 57.74% mean accuracy, while
open-source models reach 27.70%, across 7 academic domains: Mathematics,
Physics, Chemistry, Biology, Geography, Driving Test, and IQ Test. Most VLMs
underperform average human test-takers (66.54%), with only the thinking VLM o3
(74.07%) exceeding human average performance, yet still falling substantially
short of human best performance (99.60%). Cross-lingual prompting with English
instructions while maintaining Vietnamese content fails to improve performance,
decreasing accuracy by 1 percentage point for SOTA VLMs. Human-in-the-loop
collaboration can partially improve VLM performance by 5 percentage points.
Code and data are available at: https://vi-exam.github.io.
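
For illustration only, the sketch below shows how an evaluation loop with the cross-lingual prompting condition (English instruction, Vietnamese exam content) and exact-match scoring of single-letter answers could be wired up. The `query_vlm` placeholder, the `ExamQuestion` structure, and both prompt templates are illustrative assumptions, not the benchmark's actual harness.

```python
from dataclasses import dataclass


@dataclass
class ExamQuestion:
    image_path: str  # exam question rendered as an image (Vietnamese content)
    answer: str      # gold answer letter, e.g. "A"


def build_prompt(cross_lingual: bool) -> str:
    """Return the instruction shown to the VLM alongside the question image.

    With cross_lingual=True the instruction is in English while the exam
    content inside the image stays in Vietnamese, mirroring the cross-lingual
    prompting condition described in the abstract. Both strings are
    illustrative, not the paper's exact wording.
    """
    if cross_lingual:
        return ("Answer the multiple-choice question in the image. "
                "Reply with a single letter (A, B, C, or D).")
    return ("Hãy trả lời câu hỏi trắc nghiệm trong hình. "
            "Chỉ trả lời bằng một chữ cái (A, B, C hoặc D).")


def query_vlm(prompt: str, image_path: str) -> str:
    """Placeholder for an actual VLM call (API client or local model)."""
    raise NotImplementedError("Plug in your VLM client here.")


def evaluate(questions: list[ExamQuestion], cross_lingual: bool = False) -> float:
    """Compute mean accuracy via exact match on the predicted answer letter."""
    correct = 0
    for q in questions:
        prediction = query_vlm(build_prompt(cross_lingual), q.image_path)
        correct += prediction.strip().upper() == q.answer.upper()
    return correct / len(questions)
```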