
Jina-VLM: Small Multilingual Vision Language Model

December 3, 2025
Authors: Andreas Koukounas, Georgios Mastrapas, Florian Hönicke, Sedigheh Eslami, Guillaume Roncari, Scott Martens, Han Xiao
cs.AI

Abstract

We present Jina-VLM, a 2.4B parameter vision-language model that achieves state-of-the-art multilingual visual question answering among open 2B-scale VLMs. The model couples a SigLIP2 vision encoder with a Qwen3 language backbone through an attention-pooling connector that enables token-efficient processing of arbitrary-resolution images. Across standard VQA benchmarks and multilingual evaluations, Jina-VLM outperforms comparable models while preserving competitive text-only performance.
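The abstract does not spell out how the attention-pooling connector works. As an illustrative sketch only (not the paper's actual implementation; all names, shapes, and dimensions below are hypothetical), such a connector can be viewed as a small set of learned query vectors that cross-attend over the vision encoder's patch embeddings, compressing an arbitrary number of patch tokens into a fixed, smaller number of tokens for the language backbone:

```python
import numpy as np

def attention_pool(patches, queries, d_k):
    """Cross-attention pooling: K learned queries attend over N patch tokens.

    patches: (N, d) patch embeddings from the vision encoder
    queries: (K, d) learned pooling queries, with K << N
    Returns (K, d) pooled tokens, independent of N (image resolution).
    """
    scores = queries @ patches.T / np.sqrt(d_k)            # (K, N) attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over the N patches
    return weights @ patches                               # (K, d) weighted sum

# Toy usage with hypothetical sizes: a 14x14 patch grid pooled to 16 tokens.
rng = np.random.default_rng(0)
patches = rng.normal(size=(196, 64))
queries = rng.normal(size=(16, 64))
pooled = attention_pool(patches, queries, d_k=64)
print(pooled.shape)
```

Because the output token count is fixed by the number of learned queries rather than by the input patch count, the same connector handles arbitrary-resolution images with a constant token budget on the language-model side, which is consistent with the token-efficiency claim above.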