

No Tokens Wasted: Leveraging Long Context in Biomedical Vision-Language Models

October 4, 2025
作者: Min Woo Sun, Alejandro Lozano, Javier Gamazo Tejero, Vishwesh Nath, Xiao Xiao Sun, James Burgess, Yuhui Zhang, Kun Yuan, Robert Tibshirani, Sean Huver, Serena Yeung-Levy
cs.AI

Abstract

Embedding vision-language models (VLMs) are typically pretrained with short text windows (<77 tokens), which forces the truncation of long-format captions. Yet, the distribution of biomedical captions from large-scale open-source literature reveals that a substantial portion of captions far exceeds 77 tokens. Motivated by this, we investigate the impact of pretraining on long-format biomedical captions by extending the context length of text encoders in VLMs. We find that longer context (and thus the additional supervision provided by long-format captions) correlates with better retrieval and classification performance. Building on this finding, we introduce BIOMEDICA-LongCAP, a dataset of 1M image-caption pairs enriched with context-aware descriptions from full-text articles, providing longer and richer textual supervision. Using BIOMEDICA-LongCAP, we train BMC-LongCLIP, a long-context biomedical VLM with a text encoder supporting windows of up to 512 tokens. Our model extends context capacity by 6.6x, reducing token waste from 55% to just 2.2%. On long-caption retrieval benchmarks, BMC-LongCLIP achieves up to +30% absolute gains in Recall@1 and +2% average improvements in classification, while also converging faster than short-context baselines. Our results demonstrate that long-context modeling is a promising direction for advancing biomedical VLMs.
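Two quantities in the abstract lend themselves to a concrete illustration: the "token waste" metric (the share of caption tokens discarded by truncation, reported as falling from 55% to 2.2%) and the 6.6x context expansion (512/77 ≈ 6.6). The sketch below shows one plausible way to compute the former and, for the latter, a common warm-start trick of stretching a learned 77-position embedding table to 512 positions by interpolation. This is a minimal sketch under stated assumptions, not the paper's released code: the `openai/clip-vit-base-patch32` checkpoint, the function names, and the sample captions are all illustrative, and the interpolation recipe is a generic technique rather than BMC-LongCLIP's confirmed method.

```python
# Illustrative sketch only; nothing here is taken from the paper's codebase.
# (a) token_waste: fraction of caption tokens lost to truncation at max_len.
# (b) stretch_positional_embeddings: one common way to warm-start a longer
#     text context from a short-context (77-position) CLIP encoder.
import torch
import torch.nn.functional as F
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

def token_waste(captions, max_len):
    """Fraction of all caption tokens that fall beyond a max_len window
    and would therefore be discarded by truncation."""
    total = wasted = 0
    for cap in captions:
        n = len(tokenizer(cap)["input_ids"])  # full length, incl. BOS/EOS
        total += n
        wasted += max(0, n - max_len)
    return wasted / total

def stretch_positional_embeddings(pos_emb: torch.Tensor, new_len: int):
    """Linearly interpolate a (old_len, dim) learned positional table to
    (new_len, dim); a simple warm start for a longer-context encoder."""
    x = pos_emb.T.unsqueeze(0)                        # (1, dim, old_len)
    x = F.interpolate(x, size=new_len, mode="linear", align_corners=True)
    return x.squeeze(0).T                             # (new_len, dim)

captions = [  # hypothetical biomedical captions, for demonstration only
    "Figure 2. Hematoxylin and eosin stain of a liver core biopsy ...",
    "Axial contrast-enhanced CT of the chest showing a 2 cm nodule ...",
]
print(f"waste @ 77 tokens:  {token_waste(captions, 77):.1%}")
print(f"waste @ 512 tokens: {token_waste(captions, 512):.1%}")
print(stretch_positional_embeddings(torch.randn(77, 512), 512).shape)
```

Stretching to 512 positions reproduces the stated 6.6x capacity increase (512/77 ≈ 6.6); how the paper actually initializes and trains the extended encoder is not specified in the abstract.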