VisMem: Latent Vision Memory Unlocks Potential of Vision-Language Models
November 14, 2025
Authors: Xinlei Yu, Chengming Xu, Guibin Zhang, Zhangquan Chen, Yudong Zhang, Yongbo He, Peng-Tao Jiang, Jiangning Zhang, Xiaobin Hu, Shuicheng Yan
cs.AI
Abstract
Despite the remarkable success of Vision-Language Models (VLMs), their performance on a range of complex visual tasks is often hindered by a "visual processing bottleneck": a propensity to lose grounding in visual evidence and a deficit in contextualized visual experience during prolonged generation. Drawing inspiration from human cognitive memory theory, which distinguishes short-term, visually-dominant memory from long-term, semantically-dominant memory, we propose VisMem, a cognitively-aligned framework that equips VLMs with dynamic latent vision memories: a short-term module for fine-grained perceptual retention and a long-term module for abstract semantic consolidation. These memories are seamlessly invoked during inference, allowing VLMs to maintain both perceptual fidelity and semantic consistency across thinking and generation. Extensive experiments across diverse visual benchmarks for understanding, reasoning, and generation show that VisMem delivers a significant average performance boost of 11.8% relative to the vanilla model and outperforms all counterparts, establishing a new paradigm for latent-space memory enhancement. The code will be available at: https://github.com/YU-deep/VisMem.git.
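To make the described mechanism concrete, below is a minimal, hypothetical sketch of how a short-term (fine-grained, patch-level) and a long-term (consolidated, semantic) latent vision memory could be retrieved by decoder hidden states during generation. This is not the authors' implementation (see the linked repository for that); the module and parameter names (ShortTermVisualMemory, LongTermSemanticMemory, VisMemAdapter, n_slots) are illustrative assumptions.

```python
# Illustrative sketch only, NOT the VisMem authors' code.
# Assumes a ViT-style encoder producing patch features of dimension D.
import torch
import torch.nn as nn


class ShortTermVisualMemory(nn.Module):
    """Keeps fine-grained patch features and lets decoder states re-attend to them."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, hidden: torch.Tensor, patch_feats: torch.Tensor) -> torch.Tensor:
        # hidden: (B, T, D) decoder states; patch_feats: (B, P, D) visual evidence
        retrieved, _ = self.attn(query=hidden, key=patch_feats, value=patch_feats)
        return retrieved


class LongTermSemanticMemory(nn.Module):
    """Consolidates visual evidence into a small set of abstract semantic slots."""

    def __init__(self, d_model: int, n_slots: int = 16, n_heads: int = 8):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)
        self.write = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.read = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def consolidate(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # Write phase: learned slots attend over patch features once per image.
        slots = self.slots.unsqueeze(0).expand(patch_feats.size(0), -1, -1)
        consolidated, _ = self.write(query=slots, key=patch_feats, value=patch_feats)
        return consolidated

    def forward(self, hidden: torch.Tensor, consolidated: torch.Tensor) -> torch.Tensor:
        # Read phase: decoder states query the consolidated semantic slots.
        retrieved, _ = self.read(query=hidden, key=consolidated, value=consolidated)
        return retrieved


class VisMemAdapter(nn.Module):
    """Gates short- and long-term retrievals back into the decoder stream."""

    def __init__(self, d_model: int):
        super().__init__()
        self.short = ShortTermVisualMemory(d_model)
        self.long = LongTermSemanticMemory(d_model)
        self.gate = nn.Linear(3 * d_model, d_model)

    def forward(self, hidden, patch_feats, consolidated):
        s = self.short(hidden, patch_feats)
        l = self.long(hidden, consolidated)
        return hidden + self.gate(torch.cat([hidden, s, l], dim=-1))


if __name__ == "__main__":
    B, T, P, D = 2, 5, 196, 768
    adapter = VisMemAdapter(D)
    patch_feats = torch.randn(B, P, D)            # fine-grained visual evidence
    consolidated = adapter.long.consolidate(patch_feats)
    hidden = torch.randn(B, T, D)                 # decoder states mid-generation
    out = adapter(hidden, patch_feats, consolidated)
    print(out.shape)                              # torch.Size([2, 5, 768])
```

The design choice sketched here (residual, gated fusion of both retrievals) is one plausible way to keep perceptual fidelity and semantic consistency simultaneously, in line with the abstract's description.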