

SQUARE: Semantic Query-Augmented Fusion and Efficient Batch Reranking for Training-free Zero-Shot Composed Image Retrieval

September 30, 2025
Authors: Ren-Di Wu, Yu-Yen Lin, Huei-Fang Yang
cs.AI

Abstract

Composed Image Retrieval (CIR) aims to retrieve target images that preserve the visual content of a reference image while incorporating user-specified textual modifications. Training-free zero-shot CIR (ZS-CIR) approaches, which require no task-specific training or labeled data, are highly desirable, yet accurately capturing user intent remains challenging. In this paper, we present SQUARE, a novel two-stage training-free framework that leverages Multimodal Large Language Models (MLLMs) to enhance ZS-CIR. In the Semantic Query-Augmented Fusion (SQAF) stage, we enrich the query embedding derived from a vision-language model (VLM) such as CLIP with MLLM-generated captions of the target image. These captions provide high-level semantic guidance, enabling the query to better capture the user's intent and improve global retrieval quality. In the Efficient Batch Reranking (EBR) stage, top-ranked candidates are presented as an image grid with visual marks to the MLLM, which performs joint visual-semantic reasoning across all candidates. Our reranking strategy operates in a single pass and yields more accurate rankings. Experiments show that SQUARE, with its simplicity and effectiveness, delivers strong performance on four standard CIR benchmarks. Notably, it maintains high performance even with lightweight pre-trained models, demonstrating its potential applicability.
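The SQAF stage described above can be illustrated with a minimal sketch: a composed query is formed by fusing the reference-image embedding, the modification-text embedding, and an MLLM-caption embedding, then matched against a gallery by cosine similarity. This is only an assumption-laden toy with random vectors standing in for CLIP features; the function names (`sqaf_query`, `retrieve`) and the linear weighting with parameters `alpha` and `beta` are hypothetical and not taken from the paper.

```python
import numpy as np

def normalize(v):
    """L2-normalize vectors along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def sqaf_query(img_emb, text_emb, caption_emb, alpha=0.6, beta=0.2):
    """Hypothetical semantic query-augmented fusion: a weighted sum of the
    reference-image, modification-text, and MLLM-caption embeddings.
    The actual fusion used by SQUARE may differ."""
    q = alpha * img_emb + (1.0 - alpha - beta) * text_emb + beta * caption_emb
    return normalize(q)

def retrieve(query, gallery, k=5):
    """Rank a gallery of pre-normalized embeddings by cosine similarity
    and return the indices of the top-k candidates (stage-1 retrieval;
    these would then be passed to the MLLM reranking stage)."""
    sims = gallery @ query
    return np.argsort(-sims)[:k]
```

In a real pipeline the placeholder vectors would come from a VLM encoder (e.g. CLIP image/text towers), and the top-k list would be rendered as a marked image grid for the single-pass MLLM reranking step.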