

Uncertainty-guided Compositional Alignment with Part-to-Whole Semantic Representativeness in Hyperbolic Vision-Language Models

March 23, 2026
Authors: Hayeon Kim, Ji Ha Jang, Junghun James Kim, Se Young Chun
cs.AI

Abstract

While Vision-Language Models (VLMs) have achieved remarkable performance, their Euclidean embeddings remain limited in capturing hierarchical relationships such as part-to-whole or parent-child structures, and often struggle in multi-object compositional scenarios. Hyperbolic VLMs mitigate this issue by better preserving hierarchical structure and modeling part-whole relations (i.e., between a whole scene and its part images) through entailment. However, existing approaches do not model the fact that each part has a different level of semantic representativeness with respect to the whole. We propose UNcertainty-guided Compositional Hyperbolic Alignment (UNCHA) to enhance hyperbolic VLMs. UNCHA models part-to-whole semantic representativeness with hyperbolic uncertainty, assigning lower uncertainty to parts that are more representative of the whole scene and higher uncertainty to less representative ones. This representativeness is then incorporated into the contrastive objective via uncertainty-guided weights. Finally, the uncertainty is further calibrated with an entailment loss regularized by an entropy-based term. With the proposed losses, UNCHA learns hyperbolic embeddings with more accurate part-whole ordering, capturing the underlying compositional structure of an image and improving understanding of complex multi-object scenes. UNCHA achieves state-of-the-art performance on zero-shot classification, retrieval, and multi-label classification benchmarks. Our code and models are available at: https://github.com/jeeit17/UNCHA.git.
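To make the two core ingredients concrete, below is a minimal PyTorch sketch of (1) hyperbolic embeddings in the Lorentz model and (2) a part-to-whole contrastive loss weighted by a representativeness score. It is a rough illustration based only on the abstract, not the authors' method: the curvature value, the distance-from-origin proxy for uncertainty, and all names (exp_map0, part_representativeness, etc.) are our assumptions; the actual implementation is in the linked repository.

```python
import torch
import torch.nn.functional as F

CURV = 1.0  # assumed curvature magnitude of the hyperbolic space

def exp_map0(v: torch.Tensor) -> torch.Tensor:
    """Lift Euclidean encoder features onto the Lorentz hyperboloid
    {x : <x, x>_L = -1/CURV} via the exponential map at the origin."""
    c = CURV ** 0.5
    vn = v.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    space = torch.sinh(c * vn) * v / (c * vn)
    time = torch.cosh(c * vn) / c
    return torch.cat([time, space], dim=-1)

def pairwise_lorentz_dist(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """All-pairs geodesic distances between two batches of hyperboloid points."""
    inner = x[:, 1:] @ y[:, 1:].T - x[:, :1] @ y[:, :1].T  # Lorentzian <x, y>_L
    arg = torch.clamp(-CURV * inner, min=1.0 + 1e-7)
    return torch.acosh(arg) / CURV ** 0.5

def part_representativeness(parts: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Assumed proxy for 'lower uncertainty = more representative': in
    hyperbolic embeddings, generic (uncertain) concepts tend to sit near the
    origin and specific ones far from it, so we score parts by their distance
    from the origin and normalize with a softmax."""
    d0 = torch.acosh(torch.clamp(CURV ** 0.5 * parts[:, 0], min=1.0 + 1e-7))
    return torch.softmax(d0 / tau, dim=0)

def uncertainty_weighted_contrastive(parts, wholes, weights, temp=0.1):
    """Part-to-whole InfoNCE over negative hyperbolic distances, with each
    part's term scaled by its representativeness weight (one plausible
    reading of the abstract's uncertainty-guided weighting)."""
    logits = -pairwise_lorentz_dist(parts, wholes) / temp
    targets = torch.arange(parts.shape[0])  # part i matches whole i
    per_part = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_part).sum()

# Toy usage: 4 part crops, each paired with its whole scene, 16-dim features.
parts = exp_map0(torch.randn(4, 16))
wholes = exp_map0(torch.randn(4, 16))
w = part_representativeness(parts)
loss = uncertainty_weighted_contrastive(parts, wholes, w)
```

The abstract's third ingredient, an entailment loss regularized by an entropy-based term that calibrates the uncertainty, would be added on top of this weighted contrastive objective; its exact form is not recoverable from the abstract alone.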