

Large Language Models Meet Extreme Multi-label Classification: Scaling and Multi-modal Framework

November 17, 2025
Authors: Diego Ortego, Marlon Rodríguez, Mario Almagro, Kunal Dahiya, David Jiménez, Juan C. SanMiguel
cs.AI

Abstract

Foundation models have revolutionized artificial intelligence across numerous domains, yet their transformative potential remains largely untapped in Extreme Multi-label Classification (XMC). Queries in XMC are associated with relevant labels from extremely large label spaces, where it is critical to strike a balance between efficiency and performance. Therefore, many recent approaches efficiently pose XMC as a maximum inner product search between embeddings learned from small encoder-only transformer architectures. In this paper, we address two important aspects of XMC: how to effectively harness larger decoder-only models, and how to exploit visual information while maintaining computational efficiency. We demonstrate that each plays a critical role in XMC on its own, and that they can be combined for improved performance. We show that a decoder with a few billion parameters can deliver substantial improvements while keeping computational overhead manageable. Furthermore, our Vision-enhanced eXtreme Multi-label Learning framework (ViXML) efficiently integrates foundation vision models by pooling a single embedding per image. This limits computational growth while unlocking multi-modal capabilities. Remarkably, ViXML with small encoders outperforms text-only decoders in most cases, showing that an image is worth billions of parameters. Finally, we present an extension of existing text-only datasets to exploit visual metadata and make them available for future benchmarking. Comprehensive experiments across four public text-only datasets and their corresponding image-enhanced versions validate our proposals' effectiveness, surpassing the previous state of the art by up to +8.21% in P@1 on the largest dataset. ViXML's code is available at https://github.com/DiegoOrtego/vixml.
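The abstract frames XMC as a maximum inner product search (MIPS) over label embeddings, with a single pooled image embedding optionally fused into the query representation. The sketch below illustrates that retrieval view only; it is not the authors' ViXML implementation, and the additive fusion step, the embedding dimensionality, and all function names are illustrative assumptions.

```python
# Minimal sketch of the retrieval view of XMC described in the abstract:
# queries and labels are embedded, and prediction is a brute-force maximum
# inner product search (MIPS) over the label embeddings. The pooled image
# embedding and the additive fusion are assumptions for illustration, not
# the ViXML architecture.
import numpy as np


def l2_normalize(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Normalize embeddings so inner product behaves like cosine similarity."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)


def fuse_query(text_emb: np.ndarray, image_emb: np.ndarray | None) -> np.ndarray:
    """Combine a text embedding with one pooled image embedding (hypothetical fusion)."""
    if image_emb is None:
        return l2_normalize(text_emb)
    return l2_normalize(text_emb + image_emb)  # simple additive fusion, an assumption


def top_k_labels(query_emb: np.ndarray, label_embs: np.ndarray, k: int = 5) -> np.ndarray:
    """Rank all labels by inner product with the query (brute-force MIPS)."""
    scores = label_embs @ query_emb
    return np.argsort(-scores)[:k]


# Toy usage: random vectors stand in for encoder / vision-model outputs.
rng = np.random.default_rng(0)
dim, num_labels = 64, 10_000
label_embs = l2_normalize(rng.normal(size=(num_labels, dim)))
text_emb = rng.normal(size=dim)
image_emb = rng.normal(size=dim)  # one pooled embedding per image
query = fuse_query(text_emb, image_emb)
print(top_k_labels(query, label_embs, k=5))
```

In practice, exhaustive scoring over an extremely large label space would be replaced by an approximate nearest-neighbor index; the brute-force ranking above is only meant to make the MIPS formulation concrete.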