Learning to Inference Adaptively for Multimodal Large Language Models
March 13, 2025
Authors: Zhuoyan Xu, Khoi Duc Nguyen, Preeti Mukherjee, Saurabh Bagchi, Somali Chaterji, Yingyu Liang, Yin Li
cs.AI
Abstract
Multimodal Large Language Models (MLLMs) have shown impressive capabilities
in reasoning, yet come with substantial computational cost, limiting their
deployment in resource-constrained settings. Despite recent efforts on
improving the efficiency of MLLMs, prior solutions fall short in responding to
varying runtime conditions, in particular changing resource availability (e.g.,
contention due to the execution of other programs on the device). To bridge
this gap, we introduce AdaLLaVA, an adaptive inference framework that learns to
dynamically reconfigure operations in an MLLM during inference, accounting for
the input data and a latency budget. We conduct extensive experiments across
benchmarks involving question-answering, reasoning, and hallucination. Our
results show that AdaLLaVA effectively adheres to the input latency budget,
achieving varying accuracy and latency tradeoffs at runtime. Further, we
demonstrate that AdaLLaVA adapts to both input latency and content, can be
integrated with token selection for enhanced efficiency, and generalizes across
MLLMs. Our project webpage with code release is at
https://zhuoyan-xu.github.io/ada-llava/.
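
To make the kind of adaptivity described above concrete, the snippet below is a minimal, hypothetical sketch of a latency-budget-conditioned scheduler in PyTorch. The class name `LatencyScheduler`, the MLP scorer, and the greedy keep-within-budget rule are illustrative assumptions for exposition, not the AdaLLaVA implementation; the abstract only states that operations in the MLLM are reconfigured at inference time based on the input content and a latency budget.

```python
# Hypothetical sketch: select which decoder layers to run given a latency budget.
# Names, architecture, and the greedy selection rule are assumptions, not the
# authors' method.
import torch
import torch.nn as nn


class LatencyScheduler(nn.Module):
    """Scores each decoder layer from a pooled input embedding and a latency
    budget, then greedily keeps the highest-scoring layers that fit the budget."""

    def __init__(self, hidden_dim: int, num_layers: int):
        super().__init__()
        # Small MLP mapping (pooled input features, normalized budget) to one
        # importance score per candidate layer.
        self.scorer = nn.Sequential(
            nn.Linear(hidden_dim + 1, 256),
            nn.ReLU(),
            nn.Linear(256, num_layers),
        )

    def forward(self, pooled_input: torch.Tensor, budget: float,
                layer_costs: torch.Tensor) -> torch.Tensor:
        # pooled_input: (hidden_dim,) summary of the multimodal prompt
        # budget: latency budget, in the same units as layer_costs (e.g., ms)
        # layer_costs: (num_layers,) measured per-layer latency
        norm_budget = torch.tensor([budget / float(layer_costs.sum())])
        scores = self.scorer(torch.cat([pooled_input, norm_budget]))
        # Greedy selection: keep layers in descending score order while the
        # accumulated latency stays within the budget.
        mask = torch.zeros(layer_costs.shape[0], dtype=torch.bool)
        spent = 0.0
        for idx in torch.argsort(scores, descending=True):
            if spent + float(layer_costs[idx]) <= budget:
                mask[idx] = True
                spent += float(layer_costs[idx])
        return mask  # which layers to execute for this request


if __name__ == "__main__":
    sched = LatencyScheduler(hidden_dim=64, num_layers=32)
    keep = sched(torch.randn(64), budget=120.0, layer_costs=torch.full((32,), 5.0))
    print("layers kept:", int(keep.sum()))
```

Under this assumed design, tightening the budget shrinks the set of executed layers while the scorer decides, per input, which layers matter most, which mirrors the accuracy/latency tradeoff the abstract describes.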