Efficiency-Effectiveness Reranking FLOPs for LLM-based Rerankers
July 8, 2025
Authors: Zhiyuan Peng, Ting-ruen Wei, Tingyu Song, Yilun Zhao, Yi Fang
cs.AI
Abstract
Large Language Models (LLMs) have recently been applied to reranking tasks in
information retrieval, achieving strong performance. However, their high
computational demands often hinder practical deployment. Existing studies
evaluate the efficiency of LLM-based rerankers using proxy metrics such as
latency, the number of forward passes, input tokens, and output tokens.
However, these metrics depend on hardware and runtime choices (e.g.,
degree of parallelism, batch size) and often fail to account for model size,
making them difficult to interpret and obscuring the evaluation of the
efficiency-effectiveness trade-off. To address this issue, we propose
E2R-FLOPs for LLM-based rerankers: ranking metrics per
PetaFLOP (RPP), which measures relevance per unit of compute, and queries per
PetaFLOP (QPP), a hardware-agnostic measure of throughput. Alongside the new
metrics, we build an interpretable FLOPs estimator that can estimate the FLOPs
of an LLM-based reranker without running any experiments. Using the proposed
metrics, we conduct comprehensive experiments to evaluate a wide range of
LLM-based rerankers with different architectures, study the
efficiency-effectiveness trade-off, and bring this issue to the attention of
the research community.
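To make the metric definitions concrete, the sketch below computes RPP and QPP as the abstract defines them (a ranking metric, and a query count, each divided by total PetaFLOPs). The paper's actual FLOPs estimator is not specified here, so the common `2 * params * tokens` forward-pass approximation and all numbers in the example are illustrative assumptions, not the authors' method.

```python
def forward_flops(num_params: float, num_tokens: int) -> float:
    """Rough FLOPs for one transformer forward pass: ~2 * params * tokens.
    This is a standard back-of-the-envelope estimate, assumed here for
    illustration; it is not the paper's estimator."""
    return 2.0 * num_params * num_tokens

def rpp(ranking_metric: float, total_flops: float) -> float:
    """Ranking metric (e.g., NDCG@10) per PetaFLOP."""
    return ranking_metric / (total_flops / 1e15)

def qpp(num_queries: int, total_flops: float) -> float:
    """Queries per PetaFLOP: a hardware-agnostic throughput measure."""
    return num_queries / (total_flops / 1e15)

# Hypothetical example: a 7B-parameter pointwise reranker scoring
# 100 passages per query at ~512 input tokens each, over 1,000 queries.
total = 1000 * 100 * forward_flops(7e9, 512)
relevance_per_compute = rpp(0.70, total)   # assumed NDCG@10 of 0.70
throughput = qpp(1000, total)
```

Because both metrics are normalized by FLOPs rather than wall-clock time, two rerankers can be compared on the same axes regardless of the GPU, batch size, or parallelism used to run them.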