
NeedleBench: Can LLMs Do Retrieval and Reasoning in 1 Million Context Window?

July 16, 2024
Authors: Mo Li, Songyang Zhang, Yunxin Liu, Kai Chen
cs.AI

Abstract

In evaluating the long-context capabilities of large language models (LLMs), identifying content relevant to a user's query from original long documents is a crucial prerequisite for any LLM to answer questions based on long text. We present NeedleBench, a framework consisting of a series of progressively more challenging tasks for assessing bilingual long-context capabilities, spanning multiple length intervals (4k, 8k, 32k, 128k, 200k, 1000k, and beyond) and different depth ranges, allowing the strategic insertion of critical data points in different text depth zones to rigorously test the retrieval and reasoning capabilities of models in diverse contexts. We use the NeedleBench framework to assess how well the leading open-source models can identify key information relevant to the question and apply that information to reasoning in bilingual long texts. Furthermore, we propose the Ancestral Trace Challenge (ATC) to mimic the complexity of logical reasoning challenges that are likely to be present in real-world long-context tasks, providing a simple method for evaluating LLMs in dealing with complex long-context situations. Our results suggest that current LLMs have significant room for improvement in practical long-context applications, as they struggle with the complexity of logical reasoning challenges that are likely to be present in real-world long-context tasks. All codes and resources are available at OpenCompass: https://github.com/open-compass/opencompass.
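To make the benchmark construction concrete, the sketch below shows how a NeedleBench-style test case might be assembled: a "needle" fact is inserted at a chosen depth of filler text for the retrieval tasks, and a chain of kinship facts is generated for an ATC-style multi-hop reasoning case. This is a minimal illustration, not the OpenCompass implementation; the function names (`build_needle_case`, `build_atc_case`) and the example facts are hypothetical.

```python
def build_needle_case(haystack: str, needle: str, question: str, depth_percent: float) -> str:
    """Insert `needle` at roughly `depth_percent` (0-100) of the haystack and
    append a retrieval question that can only be answered from the needle."""
    insert_at = int(len(haystack) * depth_percent / 100)
    # Snap to the nearest preceding sentence boundary so the needle stays readable.
    boundary = max(haystack.rfind(". ", 0, insert_at), haystack.rfind("。", 0, insert_at))
    if boundary != -1:
        insert_at = boundary + 1
    context = haystack[:insert_at] + " " + needle + " " + haystack[insert_at:]
    return f"{context}\n\nQuestion: {question}"


def build_atc_case(names: list[str]) -> tuple[str, str]:
    """Chain kinship facts so the answer requires following every link in order;
    longer chains mean more reasoning hops (the Ancestral Trace Challenge idea)."""
    facts = [f"{parent} is the parent of {child}." for parent, child in zip(names, names[1:])]
    question = f"Who is the earliest ancestor of {names[-1]} mentioned in the text?"
    return " ".join(facts), question


if __name__ == "__main__":
    # Single-needle retrieval case: the needle is placed at 25% depth of a filler document.
    prompt = build_needle_case(
        haystack="The quick brown fox jumps over the lazy dog. " * 2000,  # stand-in filler text
        needle="The hidden passcode for the archive is 7421.",
        question="What is the hidden passcode for the archive?",
        depth_percent=25.0,
    )

    # Multi-hop reasoning case: a chain of five names gives four parent links to follow.
    context, q = build_atc_case(["Ava", "Ben", "Cara", "Dan", "Eve"])
    print(q)  # -> Who is the earliest ancestor of Eve mentioned in the text?
```

Sweeping `depth_percent` across the document and repeating the construction at each target length (4k, 8k, 32k, 128k, 200k, 1000k tokens and beyond) would reproduce the depth-zone and length-interval grid described in the abstract, while lengthening the name chain in the ATC case increases the number of reasoning hops.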
