Hyper-multi-step: The Truth Behind Difficult Long-context Tasks
October 6, 2024
Author: Yijiong Yu
cs.AI
Abstract
Long-context language models (LCLMs), characterized by their extensive context
windows, are becoming increasingly popular. Meanwhile, many long-context
benchmarks present challenging tasks that even the most advanced LCLMs struggle
to complete. However, the underlying sources of various challenging
long-context tasks have seldom been studied. To bridge this gap, we conduct
experiments indicating that their difficulty stems primarily from two basic issues:
"multi-matching retrieval," which requires the simultaneous retrieval of
multiple items, and "logic-based retrieval," which necessitates logical
judgment within retrieval criteria. These two problems, while seemingly
straightforward, actually exceed the capabilities of LCLMs because they are
proven to be hyper-multi-step (demanding numerous steps to solve) in nature.
This finding could explain why LLMs struggle with more advanced long-context
tasks, providing a more accurate perspective for rethinking solutions for them.
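To make the two task types concrete, the following is a minimal, purely illustrative sketch, not taken from the paper's benchmarks: the `context` dictionary, the target value, and the compound condition are all hypothetical stand-ins for a long document. It shows why a single lookup is one step, while multi-matching or logic-based retrieval implicitly requires checking every entry.

```python
# Hypothetical toy instances of the two retrieval problems named in the abstract.
# The synthetic "context" stands in for a long document of key-value facts.
context = {f"key_{i}": (i * 7) % 100 for i in range(1000)}

def multi_matching(ctx: dict, target_value: int) -> list[str]:
    # Multi-matching retrieval: return *every* key whose value equals the target.
    # Finding one match is a single step; collecting all matches forces a scan
    # over every entry, which is the "hyper-multi-step" character described above.
    return [k for k, v in ctx.items() if v == target_value]

def logic_based(ctx: dict, low: int, high: int) -> list[str]:
    # Logic-based retrieval: the criterion is a logical judgment (a compound
    # condition) rather than an exact match, so each entry must be evaluated.
    return [k for k, v in ctx.items() if low < v < high and v % 2 == 0]

if __name__ == "__main__":
    print(multi_matching(context, 42)[:5])
    print(logic_based(context, 10, 20)[:5])
```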