

FlashPrefill: Instantaneous Pattern Discovery and Thresholding for Ultra-Fast Long-Context Prefilling

March 6, 2026
作者: Qihang Fan, Huaibo Huang, Zhiying Wu, Juqiu Wang, Bingning Wang, Ran He
cs.AI

Abstract

Long-context modeling is a pivotal capability for Large Language Models, yet the quadratic complexity of attention remains a critical bottleneck, particularly during the compute-intensive prefilling phase. While various sparse attention mechanisms have been explored, they typically suffer from either significant search latency or insufficient sparsity. In this paper, we propose FlashPrefill, a framework enabling ultra-fast prefilling via instantaneous pattern discovery and thresholding. FlashPrefill leverages a fast block-searching technique to simultaneously locate dynamic vertical, slash, and block-sparse attention patterns. Crucially, it introduces a dynamic thresholding mechanism that bypasses the prohibitive overhead of sorting or accumulating attention scores while effectively eliminating the long-tail distribution to enhance sparsity. Extensive evaluations demonstrate that FlashPrefill achieves a substantial leap in efficiency, delivering an unprecedented 27.78x speedup on 256K sequences. Notably, unlike existing methods that incur efficiency degradation on shorter contexts, FlashPrefill maintains a 1.71x speedup even at a 4K context length, demonstrating its robustness and practical utility across varying sequence scales.
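The abstract only sketches the two core ideas at a high level: block-level pattern discovery and a dynamic threshold that avoids sorting or accumulating attention scores. The snippet below is a minimal illustrative sketch, not the authors' implementation; it assumes mean-pooled block representations as the cheap block-scoring proxy and a max-relative cutoff as the dynamic threshold, and all names (`block_size`, `keep_ratio_of_max`) are hypothetical.

```python
# Minimal sketch of block-level pattern discovery plus a dynamic threshold.
# Assumption: blocks are scored via mean-pooled queries/keys, and a per-row
# max-relative cutoff replaces sorting/accumulating full attention scores.
import numpy as np

def approx_block_scores(q, k, block_size=64):
    """Pool queries and keys into blocks, then score every (q-block, k-block) pair."""
    n, d = q.shape
    nb = n // block_size
    q_blk = q[: nb * block_size].reshape(nb, block_size, d).mean(axis=1)  # (nb, d)
    k_blk = k[: nb * block_size].reshape(nb, block_size, d).mean(axis=1)  # (nb, d)
    return q_blk @ k_blk.T / np.sqrt(d)                                   # (nb, nb)

def dynamic_threshold_mask(scores, keep_ratio_of_max=0.6):
    """Keep key blocks whose softmax weight is within a fixed ratio of the row max.

    exp(s) / exp(s_max) >= r  <=>  s >= s_max - log(1 / r), so one max and one
    comparison per row suffice -- no sorting or cumulative-sum pass is needed.
    """
    row_max = scores.max(axis=-1, keepdims=True)
    return scores >= row_max - np.log(1.0 / keep_ratio_of_max)

rng = np.random.default_rng(0)
q = rng.standard_normal((4096, 128)).astype(np.float32)
k = rng.standard_normal((4096, 128)).astype(np.float32)
scores = approx_block_scores(q, k)
mask = dynamic_threshold_mask(scores)
print("kept block fraction:", mask.mean())  # sparsity induced by the threshold
```

Under this reading, the threshold trims the long tail of low-scoring blocks in a single pass; how FlashPrefill actually sets the cutoff and handles vertical/slash patterns is detailed in the paper itself.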