Untied Ulysses: Memory-Efficient Context Parallelism via Headwise Chunking
February 24, 2026
Authors: Ravi Ghadia, Maksim Abraham, Sergei Vorobyov, Max Ryabinin
cs.AI
Abstract
Efficiently processing long sequences with Transformer models usually requires splitting the computations across accelerators via context parallelism. The dominant approaches in this family of methods, such as Ring Attention or DeepSpeed Ulysses, enable scaling over the context dimension but do not focus on memory efficiency, which limits the sequence lengths they can support. More advanced techniques, such as Fully Pipelined Distributed Transformer or activation offloading, can further extend the possible context length at the cost of training throughput. In this paper, we present UPipe, a simple yet effective context parallelism technique that performs fine-grained chunking at the attention head level. This technique significantly reduces the activation memory usage of self-attention, breaking the activation memory barrier and unlocking much longer context lengths. Our approach reduces intermediate tensor memory usage in the attention layer by as much as 87.5% for 32B Transformers, while matching previous context parallelism techniques in terms of training speed. UPipe can support a context length of 5M tokens when training Llama3-8B on a single 8×H100 node, improving upon prior methods by over 25%.
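To make the headwise-chunking idea concrete, the sketch below shows a minimal, single-device illustration of the intuition: attention is computed for one small chunk of heads at a time, so the large intermediate score tensors are only materialized for that chunk rather than for all heads at once. This is not the authors' UPipe implementation (which is a context-parallel technique that also distributes the sequence across devices), and the function name `headwise_chunked_attention` and its signature are hypothetical; the example only demonstrates the forward-pass memory effect of chunking over heads.

```python
# Minimal sketch of chunking attention over heads (illustrative only,
# not the UPipe algorithm; see lead-in for assumptions).
import torch


def headwise_chunked_attention(q, k, v, chunk_heads=1):
    # q, k, v: [batch, num_heads, seq_len, head_dim]
    batch, num_heads, seq_len, head_dim = q.shape
    scale = head_dim ** -0.5
    outputs = []
    for start in range(0, num_heads, chunk_heads):
        end = min(start + chunk_heads, num_heads)
        # Scores are materialized only for this head chunk:
        # [batch, chunk_heads, seq_len, seq_len] instead of all heads at once.
        scores = torch.matmul(q[:, start:end], k[:, start:end].transpose(-2, -1)) * scale
        probs = scores.softmax(dim=-1)
        outputs.append(torch.matmul(probs, v[:, start:end]))
    # Reassemble the per-chunk outputs along the head dimension.
    return torch.cat(outputs, dim=1)  # [batch, num_heads, seq_len, head_dim]


# Example: 8 heads processed one at a time, so the peak score-tensor
# footprint in the loop is 8x smaller than computing all heads jointly.
q = k = v = torch.randn(1, 8, 1024, 64)
out = headwise_chunked_attention(q, k, v, chunk_heads=1)
print(out.shape)  # torch.Size([1, 8, 1024, 64])
```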