RDMA Point-to-Point Communication for LLM Systems
October 31, 2025
Authors: Nandor Licker, Kevin Hu, Vladimir Zaytsev, Lequn Chen
cs.AI
Abstract
Emerging Large Language Model (LLM) system patterns, such as disaggregated
inference, Mixture-of-Experts (MoE) routing, and asynchronous reinforcement
fine-tuning, require flexible point-to-point communication beyond simple
collectives. Existing implementations are locked to specific Network Interface
Controllers (NICs), hindering integration into inference engines and
portability across hardware providers. We present TransferEngine, which bridges
the functionality of common NICs to expose a uniform interface. TransferEngine
exposes one-sided WriteImm operations with an ImmCounter primitive for
completion notification, makes no ordering assumptions about the underlying
network transport, and transparently manages multiple NICs per GPU. We
demonstrate peak throughput of
400 Gbps on both NVIDIA ConnectX-7 and AWS Elastic Fabric Adapter (EFA). We
showcase TransferEngine through three production systems: (1) KvCache transfer
for disaggregated inference with dynamic scaling, (2) RL weight updates
completing in 1.3 seconds for trillion-parameter models, and (3) a MoE
dispatch/combine implementation that beats DeepEP decode latency on
ConnectX-7 and achieves the first viable latencies on EFA. We demonstrate
that our portable point-to-point communication complements collectives while
avoiding vendor lock-in.
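To make the abstract's interface concrete, the sketch below shows one plausible shape for it in Rust. Every name and signature here (TransferEngine, write_imm, ImmCounter, the per-NIC sharding) is our own illustrative assumption under the abstract's description, not the paper's actual API; the in-process "NIC" stands in for a real RDMA queue pair, and the sender bumps the counter locally where a real receiver would observe arriving immediate data.

```rust
// Hypothetical sketch of the interface described in the abstract.
// Names and signatures are illustrative assumptions, not the real API.

use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

/// Completion primitive: the consumer polls a counter that is bumped once
/// per arrived WriteImm, so no ordering is assumed from the transport.
pub struct ImmCounter(AtomicU64);

impl ImmCounter {
    pub fn new() -> Arc<Self> {
        Arc::new(ImmCounter(AtomicU64::new(0)))
    }

    /// Called when a write with immediate data lands.
    fn bump(&self) {
        self.0.fetch_add(1, Ordering::Release);
    }

    /// Spin until `expected` writes have completed, in any arrival order.
    pub fn wait(&self, expected: u64) {
        while self.0.load(Ordering::Acquire) < expected {
            std::hint::spin_loop();
        }
    }
}

/// One NIC's view; a real engine would wrap an RDMA queue pair here.
struct Nic {
    id: usize,
}

/// Engine that transparently shards a payload across several NICs per GPU.
pub struct TransferEngine {
    nics: Vec<Nic>,
}

impl TransferEngine {
    pub fn new(num_nics: usize) -> Self {
        TransferEngine {
            nics: (0..num_nics).map(|id| Nic { id }).collect(),
        }
    }

    /// One-sided write with immediate data: split `payload` across NICs
    /// and bump the counter once per slice. Returns the number of slices
    /// posted, which the caller passes to `ImmCounter::wait`.
    pub fn write_imm(&self, payload: &[u8], counter: &Arc<ImmCounter>) -> u64 {
        let chunk = payload.len().div_ceil(self.nics.len()).max(1);
        let mut posted = 0;
        for (nic, slice) in self.nics.iter().zip(payload.chunks(chunk)) {
            // Stand-in for posting an RDMA WRITE_WITH_IMM on this NIC.
            println!("nic {}: write {} bytes", nic.id, slice.len());
            counter.bump();
            posted += 1;
        }
        posted
    }
}

fn main() {
    let engine = TransferEngine::new(4); // e.g. four NICs serving one GPU
    let counter = ImmCounter::new();
    let kv_block = vec![0u8; 1 << 20]; // say, a 1 MiB KvCache page
    let expected = engine.write_imm(&kv_block, &counter);
    counter.wait(expected); // completion by count, not by arrival order
    println!("all {} slices delivered", expected);
}
```

Counting completions rather than matching them to individual sends is what lets such an interface stay agnostic to transport ordering: the waiter only needs to know how many writes to expect, which fits both ConnectX-7 and EFA as the abstract requires.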