DeepFlow: Serverless Large Language Model Serving at Scale
January 24, 2025
Authors: Junhao Hu, Jiang Xu, Zhixia Liu, Yulong He, Yuetao Chen, Hao Xu, Jiang Liu, Baoquan Zhang, Shining Wan, Gengyuan Dan, Zhiyu Dong, Zhihao Ren, Jie Meng, Chao He, Changhong Liu, Tao Xie, Dayun Lin, Qin Zhang, Yue Yu, Hao Feng, Xusheng Chen, Yizhou Shan
cs.AI
Abstract
This paper introduces DeepFlow, a scalable and serverless AI platform
designed to efficiently serve large language models (LLMs) at scale in cloud
environments. DeepFlow addresses key challenges such as resource allocation,
serving efficiency, and cold start latencies through four main design
components. First, it uses a simple serverless abstraction called the
request-job-task model, which helps manage AI workloads across post-training
and model serving tasks. Second, it builds an in-house serving engine FlowServe
using a microkernel-inspired design, NPU-centric execution, and SPMD-based
parallelism to optimize LLM serving. The system also includes novel scheduling
policies tailored for both PD-disaggregated and PD-colocated configurations.
With optimizations like pre-warmed pods, DRAM pre-loading, and NPU-fork,
DeepFlow can scale up to 64 instances in seconds. DeepFlow has been in
production for over a year, operating on a large Ascend NPU cluster and
providing industry-standard APIs for fine-tuning, agent serving, and model
serving to our customers.
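To make the request-job-task abstraction concrete, here is a minimal sketch of how a user-facing request could expand into jobs and SPMD-style tasks. All class and field names are hypothetical illustrations based only on the abstract, not DeepFlow's actual API.

```python
# Hypothetical sketch of a request-job-task hierarchy: a Request fans out
# into Jobs (e.g., a fine-tuning run or a serving instance), and each Job
# fans out into Tasks, the smallest schedulable units pinned to NPU ranks.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class JobKind(Enum):
    FINE_TUNE = "fine-tune"      # post-training workload
    MODEL_SERVING = "serving"    # online LLM inference
    AGENT = "agent"              # agent serving


@dataclass
class Task:
    """Smallest schedulable unit, pinned to one NPU worker."""
    task_id: str
    npu_rank: int                # which NPU rank runs this task


@dataclass
class Job:
    """A unit of work, e.g., one fine-tuning run or one serving instance."""
    job_id: str
    kind: JobKind
    tasks: List[Task] = field(default_factory=list)


@dataclass
class Request:
    """A user-facing API call; the platform expands it into jobs and tasks."""
    request_id: str
    model: str
    jobs: List[Job] = field(default_factory=list)


def expand(request: Request, world_size: int) -> Request:
    # One serving job whose tasks cover every NPU rank; in SPMD style,
    # each rank runs the same program on its shard of the model.
    job = Job(job_id=f"{request.request_id}-job0", kind=JobKind.MODEL_SERVING)
    job.tasks = [Task(task_id=f"{job.job_id}-t{r}", npu_rank=r)
                 for r in range(world_size)]
    request.jobs.append(job)
    return request
```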
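The abstract also distinguishes PD-disaggregated from PD-colocated configurations. The sketch below illustrates the routing difference only: disaggregated setups send prefill and decode to separate instance pools (with the KV cache handed off between them), while colocated setups run both phases on the same instances. The pool layout and function names are assumptions, not the paper's scheduling policies.

```python
# Illustrative routing under PD-disaggregated vs. PD-colocated serving.
from dataclasses import dataclass


@dataclass
class Instance:
    name: str
    role: str  # "prefill", "decode", or "mixed"


def route(phase: str, disaggregated: bool,
          pools: dict[str, list[Instance]]) -> Instance:
    """Pick an instance for a request phase under either configuration."""
    if disaggregated:
        # Dedicated pools per phase; the KV cache produced by prefill
        # must be transferred to the chosen decode instance.
        return pools[phase][0]
    # Colocated: every instance handles both phases, so the scheduler
    # trades prefill batching against decode latency on the same NPUs.
    return pools["mixed"][0]


pools = {
    "prefill": [Instance("p0", "prefill")],
    "decode": [Instance("d0", "decode")],
    "mixed": [Instance("m0", "mixed")],
}
print(route("prefill", disaggregated=True, pools=pools).name)   # -> p0
print(route("decode", disaggregated=False, pools=pools).name)   # -> m0
```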
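Finally, the cold-start optimizations (pre-warmed pods, DRAM pre-loading, NPU-fork) can be read as a tiering of startup paths, each cheaper than fetching weights from remote storage. The tiers, cost figures, and names below are assumptions for illustration; the paper's actual mechanisms, including how NPU-fork duplicates a running instance, are not specified here.

```python
# Hypothetical tiering of instance startup paths during scale-out.
from enum import Enum, auto


class PodState(Enum):
    COLD = auto()          # weights must be pulled from remote storage
    DRAM_LOADED = auto()   # weights pre-loaded into host DRAM
    PRE_WARMED = auto()    # weights already resident on the NPU


# Rough, illustrative startup costs per path (seconds); real numbers
# depend on model size and interconnect bandwidth.
STARTUP_COST = {
    PodState.PRE_WARMED: 1,    # attach and serve immediately
    PodState.DRAM_LOADED: 5,   # host-to-NPU weight copy
    PodState.COLD: 60,         # remote fetch, then load
}


def pick_pods(pool: list[PodState], needed: int) -> list[PodState]:
    """Choose the `needed` pods with the cheapest startup paths.

    An NPU-fork-style fast path (duplicating a running instance's NPU
    state onto peers) would behave like PRE_WARMED for the forked copies.
    """
    return sorted(pool, key=lambda s: STARTUP_COST[s])[:needed]


if __name__ == "__main__":
    pool = ([PodState.COLD] * 50 + [PodState.DRAM_LOADED] * 10
            + [PodState.PRE_WARMED] * 4)
    chosen = pick_pods(pool, needed=8)
    total = sum(STARTUP_COST[s] for s in chosen)
    print(f"paths: {[s.name for s in chosen]}, est. startup ~{total}s")
```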