By DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L. Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, S. S. Li, Shuang Zhou, Shaoqing Wu, Shengfeng Ye, Tao Yun, Tian Pei, Tianyu Sun, T. Wang, Wangding Zeng, Wanjia Zhao, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, W. L. Xiao, Wei An, Xiaodong Liu, Xiaohan Wang, Xiaokang Chen, Xiaotao Nie, Xin Cheng, Xin Liu, Xin Xie, Xingchao Liu, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, X. Q. Li, Xiangyue Jin, Xiaojin Shen, Xiaosha Chen, Xiaowen Sun, Xiaoxiang Wang, Xinnan Song, Xinyi Zhou, Xianzu Wang, Xinxia Shan, Y. K. Li, Y. Q. Wang, Y. X. Wei, Yang Zhang, Yanhong Xu, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Wang, Yi Yu, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yuan Ou, Yuduan Wang, Yue Gong, Yuheng Zou, Yujia He, Yunfan Xiong, Yuxiang Luo, Yuxiang You, Yuxuan Liu, Yuyang Zhou, Y. X. Zhu, Yanhong Xu, Yanping Huang, Yaohui Li, Yi Zheng, Yuchen Zhu, Yunxian Ma, Ying Tang, Yukun Zha, Yuting Yan, Z. Z. Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhicheng Ma, Zhigang Yan, Zhiyu Wu, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, Ziwei Xie, Ziyang Song, Zizheng Pan, Zhen Huang, Zhipeng Xu, Zhongyu Zhang, Zhen Zhang
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and
DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement
learning (RL) without supervised fine-tuning (SFT) as a preliminary step,
demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero
naturally emerges with numerous powerful and intriguing reasoning behaviors.
However, it encounters challenges such as poor readability and language
mixing. To address these issues and further enhance reasoning performance, we
introduce DeepSeek-R1, which incorporates multi-stage training and cold-start
data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217
on reasoning tasks. To support the research community, we open-source
DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B,
70B) distilled from DeepSeek-R1 based on Qwen and Llama.
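The distillation step can be pictured as ordinary supervised fine-tuning on reasoning traces sampled from the larger model. Below is a minimal sketch under that assumption; `teacher_generate` and `sft_train` are hypothetical helpers standing in for whatever generation and SFT tooling is used, not DeepSeek's actual pipeline.

```python
# Hypothetical helpers: `teacher_generate` samples a long reasoning trace from the
# large reasoning model; `sft_train` runs plain supervised fine-tuning on the student.
def teacher_generate(prompt: str) -> str:
    raise NotImplementedError

def sft_train(student_model, examples: list[dict]) -> None:
    raise NotImplementedError

def distill_reasoner(prompts: list[str], student_model) -> None:
    """Build an SFT corpus from teacher reasoning traces, then fine-tune the student."""
    corpus = []
    for prompt in prompts:
        trace = teacher_generate(prompt)              # chain of thought + final answer
        corpus.append({"prompt": prompt, "response": trace})
    sft_train(student_model, corpus)                  # the student sees no RL, only SFT
```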
Language model pretraining with next token prediction has proved effective
for scaling compute but is limited by the amount of available training data.
Scaling reinforcement learning (RL) unlocks a new axis for the continued
improvement of artificial intelligence, with the promise that large language
models (LLMs) can scale their training data by learning to explore with
rewards. However, prior published work has not produced competitive results. In
light of this, we report on the training practice of Kimi k1.5, our latest
multi-modal LLM trained with RL, including its RL training techniques,
multi-modal data recipes, and infrastructure optimization. Long context scaling
and improved policy optimization methods are key ingredients of our approach,
which establishes a simple yet effective RL framework without relying on more
complex techniques such as Monte Carlo tree search, value functions, and
process reward models. Notably, our system achieves state-of-the-art reasoning
performance across multiple benchmarks and modalities -- e.g., 77.5 on AIME,
96.2 on MATH 500, 94th percentile on Codeforces, 74.9 on MathVista -- matching
OpenAI's o1. Moreover, we present effective long2short methods that use
long-CoT techniques to improve short-CoT models, yielding state-of-the-art
short-CoT reasoning results -- e.g., 60.8 on AIME, 94.6 on MATH 500, 47.3 on
LiveCodeBench -- outperforming existing short-CoT models such as GPT-4o and
Claude Sonnet 3.5 by a large margin (up to +550%).
By Boqiang Zhang, Kehan Li, Zesen Cheng, Zhiqiang Hu, Yuqian Yuan, Guanzheng Chen, Sicong Leng, Yuming Jiang, Hang Zhang, Xin Li, Peng Jin, Wenqi Zhang, Fan Wang, Lidong Bing, Deli Zhao
In this paper, we propose VideoLLaMA3, a more advanced multimodal foundation
model for image and video understanding. The core design philosophy of
VideoLLaMA3 is vision-centric. The meaning of "vision-centric" is two-fold: the
vision-centric training paradigm and vision-centric framework design. The key
insight of our vision-centric training paradigm is that high-quality image-text
data is crucial for both image and video understanding. Instead of preparing
massive video-text datasets, we focus on constructing large-scale and
high-quality image-text datasets. VideoLLaMA3 has four training stages: 1)
vision-centric alignment stage, which warms up the vision encoder and
projector; 2) vision-language pretraining stage, which jointly tunes the vision
encoder, projector, and LLM with large-scale image-text data covering multiple
types (including scene images, documents, and charts) as well as text-only data; 3)
multi-task fine-tuning stage, which incorporates image-text SFT data for
downstream tasks and video-text data to establish a foundation for video
understanding; and 4) video-centric fine-tuning stage, which further improves the model's
capability in video understanding. As for the framework design, to better
capture fine-grained details in images, the pretrained vision encoder is
adapted to encode images of varying sizes into a correspondingly varying number
of vision tokens, rather than a fixed number. For video inputs, we reduce the
number of vision tokens according to their similarity so that the
representation of videos becomes more precise and compact. Benefiting from these
vision-centric designs, VideoLLaMA3 achieves compelling performance on both
image and video understanding benchmarks.
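The similarity-based token reduction for video can be pictured with the sketch below; it assumes cosine similarity between tokens at the same spatial position in adjacent frames and a fixed threshold, which are illustrative choices rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def reduce_video_tokens(frame_tokens: torch.Tensor, threshold: float = 0.9) -> list[torch.Tensor]:
    """Drop tokens in each frame that are nearly identical to the token at the
    same spatial position in the previous frame.

    frame_tokens: (T, N, D) vision tokens for T frames, N tokens per frame.
    Returns a list of per-frame token tensors with redundant tokens removed.
    """
    kept = [frame_tokens[0]]                       # always keep the first frame in full
    for t in range(1, frame_tokens.shape[0]):
        sim = F.cosine_similarity(frame_tokens[t], frame_tokens[t - 1], dim=-1)  # (N,)
        mask = sim < threshold                     # keep only tokens that changed enough
        kept.append(frame_tokens[t][mask])
    return kept
```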
By Zhenran Xu, Longyue Wang, Jifang Wang, Zhouyi Li, Senbao Shi, Xue Yang, Yiyu Wang, Baotian Hu, Jun Yu, Min Zhang
Virtual film production requires intricate decision-making processes,
including scriptwriting, virtual cinematography, and precise actor positioning
and actions. Motivated by recent advances in automated decision-making with
language agent-based societies, this paper introduces FilmAgent, a novel
LLM-based multi-agent collaborative framework for end-to-end film automation in
our constructed 3D virtual spaces. FilmAgent simulates various crew roles,
including directors, screenwriters, actors, and cinematographers, and covers
key stages of a film production workflow: (1) idea development transforms
brainstormed ideas into structured story outlines; (2) scriptwriting elaborates
on dialogue and character actions for each scene; (3) cinematography determines
the camera setups for each shot. A team of agents collaborates through
iterative feedback and revisions, thereby verifying intermediate scripts and
reducing hallucinations. We evaluate the generated videos on 15 ideas and 4 key
aspects. Human evaluation shows that FilmAgent outperforms all baselines across
all aspects and scores 3.98 out of 5 on average, showing the feasibility of
multi-agent collaboration in filmmaking. Further analysis reveals that
FilmAgent, despite using the less advanced GPT-4o model, surpasses the
single-agent o1, showing the advantage of a well-coordinated multi-agent
system. Lastly, we discuss the complementary strengths and weaknesses of
OpenAI's text-to-video model Sora and our FilmAgent in filmmaking.
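The iterative feedback-and-revision loop between crew roles might look roughly like the following sketch, where `call_llm` is a hypothetical stand-in for any chat-completion API and the prompts are illustrative only, not FilmAgent's actual prompts.

```python
def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for any chat-completion API call (hypothetical helper)."""
    raise NotImplementedError

def draft_and_review(outline: str, rounds: int = 3) -> str:
    """Screenwriter drafts a scene; director critiques; screenwriter revises."""
    script = call_llm("You are a screenwriter.", f"Write a scene script for: {outline}")
    for _ in range(rounds):
        critique = call_llm(
            "You are a film director. Point out inconsistencies or hallucinated details.",
            f"Outline:\n{outline}\n\nScript:\n{script}",
        )
        if "no issues" in critique.lower():        # director is satisfied
            break
        script = call_llm(
            "You are a screenwriter. Revise the script to address the critique.",
            f"Script:\n{script}\n\nCritique:\n{critique}",
        )
    return script
```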
Large language models (LLMs) demonstrate impressive performance but lack the
flexibility to adapt to human preferences quickly without retraining. In this
work, we introduce Test-time Preference Optimization (TPO), a framework that
aligns LLM outputs with human preferences during inference, removing the need
to update model parameters. Rather than relying on purely numerical rewards,
TPO translates reward signals into textual critiques and uses them as textual
rewards to iteratively refine its response. Evaluations on benchmarks covering
instruction following, preference alignment, safety, and mathematics reveal
that TPO progressively improves alignment with human preferences. Notably,
after only a few TPO steps, the initially unaligned Llama-3.1-70B-SFT model can
surpass the aligned counterpart, Llama-3.1-70B-Instruct. Furthermore, TPO
scales efficiently with both the search width and depth during inference.
Through case studies, we illustrate how TPO exploits the innate capacity of LLMs
to interpret and act upon reward signals. Our findings establish TPO as a
practical, lightweight approach to test-time preference optimization,
achieving alignment on the fly. Our code is publicly available at
https://github.com/yafuly/TPO.
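A rough sketch of the test-time loop follows: candidates are scored by a reward model, each score is accompanied by a textual critique, and the critique is folded back into the prompt for the next round. `generate` and `judge` are hypothetical helpers, and the prompting and selection rules are assumptions rather than the released implementation.

```python
def generate(prompt: str, n: int) -> list[str]:
    """Sample n candidate responses from the policy model (hypothetical helper)."""
    raise NotImplementedError

def judge(prompt: str, response: str) -> tuple[float, str]:
    """Reward model scores a response and explains the score in text (hypothetical helper)."""
    raise NotImplementedError

def test_time_preference_optimization(prompt: str, width: int = 4, depth: int = 3) -> str:
    """Iteratively refine responses with textual critiques instead of weight updates."""
    best_response, best_score = None, float("-inf")
    current_prompt = prompt
    for _ in range(depth):
        candidates = generate(current_prompt, width)            # sample `width` responses
        scored = [(judge(prompt, r), r) for r in candidates]    # always judge against the original prompt
        (score, critique), response = max(scored, key=lambda s: s[0][0])
        if score > best_score:
            best_score, best_response = score, response
        # fold the textual critique back into the prompt for the next round
        current_prompt = (
            f"{prompt}\n\nPrevious answer:\n{response}\n\n"
            f"Critique of the previous answer:\n{critique}\n\n"
            "Write an improved answer that addresses the critique."
        )
    return best_response
```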
By Ang Lv, Ruobing Xie, Yining Qian, Songhao Wu, Xingwu Sun, Zhanhui Kang, Di Wang, Rui Yan
Mixture-of-Experts (MoE) models mostly use a router to assign tokens to
specific expert modules, activating only partial parameters and often
outperforming dense models. We argue that the separation between the router's
decision-making and the experts' execution is a critical yet overlooked issue,
leading to suboptimal expert selection and ineffective learning. To address
this, we propose Autonomy-of-Experts (AoE), a novel MoE paradigm in which
experts autonomously select themselves to process inputs. AoE is based on the
insight that an expert is aware of its own capacity to effectively process a
token, an awareness reflected in the scale of its internal activations. In AoE,
routers are removed; instead, experts pre-compute internal activations for
inputs and are ranked based on their activation norms. Only the top-ranking
experts proceed with the forward pass, while the others abort. The overhead of
pre-computing activations is reduced through a low-rank weight factorization.
This self-evaluating-then-partner-comparing approach ensures improved expert
selection and effective learning. We pre-train language models with 700M to
4B parameters, demonstrating that AoE outperforms traditional MoE models
with comparable efficiency.
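A minimal sketch of the router-free selection is given below. It assumes each expert's up-projection is factorized into a low-rank pair so that only the cheap half is computed for every expert, after which tokens are dispatched to the experts whose partial activations have the largest norm. Dimensions, the top-k rule, and the ReLU activation are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AoELayer(nn.Module):
    """Router-free MoE layer: experts rank themselves by activation norm."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int, rank: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Each expert's up-projection is factorized as B @ A; only the cheap A-part is
        # computed for every expert, and its norm serves as the self-selection score.
        self.A = nn.Parameter(torch.randn(n_experts, d_model, rank) * 0.02)
        self.B = nn.Parameter(torch.randn(n_experts, rank, d_hidden) * 0.02)
        self.W_down = nn.Parameter(torch.randn(n_experts, d_hidden, d_model) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (tokens, d_model)
        low_rank = torch.einsum("td,edr->etr", x, self.A)      # (experts, tokens, rank)
        norms = low_rank.norm(dim=-1)                          # (experts, tokens)
        top = norms.topk(self.top_k, dim=0).indices            # winning experts per token
        token_idx = torch.arange(x.size(0), device=x.device)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            idx = top[k]                                       # (tokens,) expert index per token
            h = torch.einsum("tr,trh->th", low_rank[idx, token_idx], self.B[idx])
            out = out + torch.einsum("th,thd->td", F.relu(h), self.W_down[idx])
        return out
```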
By Haotian Luo, Li Shen, Haiying He, Yibo Wang, Shiwei Liu, Wei Li, Naiqiang Tan, Xiaochun Cao, Dacheng Tao
Recently, long-thought reasoning LLMs, such as OpenAI's o1, have adopted extended
reasoning processes similar to how humans ponder over complex problems. This
reasoning paradigm significantly enhances the model's problem-solving abilities
and has achieved promising results. However, the long-thought reasoning process
leads to a substantial increase in inference time. A pressing challenge is
reducing the inference overhead of long-thought LLMs while ensuring accuracy.
In this paper, we experimentally demonstrate that long-thought reasoning models
struggle to effectively allocate token budgets based on problem difficulty and
reasoning redundancies. To address this, we propose Length-Harmonizing
Fine-Tuning (O1-Pruner), aiming at minimizing reasoning overhead while
maintaining accuracy. This effective fine-tuning method first estimates the
LLM's baseline performance through pre-sampling and then uses RL-style
fine-tuning to encourage the model to generate shorter reasoning processes
under accuracy constraints. This allows the model to achieve efficient
reasoning with lower redundancy while maintaining accuracy. Experiments on
various mathematical reasoning benchmarks show that O1-Pruner not only
significantly reduces inference overhead but also achieves higher accuracy,
providing a novel and promising solution to this challenge. Our code is coming
soon at https://github.com/StarDewXXX/O1-Pruner
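One way to picture the length-harmonizing objective is a reward that compares each sampled solution against the pre-sampled baseline accuracy and mean length. The functional form and coefficient below are assumptions for illustration, not the paper's exact formula.

```python
def length_harmonizing_reward(
    is_correct: bool,
    length: int,
    baseline_accuracy: float,
    baseline_length: float,
    lam: float = 2.0,
) -> float:
    """Reward shorter correct answers relative to pre-sampled baseline behaviour.

    The length term rewards solutions shorter than the baseline mean length,
    while the accuracy term penalizes dropping below the baseline accuracy.
    """
    length_term = baseline_length / max(length, 1) - 1.0       # > 0 if shorter than baseline
    accuracy_term = (1.0 if is_correct else 0.0) - baseline_accuracy
    return length_term + lam * accuracy_term
```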
By Yantao Liu, Zijun Yao, Rui Min, Yixin Cao, Lei Hou, Juanzi Li
Best-of-N (BoN) sampling, a common strategy for test-time scaling of Large
Language Models (LLMs), relies on reward models to select the best candidate
solution from multiple generations. However, traditional reward models often
assign arbitrary and inconsistent scores, limiting their effectiveness. To
address this, we propose a Pairwise Reward Model (Pairwise RM) combined with a
knockout tournament for BoN sampling. Instead of assigning absolute scores,
Pairwise RM simultaneously evaluates the correctness of two candidate solutions
to a given math problem. This approach eliminates the need for arbitrary
scoring and enables cross-validation of solutions through parallel comparison.
In the knockout tournament, Pairwise RM conducts pairwise comparisons between
candidate solutions and eliminates the incorrect ones iteratively. We construct
a large-scale dataset of 443K pairwise comparisons derived from
NuminaMath and annotated using gemini-1.5-flash, and train the Pairwise
RM via supervised fine-tuning. Experiments on MATH-500 and OlympiadBench
demonstrate significant improvements over traditional discriminative reward
models, with a 40% to 60% relative improvement on the top 50% most
challenging problems.
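The knockout tournament itself is straightforward to sketch; in the snippet below, `pairwise_judge` is a hypothetical wrapper around the Pairwise RM that returns whichever of two candidate solutions it prefers.

```python
import random

def pairwise_judge(problem: str, solution_a: str, solution_b: str) -> str:
    """Ask the Pairwise RM which solution is more likely correct (hypothetical helper).
    Returns the preferred solution."""
    raise NotImplementedError

def knockout_tournament(problem: str, candidates: list[str]) -> str:
    """Eliminate candidate solutions pairwise until a single winner remains."""
    pool = candidates[:]
    random.shuffle(pool)                  # random initial bracket
    while len(pool) > 1:
        next_round = []
        if len(pool) % 2 == 1:            # odd candidate gets a bye into the next round
            next_round.append(pool.pop())
        for a, b in zip(pool[0::2], pool[1::2]):
            next_round.append(pairwise_judge(problem, a, b))
        pool = next_round
    return pool[0]
```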
By Jianing Yang, Alexander Sax, Kevin J. Liang, Mikael Henaff, Hao Tang, Ang Cao, Joyce Chai, Franziska Meier, Matt Feiszli
Multi-view 3D reconstruction remains a core challenge in computer vision,
particularly in applications requiring accurate and scalable representations
across diverse perspectives. Current leading methods such as DUSt3R employ a
fundamentally pairwise approach, processing images in pairs and necessitating
costly global alignment procedures to reconstruct from multiple views. In this
work, we propose Fast 3D Reconstruction (Fast3R), a novel multi-view
generalization to DUSt3R that achieves efficient and scalable 3D reconstruction
by processing many views in parallel. Fast3R's Transformer-based architecture
forwards N images in a single forward pass, bypassing the need for iterative
alignment. Through extensive experiments on camera pose estimation and 3D
reconstruction, Fast3R demonstrates state-of-the-art performance, with
significant improvements in inference speed and reduced error accumulation.
These results establish Fast3R as a robust alternative for multi-view
applications, offering enhanced scalability without compromising reconstruction
accuracy.
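The single-pass idea can be sketched as concatenating patch tokens from all N views (tagged with a view embedding) and running one Transformer forward that predicts a pointmap per view. The module names, dimensions, and prediction head below are assumptions for illustration, not Fast3R's actual architecture.

```python
import torch
import torch.nn as nn

class MultiViewReconstructor(nn.Module):
    """Sketch of processing N views in one forward pass instead of pairwise."""

    def __init__(self, d_model: int = 256, n_heads: int = 8, n_layers: int = 4,
                 patch: int = 16, max_views: int = 64):
        super().__init__()
        self.embed = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
        self.view_embed = nn.Embedding(max_views, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 3)              # per-patch 3D point prediction

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (N, 3, H, W) -- all views of one scene
        n = images.shape[0]
        tokens = self.embed(images).flatten(2).transpose(1, 2)               # (N, P, D)
        tokens = tokens + self.view_embed(torch.arange(n, device=images.device))[:, None, :]
        fused = self.encoder(tokens.reshape(1, -1, tokens.shape[-1]))        # one pass over all views
        return self.head(fused).reshape(n, -1, 3)      # (N, P, 3) pointmaps

# usage: points = MultiViewReconstructor()(torch.randn(8, 3, 224, 224))
```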
Large Language Models (LLMs) are transforming artificial intelligence,
evolving into task-oriented systems capable of autonomous planning and
execution. One of the primary applications of LLMs is conversational AI
systems, which must navigate multi-turn dialogues, integrate domain-specific
APIs, and adhere to strict policy constraints. However, evaluating these agents
remains a significant challenge, as traditional methods fail to capture the
complexity and variability of real-world interactions. We introduce
IntellAgent, a scalable, open-source multi-agent framework designed to evaluate
conversational AI systems comprehensively. IntellAgent automates the creation
of diverse, synthetic benchmarks by combining policy-driven graph modeling,
realistic event generation, and interactive user-agent simulations. This
innovative approach provides fine-grained diagnostics, addressing the
limitations of static and manually curated benchmarks with coarse-grained
metrics. IntellAgent represents a paradigm shift in evaluating conversational
AI. By simulating realistic, multi-policy scenarios across varying levels of
complexity, IntellAgent captures the nuanced interplay of agent capabilities
and policy constraints. Unlike traditional methods, it employs a graph-based
policy model to represent relationships, likelihoods, and complexities of
policy interactions, enabling highly detailed diagnostics. IntellAgent also
identifies critical performance gaps, offering actionable insights for targeted
optimization. Its modular, open-source design supports seamless integration of
new domains, policies, and APIs, fostering reproducibility and community
collaboration. Our findings demonstrate that IntellAgent serves as an effective
framework for advancing conversational AI by addressing challenges in bridging
research and deployment. The framework is available at
https://github.com/plurai-ai/intellagent
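A graph-based policy model can be pictured as a weighted graph whose edges encode how likely two policies are to co-occur, with test scenarios sampled by a weighted random walk. The graph contents and sampling rule in this sketch are hypothetical illustrations, not IntellAgent's actual data or API.

```python
import random

# Hypothetical policy graph: nodes are policies, edge weights are the likelihood
# that two policies co-occur in a single conversation.
policy_graph = {
    "refund_policy":  {"identity_check": 0.8, "escalation": 0.3},
    "identity_check": {"refund_policy": 0.8, "data_privacy": 0.6},
    "escalation":     {"refund_policy": 0.3},
    "data_privacy":   {"identity_check": 0.6},
}

def sample_scenario(start: str, max_policies: int = 3) -> list[str]:
    """Random-walk the policy graph to assemble a multi-policy test scenario."""
    scenario, current = [start], start
    while len(scenario) < max_policies:
        neighbors = {p: w for p, w in policy_graph[current].items() if p not in scenario}
        if not neighbors:
            break
        current = random.choices(list(neighbors), weights=list(neighbors.values()))[0]
        scenario.append(current)
    return scenario

print(sample_scenario("refund_policy"))
```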