We present Pangu Ultra, a Large Language Model (LLM) with 135 billion
parameters and dense Transformer modules trained on Ascend Neural Processing
Units (NPUs). Although the field of LLMs has witnessed unprecedented advances
in model scale and capability in recent years, training
such a large-scale model still involves significant optimization and system
challenges. To stabilize the training process, we propose depth-scaled sandwich
normalization, which effectively eliminates loss spikes during the training
process of deep models. We pre-train our model on 13.2 trillion diverse and
high-quality tokens and further enhance its reasoning capabilities during
post-training. To perform such large-scale training efficiently, we utilize
8,192 Ascend NPUs with a series of system optimizations. Evaluations on
multiple diverse benchmarks indicate that Pangu Ultra significantly advances
the state of the art among dense LLMs, surpassing models such as Llama 405B and
Mistral Large 2, and even achieves results competitive with DeepSeek-R1, whose
sparse architecture contains far more parameters. Our exploration demonstrates
that Ascend NPUs are capable of efficiently and effectively training dense
models with more than 100 billion parameters. Our model and system will be
available for our commercial customers.
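The abstract names depth-scaled sandwich normalization but does not specify it. As a rough, assumption-laden sketch only, one plausible reading is a Transformer block with a normalization layer both before and after each sub-layer, where the post-norm gain is initialized smaller for deeper layers; the class name, the scaling formula, and all hyperparameters below are illustrative guesses, not Pangu Ultra's actual design.

```python
# Hypothetical sketch of "sandwich" normalization (a norm before AND after each
# sub-layer) with a depth-dependent post-norm gain, so residual updates stay
# small in deep stacks. Assumption-based illustration, not the actual model.
import math
import torch
import torch.nn as nn


class DepthScaledSandwichBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, layer_idx: int):
        super().__init__()
        # Assumed depth-dependent scale: shrink the post-norm gain for deeper layers.
        depth_scale = 1.0 / math.sqrt(2.0 * (layer_idx + 1))

        self.pre_attn_norm = nn.LayerNorm(d_model)
        self.post_attn_norm = nn.LayerNorm(d_model)
        self.pre_ffn_norm = nn.LayerNorm(d_model)
        self.post_ffn_norm = nn.LayerNorm(d_model)
        for norm in (self.post_attn_norm, self.post_ffn_norm):
            nn.init.constant_(norm.weight, depth_scale)

        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.pre_attn_norm(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + self.post_attn_norm(attn_out)   # norm on both sides = "sandwich"
        x = x + self.post_ffn_norm(self.ffn(self.pre_ffn_norm(x)))
        return x


if __name__ == "__main__":
    block = DepthScaledSandwichBlock(d_model=64, n_heads=4, layer_idx=10)
    print(block(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])
```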
We present Kimi-VL, an efficient open-source Mixture-of-Experts (MoE)
vision-language model (VLM) that offers advanced multimodal reasoning,
long-context understanding, and strong agent capabilities, all while
activating only 2.8B parameters in its language decoder (Kimi-VL-A3B). Kimi-VL
demonstrates strong performance across challenging domains: as a
general-purpose VLM, Kimi-VL excels in multi-turn agent tasks (e.g., OSWorld),
matching flagship models. Furthermore, it exhibits remarkable capabilities
across diverse challenging vision language tasks, including college-level image
and video comprehension, OCR, mathematical reasoning, and multi-image
understanding. In comparative evaluations, it effectively competes with
cutting-edge efficient VLMs such as GPT-4o-mini, Qwen2.5-VL-7B, and
Gemma-3-12B-IT, while surpassing GPT-4o in several key domains. Kimi-VL also
excels at long-context processing and fine-grained perception. With a 128K
extended context window, Kimi-VL can process diverse long inputs, achieving
impressive scores of 64.5 on LongVideoBench and 35.1 on MMLongBench-Doc. Its
native-resolution vision encoder, MoonViT, further allows it to see and
understand ultra-high-resolution visual inputs, achieving 83.2 on InfoVQA and
34.5 on ScreenSpot-Pro, while maintaining lower computational cost for common
tasks. Building upon Kimi-VL, we introduce an advanced long-thinking variant:
Kimi-VL-Thinking. Developed through long chain-of-thought (CoT) supervised
fine-tuning (SFT) and reinforcement learning (RL), this model exhibits strong
long-horizon reasoning capabilities. It achieves scores of 61.7 on MMMU, 36.8
on MathVision, and 71.3 on MathVista while maintaining the compact 2.8B
activated LLM parameters, setting a new standard for efficient multimodal
thinking models. Code and models are publicly accessible at
https://github.com/MoonshotAI/Kimi-VL.
Sara Vera Marjanović, Arkil Patel, Vaibhav Adlakha, Milad Aghajohari, Parishad BehnamGhader, Mehar Bhatia, Aditi Khandelwal, Austin Kraft, Benno Krojer, Xing Han Lù, Nicholas Meade, Dongchan Shin, Amirhossein Kazemnejad, Gaurav Kamath, Marius Mosbach, Karolina Stańczak, Siva Reddy
Large Reasoning Models like DeepSeek-R1 mark a fundamental shift in how LLMs
approach complex problems. Instead of directly producing an answer for a given
input, DeepSeek-R1 creates detailed multi-step reasoning chains, seemingly
"thinking" about a problem before providing an answer. This reasoning process
is publicly available to the user, creating endless opportunities for studying
the reasoning behaviour of the model and opening up the field of Thoughtology.
Starting from a taxonomy of DeepSeek-R1's basic building blocks of reasoning,
our analyses on DeepSeek-R1 investigate the impact and controllability of
thought length, management of long or confusing contexts, cultural and safety
concerns, and the status of DeepSeek-R1 vis-à-vis cognitive phenomena, such
as human-like language processing and world modelling. Our findings paint a
nuanced picture. Notably, we show DeepSeek-R1 has a 'sweet spot' of reasoning,
where extra inference time can impair model performance. Furthermore, we find a
tendency for DeepSeek-R1 to persistently ruminate on previously explored
problem formulations, obstructing further exploration. We also note strong
safety vulnerabilities of DeepSeek-R1 compared to its non-reasoning
counterpart, which can also compromise safety-aligned LLMs.
Mixture-of-Experts (MoE) Large Language Models (LLMs) suffer from severely
sub-optimal expert pathways: our study reveals that the naive expert selection
learned during pretraining leaves a surprising 10-20% accuracy gap for
improvement. Motivated by this observation, we develop a novel class of
test-time optimization methods that re-weight, or "re-mix", the experts in
different layers jointly for each test sample. Since the test sample's ground
truth is unknown, we propose to optimize a surrogate objective defined by the
sample's "successful neighbors" from a reference set of samples. We introduce
three surrogates and algorithms based on mode-finding, kernel regression, and
the average loss of similar reference samples/tasks. To reduce the cost of
optimizing whole pathways, we apply our algorithms merely to the core experts'
mixing weights in critical layers, which achieves similar performance while
saving significant computation. This leads to "Critical-Layer, Core-Expert,
Collaborative Pathway Optimization (C3PO)". We apply C3PO to two recent MoE
LLMs and examine it on six widely-used benchmarks. It consistently improves the
base model by 7-15% in accuracy and outperforms widely used test-time learning
baselines, e.g., in-context learning and prompt/prefix tuning, by a large
margin. Moreover, C3PO enables MoE LLMs with 1-3B active parameters to
outperform LLMs of 7-9B parameters, hence improving MoE's advantages on
efficiency. Our thorough ablation study further sheds novel insights on
achieving test-time improvement on MoE.
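The abstract describes the surrogate objectives only at a high level. The sketch below illustrates one of the three named ideas, kernel regression over the gate weights of "successful neighbors", applied to the core experts of a single critical layer; the function name, the embedding-based similarity, the bandwidth, and the blending step are all assumptions, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's code) of re-mixing a critical layer's
# core-expert gate weights at test time via kernel regression over the gate
# weights recorded for correctly solved ("successful") reference samples.
import numpy as np


def remix_gate_weights(
    test_emb: np.ndarray,     # (d,) embedding of the test sample
    ref_embs: np.ndarray,     # (n, d) embeddings of reference samples
    ref_gates: np.ndarray,    # (n, k) gate weights over k core experts
    ref_correct: np.ndarray,  # (n,) 1 if the reference sample was solved
    base_gates: np.ndarray,   # (k,) the model's own gates for the test sample
    bandwidth: float = 0.1,
    blend: float = 0.5,
) -> np.ndarray:
    """Pull the test gates toward the gates that worked for similar,
    successfully solved reference samples."""
    ok = ref_correct.astype(bool)
    if not ok.any():
        return base_gates
    # Cosine similarity to successful neighbors.
    sims = ref_embs[ok] @ test_emb / (
        np.linalg.norm(ref_embs[ok], axis=1) * np.linalg.norm(test_emb) + 1e-8
    )
    kernel = np.exp((sims - 1.0) / bandwidth)       # RBF-like kernel on similarity
    target = kernel @ ref_gates[ok] / kernel.sum()  # weighted average of good gates
    mixed = (1.0 - blend) * base_gates + blend * target
    return mixed / mixed.sum()                      # renormalize mixing weights


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(remix_gate_weights(
        test_emb=rng.normal(size=8),
        ref_embs=rng.normal(size=(16, 8)),
        ref_gates=rng.dirichlet(np.ones(4), size=16),
        ref_correct=rng.integers(0, 2, size=16),
        base_gates=np.array([0.4, 0.3, 0.2, 0.1]),
    ))
```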
Zhong-Yu Li, Ruoyi Du, Juncheng Yan, Le Zhuo, Zhen Li, Peng Gao, Zhanyu Ma, Ming-Ming Cheng
Recent progress in diffusion models significantly advances various image
generation tasks. However, the current mainstream approach remains focused on
building task-specific models, which have limited efficiency when supporting a
wide range of different needs. While universal models attempt to address this
limitation, they face critical challenges, including generalizable task
instruction, appropriate task distributions, and unified architectural design.
To tackle these challenges, we propose VisualCloze, a universal image
generation framework, which supports a wide range of in-domain tasks,
generalization to unseen ones, unseen unification of multiple tasks, and
reverse generation. Unlike existing methods that rely on language-based task
instruction, leading to task ambiguity and weak generalization, we integrate
visual in-context learning, allowing models to identify tasks from visual
demonstrations. Meanwhile, the inherent sparsity of visual task distributions
hampers the learning of transferable knowledge across tasks. To this end, we
introduce Graph200K, a graph-structured dataset that establishes various
interrelated tasks, enhancing task density and transferable knowledge.
Furthermore, we uncover that our unified image generation formulation shares a
consistent objective with image infilling, enabling us to leverage the strong
generative priors of pre-trained infilling models without modifying the
architectures.
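The abstract frames universal generation as image infilling over visual demonstrations. A minimal sketch of that formulation might look as follows, assuming a generic inpainting callable; the grid layout, the mask convention, and `inpaint_fn` are illustrative assumptions rather than VisualCloze's actual interface.

```python
# Hypothetical illustration of visual in-context learning cast as image
# infilling: tile (condition, target) demonstration pairs and the query
# condition into one canvas, mask the missing query-target cell, and hand the
# canvas to a generic inpainting model. All images are assumed to be cell-sized.
import numpy as np


def build_incontext_canvas(demos, query_cond, cell_hw=(256, 256)):
    """demos: list of (condition, target) uint8 arrays; query_cond: uint8 array.
    Returns (canvas, mask) where mask marks the cell the model must fill in."""
    h, w = cell_hw
    cols = len(demos) + 1
    canvas = np.zeros((2 * h, cols * w, 3), dtype=np.uint8)
    mask = np.zeros((2 * h, cols * w), dtype=bool)
    for j, (cond, tgt) in enumerate(demos):
        canvas[:h, j * w:(j + 1) * w] = cond  # top row: task conditions
        canvas[h:, j * w:(j + 1) * w] = tgt   # bottom row: task outputs
    canvas[:h, -w:] = query_cond              # query condition in the last column
    mask[h:, -w:] = True                      # its output cell is left for infilling
    return canvas, mask


def generate(demos, query_cond, inpaint_fn):
    canvas, mask = build_incontext_canvas(demos, query_cond)
    filled = inpaint_fn(canvas, mask)         # assumed: returns the completed canvas
    h, w = query_cond.shape[:2]
    return filled[h:, -w:]                    # crop out the generated query target
```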
Yukun Qi, Yiming Zhao, Yu Zeng, Xikun Bao, Wenxuan Huang, Lin Chen, Zehui Chen, Jie Zhao, Zhongang Qi, Feng Zhao
The advancement of Chain-of-Thought (CoT) reasoning has significantly
enhanced the capabilities of large language models (LLMs) and large
vision-language models (LVLMs). However, a rigorous evaluation framework for
video CoT reasoning remains absent. Current video benchmarks fail to adequately
assess the reasoning process or to expose whether failures stem from deficiencies
in perception or in reasoning capabilities. Therefore, we introduce VCR-Bench, a
novel benchmark designed to comprehensively evaluate LVLMs' Video
Chain-of-Thought Reasoning capabilities. VCR-Bench comprises 859 videos
spanning a variety of video content and durations, along with 1,034
high-quality question-answer pairs. Each pair is manually annotated with a
stepwise CoT rationale, where every step is tagged to indicate its association
with the perception or reasoning capabilities. Furthermore, we design seven
distinct task dimensions and propose the CoT score to assess the entire CoT
process based on the stepwise tagged CoT rationales. Extensive experiments on
VCR-Bench highlight substantial limitations in current LVLMs. Even the
top-performing model, o1, only achieves a 62.8% CoT score and a 56.7%
accuracy, while most models score below 40%. Experiments show most models score
lower on perception steps than on reasoning steps, revealing LVLMs' key bottleneck in
temporal-spatial information processing for complex video reasoning. A robust
positive correlation between the CoT score and accuracy confirms the validity
of our evaluation framework and underscores the critical role of CoT reasoning
in solving complex video reasoning tasks. We hope VCR-Bench will serve as a
standardized evaluation framework and expose the actual shortcomings in complex
video reasoning tasks.
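The CoT score is defined here only at a high level (stepwise rationales tagged as perception or reasoning). The sketch below shows one simple way such a score could be aggregated per tag; the coverage-style aggregation and the toy judge are assumptions, not VCR-Bench's exact metric.

```python
# Assumed aggregation of a stepwise CoT score: each reference rationale step
# carries a tag ("perception" or "reasoning"); a judge decides whether the
# model's chain covers that step. Mirrors the spirit of the benchmark's CoT
# score without claiming to be its exact definition.
from collections import defaultdict


def cot_score(reference_steps, step_is_covered):
    """reference_steps: list of (tag, step_text); step_is_covered: callable
    returning True if the model's CoT covers a given reference step."""
    hit, total = defaultdict(int), defaultdict(int)
    for tag, step in reference_steps:
        total[tag] += 1
        hit[tag] += int(step_is_covered(step))
    per_tag = {t: hit[t] / total[t] for t in total}
    overall = sum(hit.values()) / max(sum(total.values()), 1)
    return overall, per_tag


if __name__ == "__main__":
    ref = [("perception", "the man picks up a red cup"),
           ("reasoning", "therefore he intends to drink")]
    covered = {"the man picks up a red cup"}      # toy judge: exact-match lookup
    print(cot_score(ref, lambda s: s in covered))  # (0.5, {'perception': 1.0, 'reasoning': 0.0})
```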
Shengyuan Ding, Shenxi Wu, Xiangyu Zhao, Yuhang Zang, Haodong Duan, Xiaoyi Dong, Pan Zhang, Yuhang Cao, Dahua Lin, Jiaqi Wang
The Instruction Following (IF) ability measures how well Multi-modal Large
Language Models (MLLMs) understand exactly what users are telling them and
whether they carry out those instructions correctly. Existing multimodal instruction-following
training data is scarce, the benchmarks are simple with atomic instructions,
and the evaluation strategies are imprecise for tasks demanding exact output
constraints. To address this, we present MM-IFEngine, an effective pipeline to
generate high-quality image-instruction pairs. Our MM-IFEngine pipeline yields
large-scale, diverse, and high-quality training data, MM-IFInstruct-23k, which
is suitable for Supervised Fine-Tuning (SFT) and extended as MM-IFDPO-23k for
Direct Preference Optimization (DPO). We further introduce MM-IFEval, a
challenging and diverse multi-modal instruction-following benchmark that
includes (1) both compose-level constraints for output responses and
perception-level constraints tied to the input images, and (2) a comprehensive
evaluation pipeline incorporating both rule-based assessment and a judge model.
We conduct SFT and DPO experiments and demonstrate that fine-tuning MLLMs on
MM-IFInstruct-23k and MM-IFDPO-23k achieves notable gains on various IF
benchmarks, such as MM-IFEval (+10.2%), MIA (+7.6%), and IFEval
(+12.3%). The full data and evaluation code will be released on
https://github.com/SYuan03/MM-IFEngine.
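As a rough sketch of what a hybrid rule-based plus judge-model evaluation step could look like: constraints that can be verified programmatically go through rule checks, and the rest are delegated to a judge. The constraint kinds and `ask_judge` are assumed names, not MM-IFEval's actual pipeline.

```python
# Hedged sketch of hybrid evaluation: verifiable output constraints are scored
# by rules, the rest (e.g. perception-level constraints tied to the image) are
# delegated to a judge model. `ask_judge` is an assumed callable.
import re

RULE_CHECKS = {
    "max_words": lambda resp, limit: len(resp.split()) <= int(limit),
    "must_include": lambda resp, phrase: phrase.lower() in resp.lower(),
    "ends_with_period": lambda resp, _: resp.rstrip().endswith("."),
    "no_digits": lambda resp, _: re.search(r"\d", resp) is None,
}


def evaluate(response: str, constraints, ask_judge):
    """constraints: list of (kind, arg, description). Rule-checkable kinds are
    scored directly; anything else goes to the judge model."""
    results = []
    for kind, arg, description in constraints:
        if kind in RULE_CHECKS:
            passed = RULE_CHECKS[kind](response, arg)
        else:
            passed = ask_judge(response, description)  # assumed to return a bool
        results.append((description, passed))
    score = sum(p for _, p in results) / max(len(results), 1)
    return score, results
```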
Mustafa Shukor, Enrico Fini, Victor Guilherme Turrisi da Costa, Matthieu Cord, Joshua Susskind, Alaaeldin El-Nouby
Building general-purpose models that can effectively perceive the world
through multimodal signals has been a long-standing goal. Current approaches
involve integrating separately pre-trained components, such as connecting
vision encoders to LLMs and continuing multimodal training. While such
approaches exhibit remarkable sample efficiency, it remains an open question
whether such late-fusion architectures are inherently superior. In this work,
we revisit the architectural design of native multimodal models (NMMs)--those
trained from the ground up on all modalities--and conduct an extensive scaling
laws study, spanning 457 trained models with different architectures and
training mixtures. Our investigation reveals no inherent advantage to
late-fusion architectures over early-fusion ones, which do not rely on image
encoders. On the contrary, early-fusion exhibits stronger performance at lower
parameter counts, is more efficient to train, and is easier to deploy.
Motivated by the strong performance of the early-fusion architectures, we show
that incorporating Mixture of Experts (MoEs) allows for models that learn
modality-specific weights, significantly enhancing performance.
3D part amodal segmentation--decomposing a 3D shape into complete,
semantically meaningful parts, even when occluded--is a challenging but crucial
task for 3D content creation and understanding. Existing 3D part segmentation
methods only identify visible surface patches, limiting their utility. Inspired
by 2D amodal segmentation, we introduce this novel task to the 3D domain and
propose a practical, two-stage approach, addressing the key challenges of
inferring occluded 3D geometry, maintaining global shape consistency, and
handling diverse shapes with limited training data. First, we leverage existing
3D part segmentation to obtain initial, incomplete part segments. Second, we
introduce HoloPart, a novel diffusion-based model, to complete these segments
into full 3D parts. HoloPart utilizes a specialized architecture with local
attention to capture fine-grained part geometry and global shape context
attention to ensure overall shape consistency. We introduce new benchmarks
based on the ABO and PartObjaverse-Tiny datasets and demonstrate that HoloPart
significantly outperforms state-of-the-art shape completion methods. By
incorporating HoloPart with existing segmentation techniques, we achieve
promising results on 3D part amodal segmentation, opening new avenues for
applications in geometry editing, animation, and material assignment.
Xiyao Wang, Zhengyuan Yang, Chao Feng, Hongjin Lu, Linjie Li, Chung-Ching Lin, Kevin Lin, Furong Huang, Lijuan Wang
In this paper, we present an effective method to enhance visual reasoning
with significantly fewer training samples, relying purely on self-improvement
with no knowledge distillation. Our key insight is that the difficulty of
training data during reinforcement fine-tuning (RFT) is critical. Appropriately
challenging samples can substantially boost reasoning capabilities even when
the dataset is small. While this insight is intuitive, the main challenge lies in
accurately quantifying sample difficulty to enable effective data filtering. To
this end, we propose a novel way of repurposing Monte Carlo Tree Search (MCTS)
to achieve that. Starting from our curated 70k open-source training samples, we
introduce an MCTS-based selection method that quantifies sample difficulty
based on the number of iterations required by the VLMs to solve each problem.
The explicit step-by-step reasoning in MCTS forces the model to think longer
and better identifies samples that are genuinely challenging. We filter and
retain 11k samples to perform RFT on Qwen2.5-VL-7B-Instruct, resulting in our
final model, ThinkLite-VL. Evaluation results on eight benchmarks show that
ThinkLite-VL improves the average performance of Qwen2.5-VL-7B-Instruct by 7%,
using only 11k training samples with no knowledge distillation. This
significantly outperforms all existing 7B-level reasoning VLMs, as well as
comparable baselines that use classic selection methods such as accuracy-based
filtering. Notably, on MathVista, ThinkLite-VL-7B achieves the SoTA accuracy of
75.1, surpassing Qwen2.5-VL-72B, GPT-4o, and O1. Our code, data, and model are
available at https://github.com/si0wang/ThinkLite-VL.
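The abstract says difficulty is measured by the number of iterations the model needs to solve a problem. A simplified sketch of that filtering idea follows, with the actual MCTS rollout abstracted into an assumed `run_search_iteration` callable; the iteration thresholds are illustrative, not the paper's values.

```python
# Simplified sketch of difficulty-based filtering: count how many search
# iterations the model needs before it first reaches a correct answer, then
# keep samples that are hard but still solvable. The MCTS details are hidden
# behind `run_search_iteration` (an assumption).
def estimate_difficulty(sample, run_search_iteration, max_iters=50):
    """Return the iteration index at which the sample was first solved,
    or max_iters + 1 if it was never solved within the budget."""
    for i in range(1, max_iters + 1):
        if run_search_iteration(sample):  # assumed: one expansion, True if answer found
            return i
    return max_iters + 1                  # unsolved within budget: hardest bucket


def filter_for_rft(samples, run_search_iteration, min_iters=5, max_iters=50):
    kept = []
    for s in samples:
        d = estimate_difficulty(s, run_search_iteration, max_iters)
        if min_iters <= d <= max_iters:   # drop trivially easy and unsolvable samples
            kept.append((s, d))
    return kept
```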
Genglin Liu, Salman Rahman, Elisa Kreiss, Marzyeh Ghassemi, Saadia Gabriel
We present a novel, open-source social network simulation framework, MOSAIC,
where generative language agents predict user behaviors such as liking,
sharing, and flagging content. This simulation combines LLM agents with a
directed social graph to analyze emergent deception behaviors and gain a better
understanding of how users determine the veracity of online social content. By
constructing user representations from diverse fine-grained personas, our
system enables multi-agent simulations that model content dissemination and
engagement dynamics at scale. Within this framework, we evaluate three
different content moderation strategies with simulated misinformation
dissemination, and we find that they not only mitigate the spread of
non-factual content but also increase user engagement. In addition, we analyze
the trajectories of popular content in our simulations, and explore whether
simulation agents' articulated reasoning for their social interactions truly
aligns with their collective engagement patterns. We open-source our simulation
software to encourage further research within AI and social sciences.
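One simulation tick of the kind described here, LLM agents acting over a directed social graph, could be sketched as below; `agent_decide` stands in for the persona-conditioned LLM call and the action set is an assumption about MOSAIC's design.

```python
# Minimal sketch of one simulation tick: each agent sees the posts shared by
# the accounts it follows (edges of a directed graph) and a persona-conditioned
# LLM call decides whether to like, share, flag, or ignore each one.
import networkx as nx


def simulation_tick(graph: nx.DiGraph, feed: dict, agent_decide):
    """graph: edge u -> v means u follows v; feed[v] lists the posts v shared.
    Returns the new shares produced this tick and a log of all actions."""
    new_shares, log = {}, []
    for agent in graph.nodes:
        persona = graph.nodes[agent].get("persona", "")
        for followee in graph.successors(agent):
            for post in feed.get(followee, []):
                action = agent_decide(persona, post)  # "like" | "share" | "flag" | "ignore"
                log.append((agent, followee, post, action))
                if action == "share":
                    new_shares.setdefault(agent, []).append(post)
    return new_shares, log
```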
Ming Li, Ruiyi Zhang, Jian Chen, Jiuxiang Gu, Yufan Zhou, Franck Dernoncourt, Wanrong Zhu, Tianyi Zhou, Tong Sun
Despite the ongoing evolution of Multimodal Large Language Models (MLLMs), a
non-negligible limitation remains in their struggle with visual text
grounding, especially in text-rich images of documents. Document images, such
as scanned forms and infographics, highlight critical challenges due to their
complex layouts and textual content. However, current benchmarks do not fully
address these challenges, as they mostly focus on visual grounding on natural
images, rather than text-rich document images. Thus, to bridge this gap, we
introduce TRIG, a novel task with a newly designed instruction dataset for
benchmarking and improving the Text-Rich Image Grounding capabilities of MLLMs
in document question-answering. Specifically, we propose an OCR-LLM-human
interaction pipeline to create 800 manually annotated question-answer pairs as
a benchmark and a large-scale training set of 90K synthetic examples based on four
diverse datasets. A comprehensive evaluation of various MLLMs on our proposed
benchmark exposes substantial limitations in their grounding capability on
text-rich images. In addition, we propose two simple and effective TRIG methods
based on general instruction tuning and plug-and-play efficient embedding,
respectively. Finetuning MLLMs on our synthetic dataset promisingly improves
their spatial reasoning and grounding capabilities.
Rishubh Parihar, Vaibhav Agrawal, Sachidanand VS, R. Venkatesh Babu
Existing approaches for controlling text-to-image diffusion models, while
powerful, do not allow for explicit 3D object-centric control, such as precise
control of object orientation. In this work, we address the problem of
multi-object orientation control in text-to-image diffusion models. This
enables the generation of diverse multi-object scenes with precise orientation
control for each object. The key idea is to condition the diffusion model with
a set of orientation-aware compass tokens, one for each object, along
with text tokens. A light-weight encoder network predicts these compass tokens
taking object orientation as the input. The model is trained on a synthetic
dataset of procedurally generated scenes, each containing one or two 3D assets
on a plain background. However, directly training this framework results in poor
orientation control and leads to entanglement among objects. To mitigate
this, we intervene in the generation process and constrain the cross-attention
maps of each compass token to its corresponding object regions. The trained
model is able to achieve precise orientation control for a) complex objects not
seen during training and b) multi-object scenes with more than two objects,
indicating strong generalization capabilities. Further, when combined with
personalization methods, our method precisely controls the orientation of the
new object in diverse contexts. Our method achieves state-of-the-art
orientation control and text alignment, quantified with extensive evaluations
and a user study.
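The constraint described above, restricting each compass token's cross-attention to its object region, could be sketched as a masking step on the attention logits; the tensor shapes and the masking mechanism below are assumptions about the design, not the paper's implementation.

```python
# Hedged sketch of constraining each compass token's cross-attention to its
# object region: logits from image positions to a compass token are masked out
# wherever the position falls outside that object's region.
import torch


def constrain_compass_attention(attn_logits, compass_token_ids, object_masks):
    """attn_logits: (B, Q, T) logits from Q image positions to T text/compass tokens.
    compass_token_ids: indices of the compass tokens.
    object_masks: (B, len(compass_token_ids), Q) boolean region masks (flattened)."""
    constrained = attn_logits.clone()
    for k, tok in enumerate(compass_token_ids):
        outside = ~object_masks[:, k, :]  # positions outside object k
        constrained[:, :, tok] = constrained[:, :, tok].masked_fill(outside, float("-inf"))
    return constrained.softmax(dim=-1)


if __name__ == "__main__":
    B, Q, T = 1, 16, 8
    logits = torch.randn(B, Q, T)
    masks = torch.zeros(B, 2, Q, dtype=torch.bool)
    masks[:, 0, :8] = True   # object 1 occupies the first half of the image
    masks[:, 1, 8:] = True   # object 2 occupies the second half
    print(constrain_compass_attention(logits, [6, 7], masks).shape)  # (1, 16, 8)
```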
Zeren Jiang, Chuanxia Zheng, Iro Laina, Diane Larlus, Andrea Vedaldi
We introduce Geo4D, a method to repurpose video diffusion models for
monocular 3D reconstruction of dynamic scenes. By leveraging the strong dynamic
prior captured by such video models, Geo4D can be trained using only synthetic
data while generalizing well to real data in a zero-shot manner. Geo4D predicts
several complementary geometric modalities, namely point, depth, and ray maps.
It uses a new multi-modal alignment algorithm to align and fuse these
modalities, as well as multiple sliding windows, at inference time, thus
obtaining robust and accurate 4D reconstruction of long videos. Extensive
experiments across multiple benchmarks show that Geo4D significantly surpasses
state-of-the-art video depth estimation methods, including recent methods such
as MonST3R, which are also designed to handle dynamic scenes.
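The abstract mentions aligning and fusing predictions from multiple sliding windows at inference time. A rough sketch of one simple instance of that idea, fitting a per-window scale and shift on overlapping frames before averaging, is shown below; Geo4D's actual multi-modal alignment algorithm is more elaborate.

```python
# Rough sketch of aligning overlapping sliding-window predictions with a
# per-window scale and shift before fusing (not Geo4D's actual algorithm).
import numpy as np


def fit_scale_shift(src: np.ndarray, dst: np.ndarray):
    """Least-squares s, t such that s * src + t ~= dst (both 1-D flattened maps)."""
    A = np.stack([src, np.ones_like(src)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, dst, rcond=None)
    return s, t


def align_windows(windows, starts, num_frames):
    """windows: list of (T, H, W) depth/point maps; starts: their first frame index.
    Each window is aligned to the running fused estimate on overlapping frames."""
    fused = [None] * num_frames
    for win, start in zip(windows, starts):
        overlap = [(i, start + i) for i in range(win.shape[0]) if fused[start + i] is not None]
        if overlap:
            src = np.concatenate([win[i].ravel() for i, _ in overlap])
            dst = np.concatenate([fused[f].ravel() for _, f in overlap])
            s, t = fit_scale_shift(src, dst)
            win = s * win + t
        for i in range(win.shape[0]):
            f = start + i
            fused[f] = win[i] if fused[f] is None else 0.5 * (fused[f] + win[i])
    return fused
```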
Current monocular 3D detectors are held back by the limited diversity and
scale of real-world datasets. While data augmentation certainly helps, it's
particularly difficult to generate realistic scene-aware augmented data for
outdoor settings. Most current approaches to synthetic data generation focus on
realistic object appearance through improved rendering techniques. However, we
show that where and how objects are positioned is just as crucial for training
effective 3D monocular detectors. The key obstacle lies in automatically
determining realistic object placement parameters, including position,
dimensions, and directional alignment, when introducing synthetic objects into
actual scenes. To address this, we introduce MonoPlace3D, a novel system that
considers the 3D scene content to create realistic augmentations. Specifically,
given a background scene, MonoPlace3D learns a distribution over plausible 3D
bounding boxes. Subsequently, we render realistic objects and place them
according to the locations sampled from the learned distribution. Our
comprehensive evaluation on two standard datasets, KITTI and NuScenes,
demonstrates that MonoPlace3D significantly improves the accuracy of multiple
existing monocular 3D detectors while being highly data efficient.
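The core idea, learning a distribution over plausible 3D bounding boxes and sampling placements from it, can be illustrated with a heavily simplified toy: a kernel density estimate fit over real box parameters and sampled unconditionally. MonoPlace3D conditions on the scene content, which this sketch deliberately omits.

```python
# Toy illustration of sampling plausible placements from a learned box
# distribution over [x, y, z, w, h, l, yaw]; a simplification of the idea,
# not the MonoPlace3D system.
import numpy as np
from sklearn.neighbors import KernelDensity


def fit_placement_prior(real_boxes: np.ndarray) -> KernelDensity:
    """real_boxes: (N, 7) array of [x, y, z, w, h, l, yaw] from training scenes."""
    return KernelDensity(kernel="gaussian", bandwidth=0.5).fit(real_boxes)


def sample_placements(prior: KernelDensity, n: int, seed: int = 0) -> np.ndarray:
    boxes = prior.sample(n, random_state=seed)
    boxes[:, 6] = (boxes[:, 6] + np.pi) % (2 * np.pi) - np.pi  # wrap yaw to [-pi, pi)
    return boxes


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for real annotated boxes (illustrative statistics only).
    real = rng.normal([0, 1.5, 20, 1.8, 1.6, 4.2, 0],
                      [5, 0.1, 10, 0.1, 0.1, 0.3, 0.5], size=(200, 7))
    prior = fit_placement_prior(real)
    print(sample_placements(prior, 3).round(2))
```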
Artem Zholus, Carl Doersch, Yi Yang, Skanda Koppula, Viorica Patraucean, Xu Owen He, Ignacio Rocco, Mehdi S. M. Sajjadi, Sarath Chandar, Ross Goroshin
Tracking Any Point (TAP) in a video is a challenging computer vision problem
with many demonstrated applications in robotics, video editing, and 3D
reconstruction. Existing methods for TAP rely heavily on complex
tracking-specific inductive biases and heuristics, limiting their generality
and potential for scaling. To address these challenges, we present TAPNext, a
new approach that casts TAP as sequential masked token decoding. Our model is
causal, tracks in a purely online fashion, and removes tracking-specific
inductive biases. This enables TAPNext to run with minimal latency, and removes
the temporal windowing required by many existing state-of-the-art trackers. Despite
its simplicity, TAPNext achieves a new state-of-the-art tracking performance
among both online and offline trackers. Finally, we present evidence that many
widely used tracking heuristics emerge naturally in TAPNext through end-to-end
training.
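Casting point tracking as sequential masked token decoding, as described above, can be sketched conceptually as follows: positions are quantized into a location-token vocabulary, and for each new frame the model fills in a masked token given the frames and tokens seen so far. The `model` callable, the grid quantization, and the mask-token convention are assumptions, not TAPNext's architecture.

```python
# Conceptual sketch of tracking-as-decoding: one location token is decoded per
# frame, causally and online. `model` is an assumed callable returning logits
# over the location vocabulary for the masked position.
import numpy as np

GRID = 64                  # quantize positions onto a GRID x GRID vocabulary
MASK_TOKEN = GRID * GRID   # extra id standing for "location to be decoded"


def xy_to_token(x: float, y: float, width: int, height: int) -> int:
    col = min(int(x / width * GRID), GRID - 1)
    row = min(int(y / height * GRID), GRID - 1)
    return row * GRID + col


def token_to_xy(token: int, width: int, height: int):
    row, col = divmod(token, GRID)
    return ((col + 0.5) / GRID * width, (row + 0.5) / GRID * height)


def track_point(frames, query_xy, model, width, height):
    """Online, causal decoding: one location token is predicted per frame."""
    tokens = [xy_to_token(*query_xy, width, height)]
    trajectory = [query_xy]
    for t in range(1, len(frames)):
        logits = model(frames[: t + 1], tokens + [MASK_TOKEN])  # assumed interface
        next_token = int(np.argmax(logits))
        tokens.append(next_token)
        trajectory.append(token_to_xy(next_token, width, height))
    return trajectory
```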