Visual reasoning is a core component of human intelligence and a critical
capability for advanced multimodal models. Yet current reasoning evaluations of
multimodal large language models (MLLMs) often rely on text descriptions and
allow language-based reasoning shortcuts, failing to measure genuine
vision-centric reasoning. To address this, we introduce VisuLogic: a benchmark
of 1,000 human-verified problems across six categories (e.g., quantitative
shifts, spatial relations, attribute comparisons). These question types allow
the visual reasoning capabilities of MLLMs to be assessed from multiple
perspectives. We evaluate leading MLLMs on this benchmark and analyze their
results to identify common failure modes. Most models score below 30% accuracy,
only slightly above the 25% random baseline and far below the 51.4% achieved by
humans, revealing significant gaps in visual reasoning.
Furthermore, we provide a supplementary training dataset and a
reinforcement-learning baseline to support further progress.
How cost-effectively can strong reasoning abilities be achieved in language
models? Driven by this fundamental question, we present Tina, a family of tiny
reasoning models achieved with high cost-efficiency. Notably, Tina demonstrates
that substantial reasoning performance can be developed with only minimal
resources by applying parameter-efficient updates, via low-rank adaptation
(LoRA), during reinforcement learning (RL) to an already tiny 1.5B-parameter
base model. This minimalist approach produces models that achieve
reasoning performance which is competitive with, and sometimes surpasses, SOTA
RL reasoning models built upon the same base model. Crucially, this is achieved
at a tiny fraction of the computational post-training cost employed by existing
SOTA models. In fact, the best Tina model achieves a >20% reasoning
performance increase and 43.33% Pass@1 accuracy on AIME24, at a post-training
and evaluation cost of only $9 USD (i.e., an estimated 260x cost reduction). Our
work reveals the surprising effectiveness of efficient RL reasoning via LoRA.
We validate this across multiple open-source reasoning datasets and various
ablation settings, starting with a single, fixed set of hyperparameters.
Furthermore, we hypothesize that this effectiveness and efficiency stem from
LoRA rapidly adapting the model to the structural format of reasoning rewarded
by RL, while largely preserving the base model's underlying knowledge. In
service of accessibility and open research, we fully open-source all code,
training logs, and model weights & checkpoints.
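As a rough illustration of the parameter-efficient recipe described above (a
minimal sketch assuming the Hugging Face transformers and peft libraries; the
base model name, LoRA rank, and target modules are placeholder assumptions, and
the RL loop, rewards, and hyperparameters are omitted), LoRA trains only small
low-rank adapters while the base weights stay frozen:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical ~1.5B-parameter base model; Tina's actual base model and settings may differ.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")

lora_config = LoraConfig(
    r=16,                 # low-rank adapter dimension (assumed value)
    lora_alpha=32,        # LoRA scaling factor (assumed value)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of parameters are trainable
```

An RL post-training step (e.g., a GRPO- or PPO-style loop with verifiable
reasoning rewards) would then update only these adapter weights, which is what
keeps the reported post-training cost so small.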
In this paper, we introduce DreamID, a diffusion-based face swapping model
that achieves high ID similarity, strong attribute preservation, high image
fidelity, and fast inference. Unlike the typical face swapping training
process, which often relies on implicit supervision and struggles to achieve
satisfactory results, DreamID establishes explicit supervision for face
swapping by constructing Triplet ID Group data, significantly enhancing
identity similarity and attribute preservation. The iterative nature of
diffusion models poses challenges for utilizing efficient image-space loss
functions, as performing time-consuming multi-step sampling to obtain the
generated image during training is impractical. To address this issue, we
leverage the accelerated diffusion model SD Turbo, reducing the inference steps
to a single iteration, enabling efficient pixel-level end-to-end training with
explicit Triplet ID Group supervision. Additionally, we propose an improved
diffusion-based model architecture comprising SwapNet, FaceNet, and ID Adapter.
This robust architecture fully unlocks the power of the Triplet ID Group
explicit supervision. Finally, to further extend our method, we explicitly
modify the Triplet ID Group data during training to fine-tune and preserve
specific attributes, such as glasses and face shape. Extensive experiments
demonstrate that DreamID outperforms state-of-the-art methods in terms of
identity similarity, pose and expression preservation, and image fidelity.
Overall, DreamID achieves high-quality face swapping results at 512×512
resolution in just 0.6 seconds and performs exceptionally well in challenging
scenarios such as complex lighting, large angles, and occlusions.
Sungjun Han, Juyoung Suk, Suyeong An, Hyungguk Kim, Kyuseok Kim, Wonsuk Yang, Seungtaek Choi, Jamin Shin
We introduce Trillion-7B, the most token-efficient Korean-centric
multilingual LLM available. Our novel Cross-lingual Document Attention (XLDA)
mechanism enables highly efficient and effective knowledge transfer from
English to target languages like Korean and Japanese. Combined with optimized
data mixtures, language-specific filtering, and tailored tokenizer
construction, Trillion-7B achieves competitive performance while dedicating
only 10% of its 2T training tokens to multilingual data and requiring just
59.4K H100 GPU hours ($148K) for full training. Comprehensive evaluations
across 27 benchmarks in four languages demonstrate Trillion-7B's robust
multilingual performance and exceptional cross-lingual consistency.
We introduce PHYBench, a novel, high-quality benchmark for evaluating the
reasoning capabilities of large language models (LLMs) in physical contexts.
PHYBench consists of 500 meticulously curated physics problems based on
real-world physical scenarios, designed to assess the ability of models to
understand and reason about realistic physical processes. Covering mechanics,
electromagnetism, thermodynamics, optics, modern physics, and advanced physics,
the benchmark spans difficulty levels from high school exercises to
undergraduate problems and Physics Olympiad challenges. Additionally, we
propose the Expression Edit Distance (EED) Score, a novel evaluation metric
based on the edit distance between mathematical expressions, which effectively
captures differences in model reasoning processes and results beyond
traditional binary scoring methods. We evaluate various LLMs on PHYBench and
compare their performance with human experts. Our results reveal that even
state-of-the-art reasoning models significantly lag behind human experts,
highlighting their limitations and the need for improvement in complex physical
reasoning scenarios. Our benchmark results and dataset are publicly available
at https://phybench-official.github.io/phybench-demo/.
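To illustrate the spirit of the EED Score (a minimal sketch under stated
assumptions: the function names are hypothetical, SymPy is used for parsing,
and a token-level edit distance stands in for the official expression-level
metric), a partial-credit score can be derived from the normalized edit
distance between a predicted expression and the reference:

```python
import sympy as sp

def token_edit_distance(a: list, b: list) -> int:
    """Plain Levenshtein distance over expression tokens (single-row DP)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def eed_like_score(pred: str, ref: str) -> float:
    """Return 1.0 for an exact symbolic match, otherwise partial credit based on
    one minus the normalized token edit distance (an assumed scoring scheme)."""
    p, r = sp.sympify(pred), sp.sympify(ref)
    if sp.simplify(p - r) == 0:
        return 1.0
    p_tokens, r_tokens = sp.srepr(p).split("("), sp.srepr(r).split("(")
    dist = token_edit_distance(p_tokens, r_tokens)
    return max(0.0, 1.0 - dist / max(len(p_tokens), len(r_tokens)))
```

The official EED Score operates on full expression trees rather than token
strings, but the underlying idea is the same: near-miss answers earn more
credit than a binary right/wrong judgment allows.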
Shaden Alshammari, John Hershey, Axel Feldmann, William T. Freeman, Mark Hamilton
As the field of representation learning grows, there has been a proliferation
of different loss functions to solve different classes of problems. We
introduce a single information-theoretic equation that generalizes a large
collection of modern loss functions in machine learning. In particular, we
introduce a framework, I-Con, showing that several broad classes of machine
learning methods are precisely minimizing an integrated KL divergence between
two conditional distributions: the supervisory and learned representations. This
viewpoint exposes a hidden information geometry underlying clustering, spectral
methods, dimensionality reduction, contrastive learning, and supervised
learning. This framework enables the development of new loss functions by
combining successful techniques from across the literature. We not only present
a wide array of proofs, connecting over 23 different approaches, but we also
leverage these theoretical results to create state-of-the-art unsupervised
image classifiers that achieve a +8% improvement over the prior
state-of-the-art on unsupervised classification on ImageNet-1K. We also
demonstrate that I-Con can be used to derive principled debiasing methods which
improve contrastive representation learners.
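As a rough sketch of the unifying equation described above (the notation is
illustrative and assumed here, not quoted from the paper), many of these
objectives can be read as minimizing an expected KL divergence between a
supervisory conditional distribution p and a learned conditional distribution q_θ:

\[
\mathcal{L}(\theta) \;=\; \mathbb{E}_{i \sim \mathcal{D}}\Big[\, D_{\mathrm{KL}}\big(\, p(\cdot \mid i) \;\big\|\; q_{\theta}(\cdot \mid i) \,\big) \Big].
\]

Different choices of p (e.g., neighborhood structure, class labels, or
augmentation pairs) and of the learned q_θ then recover clustering, spectral,
contrastive, and supervised losses as special cases.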
Recently, extensive research on image customization (e.g., identity, subject,
style, background, etc.) demonstrates strong customization capabilities in
large-scale generative models. However, most approaches are designed for
specific tasks, restricting their ability to combine different types of
conditions. Developing a unified framework for image customization remains an
open challenge. In this paper, we present DreamO, an image customization
framework designed to support a wide range of tasks while facilitating seamless
integration of multiple conditions. Specifically, DreamO utilizes a diffusion
transformer (DiT) framework to uniformly process input of different types.
During training, we construct a large-scale training dataset that includes
various customization tasks, and we introduce a feature routing constraint to
facilitate the precise querying of relevant information from reference images.
Additionally, we design a placeholder strategy that associates specific
placeholders with conditions at particular positions, enabling control over the
placement of conditions in the generated results. Moreover, we employ a
progressive training strategy consisting of three stages: an initial stage
focused on simple tasks with limited data to establish baseline consistency, a
full-scale training stage to comprehensively enhance the customization
capabilities, and a final quality alignment stage to correct quality biases
introduced by low-quality data. Extensive experiments demonstrate that the
proposed DreamO can effectively perform various image customization tasks with
high quality and flexibly integrate different types of control conditions.
Ivan Moshkov, Darragh Hanley, Ivan Sorokin, Shubham Toshniwal, Christof Henkel, Benedikt Schifferer, Wei Du, Igor Gitman
This paper presents our winning submission to the AI Mathematical Olympiad -
Progress Prize 2 (AIMO-2) competition. Our recipe for building state-of-the-art
mathematical reasoning models relies on three key pillars. First, we create a
large-scale dataset comprising 540K unique high-quality math problems,
including olympiad-level problems, and their 3.2M long-reasoning solutions.
Second, we develop a novel method to integrate code execution with long
reasoning models through iterative training, generation, and quality filtering,
resulting in 1.7M high-quality Tool-Integrated Reasoning solutions. Third, we
create a pipeline to train models to select the most promising solution from
many candidates. We show that such generative solution selection (GenSelect)
can significantly improve upon the majority-voting baseline. Combining these ideas,
we train a series of models that achieve state-of-the-art results on
mathematical reasoning benchmarks. To facilitate further research, we release
our code, models, and the complete OpenMathReasoning dataset under a
commercially permissive license.
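For reference, the majority-voting baseline that GenSelect improves upon can
be sketched as follows (a minimal illustration with assumed names; GenSelect
itself prompts a model to compare candidate solutions rather than merely
counting their final answers):

```python
from collections import Counter

def majority_vote(final_answers: list) -> str:
    """Return the most common final answer among sampled candidate solutions
    (ties are broken by first occurrence among equally frequent answers)."""
    return Counter(final_answers).most_common(1)[0][0]

# Example: eight sampled solutions whose extracted final answers mostly agree.
print(majority_vote(["42", "42", "41", "42", "7", "42", "41", "42"]))  # -> "42"
```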
Direct Preference Optimization (DPO) simplifies reinforcement learning from
human feedback (RLHF) for large language models (LLMs) by directly optimizing
human preferences without an explicit reward model. We find that during DPO
training, the reference model plays the role of a data weight adjuster.
However, the common practice of initializing the policy and reference models
identically in DPO can lead to inefficient data utilization and impose a
performance ceiling. Meanwhile, the lack of a reference model in Simple
Preference Optimization (SimPO) reduces training robustness and necessitates
stricter conditions to prevent catastrophic forgetting. In this work, we
propose Pre-DPO, a simple yet effective DPO-based training paradigm that
enhances preference optimization performance by leveraging a guiding reference
model. This reference model provides foresight into the optimal policy state
achievable through the training preference data, serving as a guiding mechanism
that adaptively assigns higher weights to samples more suitable for the model
and lower weights to those less suitable. Extensive experiments on AlpacaEval
2.0 and Arena-Hard v0.1 benchmarks demonstrate that Pre-DPO consistently
improves the performance of both DPO and SimPO, without relying on external
models or additional data.
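For context on this weighting role, recall the standard DPO objective (shown
in common notation for illustration, not quoted from the paper), which
contrasts policy and reference log-ratios on chosen (y_w) and rejected (y_l)
responses:

\[
\mathcal{L}_{\mathrm{DPO}}(\theta) \;=\; -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\, \log \sigma\!\left( \beta \log \frac{\pi_{\theta}(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_{\theta}(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right].
\]

Because π_ref appears in both log-ratios, it implicitly rescales how much each
preference pair contributes to the gradient; Pre-DPO replaces the usual
policy-initialized reference with a guiding reference model so that this
reweighting adaptively emphasizes samples the policy is better suited to learn
from.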
Xiaoxing Hu, Kaicheng Yang, Jun Wang, Haoran Xu, Ziyong Feng, Yupei Wang
Contrastive Language-Image Pre-training (CLIP) has achieved success on
multiple downstream tasks by aligning image and text modalities. However, the
nature of global contrastive learning limits CLIP's ability to comprehend
compositional concepts, such as relations and attributes. Although recent
studies employ global hard negative samples to improve compositional
understanding, these methods significantly compromise the model's inherent
general capabilities by forcibly distancing textual negative samples from
images in the embedding space. To overcome this limitation, we introduce a
Decoupled Global-Local Alignment (DeGLA) framework that improves compositional
understanding while substantially mitigating losses in general capabilities. To
optimize the retention of the model's inherent capabilities, we incorporate a
self-distillation mechanism within the global alignment process, aligning the
learnable image-text encoder with a frozen teacher model derived from an
exponential moving average. This self-distillation constraint effectively
mitigates catastrophic forgetting of pretrained knowledge during fine-tuning.
To improve compositional understanding, we first leverage
the in-context learning capability of Large Language Models (LLMs) to construct
about 2M high-quality negative captions across five types. Subsequently, we
propose the Image-Grounded Contrast (IGC) loss and Text-Grounded Contrast (TGC)
loss to enhance vision-language compositionality. Extensive experimental results
demonstrate the effectiveness of the DeGLA framework. Compared to previous
state-of-the-art methods, DeGLA achieves an average enhancement of 3.5% across
the VALSE, SugarCrepe, and ARO benchmarks. Concurrently, it obtains an average
performance improvement of 13.0% on zero-shot classification tasks across
eleven datasets. Our code will be released at
https://github.com/xiaoxing2001/DeGLA
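To illustrate the EMA-derived frozen teacher used for self-distillation (a
minimal sketch with assumed function and parameter names and an assumed
momentum value, not the authors' implementation), the teacher weights track an
exponential moving average of the student weights:

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float = 0.999) -> None:
    """Update the frozen teacher as an exponential moving average of the student:
    teacher_param <- momentum * teacher_param + (1 - momentum) * student_param."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)
```

The teacher receives no gradients; only this moving-average update changes its
weights, giving the global-alignment distillation a stable target.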
Kun Wang, Guibin Zhang, Zhenhong Zhou, Jiahao Wu, Miao Yu, Shiqian Zhao, Chenlong Yin, Jinhu Fu, Yibo Yan, Hanjun Luo, Liang Lin, Zhihao Xu, Haolang Lu, Xinye Cao, Xinyun Zhou, Weifei Jin, Fanci Meng, Junyuan Mao, Hao Wu, Minghe Wang, Fan Zhang, Junfeng Fang, Chengwei Liu, Yifan Zhang, Qiankun Li, Chongye Guo, Yalan Qin, Yi Ding, Donghai Hong, Jiaming Ji, Xinfeng Li, Yifan Jiang, Dongxia Wang, Yihao Huang, Yufei Guo, Jen-tse Huang, Yanwei Yue, Wenke Huang, Guancheng Wan, Tianlin Li, Lei Bai, Jie Zhang, Qing Guo, Jingyi Wang, Tianlong Chen, Joey Tianyi Zhou, Xiaojun Jia, Weisong Sun, Cong Wu, Jing Chen, Xuming Hu, Yiming Li, Xiao Wang, Ningyu Zhang, Luu Anh Tuan, Guowen Xu, Tianwei Zhang, Xingjun Ma, Xiang Wang, Bo An, Jun Sun, Mohit Bansal, Shirui Pan, Yuval Elovici, Bhavya Kailkhura, Bo Li, Yaodong Yang, Hongwei Li, Wenyuan Xu, Yizhou Sun, Wei Wang, Qing Li, Ke Tang, Yu-Gang Jiang, Felix Juefei-Xu, Hui Xiong, Xiaofeng Wang, Shuicheng Yan, Dacheng Tao, Philip S. Yu, Qingsong Wen, Yang Liu
The remarkable success of Large Language Models (LLMs) has illuminated a
promising pathway toward achieving Artificial General Intelligence for both
academic and industrial communities, owing to their unprecedented performance
across various applications. As LLMs continue to gain prominence in both
research and commercial domains, their security and safety implications have
become a growing concern, not only for researchers and corporations but also
for every nation. Existing surveys on LLM safety primarily focus on specific
stages of the LLM lifecycle, e.g., the deployment or fine-tuning phase, and
lack a comprehensive understanding of the entire "lifechain" of LLMs.
To address this gap, this paper introduces, for the first time, the concept of
"full-stack" safety to systematically consider safety issues throughout the
entire process of LLM training, deployment, and eventual commercialization.
Compared to existing LLM safety surveys, our work offers several
distinctive advantages: (I) Comprehensive Perspective. We define the complete
LLM lifecycle as encompassing data preparation, pre-training, post-training,
deployment and final commercialization. To our knowledge, this represents the
first safety survey to encompass the entire lifecycle of LLMs. (II) Extensive
Literature Support. Our research is grounded in an exhaustive review of more
than 800 papers, ensuring comprehensive coverage and systematic organization of
security issues within a more holistic understanding. (III) Unique Insights.
Through systematic literature analysis, we have developed reliable roadmaps and
perspectives for each chapter. Our work identifies promising research
directions, including safety in data generation, alignment techniques, model
editing, and LLM-based agent systems. These insights provide valuable guidance
for researchers pursuing future work in this field.
Recently, DeepSeek-R1 (671B) (DeepSeek-AI et al., 2025) has demonstrated
excellent reasoning ability on complex tasks and has publicly shared its
methodology. This provides a source of potentially high-quality
chain-of-thought (CoT) data for stimulating the reasoning abilities of
small-sized large language models (LLMs). To generate such data for different
LLMs, we seek an efficient method for producing high-quality CoT data with
LLM-Adaptive question difficulty levels. First, we grade the difficulty of the
questions according to the reasoning ability of the LLMs themselves and
construct an LLM-Adaptive question database. Second, we sample from the
question database according to a distribution over difficulty levels and then
use DeepSeek-R1
(671B) (DeepSeek-AI et al., 2025) to generate the corresponding high-quality
CoT data with correct answers. Thanks to the construction of CoT data with
LLM-Adaptive difficulty levels, we have significantly reduced the cost of data
generation and enhanced the efficiency of model supervised fine-tuning (SFT).
Finally, we have validated the effectiveness and generalizability of the
proposed method in the fields of complex mathematical competitions and code
generation tasks. Notably, with only 2k high-quality mathematical CoT examples,
our ZMath-32B surpasses DeepSeek-Distill-32B on math reasoning tasks. Similarly,
with only 2k high-quality code CoT examples, our ZCode-32B surpasses
DeepSeek-Distill-32B on code reasoning tasks.
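A minimal sketch of the LLM-Adaptive difficulty idea (function names,
difficulty buckets, thresholds, and the target distribution are all
assumptions, not the paper's implementation): grade each question by the
target LLM's own pass rate, build a difficulty-keyed question database, and
sample it according to a desired difficulty distribution before asking the
stronger teacher model to generate CoT solutions.

```python
import random
from collections import defaultdict

def grade_difficulty(pass_rate: float) -> str:
    """Bucket a question by the target LLM's own pass rate (thresholds are assumed)."""
    if pass_rate >= 0.8:
        return "easy"
    if pass_rate >= 0.3:
        return "medium"
    return "hard"

def build_adaptive_database(questions, pass_rates):
    """Group questions into an LLM-adaptive database keyed by estimated difficulty."""
    database = defaultdict(list)
    for question, rate in zip(questions, pass_rates):
        database[grade_difficulty(rate)].append(question)
    return database

def sample_for_cot(database, target_dist=None, n=2000, seed=0):
    """Sample n questions following a target difficulty distribution; the sampled
    questions would then be sent to a strong teacher model for CoT generation."""
    target_dist = target_dist or {"easy": 0.2, "medium": 0.5, "hard": 0.3}
    rng = random.Random(seed)
    sampled = []
    for level, fraction in target_dist.items():
        pool = database.get(level, [])
        k = min(len(pool), int(n * fraction))
        sampled.extend(rng.sample(pool, k))
    return sampled
```

The sampled questions would then be answered by the teacher (here,
DeepSeek-R1), keeping only solutions with correct final answers as SFT data.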
Since data annotation is costly, benchmark datasets often incorporate labels
from established image datasets. In this work, we assess the impact of label
errors in MSCOCO on the frequently used object hallucination benchmark POPE. We
re-annotate the benchmark images and identify an imbalance in annotation errors
across different subsets. Evaluating multiple models on the revised labels,
which we denote as RePOPE, we observe notable shifts in model rankings,
highlighting the impact of label quality. Code and data are available at
https://github.com/YanNeu/RePOPE.
Causal analysis plays a foundational role in scientific discovery and
reliable decision-making, yet it remains largely inaccessible to domain experts
due to its conceptual and algorithmic complexity. This disconnect between
causal methodology and practical usability presents a dual challenge: domain
experts are unable to leverage recent advances in causal learning, while causal
researchers lack broad, real-world deployment to test and refine their methods.
To address this, we introduce Causal-Copilot, an autonomous agent that
operationalizes expert-level causal analysis within a large language model
framework. Causal-Copilot automates the full pipeline of causal analysis for
both tabular and time-series data -- including causal discovery, causal
inference, algorithm selection, hyperparameter optimization, result
interpretation, and generation of actionable insights. It supports interactive
refinement through natural language, lowering the barrier for non-specialists
while preserving methodological rigor. By integrating over 20 state-of-the-art
causal analysis techniques, our system fosters a virtuous cycle -- expanding
access to advanced causal methods for domain experts while generating rich,
real-world applications that inform and advance causal theory. Empirical
evaluations demonstrate that Causal-Copilot achieves superior performance
compared to existing baselines, offering a reliable, scalable, and extensible
solution that bridges the gap between theoretical sophistication and real-world
applicability in causal analysis. A live interactive demo of Causal-Copilot is
available at https://causalcopilot.com/.
C-to-Rust transpilation is essential for modernizing legacy C code while
enhancing safety and interoperability with modern Rust ecosystems. However, no
dataset currently exists for evaluating whether a system can transpile C into
safe Rust that passes a set of test cases. We introduce CRUST-Bench, a dataset
of 100 C repositories, each paired with manually-written interfaces in safe
Rust as well as test cases that can be used to validate correctness of the
transpilation. By considering entire repositories rather than isolated
functions, CRUST-Bench captures the challenges of translating complex projects
with dependencies across multiple files. The provided Rust interfaces offer
explicit specifications that ensure adherence to idiomatic, memory-safe Rust
patterns, while the accompanying test cases enforce functional correctness. We
evaluate state-of-the-art large language models (LLMs) on this task and find
that safe and idiomatic Rust generation is still a challenging problem for
various state-of-the-art methods and techniques. We also provide insights into
the errors LLMs usually make in transpiling code from C to safe Rust. The
best-performing model, OpenAI o1, solves only 15 tasks in a single-shot
setting. Improvements on CRUST-Bench would lead to improved transpilation
systems that can reason about complex scenarios and help in migrating legacy
codebases from C into languages like Rust that ensure memory safety. You can
find the dataset and code at https://github.com/anirudhkhatry/CRUST-bench.
Michał Turski, Mateusz Chiliński, Łukasz Borchmann
Checkboxes are critical in real-world document processing where the presence
or absence of ticks directly informs data extraction and decision-making
processes. Yet, despite the strong performance of Large Vision and Language
Models across a wide range of tasks, they struggle with interpreting checkable
content. This challenge becomes particularly pressing in industries where a
single overlooked checkbox may lead to costly regulatory or contractual
oversights. To address this gap, we introduce the CheckboxQA dataset, a
targeted resource designed to evaluate and improve model performance on
checkbox-related tasks. It reveals the limitations of current models and serves
as a valuable tool for advancing document comprehension systems, with
significant implications for applications in sectors such as legal tech and
finance.
The dataset is publicly available at:
https://github.com/Snowflake-Labs/CheckboxQA
Jingchao Wang, Hong Wang, Wenlong Zhang, Kunhua Ji, Dingjiang Huang, Yefeng Zheng
Multi-task visual grounding (MTVG) includes two sub-tasks, i.e., Referring
Expression Comprehension (REC) and Referring Expression Segmentation (RES).
Existing representative approaches generally follow a pipeline with three core
procedures: independent feature extraction for the visual and linguistic
modalities, a cross-modal interaction module, and separate prediction heads for
the different sub-tasks. Although this line of research achieves remarkable
performance, it has two limitations: 1) the linguistic content is not injected
into the visual backbone itself to improve visual feature extraction, so an
extra cross-modal interaction module is required; 2) the relationship between
the REC and RES tasks is not effectively exploited for collaborative prediction
and more accurate output. To address these problems,
in this paper, we propose a Progressive Language-guided Visual Learning
framework for multi-task visual grounding, called PLVL, which not only finely
mines the inherent feature representations of the visual modality itself but
also progressively injects language information to help learn
language-related visual features. In this manner, PLVL needs no additional
cross-modal fusion module while fully incorporating language guidance.
Furthermore, we observe that the localization center predicted for REC can, to
some extent, help identify the object region to be segmented for RES. Inspired
by this observation, we design a multi-task head that performs collaborative
predictions for the two sub-tasks. Extensive experiments on several benchmark
datasets substantiate that PLVL clearly outperforms representative methods on
both REC and RES tasks.
https://github.com/jcwang0602/PLVL