We present Seed1.5-VL, a vision-language foundation model designed to advance
general-purpose multimodal understanding and reasoning. Seed1.5-VL is composed of a 532M-parameter vision encoder and a Mixture-of-Experts (MoE) LLM with 20B active parameters. Despite its relatively compact architecture, it delivers
strong performance across a wide spectrum of public VLM benchmarks and internal
evaluation suites, achieving state-of-the-art performance on 38 out of 60
public benchmarks. Moreover, in agent-centric tasks such as GUI control and
gameplay, Seed1.5-VL outperforms leading multimodal systems, including OpenAI
CUA and Claude 3.7. Beyond visual and video understanding, it also demonstrates
strong reasoning abilities, making it particularly effective for multimodal
reasoning challenges such as visual puzzles. We believe these capabilities will
empower broader applications across diverse tasks. In this report, we provide a comprehensive review of our experience in building Seed1.5-VL, covering model design, data construction, and training at various stages, and we hope it can inspire further research. Seed1.5-VL is now accessible at
https://www.volcengine.com/ (Volcano Engine Model ID:
doubao-1-5-thinking-vision-pro-250428)
We present MiMo-7B, a large language model born for reasoning tasks, with
optimization across both pre-training and post-training stages. During
pre-training, we enhance the data preprocessing pipeline and employ a
three-stage data mixing strategy to strengthen the base model's reasoning
potential. MiMo-7B-Base is pre-trained on 25 trillion tokens, with an additional Multi-Token Prediction objective for enhanced performance and accelerated
inference speed. During post-training, we curate a dataset of 130K verifiable
mathematics and programming problems for reinforcement learning, integrating a
test-difficulty-driven code-reward scheme to alleviate sparse-reward issues and
employing strategic data resampling to stabilize training. Extensive
evaluations show that MiMo-7B-Base possesses exceptional reasoning potential,
outperforming even much larger 32B models. The final RL-tuned model,
MiMo-7B-RL, achieves superior performance on mathematics, code and general
reasoning tasks, surpassing the performance of OpenAI o1-mini. The model
checkpoints are available at https://github.com/xiaomimimo/MiMo.
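The abstract does not detail the test-difficulty-driven code-reward scheme, but the idea of densifying a sparse pass/fail code reward can be illustrated with the hedged sketch below; the weighting rule and function names are assumptions for illustration, not MiMo's actual recipe.

```python
def difficulty_weighted_code_reward(passed, difficulty):
    """Hypothetical dense reward for one code rollout (not MiMo's exact scheme).

    passed     : list[bool], whether each unit test passed
    difficulty : list[float], per-test difficulty weights
                 (e.g., 1 - historical pass rate of that test)

    A sparse 0/1 reward pays out only when *all* tests pass; granting partial
    credit proportional to the difficulty of the tests that did pass gives the
    RL policy a useful gradient on hard problems.
    """
    total = sum(difficulty)
    if total == 0.0:
        return 0.0
    earned = sum(d for ok, d in zip(passed, difficulty) if ok)
    return earned / total
```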
While generative artificial intelligence has advanced significantly across
text, image, audio, and video domains, 3D generation remains comparatively
underdeveloped due to fundamental challenges such as data scarcity, algorithmic
limitations, and ecosystem fragmentation. To this end, we present Step1X-3D, an
open framework addressing these challenges through: (1) a rigorous data
curation pipeline processing >5M assets to create a 2M high-quality dataset
with standardized geometric and textural properties; (2) a two-stage 3D-native
architecture combining a hybrid VAE-DiT geometry generator with a diffusion-based texture synthesis module; and (3) the full open-source release
of models, training code, and adaptation modules. For geometry generation, the
hybrid VAE-DiT component produces TSDF representations by employing
perceiver-based latent encoding with sharp edge sampling for detail
preservation. The diffusion-based texture synthesis module then ensures
cross-view consistency through geometric conditioning and latent-space
synchronization. Benchmark results demonstrate state-of-the-art performance
that exceeds existing open-source methods, while also achieving competitive
quality with proprietary solutions. Notably, the framework uniquely bridges the
2D and 3D generation paradigms by supporting direct transfer of 2D control
techniques (e.g., LoRA) to 3D synthesis. By simultaneously advancing data
quality, algorithmic fidelity, and reproducibility, Step1X-3D aims to establish
new standards for open research in controllable 3D asset generation.
Tongxu Luo, Wenyu Du, Jiaxi Bi, Stephen Chung, Zhengyang Tang, Hao Yang, Min Zhang, Benyou Wang
Large Reasoning Models (LRMs) have the ability to self-correct even when they
make mistakes in their reasoning paths. However, our study reveals that when
the reasoning process starts with a short but poor beginning, it becomes
difficult for the model to recover. We refer to this phenomenon as the "Prefix
Dominance Trap". Inspired by psychological findings that peer interaction can
promote self-correction without negatively impacting already accurate
individuals, we propose **Learning from Peers** (LeaP) to address this
phenomenon. Specifically, at regular token intervals, each reasoning path summarizes its
intermediate reasoning and shares it with others through a routing mechanism,
enabling paths to incorporate peer insights during inference. However, we
observe that smaller models sometimes fail to follow summarization and
reflection instructions effectively. To address this, we fine-tune them into
our **LeaP-T** model series. Experiments on AIME 2024, AIME 2025, AIMO 2025,
and GPQA Diamond show that LeaP provides substantial improvements. For
instance, QwQ-32B with LeaP achieves nearly 5 absolute points higher than the
baseline on average, and surpasses DeepSeek-R1-671B on three math benchmarks
with an average gain of 3.3 points. Notably, our fine-tuned LeaP-T-7B matches
the performance of DeepSeek-R1-Distill-Qwen-14B on AIME 2024. In-depth analysis
reveals that LeaP achieves robust error correction through timely peer insights, showing strong error tolerance and handling varied task difficulty. LeaP marks a milestone by
enabling LRMs to collaborate during reasoning. Our code, datasets, and models
are available at https://learning-from-peers.github.io/ .
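As a rough, hedged sketch of the summarize-and-share loop described above, the pseudocode below runs several reasoning paths in parallel, pauses them at regular token intervals, and routes peer summaries back into each path; the `model.generate` interface, prompts, and all-to-all routing are assumptions, not LeaP's exact design.

```python
def leap_style_inference(model, question, num_paths=4, interval=256, rounds=8):
    """Hypothetical sketch of peer-sharing during parallel reasoning (LeaP-style)."""
    paths = [question] * num_paths
    for _ in range(rounds):
        # 1) Each path extends its own chain of thought by a fixed token budget.
        paths = [p + model.generate(p, max_new_tokens=interval) for p in paths]
        # 2) Each path produces a short summary of its intermediate reasoning.
        summaries = [model.generate(p + "\nBriefly summarize your progress:") for p in paths]
        # 3) A routing step shares peer summaries; the simplest router is all-to-all.
        for i in range(num_paths):
            peers = "\n".join(s for j, s in enumerate(summaries) if j != i)
            paths[i] += "\n[Peer insights]\n" + peers + "\n"
    return paths
```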
Recent advances in continuous generative models, including multi-step
approaches like diffusion and flow-matching (typically requiring 8-1000
sampling steps) and few-step methods such as consistency models (typically 1-8
steps), have demonstrated impressive generative performance. However, existing
work often treats these approaches as distinct paradigms, resulting in separate
training and sampling methodologies. We introduce a unified framework for
training, sampling, and analyzing these models. Our implementation, the Unified
Continuous Generative Models Trainer and Sampler (UCGM-{T,S}), achieves
state-of-the-art (SOTA) performance. For example, on ImageNet 256x256 using a
675M diffusion transformer, UCGM-T trains a multi-step model achieving 1.30 FID
in 20 steps and a few-step model reaching 1.42 FID in just 2 steps.
Additionally, applying UCGM-S to a pre-trained model (previously 1.26 FID at
250 steps) improves performance to 1.06 FID in only 40 steps. Code is available
at: https://github.com/LINs-lab/UCGM.
Instruction-based Large Language Models (LLMs) have proven effective in
numerous few-shot or zero-shot Natural Language Processing (NLP) tasks.
However, creating human-annotated instruction data is time-consuming,
expensive, and often limited in quantity and task diversity. Previous research
endeavors have attempted to address this challenge by proposing frameworks
capable of generating instructions in a semi-automated and task-agnostic manner
directly from the model itself. Many of these efforts have relied on large, API-only models such as GPT-3.5 (175B), which are expensive and subject to query limits. This paper explores the performance of three small open-source LLMs, LLaMA 2-7B, LLaMA 2-13B, and Mistral 7B, within a semi-automated framework, thereby reducing the human intervention, effort, and cost required to generate an instruction dataset for fine-tuning LLMs. Furthermore, we demonstrate that incorporating a Reinforcement Learning (RL) based training algorithm into this LLM-based framework leads to further enhancements. Our evaluation of the dataset reveals that these RL-based frameworks achieve substantial improvements on 63-66% of the tasks compared to previous approaches.
Recent breakthroughs in generative models, particularly diffusion models and rectified flows, have revolutionized visual content creation, yet aligning model
outputs with human preferences remains a critical challenge. Existing
reinforcement learning (RL)-based methods for visual generation face critical
limitations: incompatibility with modern Ordinary Differential Equation (ODE)-based sampling paradigms, instability in large-scale training, and lack
of validation for video generation. This paper introduces DanceGRPO, the first
unified framework to adapt Group Relative Policy Optimization (GRPO) to visual
generation paradigms, unleashing one unified RL algorithm across two generative
paradigms (diffusion models and rectified flows), three tasks (text-to-image,
text-to-video, image-to-video), four foundation models (Stable Diffusion,
HunyuanVideo, FLUX, SkyReel-I2V), and five reward models (image/video
aesthetics, text-image alignment, video motion quality, and binary reward). To
our knowledge, DanceGRPO is the first RL-based unified framework capable of
seamless adaptation across diverse generative paradigms, tasks, foundational
models, and reward models. DanceGRPO demonstrates consistent and substantial improvements, outperforming baselines by up to 181% on benchmarks such as HPS-v2.1, CLIP Score, VideoAlign, and GenEval. Notably, DanceGRPO not only stabilizes policy optimization for complex video generation but also enables the generative policy to better capture denoising trajectories for Best-of-N inference scaling and to learn from sparse binary feedback. Our results establish
DanceGRPO as a robust and versatile solution for scaling Reinforcement Learning
from Human Feedback (RLHF) tasks in visual generation, offering new insights
into harmonizing reinforcement learning and visual synthesis. The code will be
released.
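For readers unfamiliar with GRPO, its core trick is to replace a learned value critic with a group-normalized advantage: several samples are drawn for the same prompt, and each sample's reward is standardized against its group. The minimal sketch below shows only that advantage computation; DanceGRPO's adaptation to diffusion and rectified-flow sampling is not reproduced here.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: standardize rewards within a group of samples
    generated from the same prompt (or conditioning input)."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# e.g., four videos sampled for one prompt and scored by a reward model
print(group_relative_advantages([0.62, 0.55, 0.71, 0.40]))
```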
Xiaokun Wang, Chris, Jiangbo Pei, Wei Shen, Yi Peng, Yunzhuo Hao, Weijie Qiu, Ai Jian, Tianyidan Xie, Xuchen Song, Yang Liu, Yahui Zhou
We propose Skywork-VL Reward, a multimodal reward model that provides reward
signals for both multimodal understanding and reasoning tasks. Our technical
approach comprises two key components: First, we construct a large-scale
multimodal preference dataset that covers a wide range of tasks and scenarios,
with responses collected from both standard vision-language models (VLMs) and
advanced VLM reasoners. Second, we design a reward model architecture based on
Qwen2.5-VL-7B-Instruct, integrating a reward head and applying multi-stage
fine-tuning with a pairwise ranking loss on the preference data.
Experimental evaluations show that Skywork-VL Reward achieves state-of-the-art
results on multimodal VL-RewardBench and exhibits competitive performance on
the text-only RewardBench benchmark. Furthermore, preference data constructed
based on our Skywork-VL Reward proves highly effective for training Mixed
Preference Optimization (MPO), leading to significant improvements in
multimodal reasoning capabilities. Our results underscore Skywork-VL Reward as
a significant advancement toward general-purpose, reliable reward models for
multimodal alignment. Our model has been publicly released to promote
transparency and reproducibility.
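The pairwise ranking loss mentioned above is typically the Bradley-Terry objective used for reward-model training: the reward head should score the preferred response above the rejected one. A minimal PyTorch sketch follows; Skywork-VL Reward's exact training code may differ.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(reward_chosen, reward_rejected):
    """Bradley-Terry style loss: -log sigmoid(r_chosen - r_rejected).

    Both arguments are tensors of scalar rewards emitted by the reward head
    for the preferred and rejected responses of each preference pair.
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# toy example with two preference pairs
loss = pairwise_ranking_loss(torch.tensor([1.2, 0.3]), torch.tensor([0.4, 0.9]))
```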
Recently, there has been growing interest in collecting reasoning-intensive
pretraining data to improve LLMs' complex reasoning ability. Prior approaches
typically rely on supervised classifiers to identify such data, which requires
labeling by humans or LLMs, often introducing domain-specific biases. Since attention heads are crucial to in-context reasoning, we propose AttentionInfluence, a simple yet effective, training-free method that requires no supervision signal. Our approach enables a small pretrained language model to
act as a strong data selector through a simple attention head masking
operation. Specifically, we identify retrieval heads and compute the loss
difference when masking these heads. We apply AttentionInfluence to a
1.3B-parameter dense model to conduct data selection on the SmolLM corpus of
241B tokens, and mix the SmolLM corpus with the selected subset comprising 73B
tokens to pretrain a 7B-parameter dense model using 1T training tokens and WSD
learning rate scheduling. Our experimental results demonstrate substantial
improvements, ranging from 1.4pp to 3.5pp, across several knowledge-intensive
and reasoning-heavy benchmarks (i.e., MMLU, MMLU-Pro, AGIEval-en, GSM8K, and
HumanEval). This demonstrates an effective weak-to-strong scaling property,
with small models improving the final performance of larger models, offering a
promising and scalable path for reasoning-centric data selection.
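The selection rule stated above (mask the identified retrieval heads and measure how much the loss degrades) can be sketched as follows; the head-masking API and the keep ratio are hypothetical placeholders, not the paper's implementation.

```python
def attention_influence_score(model, sample, retrieval_heads):
    """Hypothetical sketch: samples whose loss rises most when retrieval heads
    are masked are assumed to rely more on in-context reasoning."""
    base_loss = model.loss(sample)                                  # intact small model
    masked_loss = model.loss(sample, mask_heads=retrieval_heads)    # retrieval heads zeroed
    return masked_loss - base_loss                                  # bigger gap => keep sample

def select_reasoning_data(model, corpus, retrieval_heads, keep_ratio=0.3):
    """Rank the corpus by the score above and keep the top fraction."""
    ranked = sorted(corpus,
                    key=lambda s: attention_influence_score(model, s, retrieval_heads),
                    reverse=True)
    return ranked[: int(len(ranked) * keep_ratio)]
```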
Xingjin Wang, Howe Tissue, Lu Wang, Linjing Li, Daniel Dajun Zeng
Continual Pre-Training (CPT) has become a popular and effective method to
apply strong foundation models to specific downstream tasks. In this work, we
explore the learning dynamics throughout the CPT process for large language
models. We specifically focus on how general and downstream domain performance
evolves at each training step, with domain performance measured via validation
losses. We observe that the CPT loss curve fundamentally characterizes a transition from one hidden loss curve to another, and can be described by decoupling the effects of distribution shift and learning rate annealing. We derive a CPT scaling law that combines the two factors, enabling the prediction of loss at any (continual) training step and across learning rate schedules
(LRS) in CPT. Our formulation presents a comprehensive understanding of several
critical factors in CPT, including loss potential, peak learning rate, training
steps, replay ratio, etc. Moreover, our approach can be adapted to customize
training hyper-parameters to different CPT goals such as balancing general and
domain-specific performance. Extensive experiments demonstrate that our scaling
law holds across various CPT datasets and training hyper-parameters.
Zimu Lu, Yunqiao Yang, Houxing Ren, Haotian Hou, Han Xiao, Ke Wang, Weikang Shi, Aojun Zhou, Mingjie Zhan, Hongsheng Li
LLM-based agents have demonstrated great potential in generating and managing
code within complex codebases. In this paper, we introduce WebGen-Bench, a
novel benchmark designed to measure an LLM-based agent's ability to create
multi-file website codebases from scratch. It contains diverse instructions for
website generation, created through the combined efforts of human annotators
and GPT-4o. These instructions span three major categories and thirteen minor
categories, encompassing nearly all important types of web applications. To
assess the quality of the generated websites, we use GPT-4o to generate test
cases targeting each functionality described in the instructions, and then
manually filter, adjust, and organize them to ensure accuracy, resulting in 647
test cases. Each test case specifies an operation to be performed on the
website and the expected result after the operation. To automate testing and
improve reproducibility, we employ a powerful web-navigation agent to execute
tests on the generated websites and determine whether the observed responses
align with the expected results. We evaluate three high-performance code-agent
frameworks, Bolt.diy, OpenHands, and Aider, using multiple proprietary and
open-source LLMs as engines. The best-performing combination, Bolt.diy powered
by DeepSeek-R1, achieves only 27.8% accuracy on the test cases, highlighting
the challenging nature of our benchmark. Additionally, we construct
WebGen-Instruct, a training set consisting of 6,667 website-generation
instructions. Training Qwen2.5-Coder-32B-Instruct on Bolt.diy trajectories
generated from a subset of this training set achieves an accuracy of 38.2%,
surpassing the performance of the best proprietary model.
Prime Intellect Team, Sami Jaghouar, Justus Mattern, Jack Min Ong, Jannik Straube, Manveer Basra, Aaron Pazdera, Kushal Thaman, Matthew Di Ferrante, Felix Gabriel, Fares Obeid, Kemal Erdem, Michael Keiblinger, Johannes Hagemann
We introduce INTELLECT-2, the first globally distributed reinforcement
learning (RL) training run of a 32 billion parameter language model. Unlike
traditional centralized training efforts, INTELLECT-2 trains a reasoning model
using fully asynchronous RL across a dynamic, heterogeneous swarm of
permissionless compute contributors.
To enable a training run with this unique infrastructure, we built various
components from scratch: we introduce PRIME-RL, our training framework
purpose-built for distributed asynchronous reinforcement learning, built on top
of novel components such as TOPLOC, which verifies rollouts from untrusted
inference workers, and SHARDCAST, which efficiently broadcasts policy weights
from training nodes to inference workers.
Beyond infrastructure components, we propose modifications to the standard
GRPO training recipe and data filtering techniques that were crucial for achieving training stability and ensuring that our model successfully learned its training objective, thus improving upon QwQ-32B, the state-of-the-art reasoning model in
the 32B parameter range.
We open-source INTELLECT-2 along with all of our code and data, hoping to
encourage and enable more open research in the field of decentralized training.
Conventional wisdom holds that autoregressive models are meant for processing discrete data. When applied to continuous modalities such as visual data,
Visual AutoRegressive modeling (VAR) typically resorts to quantization-based
approaches to cast the data into a discrete space, which can introduce
significant information loss. To tackle this issue, we introduce a Continuous
VAR framework that enables direct visual autoregressive generation without
vector quantization. The underlying theoretical foundation is strictly proper
scoring rules, which provide powerful statistical tools capable of evaluating
how well a generative model approximates the true distribution. Within this
framework, all we need is to select a strictly proper score and set it as the
training objective to optimize. We primarily explore a class of training
objectives based on the energy score, which is likelihood-free and thus
overcomes the difficulty of making probabilistic predictions in the continuous
space. Previous efforts on continuous autoregressive generation, such as GIVT
and diffusion loss, can also be derived from our framework using other strictly
proper scores. Source code: https://github.com/shaochenze/EAR.
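For reference, the energy score needs only samples from the model rather than likelihoods, which is what makes it attractive for continuous tokens. Under this framework, a Monte Carlo training loss with two independent model samples per position might look like the sketch below (beta in (0, 2)); the paper's exact estimator and conditioning are not reproduced here.

```python
import torch

def energy_score_loss(x1, x2, target, beta=1.0):
    """Monte Carlo energy score, negatively oriented (lower is better):
        ES = E||X - y||^beta - 0.5 * E||X - X'||^beta
    x1, x2 : two independent model samples for the same context, (batch, dim)
    target : the ground-truth continuous token, (batch, dim)
    Minimizing this pulls samples toward the data, while the repulsion term
    keeps the model from collapsing to a point, all without explicit likelihoods.
    """
    fit = 0.5 * (torch.norm(x1 - target, dim=-1) ** beta
                 + torch.norm(x2 - target, dim=-1) ** beta)
    spread = 0.5 * torch.norm(x1 - x2, dim=-1) ** beta
    return (fit - spread).mean()
```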
Niladri Shekhar Dutt, Duygu Ceylan, Niloy J. Mitra
Retouching is an essential task in post-manipulation of raw photographs.
Generative editing, guided by text or strokes, provides a new tool accessible
to users but can easily change the identity of the original objects in
unacceptable and unpredictable ways. In contrast, although traditional
procedural edits, as commonly supported by photo-editing tools (e.g., GIMP, Lightroom), are conservative, they are still preferred by professionals. Unfortunately, professional-quality retouching involves many individual procedural editing operations that are challenging for most novices to plan. In
this paper, we ask if a multimodal large language model (MLLM) can be taught to
critique raw photographs, suggest suitable remedies, and finally realize them
with a given set of pre-authored procedural image operations. We demonstrate
that MLLMs can be first made aware of the underlying image processing
operations, by training them to solve specially designed visual puzzles.
Subsequently, such an operation-aware MLLM can both plan and propose edit
sequences. To facilitate training, given a set of expert-edited photos, we
synthesize a reasoning dataset by procedurally manipulating the expert edits
and then grounding a pretrained LLM on the visual adjustments to generate reasoning traces for fine-tuning. The proposed retouching operations are, by
construction, understandable by the users, preserve object details and
resolution, and can be optionally overridden. We evaluate our setup on a
variety of test examples and show advantages, in terms of explainability and
identity preservation, over existing generative and other procedural
alternatives. Code, data, models, and supplementary results can be found via
our project website at https://monetgpt.github.io.
Ziyang Huang, Xiaowei Yuan, Yiming Ju, Jun Zhao, Kang Liu
Retrieval-augmented generation (RAG) is a common strategy to reduce
hallucinations in Large Language Models (LLMs). While reinforcement learning
(RL) can enable LLMs to act as search agents by activating retrieval
capabilities, existing approaches often underutilize the models' internal knowledge. This can lead to redundant retrievals, potentially harmful knowledge conflicts, and
increased inference latency. To address these limitations, an efficient and
adaptive search agent capable of discerning optimal retrieval timing and
synergistically integrating parametric (internal) and retrieved (external)
knowledge is urgently needed. This paper introduces the Reinforced Internal-External Knowledge Synergistic Reasoning Agent (IKEA), which can identify its own knowledge boundary and prioritize the utilization of internal
knowledge, resorting to external search only when internal knowledge is deemed
insufficient. This is achieved using a novel knowledge-boundary-aware reward function and a knowledge-boundary-aware training dataset, both designed for internal-external knowledge-synergy-oriented RL, incentivizing the model to
deliver accurate answers, minimize unnecessary retrievals, and encourage
appropriate external searches when its own knowledge is lacking. Evaluations
across multiple knowledge reasoning tasks demonstrate that IKEA significantly
outperforms baseline methods, reduces retrieval frequency significantly, and
exhibits robust generalization capabilities.
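The abstract does not give the knowledge-boundary-aware reward itself; purely as a hedged illustration of the incentive it describes (reward correct answers, discourage retrieval the model did not need, and allow search when internal knowledge falls short), one could imagine a shaping function like the following.

```python
def knowledge_boundary_reward(correct, used_retrieval, within_boundary, penalty=0.2):
    """Hypothetical reward shaping, not IKEA's actual function.

    correct         : the final answer is correct
    used_retrieval  : the agent issued an external search
    within_boundary : the question is labeled answerable from internal
                      (parametric) knowledge alone
    """
    if not correct:
        return 0.0
    reward = 1.0
    if used_retrieval and within_boundary:
        reward -= penalty   # penalize a redundant search on an in-boundary question
    return reward
```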
D. Sculley, Will Cukierski, Phil Culliton, Sohier Dane, Maggie Demkin, Ryan Holbrook, Addison Howard, Paul Mooney, Walter Reade, Megan Risdal, Nate Keating
In this position paper, we observe that empirical evaluation in Generative AI
is at a crisis point since traditional ML evaluation and benchmarking
strategies are insufficient to meet the needs of evaluating modern GenAI models
and systems. There are many reasons for this, including the fact that these
models typically have nearly unbounded input and output spaces, typically do not have a well-defined ground-truth target, and typically exhibit strong feedback loops and prediction dependence based on the context of previous model outputs. On top of these critical issues, we argue that the problems of leakage and contamination are in fact the most important and difficult
issues to address for GenAI evaluations. Interestingly, the field of AI
Competitions has developed effective measures and practices to combat leakage
for the purpose of counteracting cheating by bad actors within a competition
setting. This makes AI Competitions an especially valuable (but underutilized)
resource. Now is the time for the field to view AI Competitions as the gold standard for empirical rigor in GenAI evaluation, and to harness and harvest their results accordingly.
Sparse Mixture of Experts (MoE) architectures have emerged as a promising
approach for scaling Transformer models. While initial works primarily
incorporated MoE into feed-forward network (FFN) layers, recent studies have
explored extending the MoE paradigm to attention layers to enhance model
performance. However, existing attention-based MoE layers require specialized
implementations and demonstrate suboptimal performance compared to their
FFN-based counterparts. In this paper, we aim to unify the MoE designs in
attention and FFN layers by introducing a novel reformulation of the attention
mechanism, revealing an underlying FFN-like structure within attention modules.
Our proposed architecture, UMoE, achieves superior performance through
attention-based MoE layers while enabling efficient parameter sharing between
FFN and attention components.
Jiashuo Sun, Xianrui Zhong, Sizhe Zhou, Jiawei Han
Retrieval-augmented generation (RAG) systems combine large language models
(LLMs) with external knowledge retrieval, making them highly effective for
knowledge-intensive tasks. A crucial but often under-explored component of
these systems is the reranker, which refines retrieved documents to enhance
generation quality and explainability. The challenge of selecting the optimal
number of documents (k) remains unsolved: too few may omit critical
information, while too many introduce noise and inefficiencies. Although recent
studies have explored LLM-based rerankers, they primarily leverage internal
model knowledge and overlook the rich supervisory signals that LLMs can
provide, such as using response quality as feedback for optimizing reranking
decisions. In this paper, we propose DynamicRAG, a novel RAG framework where
the reranker dynamically adjusts both the order and number of retrieved
documents based on the query. We model the reranker as an agent optimized
through reinforcement learning (RL), using rewards derived from LLM output
quality. Across seven knowledge-intensive datasets, DynamicRAG demonstrates
superior performance, achieving state-of-the-art results. The model, data and
code are available at https://github.com/GasolSun36/DynamicRAG.
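At a high level, the loop described above (the reranker acts, the generator answers, and the answer's quality becomes the reranker's reward) could be wired up roughly as in this hedged sketch; the agent interface and quality metric here are assumptions for illustration only.

```python
def reranker_rl_episode(reranker, generator, quality_metric, query, retrieved_docs):
    """Hypothetical single RL episode for a dynamic reranker."""
    ordered, k = reranker.act(query, retrieved_docs)   # policy picks both order and cutoff k
    answer = generator.generate(query, ordered[:k])    # downstream LLM answers with kept docs
    reward = quality_metric(query, answer)             # e.g., exact match or judged answer quality
    reranker.update(query, retrieved_docs, reward)     # policy-gradient style update
    return answer, reward
```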
Tuochao Chen, Nicholas Batchelder, Alisa Liu, Noah Smith, Shyamnath Gollakota
We introduce LlamaPIE, the first real-time proactive assistant designed to
enhance human conversations through discreet, concise guidance delivered via
hearable devices. Unlike traditional language models that require explicit user
invocation, this assistant operates in the background, anticipating user needs
without interrupting conversations. We address several challenges, including
determining when to respond, crafting concise responses that enhance
conversations, leveraging knowledge of the user for context-aware assistance,
and real-time, on-device processing. To achieve this, we construct a
semi-synthetic dialogue dataset and propose a two-model pipeline: a small model
that decides when to respond and a larger model that generates the response. We
evaluate our approach on real-world datasets, demonstrating its effectiveness
in providing helpful, unobtrusive assistance. User studies with our assistant,
implemented on Apple Silicon M2 hardware, show a strong preference for the
proactive assistant over both a baseline with no assistance and a reactive
model, highlighting the potential of LlamaPIE to enhance live conversations.
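The two-model pipeline above can be pictured with the hedged sketch below: a small on-device model decides whether to speak at all, and only then does a larger model craft a short whispered hint; the interfaces and word budget are illustrative assumptions.

```python
def proactive_assistant_step(decider, generator, dialogue_history, user_memory):
    """Hypothetical sketch of the two-model pipeline (not LlamaPIE's exact code)."""
    # Small, fast model: should the assistant say anything at this turn?
    if not decider.should_respond(dialogue_history):
        return None                      # stay silent; never interrupt the conversation
    # Larger model: produce a discreet, concise hint grounded in user knowledge.
    return generator.generate(
        dialogue_history,
        context=user_memory,             # personal knowledge for context-aware assistance
        max_words=8,                     # keep the whispered guidance brief
    )
```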
Visuomotor policy learning has witnessed substantial progress in robotic
manipulation, with recent approaches predominantly relying on generative models
to model the action distribution. However, these methods often overlook the
critical coupling between visual perception and action prediction. In this
work, we introduce Triply-Hierarchical Diffusion Policy (H3DP), a novel visuomotor learning framework that explicitly incorporates hierarchical structures to strengthen the integration between visual features and action generation. H3DP contains three levels of hierarchy: (1) depth-aware input layering that organizes
RGB-D observations based on depth information; (2) multi-scale visual
representations that encode semantic features at varying levels of granularity;
and (3) a hierarchically conditioned diffusion process that aligns the
generation of coarse-to-fine actions with corresponding visual features.
Extensive experiments demonstrate that H3DP yields a +27.5%
average relative improvement over baselines across 44 simulation
tasks and achieves superior performance in 4 challenging bimanual
real-world manipulation tasks. Project Page: https://lyy-iiis.github.io/h3dp/.
Assaf Ben-Kish, Itamar Zimerman, M. Jehanzeb Mirza, James Glass, Leonid Karlinsky, Raja Giryes
A recent trend in LLMs is developing recurrent sub-quadratic models that
improve long-context processing efficiency. We investigate leading large
long-context models, focusing on how their fixed-size recurrent memory affects
their performance. Our experiments reveal that, even when these models are trained for extended contexts, they still underutilize long contexts. Specifically, we demonstrate that a chunk-based inference procedure, which identifies and processes only the most relevant portion of the input, can mitigate recurrent memory failures and be effective for many
long-context tasks: On LongBench, our method improves the overall performance
of Falcon3-Mamba-Inst-7B by 14%, Falcon-Mamba-Inst-7B by 28%,
RecurrentGemma-IT-9B by 50%, and RWKV6-Finch-7B by 51%. Surprisingly, this
simple approach also leads to state-of-the-art results in the challenging
LongBench v2 benchmark, showing competitive performance with equivalent size
Transformers. Furthermore, our findings raise questions about whether recurrent
models genuinely exploit long-range dependencies, as our single-chunk strategy
delivers stronger performance, even in tasks that presumably require
cross-context relations.
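The chunk-based procedure referred to above can be sketched in a few lines: split the long input into chunks, score each chunk's relevance to the query, and feed only the best chunk to the recurrent model. The scorer and chunk size below are illustrative assumptions rather than the paper's exact settings.

```python
def single_chunk_inference(model, relevance, query, context, chunk_chars=8000):
    """Hypothetical sketch of chunk-based inference for a recurrent LLM."""
    chunks = [context[i:i + chunk_chars] for i in range(0, len(context), chunk_chars)]
    best = max(chunks, key=lambda c: relevance(query, c))   # e.g., embedding similarity
    prompt = best + "\n\nQuestion: " + query
    return model.generate(prompt)   # the fixed-size recurrent memory sees only relevant text
```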
Vipula Rawte, Ryan A. Rossi, Franck Dernoncourt, Nedim Lipka
As Large Language Models (LLMs) are increasingly applied to document-based
tasks, such as document summarization, question answering, and information extraction, where user requirements focus on retrieving information from
provided documents rather than relying on the model's parametric knowledge,
ensuring the trustworthiness and interpretability of these systems has become a
critical concern. A central approach to addressing this challenge is
attribution, which involves tracing the generated outputs back to their source
documents. However, since LLMs can produce inaccurate or imprecise responses,
it is crucial to assess the reliability of these citations.
To tackle this, our work proposes two techniques. (1) A zero-shot approach
that frames attribution as a straightforward textual entailment task. Our
method, using flan-ul2, demonstrates improvements of 0.27% and 2.4% over the best baseline on the ID and OOD sets of AttributionBench, respectively. (2) We also
explore the role of the attention mechanism in enhancing the attribution
process. With a smaller LLM, flan-t5-small, the F1 scores outperform the baseline across almost all layers, except layer 4 and layers 8 through 11.
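A minimal version of the zero-shot entailment framing in (1) might look like the sketch below, with a small Flan-T5 checkpoint standing in for flan-ul2; the prompt template is an assumption, not the paper's exact wording.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

def is_attributable(source_passage, generated_claim):
    """Frame attribution as textual entailment: does the cited source entail the claim?"""
    prompt = ("Premise: " + source_passage + "\n"
              "Hypothesis: " + generated_claim + "\n"
              "Does the premise entail the hypothesis? Answer yes or no.")
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output = model.generate(**inputs, max_new_tokens=4)
    return tokenizer.decode(output[0], skip_special_tokens=True).strip().lower().startswith("yes")
```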
Although deep learning models have demonstrated remarkable potential in
weather prediction, most of them overlook either the physics of the
underlying weather evolution or the topology of the Earth's surface.
In light of these disadvantages, we develop PASSAT, a novel Physics-ASSisted
And Topology-informed deep learning model for weather prediction. PASSAT
attributes the weather evolution to two key factors: (i) the advection process
that can be characterized by the advection equation and the Navier-Stokes
equation; (ii) the Earth-atmosphere interaction that is difficult to both model
and calculate. PASSAT also takes the topology of the Earth's surface into
consideration, rather than simply treating it as a plane. With these
considerations, PASSAT numerically solves the advection equation and the
Navier-Stokes equation on the spherical manifold, utilizes a spherical graph
neural network to capture the Earth-atmosphere interaction, and generates the
initial velocity fields that are critical to solving the advection equation
from the same spherical graph neural network. In the 5.625°-resolution
ERA5 data set, PASSAT outperforms both the state-of-the-art deep learning-based
weather prediction models and the operational numerical weather prediction
model IFS T42. Code and checkpoint are available at
https://github.com/Yumenomae/PASSAT_5p625.
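For reference, the advection equation that PASSAT integrates on the sphere is the standard transport law for a scalar field q carried by a velocity field v (written here in its generic form, independent of PASSAT's particular spherical discretization):

    ∂q/∂t + v · ∇q = 0,

where the initial velocity field v is the one produced by the spherical graph neural network described above.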
Designing biological sequences that satisfy multiple, often conflicting,
functional and biophysical criteria remains a central challenge in biomolecule
engineering. While discrete flow matching models have recently shown promise
for efficient sampling in high-dimensional sequence spaces, existing approaches
address only single objectives or require continuous embeddings that can
distort discrete distributions. We present Multi-Objective-Guided Discrete Flow
Matching (MOG-DFM), a general framework to steer any pretrained discrete-time
flow matching generator toward Pareto-efficient trade-offs across multiple
scalar objectives. At each sampling step, MOG-DFM computes a hybrid
rank-directional score for candidate transitions and applies an adaptive
hypercone filter to enforce consistent multi-objective progression. We also
trained two unconditional discrete flow matching models, PepDFM for diverse
peptide generation and EnhancerDFM for functional enhancer DNA generation, as
base generation models for MOG-DFM. We demonstrate MOG-DFM's effectiveness in
generating peptide binders optimized across five properties (hemolysis,
non-fouling, solubility, half-life, and binding affinity), and in designing DNA
sequences with specific enhancer classes and DNA shapes. Overall, MOG-DFM
proves to be a powerful tool for multi-property-guided biomolecule sequence
design.