Marah Abdin, Jyoti Aneja, Harkirat Behl, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, Michael Harrison, Russell J. Hewett, Mojan Javaheripi, Piero Kauffmann, James R. Lee, Yin Tat Lee, Yuanzhi Li, Weishung Liu, Caio C. T. Mendes, Anh Nguyen, Eric Price, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Xin Wang, Rachel Ward, Yue Wu, Dingli Yu, Cyril Zhang, Yi Zhang
We present phi-4, a 14-billion parameter language model developed with a
training recipe that is centrally focused on data quality. Unlike most language
models, where pre-training is based primarily on organic data sources such as
web content or code, phi-4 strategically incorporates synthetic data throughout
the training process. While previous models in the Phi family largely distill
the capabilities of a teacher model (specifically GPT-4), phi-4 substantially
surpasses its teacher model on STEM-focused QA capabilities, giving evidence
that our data-generation and post-training techniques go beyond distillation.
Despite minimal changes to the phi-3 architecture, phi-4 achieves strong
performance relative to its size -- especially on reasoning-focused benchmarks
-- due to improved data, training curriculum, and innovations in the
post-training scheme.
Pan Zhang, Xiaoyi Dong, Yuhang Cao, Yuhang Zang, Rui Qian, Xilin Wei, Lin Chen, Yifei Li, Junbo Niu, Shuangrui Ding, Qipeng Guo, Haodong Duan, Xin Chen, Han Lv, Zheng Nie, Min Zhang, Bin Wang, Wenwei Zhang, Xinyue Zhang, Jiaye Ge, Wei Li, Jingwen Li, Zhongying Tu, Conghui He, Xingcheng Zhang, Kai Chen, Yu Qiao, Dahua Lin, Jiaqi Wang
Creating AI systems that can interact with environments over long periods,
similar to human cognition, has been a longstanding research goal. Recent
advancements in multimodal large language models (MLLMs) have made significant
strides in open-world understanding. However, the challenge of continuous and
simultaneous streaming perception, memory, and reasoning remains largely
unexplored. Current MLLMs are constrained by their sequence-to-sequence
architecture, which limits their ability to process inputs and generate
responses simultaneously, akin to being unable to think while perceiving.
Furthermore, relying on long contexts to store historical data is impractical
for long-term interactions, as retaining all information becomes costly and
inefficient. Therefore, rather than relying on a single foundation model to
perform all functions, this project draws inspiration from the concept of the
Specialized Generalist AI and introduces disentangled streaming perception,
reasoning, and memory mechanisms, enabling real-time interaction with streaming
video and audio input. The proposed framework InternLM-XComposer2.5-OmniLive
(IXC2.5-OL) consists of three key modules: (1) Streaming Perception Module:
Processes multimodal information in real-time, storing key details in memory
and triggering reasoning in response to user queries. (2) Multi-modal Long
Memory Module: Integrates short-term and long-term memory, compressing
short-term memories into long-term ones for efficient retrieval and improved
accuracy. (3) Reasoning Module: Responds to queries and executes reasoning
tasks, coordinating with the perception and memory modules. This project
simulates human-like cognition, enabling multimodal large language models to
provide continuous and adaptive service over time.
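Below is a minimal, hypothetical sketch of how such a disentangled streaming loop might be organized, with perception, memory, and reasoning kept as separate components; the class and function names are illustrative placeholders and do not reflect the actual IXC2.5-OL implementation.

```python
# Illustrative sketch of a disentangled streaming loop (perception, memory,
# reasoning as separate components), loosely following the module split
# described above. All class and method names are hypothetical.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class LongMemory:
    short_term: deque = field(default_factory=lambda: deque(maxlen=64))
    long_term: list = field(default_factory=list)

    def store(self, detail: str) -> None:
        self.short_term.append(detail)
        if len(self.short_term) == self.short_term.maxlen:
            # Compress short-term entries into a single long-term summary.
            self.long_term.append(" | ".join(self.short_term))
            self.short_term.clear()

    def retrieve(self, query: str, k: int = 3) -> list:
        # Toy retrieval: rank stored items by word overlap with the query.
        pool = list(self.short_term) + self.long_term
        overlap = lambda m: len(set(query.lower().split()) & set(m.lower().split()))
        return sorted(pool, key=overlap, reverse=True)[:k]

def perceive(frame) -> str:
    # Placeholder for a streaming perception model producing key details.
    return f"key detail from frame {frame}"

def reason(query: str, context: list) -> str:
    # Placeholder for the reasoning module (an LLM in the real system).
    return f"answer to {query!r} using {len(context)} retrieved memories"

memory = LongMemory()
for frame in range(200):                 # streaming video/audio input
    memory.store(perceive(frame))        # perception never blocks on reasoning
    if frame == 150:                     # a user query arrives mid-stream
        print(reason("what happened earlier?", memory.retrieve("happened earlier")))
```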
Multimodal large language models (MLLMs) have made rapid progress in recent
years, yet continue to struggle with low-level visual perception (LLVP) --
particularly the ability to accurately describe the geometric details of an
image. This capability is crucial for applications in areas such as robotics,
medical image analysis, and manufacturing. In this paper, we first introduce
Geoperception, a benchmark designed to evaluate an MLLM's ability to accurately
transcribe 2D geometric information from an image. Using this benchmark, we
demonstrate the limitations of leading MLLMs, and then conduct a comprehensive
empirical study to explore strategies for improving their performance on
geometric tasks. Our findings highlight the benefits of certain model
architectures, training techniques, and data strategies, including the use of
high-fidelity synthetic data and multi-stage training with a data curriculum.
Notably, we find that a data curriculum enables models to learn challenging
geometry understanding tasks which they fail to learn from scratch. Leveraging
these insights, we develop Euclid, a family of models specifically optimized
for strong low-level geometric perception. Although trained purely on synthetic
multimodal data, Euclid generalizes strongly to novel geometric shapes. For
instance, it outperforms the best closed-source model,
Gemini-1.5-Pro, by up to 58.56% on certain Geoperception benchmark tasks and
10.65% on average across all tasks.
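A hedged sketch of the multi-stage data-curriculum idea described above: the model is exposed to progressively harder synthetic geometry tasks. The task names, stage boundaries, and helper functions are assumptions for illustration, not the Euclid training code.

```python
# Hypothetical sketch of multi-stage training with a data curriculum:
# the model first sees simple synthetic geometry tasks, then harder ones.
# `make_synthetic_batch` and `train_step` are placeholders.
import random

CURRICULUM = [
    {"stage": 1, "tasks": ["point-on-line", "point-on-circle"], "steps": 1000},
    {"stage": 2, "tasks": ["parallelism", "perpendicularity"], "steps": 1000},
    {"stage": 3, "tasks": ["length-comparison", "angle-comparison"], "steps": 2000},
]

def make_synthetic_batch(task: str, batch_size: int = 8):
    # Stand-in for a high-fidelity synthetic geometry generator.
    return [{"task": task, "image": None, "answer": random.choice(["yes", "no"])}
            for _ in range(batch_size)]

def train_step(model_state: dict, batch: list) -> dict:
    # Stand-in for one optimization step of the MLLM.
    model_state["steps_seen"] = model_state.get("steps_seen", 0) + 1
    return model_state

model_state = {}
for stage in CURRICULUM:
    for _ in range(stage["steps"]):
        task = random.choice(stage["tasks"])
        model_state = train_step(model_state, make_synthetic_batch(task))
    print(f"finished stage {stage['stage']} after {model_state['steps_seen']} steps")
```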
As Multi-modal Large Language Models (MLLMs) evolve, expanding beyond
single-domain capabilities is essential to meet the demands for more versatile
and efficient AI. However, previous omni-models have insufficiently explored
speech, neglecting its integration with other modalities. We introduce Lyra, an
efficient MLLM that enhances multimodal abilities, including advanced
long-speech comprehension, sound understanding, cross-modality efficiency, and
seamless speech interaction. To achieve efficiency and speech-centric
capabilities, Lyra employs three strategies: (1) leveraging existing
open-source large models and a proposed multi-modality LoRA to reduce training
costs and data requirements; (2) using a latent multi-modality regularizer and
extractor to strengthen the relationship between speech and other modalities,
thereby enhancing model performance; and (3) constructing a high-quality,
extensive dataset that includes 1.5M multi-modal (language, vision, audio) data
samples and 12K long speech samples, enabling Lyra to handle complex long
speech inputs and achieve more robust omni-cognition. Compared to other
omni-methods, Lyra achieves state-of-the-art performance on various
vision-language, vision-speech, and speech-language benchmarks, while also
using fewer computational resources and less training data.
Multimodal generative models require a unified approach to handle both
discrete data (e.g., text and code) and continuous data (e.g., image, audio,
video). In this work, we propose Latent Language Modeling (LatentLM), which
seamlessly integrates continuous and discrete data using causal Transformers.
Specifically, we employ a variational autoencoder (VAE) to represent continuous
data as latent vectors and introduce next-token diffusion for autoregressive
generation of these vectors. Additionally, we develop sigma-VAE to address
the challenges of variance collapse, which is crucial for autoregressive
modeling. Extensive experiments demonstrate the effectiveness of LatentLM
across various modalities. In image generation, LatentLM surpasses Diffusion
Transformers in both performance and scalability. When integrated into
multimodal large language models, LatentLM provides a general-purpose interface
that unifies multimodal generation and understanding. Experimental results show
that LatentLM achieves favorable performance compared to Transfusion and vector
quantized models in the setting of scaling up training tokens. In
text-to-speech synthesis, LatentLM outperforms the state-of-the-art VALL-E 2
model in speaker similarity and robustness, while requiring 10x fewer decoding
steps. The results establish LatentLM as a highly effective and scalable
approach to advance large multimodal models.
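The sketch below illustrates one way next-token diffusion could look in code: a causal Transformer summarizes the prefix of VAE latents, and a small diffusion head predicts the noise on the next latent vector conditioned on that summary. All module sizes and names are assumptions; this is not the released LatentLM implementation, and the causal mask and sigma-VAE are omitted for brevity.

```python
# Minimal sketch of next-token diffusion: a causal Transformer summarizes the
# prefix of VAE latents, and a small diffusion head denoises the next latent
# vector conditioned on that summary.
import torch
import torch.nn as nn

d_latent, d_model = 16, 64

encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
backbone = nn.TransformerEncoder(encoder_layer, num_layers=2)
in_proj = nn.Linear(d_latent, d_model)
# Diffusion head: predicts the noise added to the next latent, given the
# noisy latent, the prefix summary, and a scalar timestep embedding.
diff_head = nn.Sequential(nn.Linear(d_latent + d_model + 1, 128),
                          nn.GELU(),
                          nn.Linear(128, d_latent))

def denoise_next_latent(prefix_latents, noisy_next, t):
    """One reverse-diffusion prediction for the next latent token."""
    h = backbone(in_proj(prefix_latents))   # (B, T, d_model); causal mask omitted for brevity
    cond = h[:, -1]                         # summary of the prefix
    t_emb = torch.full((noisy_next.size(0), 1), float(t))
    return diff_head(torch.cat([noisy_next, cond, t_emb], dim=-1))  # predicted noise

prefix = torch.randn(2, 5, d_latent)        # 5 already-generated latents
noisy = torch.randn(2, d_latent)            # candidate next latent at step t
print(denoise_next_latent(prefix, noisy, t=10).shape)   # torch.Size([2, 16])
```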
Graphical User Interface (GUI) agents hold great potential for automating
complex tasks across diverse digital environments, from web applications to
desktop software. However, the development of such agents is hindered by the
lack of high-quality, multi-step trajectory data required for effective
training. Existing approaches rely on expensive and labor-intensive human
annotation, making them unsustainable at scale. To address this challenge, we
propose AgentTrek, a scalable data synthesis pipeline that generates
high-quality GUI agent trajectories by leveraging web tutorials. Our method
automatically gathers tutorial-like texts from the internet, transforms them
into task goals with step-by-step instructions, and employs a vision-language
model (VLM) agent to simulate their execution in a real digital environment. A
VLM-based evaluator ensures the correctness of the generated trajectories. We
demonstrate that training GUI agents with these synthesized trajectories
significantly improves their grounding and planning performance over current
models. Moreover, our approach is more cost-efficient than
traditional human annotation methods. This work underscores the potential of
guided replay with web tutorials as a viable strategy for large-scale GUI agent
training, paving the way for more capable and autonomous digital agents.
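A schematic outline of such a tutorial-driven synthesis pipeline is sketched below; every function is a placeholder standing in for an LLM, a VLM agent, or a VLM judge, and none of it reflects the actual AgentTrek code.

```python
# Hypothetical outline of a tutorial-driven trajectory synthesis pipeline in
# the spirit of AgentTrek; every function here is a placeholder.
def harvest_tutorials(urls):
    # Filter crawled pages down to tutorial-like texts (e.g. with a classifier).
    return [u for u in urls if "how-to" in u or "tutorial" in u]

def tutorial_to_task(tutorial_url):
    # Turn the tutorial into a goal plus step-by-step instructions (via an LLM).
    return {"goal": f"goal from {tutorial_url}",
            "steps": ["open page", "click button", "fill form"]}

def replay_with_vlm_agent(task):
    # A vision-language-model agent executes the steps in a real environment,
    # logging (screenshot, action) pairs as the trajectory.
    return [{"observation": f"screen after {s}", "action": s} for s in task["steps"]]

def vlm_judge(task, trajectory) -> bool:
    # A VLM-based evaluator checks whether the trajectory completes the goal.
    return len(trajectory) == len(task["steps"])

urls = ["https://example.com/how-to-export-a-report", "https://example.com/news"]
dataset = []
for url in harvest_tutorials(urls):
    task = tutorial_to_task(url)
    traj = replay_with_vlm_agent(task)
    if vlm_judge(task, traj):            # keep only verified trajectories
        dataset.append({"task": task, "trajectory": traj})
print(f"kept {len(dataset)} verified trajectories")
```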
Existing text-to-image (T2I) diffusion models face several limitations,
including large model sizes, slow runtime, and low-quality generation on mobile
devices. This paper aims to address all of these challenges by developing an
extremely small and fast T2I model that generates high-resolution and
high-quality images on mobile platforms. We propose several techniques to
achieve this goal. First, we systematically examine the design choices of the
network architecture to reduce model parameters and latency, while ensuring
high-quality generation. Second, to further improve generation quality, we
employ cross-architecture knowledge distillation from a much larger model,
using a multi-level approach to guide the training of our model from scratch.
Third, we enable few-step generation by integrating adversarial guidance with
knowledge distillation. For the first time, our model, SnapGen, demonstrates the
generation of 1024x1024 px images on a mobile device in around 1.4 seconds. On
ImageNet-1K, our model, with only 372M parameters, achieves an FID of 2.06 for
256x256 px generation. On T2I benchmarks (i.e., GenEval and DPG-Bench), our
model, with merely 379M parameters, surpasses large-scale models with billions
of parameters at a significantly smaller size (e.g., 7x smaller than SDXL, 14x
smaller than IF-XL).
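The following is a hedged sketch of combining a denoising objective with output- and feature-level distillation from a larger teacher, which is the general shape of the multi-level distillation described above; the tiny networks, projection layer, and loss weights are illustrative stand-ins, and the adversarial guidance component is omitted.

```python
# Hedged sketch of multi-level knowledge distillation combined with a
# denoising objective for a small text-to-image UNet: the student matches the
# teacher's output and selected intermediate features. Models here are tiny
# stand-ins, not the actual SnapGen or teacher networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.enc = nn.Conv2d(4, width, 3, padding=1)
        self.dec = nn.Conv2d(width, 4, 3, padding=1)

    def forward(self, x):
        feat = F.gelu(self.enc(x))
        return self.dec(feat), feat            # noise estimate + intermediate feature

teacher, student = TinyUNet(width=128), TinyUNet(width=32)
proj = nn.Conv2d(32, 128, 1)                   # align student features to teacher's

x_t = torch.randn(2, 4, 32, 32)                # noisy latents at some timestep
noise = torch.randn_like(x_t)                  # ground-truth noise for the denoising loss

with torch.no_grad():
    t_out, t_feat = teacher(x_t)
s_out, s_feat = student(x_t)

loss = (F.mse_loss(s_out, noise)               # standard denoising objective
        + F.mse_loss(s_out, t_out)             # output-level distillation
        + F.mse_loss(proj(s_feat), t_feat))    # feature-level distillation
loss.backward()
print(float(loss))
```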
Personalization of diffusion models has seen significant achievements.
Conventional tuning-free methods mostly encode multiple reference images by
averaging their image embeddings as the injection condition, but such an
image-independent operation cannot model interactions among images to capture
the visual elements that are consistent across multiple references. Although the
tuning-based Low-Rank Adaptation (LoRA) can effectively extract consistent
elements within multiple images through the training process, it necessitates
specific finetuning for each distinct image group. This paper introduces
EasyRef, a novel plug-and-play adaptation method that enables diffusion models
to be conditioned on multiple reference images and the text prompt. To
effectively exploit consistent visual elements within multiple images, we
leverage the multi-image comprehension and instruction-following capabilities
of the multimodal large language model (MLLM), prompting it to capture
consistent visual elements based on the instruction. Moreover, injecting the
MLLM's representations into the diffusion process through adapters can easily
generalize to unseen domains, mining the consistent visual elements within
unseen data. To mitigate computational costs and enhance fine-grained detail
preservation, we introduce an efficient reference aggregation strategy and a
progressive training scheme. Finally, we introduce MRBench, a new
multi-reference image generation benchmark. Experimental results demonstrate
EasyRef surpasses both tuning-free methods like IP-Adapter and tuning-based
methods like LoRA, achieving superior aesthetic quality and robust zero-shot
generalization across diverse domains.
Given the rapid progress of generative AI, there is a pressing need to
systematically compare and choose between the numerous models and
configurations available. The scale and versatility of such evaluations make
the use of LLM-based judges a compelling solution for this challenge.
Crucially, this approach first requires validating the quality of the LLM
judge itself. Previous work has focused on instance-based assessment of LLM
judges, where a judge is evaluated over a set of responses, or response pairs,
while being agnostic to their source systems. We argue that this setting
overlooks critical factors affecting system-level ranking, such as a judge's
positive or negative bias towards certain systems. To address this gap, we
conduct the first large-scale study of LLM judges as system rankers. System
scores are generated by aggregating judgment scores over multiple system
outputs, and the judge's quality is assessed by comparing the resulting system
ranking to a human-based ranking. Beyond overall judge assessment, our analysis
provides a fine-grained characterization of judge behavior, including their
decisiveness and bias.
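As a concrete illustration of system-level judge evaluation, the toy sketch below averages per-response judge scores into system scores and compares the induced ranking with a human ranking via Kendall's tau; the scores are made up for the example and scipy is assumed to be available, so this is not the paper's evaluation protocol.

```python
# Illustrative sketch of system-level judge evaluation: per-response judge
# scores are averaged into system scores, and the induced ranking is compared
# against a human-based ranking with Kendall's tau. All scores are made up.
from statistics import mean
from scipy.stats import kendalltau

judge_scores = {            # judge score per system output (hypothetical)
    "system_A": [0.9, 0.8, 0.85, 0.7],
    "system_B": [0.6, 0.65, 0.7, 0.6],
    "system_C": [0.75, 0.8, 0.7, 0.72],
}
human_ranking = ["system_A", "system_C", "system_B"]   # best to worst

system_scores = {s: mean(v) for s, v in judge_scores.items()}
judge_ranking = sorted(system_scores, key=system_scores.get, reverse=True)

# Kendall's tau between the two orderings (1.0 = identical ranking).
positions = {s: i for i, s in enumerate(human_ranking)}
tau, _ = kendalltau([positions[s] for s in judge_ranking], range(len(judge_ranking)))
print(judge_ranking, f"tau={tau:.2f}")
```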
Zexin He, Tengfei Wang, Xin Huang, Xingang Pan, Ziwei Liu
Recovering the geometry and materials of objects from a single image is
challenging due to its under-constrained nature. In this paper, we present
Neural LightRig, a novel framework that boosts intrinsic estimation by
leveraging auxiliary multi-lighting conditions from 2D diffusion priors.
Specifically, 1) we first leverage illumination priors from large-scale
diffusion models to build our multi-light diffusion model on a synthetic
relighting dataset with dedicated designs. This diffusion model generates
multiple consistent images, each illuminated by point light sources in
different directions. 2) By using these varied lighting images to reduce
estimation uncertainty, we train a large G-buffer model with a U-Net backbone
to accurately predict surface normals and materials. Extensive experiments
validate that our approach significantly outperforms state-of-the-art methods,
enabling accurate surface normal and PBR material estimation with vivid
relighting effects. Code and dataset are available on our project page at
https://projects.zxhezexin.com/neural-lightrig.
Namgyu Kang, Jaemin Oh, Youngjoon Hong, Eunbyung Park
The approximation of Partial Differential Equations (PDEs) using neural
networks has seen significant advancements through Physics-Informed Neural
Networks (PINNs). Despite their straightforward optimization framework and
flexibility in implementing various PDEs, PINNs often suffer from limited
accuracy due to the spectral bias of Multi-Layer Perceptrons (MLPs), which
struggle to effectively learn high-frequency and non-linear components.
Recently, parametric mesh representations in combination with neural networks
have been investigated as a promising approach to eliminate the inductive
biases of neural networks. However, they usually require very high-resolution
grids and a large number of collocation points to achieve high accuracy while
avoiding overfitting issues. In addition, the fixed positions of the mesh
parameters restrict their flexibility, making it challenging to accurately
approximate complex PDEs. To overcome these limitations, we propose
Physics-Informed Gaussians (PIGs), which combine feature embeddings using
Gaussian functions with a lightweight neural network. Our approach uses
trainable parameters for the mean and variance of each Gaussian, allowing for
dynamic adjustment of their positions and shapes during training. This
adaptability enables our model to optimally approximate PDE solutions, unlike
models with fixed parameter positions. Furthermore, the proposed approach
maintains the same optimization framework used in PINNs, allowing us to benefit
from their excellent properties. Experimental results show the competitive
performance of our model across various PDEs, demonstrating its potential as a
robust tool for solving complex PDEs. Our project page is available at
https://namgyukang.github.io/Physics-Informed-Gaussians/
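A minimal sketch of the core idea, assuming PyTorch: collocation points are embedded by Gaussians with trainable means and widths and decoded by a lightweight MLP, and a standard PINN residual loss would then be applied to the output via autograd. Sizes and the network layout are illustrative, not the authors' implementation.

```python
# Minimal sketch of a Gaussian feature embedding for PINN-style training:
# each collocation point is embedded by its responses under Gaussians with
# trainable means and widths, then decoded by a lightweight MLP. The PDE
# residual loss of a standard PINN would be applied to `u` via autograd.
import torch
import torch.nn as nn

class GaussianFeatures(nn.Module):
    def __init__(self, n_gaussians=64, dim=2):
        super().__init__()
        self.mu = nn.Parameter(torch.rand(n_gaussians, dim))          # trainable positions
        self.log_sigma = nn.Parameter(torch.zeros(n_gaussians, dim))  # trainable widths

    def forward(self, x):                       # x: (N, dim)
        diff = x[:, None, :] - self.mu[None]    # (N, G, dim)
        return torch.exp(-0.5 * (diff / self.log_sigma.exp()).pow(2).sum(-1))  # (N, G)

model = nn.Sequential(GaussianFeatures(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

x = torch.rand(128, 2, requires_grad=True)      # collocation points in the domain
u = model(x)                                    # candidate PDE solution u(x)
grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]  # for residual terms
print(u.shape, grad_u.shape)                    # torch.Size([128, 1]) torch.Size([128, 2])
```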
Modern sensors produce increasingly rich streams of high-resolution data. Due
to resource constraints, machine learning systems discard the vast majority of
this information via resolution reduction. Compressed-domain learning allows
models to operate on compact latent representations, allowing higher effective
resolution for the same budget. However, existing compression systems are not
ideal for compressed learning. Linear transform coding and end-to-end learned
compression systems reduce bitrate, but do not uniformly reduce dimensionality;
thus, they do not meaningfully increase efficiency. Generative autoencoders
reduce dimensionality, but their adversarial or perceptual objectives lead to
significant information loss. To address these limitations, we introduce WaLLoC
(Wavelet Learned Lossy Compression), a neural codec architecture that combines
linear transform coding with nonlinear dimensionality-reducing autoencoders.
WaLLoC sandwiches a shallow, asymmetric autoencoder and entropy bottleneck
between an invertible wavelet packet transform and its inverse. Across several
key metrics,
WaLLoC outperforms the autoencoders used in state-of-the-art latent diffusion
models. WaLLoC does not require perceptual or adversarial losses to represent
high-frequency detail, providing compatibility with modalities beyond RGB
images and stereo audio. WaLLoC's encoder consists almost entirely of linear
operations, making it exceptionally efficient and suitable for mobile
computing, remote sensing, and learning directly from compressed data. We
demonstrate WaLLoC's capability for compressed-domain learning across several
tasks, including image classification, colorization, document understanding,
and music source separation. Our code, experiments, and pre-trained audio and
image codecs are available at https://ut-sysml.org/walloc
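The toy sketch below conveys the "autoencoder sandwiched between a wavelet transform and its inverse" structure using a single-level orthonormal Haar transform and a tiny convolutional autoencoder; the real system uses a multi-level wavelet packet transform and an entropy bottleneck, both omitted here, so treat the layer sizes as assumptions.

```python
# Minimal sketch of the "autoencoder sandwiched between a wavelet transform
# and its inverse" idea: a single-level orthonormal Haar transform (linear),
# a tiny learned autoencoder on the subband channels, then the inverse.
import torch
import torch.nn as nn

def haar_forward(x):                       # x: (B, C, H, W) with even H, W
    a, b = x[..., ::2, ::2], x[..., ::2, 1::2]
    c, d = x[..., 1::2, ::2], x[..., 1::2, 1::2]
    return torch.cat([(a + b + c + d), (a - b + c - d),
                      (a + b - c - d), (a - b - c + d)], dim=1) / 2

def haar_inverse(y):                       # y: (B, 4C, H/2, W/2)
    ll, lh, hl, hh = torch.chunk(y, 4, dim=1)
    B, C, H, W = ll.shape
    x = torch.zeros(B, C, 2 * H, 2 * W)
    x[..., ::2, ::2] = (ll + lh + hl + hh) / 2
    x[..., ::2, 1::2] = (ll - lh + hl - hh) / 2
    x[..., 1::2, ::2] = (ll + lh - hl - hh) / 2
    x[..., 1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

# Shallow, asymmetric autoencoder over subband channels (12 -> 6 -> 12).
encoder = nn.Conv2d(12, 6, kernel_size=1)
decoder = nn.Sequential(nn.Conv2d(6, 24, 1), nn.GELU(), nn.Conv2d(24, 12, 1))

image = torch.rand(1, 3, 64, 64)
latent = encoder(haar_forward(image))      # compact latent with reduced dimensionality
recon = haar_inverse(decoder(latent))
print(latent.shape, recon.shape)           # (1, 6, 32, 32) (1, 3, 64, 64)
```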
This study presents a new image super-resolution (SR) technique based on
diffusion inversion, aiming at harnessing the rich image priors encapsulated in
large pre-trained diffusion models to improve SR performance. We design a
Partial noise Prediction strategy to construct an intermediate state of the
diffusion model, which serves as the starting sampling point. Central to our
approach is a deep noise predictor to estimate the optimal noise maps for the
forward diffusion process. Once trained, this noise predictor can be used to
initialize the sampling process partway along the diffusion trajectory,
generating the desired high-resolution result. Compared to existing
approaches, our method offers a flexible and efficient sampling mechanism that
supports an arbitrary number of sampling steps, ranging from one to five. Even
with a single sampling step, our method demonstrates superior or comparable
performance to recent state-of-the-art approaches. The code and model are
publicly available at https://github.com/zsyOAOA/InvSR.
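Conceptually, partial diffusion inversion can be sketched as follows: a noise predictor estimates a noise map for the low-resolution input, the forward process is applied only up to an intermediate step, and just the remaining steps are denoised. The schedule, step count, and stand-in networks below are assumptions, not the released InvSR code.

```python
# Conceptual sketch of sampling from an intermediate diffusion state: a noise
# predictor maps the low-resolution input to a noise estimate, the forward
# process is applied up to an intermediate step t_start, and only the
# remaining steps are denoised.
import torch

T = 1000
alphas_cumprod = torch.cumprod(1 - torch.linspace(1e-4, 0.02, T), dim=0)

def noise_predictor(lr_image):
    # Stand-in for the trained deep noise predictor.
    return torch.randn_like(lr_image)

def denoiser(x_t, t):
    # Stand-in for the pre-trained diffusion model's denoising step.
    return x_t * 0.99

lr_up = torch.rand(1, 3, 256, 256)            # upsampled low-resolution input
t_start = 250                                 # partial, not full, noise level
a_bar = alphas_cumprod[t_start]
x_t = a_bar.sqrt() * lr_up + (1 - a_bar).sqrt() * noise_predictor(lr_up)

for t in range(t_start, 0, -50):              # 5 sampling steps instead of 1000
    x_t = denoiser(x_t, t)
print(x_t.shape)
```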
Christopher Chou, Lisa Dunlap, Koki Mashita, Krishna Mandal, Trevor Darrell, Ion Stoica, Joseph E. Gonzalez, Wei-Lin Chiang
With the growing adoption and capabilities of vision-language models (VLMs)
comes the need for benchmarks that capture authentic user-VLM interactions. In
response, we create VisionArena, a dataset of 230K real-world conversations
between users and VLMs. Collected from Chatbot Arena - an open-source platform
where users interact with VLMs and submit preference votes - VisionArena spans
73K unique users, 45 VLMs, and 138 languages. Our dataset contains three
subsets: VisionArena-Chat, 200K single- and multi-turn conversations between a
user and a VLM; VisionArena-Battle, 30K conversations comparing two anonymous
VLMs with user preference votes; and VisionArena-Bench, an automatic benchmark
of 500 diverse user prompts that efficiently approximate the live Chatbot Arena
model rankings. Additionally, we highlight the types of questions asked by
users, the influence of response style on preference, and areas where models
often fail. We find open-ended tasks like captioning and humor are highly
style-dependent, and current VLMs struggle with spatial reasoning and planning
tasks. Lastly, we show that finetuning the same base model on VisionArena-Chat
outperforms finetuning on Llava-Instruct-158K, with a 17-point gain on MMMU and a 46-point
gain on the WildVision benchmark. Dataset at https://huggingface.co/lmarena-ai
Jitesh Jain, Zhengyuan Yang, Humphrey Shi, Jianfeng Gao, Jianwei Yang
The standard practice for developing contemporary MLLMs is to feed features
from vision encoder(s) into the LLM and train with natural language
supervision. In this work, we posit an overlooked opportunity to optimize the
intermediate LLM representations through a vision perspective (objective),
i.e., natural language supervision alone is sub-optimal for the MLLM's visual
understanding ability. To that end, we propose OLA-VLM, the first approach
distilling knowledge into the LLM's hidden representations from a set of target
visual representations. Firstly, we formulate the objective during the
pretraining stage in MLLMs as a coupled optimization of predictive visual
embedding and next text-token prediction. Secondly, we investigate MLLMs
trained solely with natural language supervision and identify a positive
correlation between the quality of visual representations within these models
and their downstream performance. Moreover, upon probing our OLA-VLM, we
observe improved representation quality owing to the embedding optimization.
Thirdly, we demonstrate that our OLA-VLM outperforms the single and
multi-encoder baselines, proving our approach's superiority over explicitly
feeding the corresponding features to the LLM. Particularly, OLA-VLM boosts
performance by an average margin of up to 2.5% on various benchmarks, with a
notable improvement of 8.7% on the Depth task in CV-Bench. Our code is
open-sourced at https://github.com/SHI-Labs/OLA-VLM.
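The coupled objective can be illustrated with a hedged sketch: a standard next-token cross-entropy term plus an embedding-prediction term that pulls a pooled hidden representation toward a target visual embedding through a small probe. Shapes, the pooling choice, and the loss weight are assumptions rather than the OLA-VLM recipe.

```python
# Hypothetical sketch of a coupled objective: standard next-token
# cross-entropy plus a loss that pushes a pooled hidden representation of the
# LLM towards a target visual embedding (e.g. from a depth or segmentation
# encoder). Shapes and weights are illustrative.
import torch
import torch.nn.functional as F

B, T, V, d_llm, d_vis = 2, 16, 1000, 64, 32

logits = torch.randn(B, T, V, requires_grad=True)     # LLM output logits
hidden_states = torch.randn(B, T, d_llm, requires_grad=True)
labels = torch.randint(0, V, (B, T))
target_visual = torch.randn(B, d_vis)                 # embedding from a target visual encoder

probe = torch.nn.Linear(d_llm, d_vis)                 # maps hidden states to the visual space

lm_loss = F.cross_entropy(logits.reshape(-1, V), labels.reshape(-1))
pred_vis = probe(hidden_states.mean(dim=1))           # pooled hidden representation
emb_loss = 1 - F.cosine_similarity(pred_vis, target_visual, dim=-1).mean()

loss = lm_loss + 0.5 * emb_loss                       # coupled optimization
loss.backward()
print(float(lm_loss), float(emb_loss))
```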
Ruiwen Zhou, Wenyue Hua, Liangming Pan, Sitao Cheng, Xiaobao Wu, En Yu, William Yang Wang
This paper introduces RuleArena, a novel and challenging benchmark designed
to evaluate the ability of large language models (LLMs) to follow complex,
real-world rules in reasoning. Covering three practical domains -- airline
baggage fees, NBA transactions, and tax regulations -- RuleArena assesses LLMs'
proficiency in handling intricate natural language instructions that demand
long-context understanding, logical reasoning, and accurate mathematical
computation. Two key attributes distinguish RuleArena from traditional
rule-based reasoning benchmarks: (1) it extends beyond standard first-order
logic representations, and (2) it is grounded in authentic, practical
scenarios, providing insights into the suitability and reliability of LLMs for
real-world applications. Our findings reveal several notable limitations in
LLMs: (1) they struggle to identify and apply the appropriate rules, frequently
becoming confused by similar but distinct regulations, (2) they cannot
consistently perform accurate mathematical computations, even when they
correctly identify the relevant rules, and (3) they perform poorly on the
benchmark overall. These results highlight significant challenges in advancing
LLMs' rule-guided reasoning capabilities in real-life applications.
Javier de la Rosa, Vladislav Mikhailov, Lemei Zhang, Freddy Wetjen, David Samuel, Peng Liu, Rolv-Arild Braaten, Petter Mæhlum, Magnus Breder Birkenes, Andrey Kutuzov, Tita Enstad, Svein Arne Brygfjeld, Jon Atle Gulla, Stephan Oepen, Erik Velldal, Wilfred Østgulen, Lilja Øvrelid, Aslak Sira Myhre
The use of copyrighted materials in training generative language models
raises critical legal and ethical questions. This paper presents a framework
for empirically assessing the impact of copyrighted materials on the
performance of large language models (LLMs) for Norwegian, along with the
results of such an assessment. We
found that both books and newspapers contribute positively when the models are
evaluated on a diverse set of Norwegian benchmarks, while fiction works
possibly lead to decreased performance. Our experiments could inform the
creation of a compensation scheme for authors whose works contribute to AI
development.
Andrei Stefan Bejgu, Edoardo Barba, Luigi Procopio, Alberte Fernández-Castro, Roberto Navigli
Word Sense Disambiguation (WSD) is the task of associating a word in a given
context with its most suitable meaning among a set of possible candidates.
While the task has recently witnessed renewed interest, with systems achieving
performances above the estimated inter-annotator agreement, at the time of
writing it still struggles to find downstream applications. We argue that one
of the reasons behind this is the difficulty of applying WSD to plain text.
Indeed, in the standard formulation, models work under the assumptions that a)
all the spans to disambiguate have already been identified, and b) all the
possible candidate senses of each span are provided, both of which are
requirements that are far from trivial. In this work, we present a new task
called Word Sense Linking (WSL) where, given an input text and a reference
sense inventory, systems have to both identify which spans to disambiguate and
then link them to their most suitable meaning. We put forward a
transformer-based architecture for the task and thoroughly evaluate both its
performance and that of state-of-the-art WSD systems scaled to WSL,
iteratively relaxing the assumptions of WSD. We hope that our work will foster
easier integration of lexical semantics into downstream applications.
Normalizing Flows (NFs) are likelihood-based models for continuous inputs.
They have demonstrated promising results on both density estimation and
generative modeling tasks, but have received relatively little attention in
recent years. In this work, we demonstrate that NFs are more powerful than
previously believed. We present TarFlow: a simple and scalable architecture
that enables highly performant NF models. TarFlow can be thought of as a
Transformer-based variant of Masked Autoregressive Flows (MAFs): it consists of
a stack of autoregressive Transformer blocks on image patches, alternating the
autoregression direction between layers. TarFlow is straightforward to train
end-to-end, and capable of directly modeling and generating pixels. We also
propose three key techniques to improve sample quality: Gaussian noise
augmentation during training, a post-training denoising procedure, and an
effective guidance method for both class-conditional and unconditional
settings. Putting these together, TarFlow sets new state-of-the-art results on
likelihood estimation for images, beating the previous best methods by a large
margin, and generates samples with quality and diversity comparable to
diffusion models, for the first time with a stand-alone NF model. We make our
code available at https://github.com/apple/ml-tarflow.
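A rough sketch of one Transformer-based autoregressive flow block over patch tokens is given below, with successive blocks flipping the patch order to alternate the autoregression direction; the masking scheme, dimensions, and the omission of the log-determinant term are simplifications, so this should be read as an assumption-laden illustration rather than the released TarFlow code.

```python
# Rough sketch of a Transformer-based masked autoregressive flow block over
# image patches: a causal Transformer predicts a shift and log-scale for each
# patch from the preceding patches, and successive blocks flip the order.
import torch
import torch.nn as nn

d = 16                                          # patch feature dimension

class ARFlowBlock(nn.Module):
    def __init__(self, d, reverse=False):
        super().__init__()
        self.reverse = reverse
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.net = nn.TransformerEncoder(layer, num_layers=1)
        self.to_shift_scale = nn.Linear(d, 2 * d)

    def forward(self, x):                       # x: (B, T, d) patch tokens
        if self.reverse:                        # alternate the autoregression direction
            x = x.flip(1)
        T = x.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        # Condition patch i on patches < i by shifting the sequence right.
        ctx = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1)
        shift, log_scale = self.to_shift_scale(self.net(ctx, mask=causal)).chunk(2, -1)
        z = (x - shift) * torch.exp(-log_scale)  # affine autoregressive transform
        return z.flip(1) if self.reverse else z

blocks = nn.Sequential(ARFlowBlock(d), ARFlowBlock(d, reverse=True))
patches = torch.randn(2, 64, d)                 # 64 patch tokens per image
print(blocks(patches).shape)                    # torch.Size([2, 64, 16])
```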
Enis Simsar, Thomas Hofmann, Federico Tombari, Pinar Yanardag
Recent advances in text-to-image customization have enabled high-fidelity,
context-rich generation of personalized images, allowing specific concepts to
appear in a variety of scenarios. However, current methods struggle with
combining multiple personalized models, often leading to attribute entanglement
or requiring separate training to preserve concept distinctiveness. We present
LoRACLR, a novel approach for multi-concept image generation that merges
multiple LoRA models, each fine-tuned for a distinct concept, into a single,
unified model without additional individual fine-tuning. LoRACLR uses a
contrastive objective to align and merge the weight spaces of these models,
ensuring compatibility while minimizing interference. By enforcing distinct yet
cohesive representations for each concept, LoRACLR enables efficient, scalable
model composition for high-quality, multi-concept image synthesis. Our results
highlight the effectiveness of LoRACLR in accurately merging multiple concepts,
advancing the capabilities of personalized image generation.
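As a toy illustration of merging LoRA deltas with a contrastive-style objective, the sketch below trains a single merged update so that each concept's inputs map close to that concept's original LoRA outputs while staying dissimilar from the other concepts' outputs; the dimensions, loss form, and weighting are assumptions and not the official LoRACLR formulation.

```python
# Toy sketch of contrastively merging several LoRA deltas into one update.
# Each concept i contributes inputs x_i and target outputs dW_i x_i; the
# merged delta is trained so that outputs for the same concept stay close
# (positives) and outputs across concepts stay apart (negatives).
import torch
import torch.nn.functional as F

d, rank, n_concepts = 64, 4, 3
loras = [(torch.randn(d, rank), torch.randn(rank, d)) for _ in range(n_concepts)]
inputs = [torch.randn(32, d) for _ in range(n_concepts)]           # concept-specific features
targets = [x @ (B @ A).T for (B, A), x in zip(loras, inputs)]      # dW_i x_i

merged = torch.zeros(d, d, requires_grad=True)                     # merged delta weight
opt = torch.optim.Adam([merged], lr=1e-2)

for step in range(200):
    loss = 0.0
    for i in range(n_concepts):
        out = inputs[i] @ merged.T
        pos = F.mse_loss(out, targets[i])                          # stay close to own concept
        neg = sum(F.cosine_similarity(out, targets[j]).mean()      # repel other concepts
                  for j in range(n_concepts) if j != i)
        loss = loss + pos + 0.1 * neg
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final loss {float(loss):.3f}")
```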
Existing sparse-view reconstruction models heavily rely on accurate known
camera poses. However, deriving camera extrinsics and intrinsics from
sparse-view images presents significant challenges. In this work, we present
FreeSplatter, a highly scalable, feed-forward reconstruction framework capable
of generating high-quality 3D Gaussians from uncalibrated sparse-view images
and recovering their camera parameters in mere seconds. FreeSplatter is built
upon a streamlined transformer architecture, comprising sequential
self-attention blocks that facilitate information exchange among multi-view
image tokens and decode them into pixel-wise 3D Gaussian primitives. The
predicted Gaussian primitives are situated in a unified reference frame,
allowing for high-fidelity 3D modeling and instant camera parameter estimation
using off-the-shelf solvers. To cater to both object-centric and scene-level
reconstruction, we train two model variants of FreeSplatter on extensive
datasets. In both scenarios, FreeSplatter outperforms state-of-the-art
baselines in terms of reconstruction quality and pose estimation accuracy.
Furthermore, we showcase FreeSplatter's potential in enhancing the productivity
of downstream applications, such as text/image-to-3D content creation.
Controllable human image animation aims to generate videos from reference
images using driving videos. Due to the limited control signals provided by
sparse guidance (e.g., skeleton pose), recent works have attempted to introduce
additional dense conditions (e.g., depth map) to ensure motion alignment.
However, such strict dense guidance impairs the quality of the generated video
when the body shape of the reference character differs significantly from that
of the driving video. In this paper, we present DisPose to mine more
generalizable and effective control signals without additional dense input,
which disentangles the sparse skeleton pose in human image animation into
motion field guidance and keypoint correspondence. Specifically, we generate a
dense motion field from a sparse motion field and the reference image, which
provides region-level dense guidance while maintaining the generalization of
the sparse pose control. We also extract diffusion features corresponding to
pose keypoints from the reference image, and then these point features are
transferred to the target pose to provide distinct identity information. To
seamlessly integrate into existing models, we propose a plug-and-play hybrid
ControlNet that improves the quality and consistency of generated videos while
freezing the existing model parameters. Extensive qualitative and quantitative
experiments demonstrate the superiority of DisPose compared to current methods.
Code: https://github.com/lihxxx/DisPose.
Adhiraj Ghosh, Sebastian Dziadzio, Ameya Prabhu, Vishaal Udandarao, Samuel Albanie, Matthias Bethge
Traditional fixed test sets fall short in evaluating open-ended capabilities
of foundation models. To address this, we propose ONEBench (OpeN-Ended
Benchmarking), a new testing paradigm that consolidates individual evaluation
datasets into a unified, ever-expanding sample pool. ONEBench allows users to
generate custom, open-ended evaluation benchmarks from this pool, corresponding
to specific capabilities of interest. By aggregating samples across test sets,
ONEBench enables the assessment of diverse capabilities beyond those covered by
the original test sets, while mitigating overfitting and dataset bias. Most
importantly, it frames model evaluation as a collective process of selecting
and aggregating sample-level tests.
The shift from task-specific benchmarks to ONEBench introduces two
challenges: (1) heterogeneity and (2) incompleteness. Heterogeneity refers to the
aggregation over diverse metrics, while incompleteness describes comparing
models evaluated on different data subsets. To address these challenges, we
explore algorithms to aggregate sparse measurements into reliable model scores.
Our aggregation algorithm ensures identifiability (asymptotically recovering
ground-truth scores) and rapid convergence, enabling accurate model ranking
with less data. On homogeneous datasets, we show that our aggregation algorithm
provides rankings that highly correlate with those produced by average scores.
We also demonstrate robustness to ~95% of measurements missing, reducing
evaluation cost by up to 20x with little-to-no change in model rankings. We
introduce ONEBench-LLM for language models and ONEBench-LMM for vision-language
models, unifying evaluations across these domains. Overall, we present a
technique for open-ended evaluation, which can aggregate over incomplete,
heterogeneous sample-level measurements to continually grow a benchmark
alongside the rapidly developing foundation models.
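One simple way to aggregate sparse, heterogeneous sample-level measurements into model scores is a Rasch-style fit, sketched below: each model gets an ability, each sample a difficulty, and both are estimated only from the observed (model, sample) outcomes. This stands in for the paper's aggregation algorithm rather than reproducing it, and all data here is synthetic.

```python
# Illustrative aggregation of sparse sample-level measurements into model
# scores with a simple Rasch-style model: P(model m passes sample s) =
# sigmoid(theta_m - b_s), fitted only on observed (m, s) pairs.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_models, n_samples = 5, 200
true_skill = torch.randn(n_models)

# Sparse observations: roughly 95% of (model, sample) measurements are missing.
obs = [(m, s, int(torch.rand(1) < torch.sigmoid(true_skill[m])))
       for m in range(n_models) for s in range(n_samples)
       if torch.rand(1) < 0.05]
m_idx = torch.tensor([m for m, _, _ in obs])
s_idx = torch.tensor([s for _, s, _ in obs])
y = torch.tensor([float(r) for _, _, r in obs])

theta = torch.zeros(n_models, requires_grad=True)    # model abilities
b = torch.zeros(n_samples, requires_grad=True)       # sample difficulties
opt = torch.optim.Adam([theta, b], lr=0.05)

for _ in range(300):
    loss = F.binary_cross_entropy_with_logits(theta[m_idx] - b[s_idx], y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("recovered ranking:", torch.argsort(theta.detach(), descending=True).tolist())
print("true ranking:     ", torch.argsort(true_skill, descending=True).tolist())
```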
Research on learning instruction-guided visual navigation can be broadly
categorized into high-level category-specific search and low-level
language-guided navigation, depending on the granularity of the language
instruction: the former emphasizes the exploration process, while the
latter concentrates on following detailed textual commands. Despite the
differing focuses of these tasks, the underlying requirements of interpreting
instructions, comprehending the surroundings, and inferring action decisions
remain consistent. This paper consolidates diverse navigation tasks into a
unified and generic framework -- we investigate the core difficulties of
sharing general knowledge and exploiting task-specific capabilities in learning
navigation and propose a novel State-Adaptive Mixture of Experts (SAME) model
that effectively enables an agent to infer decisions based on
different-granularity language and dynamic observations. Powered by SAME, we
present a versatile agent that addresses seven navigation tasks simultaneously
and outperforms, or performs on par with, task-specific agents.
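A simplified sketch of a state-adaptive mixture-of-experts layer is given below: the router conditions on both the instruction feature and the current observation state, so different experts can specialize for coarse goals versus fine-grained commands. The dimensions, top-k routing, and expert layout are illustrative assumptions, not the SAME architecture.

```python
# Simplified sketch of a state-adaptive mixture-of-experts layer: the router
# conditions on both the instruction feature and the current observation
# state before weighting the experts.
import torch
import torch.nn as nn

class StateAdaptiveMoE(nn.Module):
    def __init__(self, d=64, n_experts=4, k=2):
        super().__init__()
        self.experts = nn.ModuleList([nn.Sequential(nn.Linear(d, d), nn.GELU(),
                                                    nn.Linear(d, d))
                                      for _ in range(n_experts)])
        self.router = nn.Linear(2 * d, n_experts)    # sees instruction + state
        self.k = k

    def forward(self, x, instr, state):              # all tensors: (B, d)
        gate = self.router(torch.cat([instr, state], dim=-1)).softmax(dim=-1)
        topv, topi = gate.topk(self.k, dim=-1)       # route each sample to k experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                chosen = (topi[:, slot] == e).float().unsqueeze(-1)
                out = out + chosen * topv[:, slot:slot + 1] * expert(x)
        return out

layer = StateAdaptiveMoE()
x, instr, state = (torch.randn(8, 64) for _ in range(3))
print(layer(x, instr, state).shape)                  # torch.Size([8, 64])
```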
Fiona Ryan, Ajay Bati, Sangmin Lee, Daniel Bolya, Judy Hoffman, James M. Rehg
We address the problem of gaze target estimation, which aims to predict where
a person is looking in a scene. Predicting a person's gaze target requires
reasoning both about the person's appearance and the contents of the scene.
Prior works have developed increasingly complex, hand-crafted pipelines for
gaze target estimation that carefully fuse features from separate scene
encoders, head encoders, and auxiliary models for signals like depth and pose.
Motivated by the success of general-purpose feature extractors on a variety of
visual tasks, we propose Gaze-LLE, a novel transformer framework that
streamlines gaze target estimation by leveraging features from a frozen DINOv2
encoder. We extract a single feature representation for the scene, and apply a
person-specific positional prompt to decode gaze with a lightweight module. We
demonstrate state-of-the-art performance across several gaze benchmarks and
provide extensive analysis to validate our design choices. Our code is
available at: http://github.com/fkryan/gazelle.
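The "frozen encoder + person prompt + lightweight decoder" recipe can be sketched as below; a small frozen convolution stands in for DINOv2 so the snippet runs offline, and the prompt injection and decoder sizes are assumptions rather than the Gaze-LLE design.

```python
# Schematic sketch of the "frozen encoder + person prompt + light decoder"
# recipe. A small frozen CNN stands in for DINOv2; all module sizes are
# illustrative.
import torch
import torch.nn as nn

frozen_encoder = nn.Conv2d(3, 64, kernel_size=16, stride=16)     # stand-in for DINOv2
for p in frozen_encoder.parameters():
    p.requires_grad_(False)                                      # keep the backbone frozen

head_prompt = nn.Parameter(torch.zeros(1, 1, 64))                # learned person token
decoder_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
decoder = nn.TransformerEncoder(decoder_layer, num_layers=2)     # lightweight gaze module
to_heatmap = nn.Linear(64, 1)

def predict_gaze(image, head_box_mask):
    feats = frozen_encoder(image)                                # (B, 64, H/16, W/16)
    B, C, H, W = feats.shape
    tokens = feats.flatten(2).transpose(1, 2)                    # (B, HW, 64)
    # Person-specific positional prompt: add the head token only at positions
    # covered by this person's head-bounding-box mask.
    prompt = head_box_mask.flatten(1).unsqueeze(-1) * head_prompt
    out = decoder(tokens + prompt)
    return to_heatmap(out).transpose(1, 2).reshape(B, 1, H, W)   # gaze heatmap

image = torch.rand(2, 3, 224, 224)
mask = torch.zeros(2, 14, 14)
mask[:, 2:4, 3:5] = 1                                            # head location on the grid
print(predict_gaze(image, mask).shape)                           # torch.Size([2, 1, 14, 14])
```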
Neural Machine Translation (NMT) models are typically trained on datasets
with limited exposure to Scientific, Technical and Educational domains.
Translation models thus generally struggle with tasks that involve
scientific understanding or technical jargon. Their performance is found to be
even worse for low-resource Indian languages. Finding a translation dataset
that caters to these domains in particular poses a difficult challenge. In this
paper, we address this by creating a multilingual parallel corpus containing
more than 2.8 million rows of English-to-Indic and Indic-to-Indic high-quality
translation pairs across 8 Indian languages. We achieve this by bitext mining
human-translated transcriptions of NPTEL video lectures. We also finetune and
evaluate NMT models using this corpus and surpass all other publicly available
models at in-domain tasks. We also demonstrate the potential for generalizing
to out-of-domain translation tasks by improving the baseline by over 2 BLEU on
average for these Indian languages on the Flores+ benchmark. We release our
model and dataset at https://huggingface.co/SPRINGLab.