This work presents Depth Anything V2. Without pursuing fancy techniques, we
aim to reveal crucial findings to pave the way towards building a powerful
monocular depth estimation model. Notably, compared with V1, this version
produces much finer and more robust depth predictions through three key
practices: 1) replacing all labeled real images with synthetic images, 2)
scaling up the capacity of our teacher model, and 3) teaching student models
via the bridge of large-scale pseudo-labeled real images. Compared with the
latest models built on Stable Diffusion, our models are significantly more
efficient (more than 10x faster) and more accurate. We offer models of
different scales (ranging from 25M to 1.3B params) to support extensive
scenarios. Benefiting from their strong generalization capability, we fine-tune
them with metric depth labels to obtain our metric depth models. In addition to
our models, considering the limited diversity and frequent noise in current
test sets, we construct a versatile evaluation benchmark with precise
annotations and diverse scenes to facilitate future research.
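The three key practices describe a teacher-student distillation pipeline. As a minimal sketch (the `MeanDepthModel` below is a toy stand-in for a real depth network, and the `distill` helper is illustrative, not the paper's code):

```python
import numpy as np

class MeanDepthModel:
    """Trivial stand-in for a depth network, only to make the
    teacher -> pseudo-label -> student pipeline concrete."""
    def fit(self, images, depths):
        # Learn a single brightness-to-depth scale (toy "training").
        self.scale = float(np.mean([d.mean() / max(i.mean(), 1e-8)
                                    for i, d in zip(images, depths)]))
        return self

    def predict(self, image):
        return self.scale * image.mean(axis=-1)  # per-pixel depth map

def distill(teacher, student, synthetic_images, synthetic_depths, real_images):
    # 1) train the (large-capacity) teacher on precisely labeled synthetic images;
    teacher.fit(synthetic_images, synthetic_depths)
    # 2) pseudo-label large-scale real images with the teacher;
    pseudo_depths = [teacher.predict(im) for im in real_images]
    # 3) train the student only on the pseudo-labeled real images.
    return student.fit(real_images, pseudo_depths)

synthetic_images = [np.full((4, 4, 3), 2.0)]
synthetic_depths = [np.full((4, 4), 4.0)]
real_images = [np.ones((4, 4, 3))]
student = distill(MeanDepthModel(), MeanDepthModel(),
                  synthetic_images, synthetic_depths, real_images)
print(student.predict(np.ones((4, 4, 3))).mean())  # 2.0
```

The point of the sketch is the data flow: the student never sees a real ground-truth depth label, only the teacher's pseudo labels on real images.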
By Duy-Kien Nguyen, Mahmoud Assran, Unnat Jain, Martin R. Oswald, Cees G. M. Snoek, Xinlei Chen
This work does not introduce a new method. Instead, we present an interesting
finding that questions the necessity of the inductive bias -- locality in
modern computer vision architectures. Concretely, we find that vanilla
Transformers can operate by directly treating each individual pixel as a token
and achieve highly performant results. This is substantially different from the
popular design in Vision Transformer, which maintains the inductive bias from
ConvNets towards local neighborhoods (e.g. by treating each 16x16 patch as a
token). We mainly showcase the effectiveness of pixels-as-tokens across three
well-studied tasks in computer vision: supervised learning for object
classification, self-supervised learning via masked autoencoding, and image
generation with diffusion models. Although directly operating on individual
pixels is less computationally practical, we believe the community must be
aware of this surprising piece of knowledge when devising the next generation
of neural architectures for computer vision.
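The contrast between the two tokenizations can be sketched in a few lines; the random projection matrices below are placeholders for learned patch/pixel embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

def pixels_as_tokens(image, embed_dim, rng):
    """Treat each pixel as a token: flatten the H*W pixels, then
    linearly project each C-dim pixel value to the embedding dim."""
    h, w, c = image.shape
    tokens = image.reshape(h * w, c)          # one token per pixel
    proj = rng.standard_normal((c, embed_dim))
    return tokens @ proj                      # (H*W, embed_dim)

def patches_as_tokens(image, patch, embed_dim, rng):
    """Standard ViT tokenization for contrast: each patch x patch x C
    block becomes one token, baking in a locality prior."""
    h, w, c = image.shape
    blocks = (image.reshape(h // patch, patch, w // patch, patch, c)
                   .transpose(0, 2, 1, 3, 4)
                   .reshape(-1, patch * patch * c))
    proj = rng.standard_normal((patch * patch * c, embed_dim))
    return blocks @ proj

img = rng.standard_normal((32, 32, 3))
pix = pixels_as_tokens(img, 64, rng)
pat = patches_as_tokens(img, 16, 64, rng)
print(pix.shape, pat.shape)  # (1024, 64) vs (4, 64): 256x more tokens
```

The 256x blow-up in sequence length is exactly why the paper calls the pixels-as-tokens approach less computationally practical, despite its strong results.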
By Wilfried Bounsi, Borja Ibarz, Andrew Dudzik, Jessica B. Hamrick, Larisa Markeeva, Alex Vitvitskyi, Razvan Pascanu, Petar Veličković
Transformers have revolutionized machine learning with their simple yet
effective architecture. Pre-training Transformers on massive text datasets from
the Internet has led to unmatched generalization for natural language
understanding (NLU) tasks. However, such language models remain fragile when
tasked with algorithmic forms of reasoning, where computations must be precise
and robust. To address this limitation, we propose a novel approach that
combines the Transformer's language understanding with the robustness of graph
neural network (GNN)-based neural algorithmic reasoners (NARs). Such NARs
have proved effective as generic solvers for algorithmic tasks when specified in
graph form. To make their embeddings accessible to a Transformer, we propose a
hybrid architecture with a two-phase training procedure, allowing the tokens in
the language model to cross-attend to the node embeddings from the NAR. We
evaluate our resulting TransNAR model on CLRS-Text, the text-based version of
the CLRS-30 benchmark, and demonstrate significant gains over Transformer-only
models for algorithmic reasoning, both in and out of distribution.
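The cross-attention bridge from language tokens to NAR node embeddings can be sketched as follows; the dimensions and weight matrices are illustrative, not the paper's configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(tokens, nodes, Wq, Wk, Wv):
    """Language-model token embeddings act as queries; GNN node
    embeddings from the algorithmic reasoner supply keys and values."""
    q, k, v = tokens @ Wq, nodes @ Wk, nodes @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v                # (num_tokens, d_v)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((5, 8))          # 5 LM tokens, dim 8
nodes = rng.standard_normal((3, 8))           # 3 graph nodes, dim 8
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = cross_attend(tokens, nodes, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

Each token's output is a mixture of node embeddings, which is how the robust graph-side computation becomes accessible to the Transformer.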
By Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, Quan Vuong, Thomas Kollar, Benjamin Burchfiel, Russ Tedrake, Dorsa Sadigh, Sergey Levine, Percy Liang, Chelsea Finn
Large policies pretrained on a combination of Internet-scale vision-language
data and diverse robot demonstrations have the potential to change how we teach
robots new skills: rather than training new behaviors from scratch, we can
fine-tune such vision-language-action (VLA) models to obtain robust,
generalizable policies for visuomotor control. Yet, widespread adoption of VLAs
for robotics has been challenging as 1) existing VLAs are largely closed and
inaccessible to the public, and 2) prior work fails to explore methods for
efficiently fine-tuning VLAs for new tasks, a key component for adoption.
Addressing these challenges, we introduce OpenVLA, a 7B-parameter open-source
VLA trained on a diverse collection of 970k real-world robot demonstrations.
OpenVLA builds on a Llama 2 language model combined with a visual encoder that
fuses pretrained features from DINOv2 and SigLIP. As a product of the added
data diversity and new model components, OpenVLA demonstrates strong results
for generalist manipulation, outperforming closed models such as RT-2-X (55B)
by 16.5% in absolute task success rate across 29 tasks and multiple robot
embodiments, with 7x fewer parameters. We further show that we can effectively
fine-tune OpenVLA for new settings, with especially strong generalization
results in multi-task environments involving multiple objects and strong
language grounding abilities, and outperform expressive from-scratch imitation
learning methods such as Diffusion Policy by 20.4%. We also explore compute
efficiency; as a separate contribution, we show that OpenVLA can be fine-tuned
on consumer GPUs via modern low-rank adaptation methods and served efficiently
via quantization without a hit to downstream success rate. Finally, we release
model checkpoints, fine-tuning notebooks, and our PyTorch codebase with
built-in support for training VLAs at scale on Open X-Embodiment datasets.
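The dual-encoder visual front end can be sketched as channel-wise fusion of per-patch features from the two pretrained encoders, followed by a projection into the language model's embedding space. All shapes and the projector here are assumptions for illustration, not the released code:

```python
import numpy as np

def fuse_patch_features(dino, siglip, proj):
    """Concatenate per-patch features from two pretrained vision
    encoders channel-wise, then project into the LM embedding space."""
    assert dino.shape[0] == siglip.shape[0]   # same number of patches
    fused = np.concatenate([dino, siglip], axis=-1)
    return fused @ proj                       # (num_patches, lm_dim)

rng = np.random.default_rng(0)
dino = rng.standard_normal((256, 1024))       # e.g. DINOv2 patch features
siglip = rng.standard_normal((256, 1152))     # e.g. SigLIP patch features
proj = rng.standard_normal((1024 + 1152, 4096))
vision_tokens = fuse_patch_features(dino, siglip, proj)
print(vision_tokens.shape)  # (256, 4096)
```

The fused tokens are then consumed by the language model backbone exactly like text tokens, which is what makes the whole stack a single vision-language-action policy.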
Efficiently modeling sequences with infinite context length has been a
long-standing problem. Past works suffer from either quadratic computation
complexity or limited ability to extrapolate beyond their training length. In
this work, we present Samba, a simple hybrid architecture that layer-wise
combines Mamba, a selective State Space Model (SSM), with Sliding Window
Attention (SWA). Samba selectively compresses a given sequence into recurrent
hidden states while still maintaining the ability to precisely recall memories
with the attention mechanism. We scale Samba up to 3.8B parameters with 3.2T
training tokens and show that Samba substantially outperforms the
state-of-the-art models based on pure attention or SSMs on a wide range of
benchmarks. When trained on 4K-length sequences, Samba can be efficiently
extrapolated to a 256K context length with perfect memory recall, and shows
improved token prediction up to a 1M context length. As a linear-time sequence
model, Samba enjoys a 3.73x higher throughput compared to Transformers with
grouped-query attention when processing user prompts of 128K length, and 3.64x
speedup when generating 64K tokens with unlimited streaming. A sample
implementation of Samba is publicly available at
https://github.com/microsoft/Samba.
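The layer-wise hybrid can be sketched with two toy stand-ins: a recurrent block that compresses history into a fixed-size state (the Mamba/SSM role) and a local-window block (the SWA role). Both mixers below are simplifications, not the actual blocks:

```python
import numpy as np

def recurrent_block(x, decay=0.9):
    """Toy stand-in for the Mamba/SSM half: history is compressed into
    a fixed-size running hidden state, giving linear-time processing."""
    h = np.zeros(x.shape[-1])
    out = np.empty_like(x)
    for t in range(len(x)):
        h = decay * h + (1 - decay) * x[t]
        out[t] = h
    return out

def sliding_window_block(x, window=4):
    """Toy stand-in for the SWA half: each position only mixes over a
    fixed local window, which enables precise local recall."""
    return np.stack([x[max(0, t - window + 1): t + 1].mean(axis=0)
                     for t in range(len(x))])

def samba_style_stack(x, depth=4):
    # Layer-wise interleaving of the two mechanisms, as Samba does.
    for i in range(depth):
        x = recurrent_block(x) if i % 2 == 0 else sliding_window_block(x)
    return x

x = np.ones((8, 4))                  # (sequence length, hidden dim)
y = samba_style_stack(x)
print(y.shape)  # (8, 4)
```

The design choice the sketch captures: per-token cost is constant in both mixers, so total cost stays linear in sequence length while the attention layers preserve exact recall within the window.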
This paper presents innovative enhancements to diffusion models by
integrating a novel multi-resolution network and time-dependent layer
normalization. Diffusion models have gained prominence for their effectiveness
in high-fidelity image generation. While conventional approaches rely on
convolutional U-Net architectures, recent Transformer-based designs have
demonstrated superior performance and scalability. However, Transformer
architectures, which tokenize input data (via "patchification"), face a
trade-off between visual fidelity and computational complexity due to the
quadratic cost of self-attention with respect to token length. While larger
patch sizes make attention computation more efficient, they struggle to
capture fine-grained visual details, leading to image distortions. To address
this challenge, we propose augmenting the Diffusion model with the
Multi-Resolution network (DiMR), a framework that refines features across
multiple resolutions, progressively enhancing detail from low to high
resolution. Additionally, we introduce Time-Dependent Layer Normalization
(TD-LN), a parameter-efficient approach that incorporates time-dependent
parameters into layer normalization to inject time information and achieve
superior performance. Our method's efficacy is demonstrated on the
class-conditional ImageNet generation benchmark, where DiMR-XL variants
outperform prior diffusion models, setting new state-of-the-art FID scores of
1.70 on ImageNet 256 x 256 and 2.89 on ImageNet 512 x 512. Project page:
https://qihao067.github.io/projects/DiMR
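A minimal sketch of what Time-Dependent Layer Normalization could look like: the affine scale and shift become simple linear functions of the timestep, so time conditioning costs only four per-channel parameter vectors. The linear parameterization here is an assumption, not necessarily the paper's exact form:

```python
import numpy as np

def td_layer_norm(x, t, w_gamma, b_gamma, w_beta, b_beta, eps=1e-5):
    """LayerNorm whose affine parameters depend on the timestep t:
    gamma(t) = w_gamma * t + b_gamma, beta(t) = w_beta * t + b_beta."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    gamma = w_gamma * t + b_gamma      # time-dependent scale
    beta = w_beta * t + b_beta         # time-dependent shift
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 16))       # (tokens, channels)
d = x.shape[-1]
out = td_layer_norm(x, t=0.5,
                    w_gamma=np.zeros(d), b_gamma=np.ones(d),
                    w_beta=np.zeros(d), b_beta=np.zeros(d))
# With zero time weights this reduces to plain LayerNorm:
print(np.allclose(out.mean(axis=-1), 0))  # True
```

Compared with adaLN-style blocks that regress the modulation from a timestep MLP, this injects time information with far fewer parameters, which is the efficiency the abstract claims.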
By Bahare Fatemi, Mehran Kazemi, Anton Tsitsulin, Karishma Malkan, Jinyeong Yim, John Palowitch, Sungyong Seo, Jonathan Halcrow, Bryan Perozzi
Large language models (LLMs) have showcased remarkable reasoning
capabilities, yet they remain susceptible to errors, particularly in temporal
reasoning tasks involving complex temporal logic. Existing research has
explored LLM performance on temporal reasoning using diverse datasets and
benchmarks. However, these studies often rely on real-world data that LLMs may
have encountered during pre-training or employ anonymization techniques that
can inadvertently introduce factual inconsistencies. In this work, we address
these limitations by introducing novel synthetic datasets specifically designed
to assess LLM temporal reasoning abilities in various scenarios. The diversity
of question types across these datasets enables systematic investigation into
the impact of the problem structure, size, question type, fact order, and other
factors on LLM performance. Our findings provide valuable insights into the
strengths and weaknesses of current LLMs in temporal reasoning tasks. To foster
further research in this area, we are open-sourcing the datasets and evaluation
framework used in our experiments: https://huggingface.co/datasets/baharef/ToT.
By Zhihang Yuan, Pu Lu, Hanling Zhang, Xuefei Ning, Linfeng Zhang, Tianchen Zhao, Shengen Yan, Guohao Dai, Yu Wang
Diffusion Transformers (DiT) excel at image and video generation but face
computational challenges due to self-attention's quadratic complexity. We
propose DiTFastAttn, a novel post-training compression method to alleviate
DiT's computational bottleneck. We identify three key redundancies in the
attention computation during DiT inference: 1. spatial redundancy, where many
attention heads focus on local information; 2. temporal redundancy, with high
similarity between neighboring steps' attention outputs; 3. conditional
redundancy, where conditional and unconditional inferences exhibit significant
similarity. To tackle these redundancies, we propose three techniques: 1.
Window Attention with Residual Caching to reduce spatial redundancy; 2.
Temporal Similarity Reduction to exploit the similarity between steps; 3.
Conditional Redundancy Elimination to skip redundant computations during
conditional generation. To demonstrate the effectiveness of DiTFastAttn, we
apply it to DiT, PixArt-Sigma for image generation tasks, and OpenSora for
video generation tasks. Evaluation results show that for image generation, our
method reduces up to 88% of the FLOPs and achieves up to a 1.6x speedup in
high-resolution generation.
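The temporal-redundancy idea (point 2 above) can be sketched as a cache keyed on input similarity across neighboring diffusion steps. The cosine-similarity test and threshold below are illustrative, not the paper's exact criterion:

```python
import numpy as np

class StepCache:
    """If the current input is nearly identical to the previous
    diffusion step's input, reuse the cached attention output instead
    of recomputing it."""
    def __init__(self, threshold=0.98):
        self.threshold = threshold
        self.prev_in = None
        self.prev_out = None
        self.skipped = 0

    def attention(self, x, compute_fn):
        if self.prev_in is not None:
            den = np.linalg.norm(x) * np.linalg.norm(self.prev_in)
            if den > 0 and float(x @ self.prev_in) / den > self.threshold:
                self.skipped += 1
                return self.prev_out      # reuse neighboring step's output
        out = compute_fn(x)
        self.prev_in, self.prev_out = x, out
        return out

cache = StepCache()
x = np.ones(8)
out1 = cache.attention(x, lambda v: v * 2)   # computed
out2 = cache.attention(x, lambda v: v * 2)   # reused from the cache
print(cache.skipped)  # 1
```

Spatial and conditional redundancy are handled analogously in the paper, by restricting attention to a local window (plus cached residuals) and by sharing work between the conditional and unconditional passes.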
By Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, Ranjay Krishna
Humans draw to facilitate reasoning: we draw auxiliary lines when solving
geometry problems; we mark and circle when reasoning on maps; we use sketches
to amplify our ideas and relieve our limited-capacity working memory. However,
such actions are missing in current multimodal language models (LMs). Current
chain-of-thought and tool-use paradigms only use text as intermediate reasoning
steps. In this work, we introduce Sketchpad, a framework that gives multimodal
LMs a visual sketchpad and tools to draw on the sketchpad. The LM conducts
planning and reasoning according to the visual artifacts it has drawn.
Different from prior work, which uses text-to-image models to enable LMs to
draw, Sketchpad enables LMs to draw with lines, boxes, marks, etc., which is
closer to human sketching and better facilitates reasoning. Sketchpad can also
use specialist vision models during the sketching process (e.g., draw bounding
boxes with object detection models, draw masks with segmentation models), to
further enhance visual perception and reasoning. We experiment with a wide
range of math tasks (including geometry, functions, graphs, and chess) and
complex visual reasoning tasks. Sketchpad substantially improves performance on
all tasks over strong base models with no sketching, yielding an average gain
of 12.7% on math tasks, and 8.6% on vision tasks. GPT-4o with Sketchpad sets a
new state of the art on all tasks, including V*Bench (80.3%), BLINK spatial
reasoning (83.9%), and visual correspondence (80.8%). All code and data are at
https://visualsketchpad.github.io/.
By Amil Dravid, Yossi Gandelsman, Kuan-Chieh Wang, Rameen Abdal, Gordon Wetzstein, Alexei A. Efros, Kfir Aberman
We investigate the space of weights spanned by a large collection of
customized diffusion models. We populate this space by creating a dataset of
over 60,000 models, each of which is a base model fine-tuned to insert a
different person's visual identity. We model the underlying manifold of these
weights as a subspace, which we term weights2weights. We demonstrate three
immediate applications of this space -- sampling, editing, and inversion.
First, as each point in the space corresponds to an identity, sampling a set of
weights from it results in a model encoding a novel identity. Next, we find
linear directions in this space corresponding to semantic edits of the identity
(e.g., adding a beard). These edits persist in appearance across generated
samples. Finally, we show that inverting a single image into this space
reconstructs a realistic identity, even if the input image is out of
distribution (e.g., a painting). Our results indicate that the weight space of
fine-tuned diffusion models behaves as an interpretable latent space of
identities.
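A minimal analogue of modeling the weight manifold as a subspace: treat each fine-tuned model's flattened weights as one row and fit a linear subspace with PCA. Using plain SVD/PCA here is an assumption for illustration:

```python
import numpy as np

def fit_weight_subspace(weights, k):
    """Each row is one fine-tuned model's flattened weights; PCA over
    the collection yields a low-dimensional 'identity' subspace."""
    mean = weights.mean(axis=0)
    _, _, vt = np.linalg.svd(weights - mean, full_matrices=False)
    return mean, vt[:k]                 # mean + k principal directions

def sample_identity(mean, basis, coeffs):
    # A new point in the subspace = weights encoding a novel identity;
    # moving along one basis direction acts as a semantic edit.
    return mean + coeffs @ basis

rng = np.random.default_rng(0)
weights = rng.standard_normal((100, 50))   # 100 models, 50 weights each
mean, basis = fit_weight_subspace(weights, k=5)
new_w = sample_identity(mean, basis, rng.standard_normal(5))
print(basis.shape, new_w.shape)  # (5, 50) (50,)
```

Sampling, editing, and inversion in the paper then correspond to choosing coefficients, moving along semantic directions, and optimizing coefficients to match a target image.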
By Zhilin Wang, Yi Dong, Olivier Delalleau, Jiaqi Zeng, Gerald Shen, Daniel Egert, Jimmy J. Zhang, Makesh Narsimhan Sreedhar, Oleksii Kuchaiev
High-quality preference datasets are essential for training reward models
that can effectively guide large language models (LLMs) in generating
high-quality responses aligned with human preferences. As LLMs become stronger
and better aligned, permissively licensed preference datasets, such as Open
Assistant, HH-RLHF, and HelpSteer need to be updated to remain effective for
reward modeling. Methods that distil preference data from proprietary LLMs such
as GPT-4 have restrictions on commercial usage imposed by model providers. To
improve upon both generated responses and attribute labeling quality, we
release HelpSteer2, a permissively licensed preference dataset (CC-BY-4.0).
Using a powerful internal base model trained on HelpSteer2, we are able to
achieve the SOTA score (92.0%) on Reward-Bench's primary dataset, outperforming
currently listed open and proprietary models, as of June 12th, 2024. Notably,
HelpSteer2 consists of only ten thousand response pairs, an order of magnitude
fewer than existing preference datasets (e.g., HH-RLHF), which makes it highly
efficient for training reward models. Our extensive experiments demonstrate
that reward models trained with HelpSteer2 are effective in aligning LLMs. In
particular, we propose SteerLM 2.0, a model alignment approach that can
effectively make use of the rich multi-attribute score predicted by our reward
models. HelpSteer2 is available at
https://huggingface.co/datasets/nvidia/HelpSteer2 and code is available at
https://github.com/NVIDIA/NeMo-Aligner
By Fei Wang, Xingyu Fu, James Y. Huang, Zekun Li, Qin Liu, Xiaogeng Liu, Mingyu Derek Ma, Nan Xu, Wenxuan Zhou, Kai Zhang, Tianyi Lorena Yan, Wenjie Jacky Mo, Hsiang-Hui Liu, Pan Lu, Chunyuan Li, Chaowei Xiao, Kai-Wei Chang, Dan Roth, Sheng Zhang, Hoifung Poon, Muhao Chen
We introduce MuirBench, a comprehensive benchmark that focuses on robust
multi-image understanding capabilities of multimodal LLMs. MuirBench consists
of 12 diverse multi-image tasks (e.g., scene understanding, ordering) that
involve 10 categories of multi-image relations (e.g., multiview, temporal
relations). Comprising 11,264 images and 2,600 multiple-choice questions,
MuirBench is created in a pairwise manner, where each standard instance is
paired with an unanswerable variant that has minimal semantic differences, to
enable reliable assessment. Evaluating 20 recent multimodal LLMs, we find that
even the best-performing models, such as GPT-4o and Gemini Pro, struggle to
solve MuirBench, achieving only 68.0% and 49.3% accuracy, respectively.
Open-source multimodal LLMs trained on single images can hardly generalize to
multi-image questions, hovering below 33.3% in accuracy. These results
highlight the importance of MuirBench in encouraging the community to develop
multimodal LLMs that can look beyond a single image, suggesting potential
pathways for future improvements.
Multimodal Large Language Models (mLLMs) are trained on a large amount of
text-image data. While most mLLMs are trained on caption-like data only,
Alayrac et al. [2022] showed that additionally training them on interleaved
sequences of text and images can lead to the emergence of in-context learning
capabilities. However, the dataset they used, M3W, is not public and is only in
English. There have been attempts to reproduce their results but the released
datasets are English-only. In contrast, current multilingual and multimodal
datasets either consist only of caption-like data or are medium-scale or fully
private. This limits mLLM research for the 7,000 other languages spoken in
the world. We therefore introduce mOSCAR, to the best of our knowledge the
first large-scale multilingual and multimodal document corpus crawled from the
web. It covers 163 languages, 315M documents, 214B tokens and 1.2B images. We
carefully conduct a set of filtering and evaluation steps to make sure mOSCAR
is sufficiently safe, diverse and of good quality. We additionally train two
types of multilingual models to demonstrate the benefits of mOSCAR: (1) a model
trained on a subset of mOSCAR and captioning data, and (2) a model trained on
captioning data only. The model additionally trained on mOSCAR shows a strong
boost in few-shot learning performance across various multilingual image-text
tasks and benchmarks, confirming previous findings for English-only mLLMs.
Computer Science (CS) stands as a testament to the intricacies of human
intelligence, profoundly advancing the development of artificial intelligence
and modern society. However, the current community of large language models
(LLMs) overly focuses on benchmarks for analyzing specific foundational skills
(e.g. mathematics and code generation), neglecting an all-round evaluation of
the computer science field. To bridge this gap, we introduce CS-Bench, the
first bilingual (Chinese-English) benchmark dedicated to evaluating the
performance of LLMs in computer science. CS-Bench comprises approximately 5K
meticulously curated test samples, covering 26 subfields across 4 key areas of
computer science, encompassing various task forms and divisions of knowledge
and reasoning. Utilizing CS-Bench, we conduct a comprehensive evaluation of
over 30 mainstream LLMs, revealing the relationship between CS performance and
model scales. We also quantitatively analyze the reasons for failures in
existing LLMs and highlight directions for improvements, including knowledge
supplementation and CS-specific reasoning. Further cross-capability experiments
show a high correlation between LLMs' capabilities in computer science and
their abilities in mathematics and coding. Moreover, expert LLMs specialized in
mathematics and coding also demonstrate strong performances in several CS
subfields. Looking ahead, we envision CS-Bench serving as a cornerstone for LLM
applications in the CS field and paving new avenues in assessing LLMs' diverse
reasoning capabilities. The CS-Bench data and evaluation code are available at
https://github.com/csbench/csbench.
By Roman Bachmann, Oğuzhan Fatih Kar, David Mizrahi, Ali Garjani, Mingfei Gao, David Griffiths, Jiaming Hu, Afshin Dehghan, Amir Zamir
Current multimodal and multitask foundation models like 4M or UnifiedIO show
promising results, but in practice their out-of-the-box abilities to accept
diverse inputs and perform diverse tasks are limited by the (usually rather
small) number of modalities and tasks they are trained on. In this paper, we
expand upon their capabilities by training a single model on tens of
highly diverse modalities and by performing co-training on large-scale
multimodal datasets and text corpora. This includes training on several
semantic and geometric modalities, feature maps from recent state-of-the-art
models like DINOv2 and ImageBind, pseudo labels of specialist models like SAM
and 4DHumans, and a range of new modalities that allow for novel ways to
interact with the model and steer the generation, for example image metadata or
color palettes. A crucial step in this process is performing discrete
tokenization on various modalities, whether they are image-like, neural network
feature maps, vectors, structured data like instance segmentation or human
poses, or data that can be represented as text. Through this, we expand on the
out-of-the-box capabilities of multimodal models and specifically show the
possibility of training one model to solve at least 3x more tasks/modalities
than existing ones and doing so without a loss in performance. This enables
more fine-grained and controllable multimodal generation capabilities and
allows us to study the distillation of models trained on diverse data and
objectives into a unified model. We successfully scale the training to a three
billion parameter model using tens of modalities and different datasets. The
resulting models and training code are open sourced at 4m.epfl.ch.
By Yucheng Han, Rui Wang, Chi Zhang, Juntao Hu, Pei Cheng, Bin Fu, Hanwang Zhang
Recent advancements in image generation have enabled the creation of
high-quality images from text conditions. However, when facing multi-modal
conditions, such as text combined with reference appearances, existing methods
struggle to balance multiple conditions effectively, typically showing a
preference for one modality over others. To address this challenge, we
introduce EMMA, a novel image generation model accepting multi-modal prompts
built upon the state-of-the-art text-to-image (T2I) diffusion model, ELLA. EMMA
seamlessly incorporates additional modalities alongside text to guide image
generation through an innovative Multi-modal Feature Connector design, which
effectively integrates textual and supplementary modal information using a
special attention mechanism. By freezing all parameters in the original T2I
diffusion model and only adjusting some additional layers, we reveal an
interesting finding that the pre-trained T2I diffusion model can secretly
accept multi-modal prompts. This interesting property facilitates easy
adaptation to different existing frameworks, making EMMA a flexible and
effective tool for producing personalized and context-aware images and even
videos. Additionally, we introduce a strategy to assemble learned EMMA modules
to produce images conditioned on multiple modalities simultaneously,
eliminating the need for additional training with mixed multi-modal prompts.
Extensive experiments demonstrate the effectiveness of EMMA in maintaining high
fidelity and detail in generated images, showcasing its potential as a robust
solution for advanced multi-modal conditional image generation tasks.
We propose to build omni-modal intelligence, which is capable of
understanding any modality and learning universal representations. Specifically,
we propose a scalable pretraining paradigm, named Multimodal Context (MiCo),
which can scale up the number of modalities and the amount of data, together
with the model parameters, during pretraining. With MiCo, the pretrained
models show significant emergent abilities in multimodal learning, which are
evaluated on the following tasks: i) single-modality perception benchmarks of
10 different modalities, ii) 25 cross-modality understanding tasks of
retrieval, question-answering, captioning, and iii) 18 multimodal large
language model benchmarks. Our models establish 37 new records for
state-of-the-art performance. We hope that our research could contribute to the
development of omni-modal intelligence. Code and Models are at
https://github.com/invictus717/MiCo
One of the predominant methods for training world models is autoregressive
prediction in the output space of the next element of a sequence. In Natural
Language Processing (NLP), this takes the form of Large Language Models (LLMs)
predicting the next token; in Computer Vision (CV), this takes the form of
autoregressive models predicting the next frame/token/pixel. However, this
approach differs from human cognition in several respects. First, human
predictions about the future actively influence internal cognitive processes.
Second, humans naturally evaluate the plausibility of predictions regarding
future states. Third, building on that capability, humans assess when a
prediction is sufficient and thereby allocate a dynamic amount of time to making
it. This adaptive process is analogous to System 2 thinking in
psychology. All these capabilities are fundamental to the success of humans at
high-level reasoning and planning. Therefore, to address the limitations of
traditional autoregressive models lacking these human-like capabilities, we
introduce Energy-Based World Models (EBWM). EBWM involves training an
Energy-Based Model (EBM) to predict the compatibility of a given context and a
predicted future state. In doing so, EBWM enables models to achieve all three
facets of human cognition described. Moreover, we developed a variant of the
traditional autoregressive transformer tailored for Energy-Based models, termed
the Energy-Based Transformer (EBT). Our results demonstrate that EBWM scales
better with data and GPU Hours than traditional autoregressive transformers in
CV, and that EBWM offers promising early scaling in NLP. Consequently, this
approach offers an exciting path toward training future models capable of
System 2 thinking and intelligently searching across state spaces.
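The core shift the abstract describes, from predicting the next state directly to scoring the compatibility of context and candidate futures, can be sketched with a toy bilinear energy function (the form of the energy and the selection rule are illustrative, not EBWM's actual model):

```python
import numpy as np

def energy(context, candidate, W):
    """Toy bilinear energy: low energy means the candidate future
    state is compatible with the context. W is an illustrative
    learned parameter."""
    return float(-context @ W @ candidate)

def predict_min_energy(context, candidates, W):
    """An energy-based world model evaluates the plausibility of
    candidate future states and can keep refining until one is good
    enough -- sketched here as taking the minimum-energy candidate."""
    energies = [energy(context, c, W) for c in candidates]
    best = int(np.argmin(energies))
    return best, energies[best]

context = np.array([1.0, 0.0])
candidates = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
best, e = predict_min_energy(context, candidates, np.eye(2))
print(best)  # 1: the candidate aligned with the context has lower energy
```

Because the energy is an explicit scalar, the model can also decide how long to search (how many candidates or refinement steps) before committing, which is the dynamic-compute facet described above.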
Despite the advances in Large Language Models (LLMs), exemplified by models
like GPT-4 and Claude, smaller-scale LLMs such as Llama and Mistral often
struggle with generating in-depth and coherent dialogues. This paper presents a
novel two-step Coarse-to-Fine Actor model to address the inherent limitations
in conversational and analytical capabilities of small-sized LLMs. Our approach
begins with the Policy-based Coarse Actor, employing a technique we term
"Continuous Maximization". The Coarse Actor establishes an enhanced,
knowledge-rich pool adept at aligning with human preference styles in analysis
and reasoning. Through the RLHF process, it employs Continuous Maximization, a
strategy that dynamically and adaptively extends the output length limit,
enabling the generation of more detailed and analytical content. Subsequently,
the Fine Actor refines this analytical content, addressing the generation of
excessively redundant information from the Coarse Actor. We introduce a
"Knowledge Residue Merger" approach, refining the content from the Coarse Actor
and merging it with an existing Instruction model to improve quality and
correctness and reduce redundancy. We applied our methodology to the popular
Mistral model, creating Mistral-C2F, which has demonstrated exceptional
performance across 11 general language tasks and the MT-Bench Dialogue task,
outperforming similar-scale models and even larger models with 13B and 30B
parameters. Our model has significantly improved conversational and analytical
reasoning abilities.
By Xingyu Fu, Muyu He, Yujie Lu, William Yang Wang, Dan Roth
We present a novel task and benchmark for evaluating the ability of
text-to-image(T2I) generation models to produce images that fit commonsense in
real life, which we call Commonsense-T2I. Given two adversarial text prompts
containing an identical set of action words with minor differences, such as "a
lightbulb without electricity" vs. "a lightbulb with electricity", we evaluate
whether T2I models can conduct visual-commonsense reasoning, e.g. produce
images that fit "the lightbulb is unlit" vs. "the lightbulb is lit"
correspondingly. Commonsense-T2I presents an adversarial challenge, providing
pairwise text prompts along with expected outputs. The dataset is carefully
hand-curated by experts and annotated with fine-grained labels, such as
commonsense type and likelihood of the expected outputs, to assist analyzing
model behavior. We benchmark a variety of state-of-the-art (sota) T2I models
and surprisingly find that there is still a large gap between image synthesis
and real-life photos -- even the DALL-E 3 model achieves only 48.92% on
Commonsense-T2I, and the stable diffusion XL model only achieves 24.92%
accuracy. Our experiments show that GPT-enriched prompts cannot solve this
challenge, and we include a detailed analysis about possible reasons for such
deficiency. We aim for Commonsense-T2I to serve as a high-quality evaluation
benchmark for T2I commonsense checking, fostering advancements in real life
image generation.
By Weixi Feng, Jiachen Li, Michael Saxon, Tsu-jui Fu, Wenhu Chen, William Yang Wang
Video generation has many unique challenges beyond those of image generation.
The temporal dimension introduces extensive possible variations across frames,
over which consistency and continuity may be violated. In this study, we move
beyond evaluating simple actions and argue that generated videos should
incorporate the emergence of new concepts and their relation transitions like
in real-world videos as time progresses. To assess the Temporal
Compositionality of video generation models, we propose TC-Bench, a benchmark
of meticulously crafted text prompts, corresponding ground truth videos, and
robust evaluation metrics. The prompts articulate the initial and final states
of scenes, effectively reducing ambiguities for frame development and
simplifying the assessment of transition completion. In addition, by collecting
aligned real-world videos corresponding to the prompts, we expand TC-Bench's
applicability from text-conditional models to image-conditional ones that can
perform generative frame interpolation. We also develop new metrics to measure
the completeness of component transitions in generated videos, which
demonstrate significantly higher correlations with human judgments than
existing metrics. Our comprehensive experimental results reveal that most video
generators achieve less than 20% of the compositional changes, highlighting
enormous space for future improvement. Our analysis indicates that current
video generation models struggle to interpret descriptions of compositional
changes and synthesize various components across different time steps.
By Andrew Jesson, Nicolas Beltran-Velez, Quentin Chu, Sweta Karlekar, Jannik Kossen, Yarin Gal, John P. Cunningham, David Blei
This work is about estimating the hallucination rate for in-context learning
(ICL) with Generative AI. In ICL, a conditional generative model (CGM) is
prompted with a dataset and asked to make a prediction based on that dataset.
The Bayesian interpretation of ICL assumes that the CGM is calculating a
posterior predictive distribution over an unknown Bayesian model of a latent
parameter and data. With this perspective, we define a hallucination
as a generated prediction that has low probability under the true latent
parameter. We develop a new method that takes an ICL problem -- that is, a CGM,
a dataset, and a prediction question -- and estimates the probability that a
CGM will generate a hallucination. Our method only requires generating queries
and responses from the model and evaluating its response log probability. We
empirically evaluate our method on synthetic regression and natural language
ICL tasks using large language models.
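The estimation recipe the abstract describes (prompt the CGM, sample responses, score their log-probability) can be caricatured as a Monte Carlo loop. This is a toy sketch, not the paper's actual estimator; the stub model, the threshold `eps`, and all function names are hypothetical:

```python
import random

def estimate_hallucination_rate(sample_response, logprob, dataset,
                                n_queries=50, eps=-5.0):
    """Toy Monte Carlo sketch: flag a generated response as a
    hallucination when its log-probability under the model's own
    predictive distribution falls below the threshold eps, then
    return the flagged fraction as the estimated rate."""
    flags = []
    for _ in range(n_queries):
        query, response = sample_response(dataset)
        flags.append(logprob(dataset, query, response) < eps)
    return sum(flags) / len(flags)

# Hypothetical stub model: roughly 30% of generations are assigned
# low log-probability, standing in for out-of-support predictions.
random.seed(0)
def sample_response(ds):
    return ("query", random.random())
def logprob(ds, query, response):
    return -10.0 if response < 0.3 else -1.0

rate = estimate_hallucination_rate(sample_response, logprob, dataset=None)
print(rate)
```

In practice `sample_response` and `logprob` would wrap a real conditional generative model prompted with the ICL dataset; only the model's own samples and log-probabilities are needed, matching the abstract's claim.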
The default strategy for training single-view Large Reconstruction Models
(LRMs) follows the fully supervised route using large-scale datasets of
synthetic 3D assets or multi-view captures. Although these resources simplify
the training procedure, they are hard to scale up beyond the existing datasets
and they are not necessarily representative of the real distribution of object
shapes. To address these limitations, in this paper, we introduce Real3D, the
first LRM system that can be trained using single-view real-world images.
Real3D introduces a novel self-training framework that can benefit from both
the existing synthetic data and diverse single-view real images. We propose two
unsupervised losses that allow us to supervise LRMs at the pixel- and
semantic-level, even for training examples without ground-truth 3D or novel
views. To further improve performance and scale up the image data, we develop
an automatic data curation approach to collect high-quality examples from
in-the-wild images. Our experiments show that Real3D consistently outperforms
prior work in four diverse evaluation settings that include real and synthetic
data, as well as both in-domain and out-of-domain shapes. Code and model can be
found here: https://hwjiang1510.github.io/Real3D/
By Zayd Muhammad Kawakibi Zuhri, Muhammad Farid Adilazuarda, Ayu Purwarianti, Alham Fikri Aji
Auto-regressive inference of transformers benefits greatly from Key-Value (KV)
caching, but can lead to major memory bottlenecks as model size, batch size,
and sequence length grow at scale. We introduce Multi-Layer Key-Value (MLKV)
sharing, a novel approach extending KV sharing across transformer layers to
reduce memory usage beyond what was possible with Multi-Query Attention (MQA)
and Grouped-Query Attention (GQA). Evaluations on various NLP benchmarks and
inference metrics using uptrained Pythia-160M variants demonstrate that MLKV
significantly reduces memory usage with minimal performance loss, shrinking the
KV cache by up to a factor of 6x compared to MQA. These results highlight
MLKV's potential for efficient deployment of transformer models at scale. We
provide code at https://github.com/zaydzuhri/pythia-mlkv
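The memory arithmetic behind cross-layer KV sharing can be sketched in a few lines. This is a toy accounting, not the paper's implementation; the consecutive-group assignment and the model dimensions below are illustrative assumptions:

```python
def build_kv_layer_map(n_layers, n_kv_layers):
    """Assign each transformer layer to a shared KV-cache slot.

    MQA/GQA share KV heads within a layer; MLKV additionally shares KV
    caches across groups of consecutive layers, so only n_kv_layers
    distinct caches are stored (illustrative grouping scheme).
    """
    group = n_layers // n_kv_layers
    return [min(i // group, n_kv_layers - 1) for i in range(n_layers)]

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch,
                   dtype_bytes=2):
    """Total KV-cache size: 2x for keys and values, fp16 by default."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * dtype_bytes

# Toy 12-layer model: MQA keeps one KV head per layer (12 caches);
# MLKV keeps one KV head shared by each group of 6 layers (2 caches).
layer_map = build_kv_layer_map(n_layers=12, n_kv_layers=2)
mqa = kv_cache_bytes(12, n_kv_heads=1, head_dim=64, seq_len=2048, batch=8)
mlkv = kv_cache_bytes(2, n_kv_heads=1, head_dim=64, seq_len=2048, batch=8)
print(layer_map)    # [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
print(mqa // mlkv)  # 6x smaller cache than MQA
```

With 2 KV layers for 12 transformer layers, the cache is 6x smaller than MQA's, matching the reduction factor quoted in the abstract.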
Ultra-low bitrate image compression is a challenging and demanding topic.
With the development of Large Multimodal Models (LMMs), a Cross Modality
Compression (CMC) paradigm of Image-Text-Image has emerged. Compared with
traditional codecs, this semantic-level compression can reduce image data size
to 0.1% or even lower, which has strong potential applications. However, CMC
suffers from deficiencies in consistency with the original image and in
perceptual quality. To address this problem, we introduce CMC-Bench, a benchmark of the
cooperative performance of Image-to-Text (I2T) and Text-to-Image (T2I) models
for image compression. The benchmark covers 18,000 and 40,000 images,
respectively, to verify 6 mainstream I2T and 12 T2I models, and includes
160,000 subjective preference scores annotated by human experts. At ultra-low
bitrates, this paper demonstrates that combinations of certain I2T and T2I
models already surpass
the most advanced visual signal codecs; meanwhile, it highlights where LMMs can
be further optimized toward the compression task. We encourage LMM developers
to participate in this test to promote the evolution of visual signal codec
protocols.
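The Image-Text-Image paradigm itself reduces to two function calls. The sketch below uses stub I2T/T2I models (hypothetical placeholders, not any real LMM API) purely to show the size asymmetry the paradigm exploits:

```python
def cmc_compress(image, i2t):
    """Cross Modality Compression sketch: the 'bitstream' is a caption."""
    return i2t(image)

def cmc_decompress(caption, t2i):
    """Reconstruction: regenerate an image from the caption alone."""
    return t2i(caption)

# Stub models standing in for real I2T/T2I LMMs.
image = bytes(3 * 512 * 512)                # raw 512x512 RGB: 786,432 bytes
i2t = lambda img: "a red apple on a wooden table"
t2i = lambda txt: bytes(3 * 512 * 512)      # regenerated image

caption = cmc_compress(image, i2t)
restored = cmc_decompress(caption, t2i)
ratio = len(caption.encode()) / len(image)
print(f"{ratio:.6f}")  # well below the 0.1% (0.001) regime
```

The caption is a few dozen bytes against hundreds of kilobytes of pixels, which is where the sub-0.1% figure comes from; the benchmark's job is to measure how much consistency and perceptual quality this round trip sacrifices.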
By David Romero, Chenyang Lyu, Haryo Akbarianto Wibowo, Teresa Lynn, Injy Hamed, Aditya Nanda Kishore, Aishik Mandal, Alina Dragonetti, Artem Abzaliev, Atnafu Lambebo Tonja, Bontu Fufa Balcha, Chenxi Whitehouse, Christian Salamea, Dan John Velasco, David Ifeoluwa Adelani, David Le Meur, Emilio Villa-Cueva, Fajri Koto, Fauzan Farooqui, Frederico Belcavello, Ganzorig Batnasan, Gisela Vallejo, Grainne Caulfield, Guido Ivetta, Haiyue Song, Henok Biadglign Ademtew, Hernán Maina, Holy Lovenia, Israel Abebe Azime, Jan Christian Blaise Cruz, Jay Gala, Jiahui Geng, Jesus-German Ortiz-Barajas, Jinheon Baek, Jocelyn Dunstan, Laura Alonso Alemany, Kumaranage Ravindu Yasas Nagasinghe, Luciana Benotti, Luis Fernando D'Haro, Marcelo Viridiano, Marcos Estecha-Garitagoitia, Maria Camila Buitrago Cabrera, Mario Rodríguez-Cantelar, Mélanie Jouitteau, Mihail Mihaylov, Mohamed Fazli Mohamed Imam, Muhammad Farid Adilazuarda, Munkhjargal Gochoo, Munkh-Erdene Otgonbold, Naome Etori, Olivier Niyomugisha, Paula Mónica Silva, Pranjal Chitale, Raj Dabre, Rendi Chevi, Ruochen Zhang, Ryandito Diandaru, Samuel Cahyawijaya, Santiago Góngora, Soyeong Jeong, Sukannya Purkayastha, Tatsuki Kuribayashi, Thanmay Jayakumar, Tiago Timponi Torrent, Toqeer Ehsan, Vladimir Araujo, Yova Kementchedjhieva, Zara Burzo, Zheng Wei Lim, Zheng Xin Yong, Oana Ignat, Joan Nwatu, Rada Mihalcea, Thamar Solorio, Alham Fikri Aji
Visual Question Answering (VQA) is an important task in multimodal AI, and it
is often used to test the ability of vision-language models to understand and
reason about knowledge present in both visual and textual data. However, most of
the current VQA models use datasets that are primarily focused on English and a
few major world languages, with images that are typically Western-centric.
While recent efforts have tried to increase the number of languages covered
in VQA datasets, these datasets still lack diversity in low-resource languages.
More importantly, although they often extend their linguistic range via
translation or other approaches, they usually keep the images the same,
resulting in narrow cultural representation. To address these limitations, we
construct CVQA, a new Culturally-diverse multilingual Visual Question Answering
benchmark, designed to cover a rich set of languages and cultures, where we
engage native speakers and cultural experts in the data collection process. As
a result, CVQA includes culturally-driven images and questions from across 28
countries on four continents, covering 26 languages with 11 scripts, providing
a total of 9k questions. We then benchmark several Multimodal Large Language
Models (MLLMs) on CVQA, and show that the dataset is challenging for the
current state-of-the-art models. This benchmark can serve as a probing
evaluation suite for assessing the cultural capability and bias of multimodal
models and hopefully encourage more research efforts toward increasing cultural
awareness and linguistic diversity in this field.
The rapid advancement of Large Language Models (LLMs) necessitates robust and
challenging benchmarks. Leaderboards like Chatbot Arena rank LLMs based on how
well their responses align with human preferences. However, many tasks such as
those related to emotional intelligence, creative writing, or persuasiveness,
are highly subjective and often lack majoritarian human agreement. Judges may
have irreconcilable disagreements about what constitutes a better response. To
address the challenge of ranking LLMs on highly subjective tasks, we propose a
novel benchmarking framework, the Language Model Council (LMC). The LMC
operates through a democratic process to: 1) formulate a test set through equal
participation, 2) administer the test among council members, and 3) evaluate
responses as a collective jury. We deploy a council of 20 of the newest LLMs on an
open-ended emotional intelligence task: responding to interpersonal dilemmas.
Our results show that the LMC produces rankings that are more separable,
robust, and less biased than those from any individual LLM judge, and is more
consistent with a human-established leaderboard compared to other benchmarks.
By Sumukh K Aithal, Pratyush Maini, Zachary C. Lipton, J. Zico Kolter
Colloquially speaking, image generation models based upon diffusion processes
are frequently said to exhibit "hallucinations," samples that could never occur
in the training data. But where do such hallucinations come from? In this
paper, we study a particular failure mode in diffusion models, which we term
mode interpolation. Specifically, we find that diffusion models smoothly
"interpolate" between nearby data modes in the training set, to generate
samples that are completely outside the support of the original training
distribution; this phenomenon leads diffusion models to generate artifacts that
never existed in real data (i.e., hallucinations). We systematically study the
reasons for, and the manifestation of this phenomenon. Through experiments on
1D and 2D Gaussians, we show how a discontinuous loss landscape in the
diffusion model's decoder leads to a region where any smooth approximation will
cause such hallucinations. Through experiments on artificial datasets with
various shapes, we show how hallucination leads to the generation of
combinations of shapes that never existed. Finally, we show that diffusion
models in fact know when they go out of support and hallucinate. This is
captured by the high variance in the trajectory of the generated sample during
the final few backward sampling steps. Using a simple metric to capture this
variance, we can remove over 95% of hallucinations at generation time while
retaining 96% of in-support samples. We conclude our exploration by showing the
implications of such hallucination (and its removal) on the collapse (and
stabilization) of recursive training on synthetic data with experiments on
the MNIST and 2D Gaussian datasets. We release our code at
https://github.com/locuslab/diffusion-model-hallucination.
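The variance-based filter can be caricatured on toy trajectories: measure how much each sample still jitters in the final reverse-diffusion steps, and drop the jittery ones. This is an illustrative re-implementation under assumed array shapes, not the paper's exact metric or thresholds:

```python
import numpy as np

def trajectory_variance(trajectories, last_k=10):
    """Per-sample variance of the reverse-diffusion trajectory over its
    final steps. trajectories has shape (n_samples, n_steps, dim).
    High variance near the end is taken as a signal of an
    out-of-support (hallucinated) sample."""
    tail = trajectories[:, -last_k:, :]       # final steps only
    return tail.var(axis=1).mean(axis=-1)     # shape (n_samples,)

def filter_hallucinations(samples, trajectories, threshold):
    """Keep only samples whose late-trajectory variance is low."""
    keep = trajectory_variance(trajectories) < threshold
    return samples[keep]

# Toy demo: in-support samples settle smoothly (slow random walk);
# hallucinated samples keep jittering with large steps.
rng = np.random.default_rng(0)
steady = np.cumsum(rng.normal(0, 0.01, (5, 50, 2)), axis=1)
jitter = rng.normal(0, 1.0, (5, 50, 2))
traj = np.concatenate([steady, jitter])
samples = traj[:, -1, :]
kept = filter_hallucinations(samples, traj, threshold=0.05)
print(len(kept))  # only the 5 low-variance samples survive
```

The threshold here is picked by eye for the toy data; the paper reports that a metric of this flavor removes over 95% of hallucinations while keeping 96% of in-support samples.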
By Yufan Zhou, Ruiyi Zhang, Kaizhi Zheng, Nanxuan Zhao, Jiuxiang Gu, Zichao Wang, Xin Eric Wang, Tong Sun
In subject-driven text-to-image generation, recent works have achieved
superior performance by training the model on synthetic datasets containing
numerous image pairs. Trained on these datasets, generative models can produce
text-aligned images for a specific subject from an arbitrary testing image in a
zero-shot manner. They even outperform methods that require additional
fine-tuning on testing images. However, the cost of creating such datasets is
prohibitive for most researchers. To generate a single training pair, current
methods fine-tune a pre-trained text-to-image model on the subject image to
capture fine-grained details, then use the fine-tuned model to create images
for the same subject based on creative text prompts. Consequently, constructing
a large-scale dataset with millions of subjects can require hundreds of
thousands of GPU hours. To tackle this problem, we propose Toffee, an efficient
method to construct datasets for subject-driven editing and generation.
Specifically, our dataset construction does not need any subject-level
fine-tuning. After pre-training two generative models, we are able to generate
an unlimited number of high-quality samples. We construct the first large-scale
dataset for subject-driven image editing and generation, which contains 5
million image pairs, text prompts, and masks. Our dataset is 5 times the size
of the previous largest dataset, yet our cost is tens of thousands of GPU hours
lower. To test the proposed dataset, we also propose a model which is capable
of both subject-driven image editing and generation. By simply training the
model on our proposed dataset, it obtains competitive results, illustrating the
effectiveness of the proposed dataset construction framework.
By Desai Xie, Sai Bi, Zhixin Shu, Kai Zhang, Zexiang Xu, Yi Zhou, Sören Pirk, Arie Kaufman, Xin Sun, Hao Tan
We present LRM-Zero, a Large Reconstruction Model (LRM) trained entirely on
synthesized 3D data, achieving high-quality sparse-view 3D reconstruction. The
core of LRM-Zero is our procedural 3D dataset, Zeroverse, which is
automatically synthesized from simple primitive shapes with random texturing
and augmentations (e.g., height fields, boolean differences, and wireframes).
Unlike previous 3D datasets (e.g., Objaverse) which are often captured or
crafted by humans to approximate real 3D data, Zeroverse completely ignores
realistic global semantics but is rich in complex geometric and texture details
that are locally similar to or even more intricate than real objects. We
demonstrate that our LRM-Zero, trained with our fully synthesized Zeroverse,
can achieve high visual quality in the reconstruction of real-world objects,
competitive with models trained on Objaverse. We also analyze several critical
design choices of Zeroverse that contribute to LRM-Zero's capability and
training stability. Our work demonstrates that 3D reconstruction, one of the
core tasks in 3D vision, can potentially be addressed without the semantics of
real-world objects. Zeroverse's procedural synthesis code and interactive
visualization are available at: https://desaixie.github.io/lrm-zero/.
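The procedural recipe (random primitives, random texturing, random augmentations, no global semantics) can be sketched as a parameter sampler. The parameter ranges and field names below are illustrative assumptions only; the real Zeroverse pipeline emits actual 3D assets rather than parameter dictionaries:

```python
import random

PRIMITIVES = ["cube", "sphere", "cylinder", "cone", "torus"]
AUGMENTATIONS = [None, "height_field", "boolean_difference", "wireframe"]

def random_primitive(rng):
    """Sample one randomly textured, randomly transformed primitive
    (hypothetical parameterization of the Zeroverse recipe)."""
    return {
        "shape": rng.choice(PRIMITIVES),
        "scale": [round(rng.uniform(0.2, 1.5), 2) for _ in range(3)],
        "rotation_deg": [round(rng.uniform(0.0, 360.0), 1) for _ in range(3)],
        "texture_seed": rng.randrange(10**6),
        "augmentation": rng.choice(AUGMENTATIONS),
    }

def synthesize_asset(rng, min_parts=2, max_parts=6):
    """Compose one Zeroverse-style asset from several random primitives:
    no global semantics, only locally complex geometry and texture."""
    return [random_primitive(rng) for _ in range(rng.randint(min_parts, max_parts))]

rng = random.Random(42)
asset = synthesize_asset(rng)
print(len(asset), asset[0]["shape"])
```

Because every field is sampled independently, the generator never runs out of assets, which is the property that lets Zeroverse scale without human capture or crafting.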