While Transformers have been the main architecture behind deep learning's
success in language modeling, state-space models (SSMs) such as Mamba have
recently been shown to match or outperform Transformers at small to medium
scale. We show that these families of models are actually quite closely
related, and develop a rich framework of theoretical connections between SSMs
and variants of attention, connected through various decompositions of a
well-studied class of structured semiseparable matrices. Our state space
duality (SSD) framework allows us to design a new architecture (Mamba-2) whose
core layer is a refinement of Mamba's selective SSM that is 2-8X faster,
while continuing to be competitive with Transformers on language modeling.
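To make the duality concrete, here is a minimal numerical sketch (not the paper's implementation) showing that a scalar, input-dependent SSM recurrence computes the same sequence map as multiplication by a lower-triangular 1-semiseparable matrix; the toy dimensions and variable names are assumptions made for illustration.

```python
import numpy as np

# Toy scalar selective SSM: h_t = a_t * h_{t-1} + b_t * x_t,  y_t = c_t * h_t.
rng = np.random.default_rng(0)
T = 6
a = rng.uniform(0.5, 1.0, T)   # input-dependent decay
b = rng.normal(size=T)
c = rng.normal(size=T)
x = rng.normal(size=T)

# Recurrent (linear-time) form.
h, y_rec = 0.0, np.zeros(T)
for t in range(T):
    h = a[t] * h + b[t] * x[t]
    y_rec[t] = c[t] * h

# Dual "attention-like" form: y = M @ x with a lower-triangular
# semiseparable matrix M[t, s] = c_t * (a_{s+1} * ... * a_t) * b_s.
M = np.zeros((T, T))
for t in range(T):
    for s in range(t + 1):
        M[t, s] = c[t] * np.prod(a[s + 1 : t + 1]) * b[s]
y_mat = M @ x

assert np.allclose(y_rec, y_mat)  # both views compute the same outputs
```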
By Chaoyou Fu, Yuhan Dai, Yondong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, Peixian Chen, Yanwei Li, Shaohui Lin, Sirui Zhao, Ke Li, Tong Xu, Xiawu Zheng, Enhong Chen, Rongrong Ji, Xing Sun
In the quest for artificial general intelligence, Multi-modal Large Language
Models (MLLMs) have emerged as a focal point in recent advancements. However,
the predominant focus remains on developing their capabilities in static image
understanding. The potential of MLLMs in processing sequential visual data is
still insufficiently explored, highlighting the absence of a comprehensive,
high-quality assessment of their performance. In this paper, we introduce
Video-MME, the first-ever full-spectrum, Multi-Modal Evaluation benchmark of
MLLMs in Video analysis. Our work distinguishes itself from existing benchmarks
through four key features: 1) Diversity in video types, spanning 6 primary
visual domains with 30 subfields to ensure broad scenario generalizability; 2)
Duration in temporal dimension, encompassing short-, medium-, and
long-term videos, ranging from 11 seconds to 1 hour, for robust contextual
dynamics; 3) Breadth in data modalities, integrating multi-modal inputs besides
video frames, including subtitles and audio, to unveil the all-round
capabilities of MLLMs; 4) Quality in annotations, utilizing rigorous manual
labeling by expert annotators to facilitate precise and reliable model
assessment. A total of 900 videos, amounting to 256 hours, are manually selected
and annotated by repeatedly viewing all of the video content, resulting in 2,700
question-answer pairs. With Video-MME, we extensively evaluate various
state-of-the-art MLLMs, including GPT-4 series and Gemini 1.5 Pro, as well as
open-source image models like InternVL-Chat-V1.5 and video models like
LLaVA-NeXT-Video. Our experiments reveal that Gemini 1.5 Pro is the
best-performing commercial model, significantly outperforming the open-source
models. Our dataset, along with these findings, underscores the need for further
improvements in handling longer sequences and multi-modal data. Project Page:
https://video-mme.github.io
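As a rough illustration of how such a benchmark is typically consumed, the sketch below defines a hypothetical per-item record and an accuracy loop; the field names, the `predict` callable, and the answer-matching rule are assumptions for illustration, not the released schema or evaluation protocol.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class VideoQARecord:
    # Hypothetical per-item layout; the released benchmark may differ.
    video_path: str
    subtitle: str           # optional extra modality alongside the frames
    domain: str             # one of the 6 primary visual domains
    duration_bucket: str    # "short" | "medium" | "long"
    question: str
    answer: str             # ground-truth label from expert annotators

def accuracy(records: Iterable[VideoQARecord],
             predict: Callable[[VideoQARecord], str]) -> float:
    """Fraction of items where the model's answer matches the annotation."""
    records = list(records)
    correct = sum(predict(r).strip() == r.answer.strip() for r in records)
    return correct / max(len(records), 1)
```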
By Zachary Ankner, Cody Blakeney, Kartik Sreenivasan, Max Marion, Matthew L. Leavitt, Mansheej Paul
In this work, we investigate whether small language models can determine
high-quality subsets of large-scale text datasets that improve the performance
of larger language models. While existing work has shown that pruning based on
the perplexity of a larger model can yield high-quality data, we investigate
whether smaller models can be used for perplexity-based pruning and how pruning
is affected by the domain composition of the data being pruned. We demonstrate
that for multiple dataset compositions, perplexity-based pruning of pretraining
data can significantly improve downstream task performance: pruning
based on perplexities computed with a 125 million parameter model improves the
average performance on downstream tasks of a 3 billion parameter model by up to
2.04 points and achieves up to a 1.45× reduction in pretraining steps to reach
commensurate baseline performance. Furthermore, we demonstrate that such
perplexity-based data pruning also yields downstream performance gains in the
over-trained and data-constrained regimes.
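A minimal sketch of perplexity-based pruning with a small reference model, assuming the Hugging Face `transformers` API; the scorer checkpoint name, the kept fraction, and the "keep lowest perplexity" criterion are illustrative choices, not necessarily the selection rule the paper found best.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def doc_perplexity(model, tokenizer, text: str, device: str = "cpu") -> float:
    # Token-level perplexity of one document under the small reference model.
    enc = tokenizer(text, return_tensors="pt", truncation=True).to(device)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def prune_by_perplexity(docs, keep_fraction=0.5,
                        ref_model="EleutherAI/pythia-160m"):  # illustrative scorer
    tokenizer = AutoTokenizer.from_pretrained(ref_model)
    model = AutoModelForCausalLM.from_pretrained(ref_model).eval()
    scored = sorted(docs, key=lambda d: doc_perplexity(model, tokenizer, d))
    return scored[: int(keep_fraction * len(scored))]  # keep lowest-perplexity docs
```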
Diffusion models have emerged as a powerful tool for generating high-quality
images from textual descriptions. Despite their successes, these models often
exhibit limited diversity in the sampled images, particularly when sampling
with a high classifier-free guidance weight. To address this issue, we present
Kaleido, a novel approach that enhances the diversity of samples by
incorporating autoregressive latent priors. Kaleido integrates an
autoregressive language model that encodes the original caption and generates
latent variables, serving as abstract and intermediary representations for
guiding and facilitating the image generation process. In this paper, we
explore a variety of discrete latent representations, including textual
descriptions, detection bounding boxes, object blobs, and visual tokens. These
representations diversify and enrich the input conditions to the diffusion
models, enabling more diverse outputs. Our experimental results demonstrate
that Kaleido effectively broadens the diversity of the generated image samples
from a given textual description while maintaining high image quality.
Furthermore, we show that Kaleido adheres closely to the guidance provided by
the generated latent variables, demonstrating its capability to effectively
control and direct the image generation process.
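The sketch below is a schematic of the two-stage conditioning described above, not Kaleido's actual code; `LatentPriorLM` and `LatentConditionedDiffusion` are placeholder interfaces standing in for the autoregressive prior and the latent-conditioned diffusion model.

```python
from typing import List, Protocol

class LatentPriorLM(Protocol):
    def generate_latents(self, caption: str, num_samples: int) -> List[List[int]]:
        """Autoregressively sample discrete latent tokens (e.g. captions, boxes,
        object blobs, visual tokens) conditioned on the original caption."""

class LatentConditionedDiffusion(Protocol):
    def sample(self, caption: str, latents: List[int], guidance: float):
        """Generate one image conditioned on both the caption and the latents."""

def kaleido_style_sampling(lm: LatentPriorLM,
                           diffusion: LatentConditionedDiffusion,
                           caption: str, n: int = 4, guidance: float = 7.5):
    # Diversity comes from sampling a different latent sequence per image,
    # even when the guidance weight is high.
    images = []
    for latents in lm.generate_latents(caption, num_samples=n):
        images.append(diffusion.sample(caption, latents, guidance))
    return images
```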
Current 4D generation methods have achieved noteworthy efficacy with the aid
of advanced diffusion generative models. However, these methods lack multi-view
spatial-temporal modeling and encounter challenges in integrating diverse prior
knowledge from multiple diffusion models, resulting in inconsistent temporal
appearance and flickers. In this paper, we propose a novel 4D generation
pipeline, namely 4Diffusion, aimed at generating spatial-temporally consistent
4D content from a monocular video. We first design a unified diffusion model
tailored for multi-view video generation by incorporating a learnable motion
module into a frozen 3D-aware diffusion model to capture multi-view
spatial-temporal correlations. After training on a curated dataset, our
diffusion model acquires reasonable temporal consistency and inherently
preserves the generalizability and spatial consistency of the 3D-aware
diffusion model. Subsequently, we propose 4D-aware Score Distillation Sampling
loss, which is based on our multi-view video diffusion model, to optimize 4D
representation parameterized by dynamic NeRF. This aims to eliminate
discrepancies arising from multiple diffusion models, allowing for generating
spatial-temporally consistent 4D content. Moreover, we devise an anchor loss to
enhance the appearance details and facilitate the learning of dynamic NeRF.
Extensive qualitative and quantitative experiments demonstrate that our method
achieves superior performance compared to previous methods.
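For intuition, here is a schematic of score-distillation-style optimization of a dynamic NeRF against a multi-view video diffusion prior, with an extra anchor term; the module interfaces (`render`, `add_noise`, `predict_noise`), the timestep range, and the anchor weight are placeholders, not the paper's implementation.

```python
import torch

def sds_step(dynamic_nerf, diffusion, cameras, times, text_emb,
             optimizer, anchor_frames=None, anchor_weight=0.1):
    """One optimization step: render, perturb, and pull renders toward the
    multi-view video diffusion prior (schematic only)."""
    renders = dynamic_nerf.render(cameras, times)          # (views, frames, C, H, W)
    t = torch.randint(20, 980, (1,), device=renders.device)
    noise = torch.randn_like(renders)
    noisy = diffusion.add_noise(renders, noise, t)
    with torch.no_grad():
        eps_pred = diffusion.predict_noise(noisy, t, text_emb)
    # Score distillation: inject (eps_pred - noise) as the gradient w.r.t. renders.
    grad = (eps_pred - noise).detach()
    loss = (grad * renders).sum()
    if anchor_frames is not None:
        # Anchor term (schematic): keep renders close to reference frames
        # to sharpen appearance details.
        loss = loss + anchor_weight * torch.nn.functional.mse_loss(
            renders, anchor_frames)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```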
Second-order optimizers, which maintain a matrix termed a preconditioner, are
superior to first-order optimizers in both theory and practice. However, the
memory required to store the preconditioner and its inverse root restricts the
maximum size of models that second-order optimizers can train. To address this, compressing 32-bit
optimizer states to lower bitwidths has shown promise in reducing memory usage.
However, current approaches only pertain to first-order optimizers. In this
paper, we propose the first 4-bit second-order optimizers, exemplified by 4-bit
Shampoo, maintaining performance similar to that of 32-bit ones. We show that
quantizing the eigenvector matrix of the preconditioner in 4-bit Shampoo is
remarkably better than quantizing the preconditioner itself both theoretically
and experimentally. By rectifying the orthogonality of the quantized
eigenvector matrix, we enhance the approximation of the preconditioner's
eigenvector matrix, which also benefits the computation of its inverse 4th
root. In addition, we find that linear square quantization slightly outperforms
dynamic tree quantization when quantizing second-order optimizer states.
Evaluation on various networks for image classification demonstrates that our
4-bit Shampoo achieves comparable test accuracy to its 32-bit counterpart while
being more memory-efficient. The source code will be made available.
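A minimal sketch of the two ideas that are easy to state in isolation: 4-bit quantization of an (orthogonal) eigenvector matrix with a square-spaced signed codebook, followed by re-orthogonalization of the dequantized matrix via its polar factor. Both the codebook (assumed here as one plausible form of "linear square" quantization) and the rectification method are assumptions and may differ from what 4-bit Shampoo actually uses.

```python
import numpy as np

# Signed 4-bit codebook with square-spaced magnitudes: levels +/- (k/7)^2, k = 0..7.
LEVELS = np.array(sorted({s * (k / 7) ** 2 for s in (-1, 1) for k in range(8)}))

def quantize_4bit(M):
    scale = np.abs(M).max() + 1e-12
    idx = np.abs(M[..., None] / scale - LEVELS).argmin(-1).astype(np.uint8)
    return idx, scale

def dequantize_4bit(idx, scale):
    return LEVELS[idx] * scale

def rectify_orthogonality(Q):
    # Nearest orthogonal matrix to Q (polar factor via SVD).
    U, _, Vt = np.linalg.svd(Q)
    return U @ Vt

# Demo on a random orthogonal "eigenvector matrix".
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(64, 64)))
Q_hat = dequantize_4bit(*quantize_4bit(Q))
Q_rect = rectify_orthogonality(Q_hat)
print(np.linalg.norm(Q_hat.T @ Q_hat - np.eye(64)),    # orthogonality drift after quantization
      np.linalg.norm(Q_rect.T @ Q_rect - np.eye(64)))  # ~0 after rectification
```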