By Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Caio César Teodoro Mendes, Weizhu Chen, Vishrav Chaudhary, Parul Chopra, Allie Del Giorno, Gustavo de Rosa, Matthew Dixon, Ronen Eldan, Dan Iter, Abhishek Goswami, Suriya Gunasekar, Emman Haider, Junheng Hao, Russell J. Hewett, Jamie Huynh, Mojan Javaheripi, Xin Jin, Piero Kauffmann, Nikos Karampatziakis, Dongwoo Kim, Mahoud Khademi, Lev Kurilenko, James R. Lee, Yin Tat Lee, Yuanzhi Li, Chen Liang, Weishung Liu, Eric Lin, Zeqi Lin, Piyush Madan, Arindam Mitra, Hardik Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel Perez-Becker, Thomas Portet, Reid Pryzant, Heyang Qin, Marko Radmilac, Corby Rosset, Sambudha Roy, Olli Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, Shital Shah, Ning Shang, Hiteshi Sharma, Xia Song, Olatunji Ruwase, Xin Wang, Rachel Ward, Guanhua Wang, Philipp Witte, Michael Wyatt, Can Xu, Jiahang Xu, Sonali Yadav, Fan Yang, Ziyi Yang, Donghan Yu, Chengruidong Zhang, Cyril Zhang, Jianwen Zhang, Li Lyna Zhang, Yi Zhang, Yunan Zhang, Xiren Zhou
We introduce phi-3-mini, a 3.8 billion parameter language model trained on
3.3 trillion tokens, whose overall performance, as measured by both academic
benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and
GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite
being small enough to be deployed on a phone. The innovation lies entirely in
our dataset for training, a scaled-up version of the one used for phi-2,
composed of heavily filtered web data and synthetic data. The model is also
further aligned for robustness, safety, and chat format. We also provide some
initial parameter-scaling results with 7B and 14B models trained for 4.8T
tokens, called phi-3-small and phi-3-medium, both significantly more capable
than phi-3-mini (e.g., respectively 75% and 78% on MMLU, and 8.7 and 8.9 on
MT-bench).
By Wei Huang, Xudong Ma, Haotong Qin, Xingyu Zheng, Chengtao Lv, Hong Chen, Jie Luo, Xiaojuan Qi, Xianglong Liu, Michele Magno
Meta's LLaMA family has become one of the most powerful open-source Large
Language Model (LLM) series. Notably, LLaMA3 models have recently been released
and achieve impressive performance across various tasks with super-large scale
pre-training on over 15T tokens of data. Given the wide application of low-bit
quantization for LLMs in resource-limited scenarios, we explore LLaMA3's
capabilities when quantized to low bit-width. This exploration holds the
potential to unveil new insights and challenges for low-bit quantization of
LLaMA3 and other forthcoming LLMs, especially for addressing the performance
degradation suffered in LLM compression. Specifically, we evaluate 10 existing
post-training quantization and LoRA fine-tuning methods on LLaMA3 at 1-8 bits
across diverse datasets to comprehensively characterize its low-bit
quantization performance. Our experimental results indicate that LLaMA3 still
suffers non-negligible degradation in these scenarios, especially at
ultra-low bit-width. This highlights the significant performance gap under low
bit-width that needs to be bridged in future developments. We expect this
empirical study to prove valuable in advancing future models, pushing LLMs
toward lower bit-widths with higher accuracy for practical use. Our project is
released at https://github.com/Macaronlin/LLaMA3-Quantization, and the
quantized LLaMA3 models are released at https://huggingface.co/LLMQ.
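The post-training schemes evaluated above all start from the same basic idea. As a minimal sketch (not one of the 10 methods studied), symmetric round-to-nearest quantization maps float weights to low-bit integers with a single per-tensor scale; the reconstruction error grows sharply as the bit-width shrinks, mirroring the ultra-low-bit degradation the study reports:

```python
import numpy as np

def quantize_rtn(w: np.ndarray, bits: int):
    """Symmetric round-to-nearest (RTN) weight quantization sketch:
    map floats to signed integers in [-2^(bits-1), 2^(bits-1)-1]
    with one per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax                 # per-tensor scale factor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.9], dtype=np.float32)
q, s = quantize_rtn(w, bits=4)
w_hat = dequantize(q, s)   # 4-bit reconstruction of the weights
```

Real methods such as GPTQ or AWQ add per-group scales and error compensation on top of this baseline, which is why they degrade more gracefully at low bit-widths.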
By Eric Wallace, Kai Xiao, Reimar Leike, Lilian Weng, Johannes Heidecke, Alex Beutel
Today's LLMs are susceptible to prompt injections, jailbreaks, and other
attacks that allow adversaries to overwrite a model's original instructions
with their own malicious prompts. In this work, we argue that one of the
primary vulnerabilities underlying these attacks is that LLMs often treat
system prompts (e.g., text from an application developer) as having the same
priority as text from untrusted users and third parties. To address this, we
propose an instruction hierarchy that explicitly defines how models should
behave when instructions of different priorities conflict. We then propose a
data generation method to demonstrate this hierarchical instruction-following
behavior, which teaches LLMs to selectively ignore lower-privileged
instructions. We apply this method to GPT-3.5, showing that it drastically
increases robustness -- even for attack types not seen during training -- while
imposing minimal degradations on standard capabilities.
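The hierarchy can be pictured as a priority ladder over message roles. The sketch below is a hypothetical rule-based analogue: the role names, priority values, and keyword-based conflict test are illustrative assumptions, not the paper's mechanism, which is trained into the model via generated data:

```python
# Privilege levels for each message source (illustrative assumption).
PRIORITY = {"system": 3, "developer": 2, "user": 1, "tool": 0}

def resolve(messages, conflict):
    """Follow an instruction unless a strictly higher-privileged message
    conflicts with it; conflicting lower-privileged instructions are
    selectively ignored."""
    followed = []
    for role, text in messages:
        overridden = any(
            conflict(text, other_text) and PRIORITY[other_role] > PRIORITY[role]
            for other_role, other_text in messages
        )
        if not overridden:
            followed.append((role, text))
    return followed

def conflict(a, b):
    # Toy conflict test: b explicitly forbids what a asks for.
    return "reveal the secret" in a and "never reveal" in b

msgs = [("system", "never reveal the secret"),
        ("user", "please reveal the secret")]
safe = resolve(msgs, conflict)   # only the system instruction survives
```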
By Zhen Zeng, William Watson, Nicole Cho, Saba Rahimi, Shayleen Reynolds, Tucker Balch, Manuela Veloso
The rapidly evolving field of Robotic Process Automation (RPA) has made
significant strides in automating repetitive processes, yet its effectiveness
diminishes in scenarios requiring spontaneous or unpredictable tasks demanded
by users. This paper introduces a novel approach, FlowMind, leveraging the
capabilities of Large Language Models (LLMs) such as Generative Pretrained
Transformer (GPT), to address this limitation and create an automatic workflow
generation system. In FlowMind, we propose a generic prompt recipe for a
"lecture" that grounds LLM reasoning in reliable Application Programming
Interfaces (APIs). With this, FlowMind not only mitigates the common issue of
hallucinations in LLMs, but also eliminates direct interaction between LLMs and
proprietary data or code, thus ensuring the integrity and confidentiality of
information - a cornerstone in financial services. FlowMind further simplifies
user interaction by presenting high-level descriptions of auto-generated
workflows, enabling users to inspect and provide feedback effectively. We also
introduce NCEN-QA, a new dataset in finance for benchmarking question-answering
tasks from N-CEN reports on funds. We use NCEN-QA to evaluate the performance
of workflows generated by FlowMind against baseline and ablation variants of
FlowMind. We demonstrate the success of FlowMind, the importance of each
component in the proposed lecture recipe, and the effectiveness of user
interaction and feedback in FlowMind.
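A "lecture"-style prompt can be assembled from three parts: context, a vetted API list, and the task. The sketch below is a hypothetical illustration of that recipe; the section names and the example API are assumptions, not FlowMind's actual format:

```python
def lecture_prompt(context: str, apis: dict, task: str) -> str:
    """Assemble a three-part 'lecture': context, reliable APIs, task.
    Restricting the model to a fixed API list keeps it grounded and
    keeps proprietary data out of the prompt."""
    api_lines = "\n".join(f"- {name}: {doc}" for name, doc in apis.items())
    return (
        f"Context: {context}\n"
        f"You may ONLY call these APIs:\n{api_lines}\n"
        f"Task: write a workflow as a function using the APIs above.\n"
        f"User request: {task}"
    )

prompt = lecture_prompt(
    context="You assist with fund reporting workflows.",
    apis={"get_fund_filings(fund)": "return N-CEN filings for a fund"},  # hypothetical API
    task="Summarize the latest filing for Fund X.",
)
```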
By Yuxi Ren, Xin Xia, Yanzuo Lu, Jiacheng Zhang, Jie Wu, Pan Xie, Xing Wang, Xuefeng Xiao
Recently, a series of diffusion-aware distillation algorithms have emerged to
alleviate the computational overhead associated with the multi-step inference
process of Diffusion Models (DMs). Current distillation techniques typically
fall into two distinct categories: i) ODE Trajectory Preservation; and ii) ODE
Trajectory Reformulation. However, these approaches suffer from severe
performance degradation or domain shifts. To address these limitations, we
propose Hyper-SD, a novel framework that synergistically amalgamates the
advantages of ODE Trajectory Preservation and Reformulation, while maintaining
near-lossless performance during step compression. Firstly, we introduce
Trajectory Segmented Consistency Distillation to progressively perform
consistent distillation within pre-defined time-step segments, which
facilitates the preservation of the original ODE trajectory from a higher-order
perspective. Secondly, we incorporate human feedback learning to boost the
performance of the model in a low-step regime and mitigate the performance loss
incurred by the distillation process. Thirdly, we integrate score distillation
to further improve the low-step generation capability of the model and offer
the first attempt to leverage a unified LoRA to support the inference process
at all steps. Extensive experiments and user studies demonstrate that Hyper-SD
achieves SOTA performance from 1 to 8 inference steps for both SDXL and SD1.5.
For example, Hyper-SDXL surpasses SDXL-Lightning by +0.68 in CLIP Score and
+0.51 in Aes Score for 1-step inference.
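The segmented idea behind Trajectory Segmented Consistency Distillation can be sketched in a few lines: split the diffusion time-step range into segments and distill the student to map any step to its segment boundary, so the ODE trajectory is preserved piecewise. This is an illustrative sketch of the scheduling only, not the paper's training code:

```python
def segment_bounds(num_steps: int, num_segments: int):
    """Split [0, num_steps) into equal time-step segments (sketch of the
    Trajectory Segmented Consistency Distillation schedule)."""
    size = num_steps // num_segments
    return [(i * size, (i + 1) * size) for i in range(num_segments)]

def segment_target(t: int, bounds):
    """The consistency target for timestep t is the start of its segment,
    rather than t=0 directly, which eases the distillation problem."""
    for lo, hi in bounds:
        if lo <= t < hi:
            return lo
    raise ValueError("t outside schedule")

bounds = segment_bounds(1000, 4)     # e.g., 4 segments over a 1000-step schedule
```

Progressively merging segments (4 → 2 → 1, per the abstract's "progressively perform consistent distillation") would recover a standard consistency model at the final stage.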
By Tamar Rott Shaham, Sarah Schwettmann, Franklin Wang, Achyuta Rajaram, Evan Hernandez, Jacob Andreas, Antonio Torralba
This paper describes MAIA, a Multimodal Automated Interpretability Agent.
MAIA is a system that uses neural models to automate neural model understanding
tasks like feature interpretation and failure mode discovery. It equips a
pre-trained vision-language model with a set of tools that support iterative
experimentation on subcomponents of other models to explain their behavior.
These include tools commonly used by human interpretability researchers: for
synthesizing and editing inputs, computing maximally activating exemplars from
real-world datasets, and summarizing and describing experimental results.
Interpretability experiments proposed by MAIA compose these tools to describe
and explain system behavior. We evaluate applications of MAIA to computer
vision models. We first characterize MAIA's ability to describe (neuron-level)
features in learned representations of images. Across several trained models
and a novel dataset of synthetic vision neurons with paired ground-truth
descriptions, MAIA produces descriptions comparable to those generated by
expert human experimenters. We then show that MAIA can aid in two additional
interpretability tasks: reducing sensitivity to spurious features, and
automatically identifying inputs likely to be misclassified.
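The tool-composition pattern is the core of the agent: find maximally activating exemplars, then edit one to test a causal hypothesis. The sketch below is a toy analogue with a scalar "neuron"; the tool names and the neuron itself are illustrative assumptions, since MAIA's real tools drive a vision-language model and image generators:

```python
def neuron(x: float) -> float:
    # Toy "neuron": activates only on inputs above a threshold.
    return max(0.0, x - 2.0)

TOOLS = {
    "exemplars": lambda inputs: sorted(inputs, key=neuron, reverse=True)[:3],
    "edit": lambda x, delta: x + delta,
}

def run_experiment(dataset):
    """Compose tools the way a human experimenter would: compute maximally
    activating exemplars, then edit the top one and re-measure."""
    top = TOOLS["exemplars"](dataset)
    edited = TOOLS["edit"](top[0], -5.0)
    return {
        "top_exemplars": top,
        "activation_before": neuron(top[0]),
        "activation_after": neuron(edited),
    }

report = run_experiment([0.0, 1.0, 5.0, 3.0, 4.0])
```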
By Yuying Ge, Sijie Zhao, Jinguo Zhu, Yixiao Ge, Kun Yi, Lin Song, Chen Li, Xiaohan Ding, Ying Shan
The rapid evolution of multimodal foundation models has demonstrated
significant progress in vision-language understanding and generation, e.g.,
our previous work SEED-LLaMA. However, there remains a gap between their
capability and real-world applicability, primarily due to the models'
limited capacity to effectively respond to various user instructions and
interact with diverse visual data. In this work, we focus on bridging this gap
through integrating two enhanced features: (1) comprehending images of
arbitrary sizes and ratios, and (2) enabling multi-granularity image
generation. We present a unified and versatile foundation model, namely,
SEED-X, which is able to model multi-granularity visual semantics for
comprehension and generation tasks. Besides the competitive results on public
benchmarks, SEED-X demonstrates its effectiveness in handling real-world
applications across various domains after instruction tuning. We hope that our
work will inspire future research into what can be achieved by versatile
multimodal foundation models in real-world applications. The models, code,
and datasets will be released at https://github.com/AILab-CVC/SEED-X.
Consistency models have exhibited remarkable capabilities in facilitating
efficient image/video generation, enabling synthesis with minimal sampling
steps. It has proven to be advantageous in mitigating the computational burdens
associated with diffusion models. Nevertheless, the application of consistency
models in music generation remains largely unexplored. To address this gap, we
present Music Consistency Models (MusicCM), which leverages the
concept of consistency models to efficiently synthesize mel-spectrogram for
music clips, maintaining high quality while minimizing the number of sampling
steps. Building upon existing text-to-music diffusion models, the
MusicCM model incorporates consistency distillation and adversarial
discriminator training. Moreover, we find it beneficial to generate extended
coherent music by incorporating multiple diffusion processes with shared
constraints. Experimental results reveal the effectiveness of our model in
terms of computational efficiency, fidelity, and naturalness. Notably,
MusicCM achieves seamless music synthesis with a mere four sampling steps,
e.g., only one second of synthesis time per minute of music clip, showcasing the
potential for real-time application.
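Few-step sampling with a consistency model follows a simple loop: the model maps a noisy sample directly to a clean estimate in one call, and between the few steps noise is re-injected at the next, smaller time. The sketch below illustrates that loop with a stand-in for a trained mel-spectrogram consistency model; `f` and the noise schedule are assumptions, not MusicCM's components:

```python
import numpy as np

def consistency_sample(f, x_T, ts, sigma):
    """Few-step consistency sampling sketch: f(x, t) denoises in a single
    call; between steps we renoise at the next (smaller) time t."""
    rng = np.random.default_rng(0)
    x = f(x_T, ts[0])                    # first denoising call from pure noise
    for t in ts[1:]:
        x_t = x + sigma(t) * rng.standard_normal(x.shape)  # renoise
        x = f(x_t, t)                    # denoise again in one call
    return x

# Four sampling steps, matching MusicCM's reported budget.
ts = [1.0, 0.75, 0.5, 0.25]
x0 = consistency_sample(lambda x, t: np.zeros_like(x),  # idealized model
                        np.ones(8), ts, sigma=lambda t: t)
```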
By Chenyang Zhu, Kai Li, Yue Ma, Chunming He, Li Xiu
This paper introduces MultiBooth, a novel and efficient technique for
multi-concept customization in image generation from text. Despite the
significant advancements in customized generation methods, particularly with
the success of diffusion models, existing methods often struggle with
multi-concept scenarios due to low concept fidelity and high inference cost.
MultiBooth addresses these issues by dividing the multi-concept generation
process into two phases: a single-concept learning phase and a multi-concept
integration phase. During the single-concept learning phase, we employ a
multi-modal image encoder and an efficient concept encoding technique to learn
a concise and discriminative representation for each concept. In the
multi-concept integration phase, we use bounding boxes to define the generation
area for each concept within the cross-attention map. This method enables the
creation of individual concepts within their specified regions, thereby
facilitating the formation of multi-concept images. This strategy not only
improves concept fidelity but also reduces additional inference cost.
MultiBooth surpasses various baselines in both qualitative and quantitative
evaluations, showcasing its superior performance and computational efficiency.
Project Page: https://multibooth.github.io/
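The integration phase's bounding-box idea reduces to masking each concept's cross-attention map to its assigned region. The sketch below is a simplified illustration of that masking, not the paper's implementation:

```python
import numpy as np

def region_mask(h: int, w: int, box) -> np.ndarray:
    """Binary mask that is 1 inside the bounding box (y0, x0, y1, x1)."""
    m = np.zeros((h, w), dtype=np.float32)
    y0, x0, y1, x1 = box
    m[y0:y1, x0:x1] = 1.0
    return m

def masked_cross_attention(attn_maps, boxes):
    """Restrict each concept's cross-attention map to its bounding box,
    so each concept is generated only within its assigned region."""
    return [a * region_mask(*a.shape, box) for a, box in zip(attn_maps, boxes)]

maps = [np.ones((4, 4)), np.ones((4, 4))]       # toy attention maps
out = masked_cross_attention(maps, [(0, 0, 2, 2), (2, 2, 4, 4)])
```

Because the regions are disjoint, each concept's attention cannot leak into another's area, which is one way to read the fidelity improvement the abstract claims.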
By Junfeng Long, Wenye Yu, Quanyi Li, Zirui Wang, Dahua Lin, Jiangmiao Pang
Stable locomotion in precipitous environments is an essential capability of
quadruped robots, demanding the ability to resist various external
disturbances. However, recent learning-based policies only use basic domain
randomization to improve the robustness of learned policies, which cannot
guarantee that the robot has adequate disturbance resistance capabilities. In
this paper, we propose to model the learning process as an adversarial
interaction between the actor and a newly introduced disturber and ensure their
optimization with an H∞ constraint. In contrast to the actor, which
maximizes the discounted overall reward, the disturber is responsible for
generating effective external forces and is optimized by maximizing the error
between the task reward and its oracle, i.e., "cost" in each iteration. To keep
joint optimization between the actor and the disturber stable, our H∞
constraint bounds the ratio of the cost to the intensity of the external
forces. Through reciprocal interaction throughout the training phase,
the actor can acquire the capability to navigate increasingly complex physical
disturbances. We verify the robustness of our approach on quadrupedal
locomotion tasks with Unitree Aliengo robot, and also a more challenging task
with Unitree A1 robot, where the quadruped is expected to perform locomotion
merely on its hind legs as if it were a bipedal robot. The simulated
quantitative results show improvement over baselines, demonstrating the
effectiveness of the method and of each design choice. In addition, real-robot
experiments qualitatively exhibit how robust the policy is when subjected to
various disturbances on various terrains, including stairs, high platforms,
slopes, and
slippery terrains. All code, checkpoints, and real-world deployment guidance
will be made public.
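The actor-disturber objectives and the ratio bound can be written down directly. The sketch below illustrates the abstract's description; the exact constraint form and the bound parameter `eta` are assumptions, and the paper's precise formulation may differ:

```python
def disturber_objective(task_reward: float, oracle_reward: float) -> float:
    """The disturber maximizes the gap ('cost') between the oracle reward
    and the task reward obtained under its external force."""
    return oracle_reward - task_reward

def satisfies_hinf_bound(cost: float, force_intensity: float, eta: float) -> bool:
    """Sketch of the H-infinity-style constraint: the cost inflicted per
    unit of force intensity must stay below a bound eta, which keeps the
    joint actor-disturber optimization stable."""
    return cost <= eta * force_intensity
```

Intuitively, the bound stops the disturber from winning by brute force alone: larger forces are permitted only if they inflict proportionally bounded cost.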
By Eric Brachmann, Jamie Wynn, Shuai Chen, Tommaso Cavallari, Áron Monszpart, Daniyar Turmukhambetov, Victor Adrian Prisacariu
We address the task of estimating camera parameters from a set of images
depicting a scene. Popular feature-based structure-from-motion (SfM) tools
solve this task by incremental reconstruction: they repeat triangulation of
sparse 3D points and registration of more camera views to the sparse point
cloud. We re-interpret incremental structure-from-motion as an iterated
application and refinement of a visual relocalizer, that is, of a method that
registers new views to the current state of the reconstruction. This
perspective allows us to investigate alternative visual relocalizers that are
not rooted in local feature matching. We show that scene coordinate regression,
a learning-based relocalization approach, allows us to build implicit, neural
scene representations from unposed images. Unlike other learning-based
reconstruction methods, we require neither pose priors nor sequential inputs,
and we optimize efficiently over thousands of images. Our method, ACE0 (ACE
Zero), estimates camera poses to an accuracy comparable to feature-based SfM,
as demonstrated by novel view synthesis. Project page:
https://nianticlabs.github.io/acezero/
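The "iterated application and refinement of a visual relocalizer" view amounts to a simple loop: refine a relocalizer on the currently posed views, register new views against it, and repeat until nothing more registers. The sketch below follows the abstract's description with placeholder functions; the relocalizer and registration internals stand in for ACE0's actual scene-coordinate-regression components:

```python
def reconstruct(images, train_relocalizer, register):
    """Alternate between refining a relocalizer on posed views and
    registering remaining views against it, until no image can be added."""
    posed = {images[0]: "identity"}          # seed: fix the first camera
    remaining = set(images[1:])
    while remaining:
        model = train_relocalizer(posed)      # refine on current poses
        newly = {im: register(model, im) for im in remaining}
        accepted = {im: p for im, p in newly.items() if p is not None}
        if not accepted:
            break                             # nothing else registers
        posed.update(accepted)
        remaining -= set(accepted)
    return posed
```

With stub functions in which an image registers only once its neighbor is posed, the loop grows the reconstruction one ring of views at a time, mirroring incremental SfM without local feature matching.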