We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code
language model that achieves performance comparable to GPT-4 Turbo in
code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained
from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion
tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially
enhances the coding and mathematical reasoning capabilities of DeepSeek-V2,
while maintaining comparable performance in general language tasks. Compared to
DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advancements in
various aspects of code-related tasks, as well as reasoning and general
capabilities. Additionally, DeepSeek-Coder-V2 expands its support for
programming languages from 86 to 338, while extending the context length from
16K to 128K. In standard benchmark evaluations, DeepSeek-Coder-V2 achieves
superior performance compared to closed-source models such as GPT-4 Turbo,
Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks.
Accurately estimating depth in 360-degree imagery is crucial for virtual
reality, autonomous navigation, and immersive media applications. Existing
depth estimation methods designed for perspective-view imagery fail when
applied to 360-degree images due to different camera projections and
distortions, whereas 360-degree methods perform worse due to the lack of
labeled data pairs. We propose a new depth estimation framework that utilizes
unlabeled 360-degree data effectively. Our approach uses state-of-the-art
perspective depth estimation models as teacher models to generate pseudo labels
through a six-face cube projection technique, enabling efficient labeling of
depth in 360-degree images. This method leverages the increasing availability
of large unlabeled 360-degree datasets. Our approach includes two main stages:
offline mask
generation for invalid regions and an online semi-supervised joint training
regime. We tested our approach on benchmark datasets such as Matterport3D and
Stanford2D3D, showing significant improvements in depth estimation accuracy,
particularly in zero-shot scenarios. Our proposed training pipeline can enhance
any 360 monocular depth estimator and demonstrates effective knowledge transfer
across different camera projections and data types. See our project page for
results: https://albert100121.github.io/Depth-Anywhere/
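To make the pseudo-labeling step concrete, here is a minimal sketch of a six-face cube projection that lets a perspective teacher label 360-degree imagery. The face-orientation convention and the `teacher_model.predict` API are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: sample six cube faces from an equirectangular panorama so a
# perspective-view depth model can pseudo-label each face.
import numpy as np

def equirect_to_cube_faces(equi: np.ndarray, face_size: int = 256) -> dict:
    """equi: H x W x C equirectangular image; returns {face_name: face image}."""
    H, W = equi.shape[:2]
    a = np.linspace(-1, 1, face_size)
    u, v = np.meshgrid(a, -a)          # v points up
    ones = np.ones_like(u)
    # One 3D viewing direction per face (a common convention; others exist).
    dirs = {
        "front": (u, v, ones), "back": (-u, v, -ones),
        "right": (ones, v, -u), "left": (-ones, v, u),
        "up": (u, ones, -v), "down": (u, -ones, v),
    }
    faces = {}
    for name, (x, y, z) in dirs.items():
        lon = np.arctan2(x, z)                          # [-pi, pi]
        lat = np.arcsin(y / np.sqrt(x*x + y*y + z*z))   # [-pi/2, pi/2]
        px = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
        py = ((0.5 - lat / np.pi) * (H - 1)).astype(int)
        faces[name] = equi[py, px]                      # nearest-neighbor sample
    return faces

# Usage with a hypothetical perspective teacher:
# faces = equirect_to_cube_faces(pano)
# pseudo_depth = {k: teacher_model.predict(face) for k, face in faces.items()}
```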
By Changyu Chen, Zichen Liu, Chao Du, Tianyu Pang, Qian Liu, Arunesh Sinha, Pradeep Varakantham, Min Lin
Human alignment in large language models (LLMs) is an active area of
research. A recent groundbreaking work, direct preference optimization (DPO),
has greatly simplified the process from past work in reinforcement learning
from human feedback (RLHF) by bypassing the reward learning stage in RLHF. DPO,
after training, provides an implicit reward model. In this work, we make a
novel observation that this implicit reward model can by itself be used in a
bootstrapping fashion to further align the LLM. Our approach is to use the
rewards from a current LLM model to construct a preference dataset, which is
then used in subsequent DPO rounds. We incorporate refinements that debias the
length of the responses and improve the quality of the preference dataset to
further improve our approach. Our approach, named self-alignment with DPO
ImpliCit rEwards (DICE), shows great improvements in alignment and
outperforms Gemini Pro on AlpacaEval 2, reaching a 27.55% length-controlled
win rate against GPT-4 Turbo with only 8B parameters and
no external feedback. Our code is available at https://github.com/sail-sg/dice.
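For illustration, a hedged sketch of the quantity that drives DICE: under DPO, the trained policy defines an implicit reward r(x, y) = beta * [log pi(y|x) - log pi_ref(y|x)], which can score sampled responses to build the next round's preference pairs. The model objects and beta here are illustrative assumptions, not the paper's settings.

```python
import torch

def seq_logprob(model, tok, prompt: str, response: str) -> torch.Tensor:
    """Sum of log-probs the model assigns to the response tokens given the
    prompt (assumes tokenizing the prompt yields a prefix of prompt+response)."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    logp = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    tok_logp = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return tok_logp[:, prompt_ids.shape[1] - 1:].sum()   # response part only

def implicit_reward(policy, ref, tok, prompt, response, beta=0.1):
    # DPO implicit reward: beta * (log pi(y|x) - log pi_ref(y|x))
    return beta * (seq_logprob(policy, tok, prompt, response)
                   - seq_logprob(ref, tok, prompt, response))

# To build a preference pair, score two sampled responses and keep the
# higher-reward one as "chosen" for the next DPO round.
```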
By Byung-Kwan Lee, Sangyun Chung, Chae Won Kim, Beomchan Park, Yong Man Ro
Large language and vision models (LLVMs) have been driven by the
generalization power of large language models (LLMs) and the advent of visual
instruction tuning. Along with scaling them up directly, these advances enable
LLVMs to showcase powerful vision-language (VL) performance by covering
diverse tasks via natural language instructions. However, existing open-source
LLVMs that perform comparably to closed-source LLVMs such as GPT-4V are often
considered too large (e.g., 26B, 34B, and 110B parameters), having a larger
number of layers. These large models demand costly, high-end resources for both
training and inference. To address this issue, we present a new efficient LLVM
family with 1.8B, 3.8B, and 7B LLM model sizes, Traversal of Layers (TroL),
which enables the reuse of layers in a token-wise manner. This layer traversing
technique simulates the effect of looking back and retracing the answering
stream while increasing the number of forward propagation layers without
physically adding more layers. We demonstrate that TroL employs a simple layer
traversing approach yet efficiently outperforms open-source LLVMs with
larger model sizes and rivals the performance of closed-source LLVMs of
substantial size.
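A minimal sketch of the token-wise layer-traversal idea, under our own assumption about the gating (the released TroL code may differ): one block is applied twice and the two passes are mixed per token, so effective depth grows without new layers.

```python
import torch
import torch.nn as nn

class TroLLayer(nn.Module):
    def __init__(self, layer: nn.Module, hidden: int):
        super().__init__()
        self.layer = layer                # any existing transformer block
        self.gate = nn.Linear(hidden, 1)  # token-wise traversal gate (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        once = self.layer(x)              # first pass
        twice = self.layer(once)          # traverse: reuse the same layer
        g = torch.sigmoid(self.gate(x))   # (batch, seq, 1), per token
        return g * twice + (1 - g) * once # mix the two answering streams

# Usage with a toy block:
block = nn.Sequential(nn.Linear(64, 64), nn.GELU())
x = torch.randn(2, 10, 64)
print(TroLLayer(block, 64)(x).shape)      # torch.Size([2, 10, 64])
```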
We introduce ChatGLM, an evolving family of large language models that we
have been developing over time. This report primarily focuses on the GLM-4
language series, which includes GLM-4, GLM-4-Air, and GLM-4-9B. They represent
our most capable models that are trained with all the insights and lessons
gained from the preceding three generations of ChatGLM. To date, the GLM-4
models are pre-trained on ten trillion tokens, mostly in Chinese and
English, along with a small corpus covering 24 languages, and aligned
primarily for Chinese and English usage. The high-quality alignment is achieved
via a multi-stage post-training process, which involves supervised fine-tuning
and learning from human feedback. Evaluations show that GLM-4 1) closely rivals
or outperforms GPT-4 in terms of general metrics such as MMLU, GSM8K, MATH,
BBH, GPQA, and HumanEval, 2) gets close to GPT-4-Turbo in instruction following
as measured by IFEval, 3) matches GPT-4 Turbo (128K) and Claude 3 for long
context tasks, and 4) outperforms GPT-4 in Chinese alignments as measured by
AlignBench. The GLM-4 All Tools model is further aligned to understand user
intent and autonomously decide when and which tool(s) to use -- including web
browser, Python interpreter, text-to-image model, and user-defined functions --
to effectively complete complex tasks. In practical applications, it matches
and even surpasses GPT-4 All Tools in tasks like accessing online information
via web browsing and solving math problems using Python interpreter. Over the
course, we have open-sourced a series of models, including ChatGLM-6B (three
generations), GLM-4-9B (128K, 1M), GLM-4V-9B, WebGLM, and CodeGeeX, attracting
over 10 million downloads on Hugging Face in the year 2023 alone. The open
models can be accessed through https://github.com/THUDM and
https://huggingface.co/THUDM.
Vision-Language Models (VLMs) have achieved remarkable success in various
multi-modal tasks, but they are often bottlenecked by the limited context
window and high computational cost of processing high-resolution image inputs
and videos. Vision compression can alleviate this problem by reducing the
vision token count. Previous approaches compress vision tokens with external
modules and force LLMs to understand the compressed ones, leading to visual
information loss. However, the LLMs' understanding paradigm of vision tokens is
not fully utilised in the compression learning process. We propose VoCo-LLaMA,
the first approach to compress vision tokens using LLMs. By introducing Vision
Compression tokens during the vision instruction tuning phase and leveraging
attention distillation, our method distills how LLMs comprehend vision tokens
into their processing of VoCo tokens. VoCo-LLaMA facilitates effective vision
compression and improves the computational efficiency during the inference
stage. Specifically, our method achieves minimal performance loss with a
compression ratio of 576×, resulting in up to 94.8% fewer FLOPs and
69.6% acceleration in inference time. Furthermore, through continuous
training using time-series compressed token sequences of video frames,
VoCo-LLaMA demonstrates the ability to understand temporal correlations,
outperforming previous methods on popular video question-answering benchmarks.
Our approach presents a promising way to unlock the full potential of VLMs'
contextual window, enabling more scalable multi-modal applications. The project
page, along with the associated code, can be accessed via
https://yxxxb.github.io/VoCo-LLaMA-page/.
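As a rough illustration of the compression setup (our reading of the abstract, not the official code), the sketch below builds a causal attention mask in which text tokens can reach vision information only through the VoCo tokens.

```python
import torch

def voco_attention_mask(n_vision: int, n_voco: int, n_text: int) -> torch.Tensor:
    """True = attention allowed. Text tokens may attend to VoCo tokens but not
    to raw vision tokens, forcing vision content through the VoCo bottleneck."""
    n = n_vision + n_voco + n_text
    mask = torch.tril(torch.ones(n, n, dtype=torch.bool))  # causal base
    text_start = n_vision + n_voco
    mask[text_start:, :n_vision] = False   # text cannot see vision tokens
    return mask

m = voco_attention_mask(n_vision=4, n_voco=2, n_text=3)
print(m.int())
```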
Software agents have emerged as promising tools for addressing complex
software engineering tasks. However, existing works oversimplify software
development workflows by following the waterfall model. Thus, we propose
AgileCoder, a multi-agent system that integrates Agile Methodology (AM) into
the framework. This system assigns specific AM roles such as Product Manager,
Developer, and Tester to different agents, who then collaboratively develop
software based on user inputs. AgileCoder enhances development efficiency by
organizing work into sprints, focusing on incrementally developing software
through sprints. Additionally, we introduce Dynamic Code Graph Generator, a
module that creates a Code Dependency Graph dynamically as updates are made to
the codebase. This allows agents to better comprehend the codebase, leading to
more precise code generation and modifications throughout the software
development process. AgileCoder surpasses existing baselines, such as ChatDev
and MetaGPT, establishing a new standard and showcasing the capabilities of
multi-agent systems in advanced software engineering environments. Our source
code can be found at https://github.com/FSoft-AI4Code/AgileCoder.
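In the spirit of the Dynamic Code Graph Generator, here is a hedged sketch that re-parses a file's imports on each update and maintains a dependency map; the actual module is considerably richer, and the class and function names below are our own.

```python
import ast

def module_dependencies(source: str) -> set:
    """Top-level modules imported by one Python source file."""
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            deps.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module.split(".")[0])
    return deps

class CodeGraph:
    def __init__(self):
        self.edges = {}

    def update(self, filename: str, source: str):
        # Called on every codebase edit, so the graph stays current.
        self.edges[filename] = module_dependencies(source)

graph = CodeGraph()
graph.update("app.py", "import utils\nfrom models import user")
print(graph.edges)  # {'app.py': {'utils', 'models'}}
```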
Retrieval Augmented Generation (RAG) enriches the ability of language models
to reason using external context to augment responses for a given user prompt.
This approach has risen in popularity due to the practical use of language
models in search, question answering, and chatbots.
However, the exact nature of how this approach works isn't clearly understood.
In this paper, we mechanistically examine the RAG pipeline to highlight that
language models take a shortcut and have a strong bias towards utilizing only the
context information to answer the question, while relying minimally on their
parametric memory. We probe this mechanistic behavior in language models with:
(i) Causal Mediation Analysis to show that the parametric memory is minimally
utilized when answering a question and (ii) Attention Contributions and
Knockouts to show that the last token residual stream does not get enriched
from the subject token in the question, but from other informative tokens in
the context. We find this pronounced shortcut behaviour across both the LLaMa
and Phi families of models.
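To make the attention-contribution probe concrete, here is a simplified sketch (using gpt2 as a stand-in; the paper studies LLaMa and Phi, and its analysis is more involved) that measures how much attention mass the last token places on context versus question positions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in model for illustration
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, output_attentions=True)

context = "Context: The Eiffel Tower is located in Rome."
question = " Question: Where is the Eiffel Tower? Answer:"
n_ctx = tok(context, return_tensors="pt").input_ids.shape[1]
ids = tok(context + question, return_tensors="pt").input_ids

with torch.no_grad():
    attn = model(ids).attentions   # tuple of (1, heads, seq, seq) per layer

last = torch.stack(attn).mean(dim=(0, 2))[0, -1]  # avg over layers and heads
print("mass on context:", last[:n_ctx].sum().item())
print("mass on question:", last[n_ctx:].sum().item())
```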
Supervised fine-tuning enhances the problem-solving abilities of language
models across various mathematical reasoning tasks. To maximize such benefits,
existing research focuses on broadening the training set with various data
augmentation techniques, which is effective for standard single-round
question-answering settings. Our work introduces a novel technique aimed at
cultivating a deeper understanding of the training problems at hand, enhancing
performance not only in standard settings but also in more complex scenarios
that require reflective thinking. Specifically, we propose reflective
augmentation, a method that embeds problem reflection into each training
instance. It trains the model to consider alternative perspectives and engage
with abstractions and analogies, thereby fostering a thorough comprehension
through reflective reasoning. Extensive experiments confirm that our method
achieves this aim, underscoring its unique advantages and its complementary
nature relative to existing augmentation techniques.
By Zhen Huang, Zengzhi Wang, Shijie Xia, Xuefeng Li, Haoyang Zou, Ruijie Xu, Run-Ze Fan, Lyumanshan Ye, Ethan Chern, Yixin Ye, Yikai Zhang, Yuqing Yang, Ting Wu, Binjie Wang, Shichao Sun, Yang Xiao, Yiyuan Li, Fan Zhou, Steffi Chern, Yiwei Qin, Yan Ma, Jiadi Su, Yixiu Liu, Yuxiang Zheng, Shaoting Zhang, Dahua Lin, Yu Qiao, Pengfei Liu
The evolution of Artificial Intelligence (AI) has been significantly
accelerated by advancements in Large Language Models (LLMs) and Large
Multimodal Models (LMMs), gradually showcasing potential cognitive reasoning
abilities in problem-solving and scientific discovery (i.e., AI4Science) once
exclusive to human intellect. To comprehensively evaluate current models'
performance in cognitive reasoning abilities, we introduce OlympicArena, which
includes 11,163 bilingual problems across both text-only and interleaved
text-image modalities. These challenges encompass a wide range of disciplines
spanning seven fields and 62 international Olympic competitions, rigorously
examined for data leakage. We argue that the challenges in Olympic competition
problems are ideal for evaluating AI's cognitive reasoning due to their
complexity and interdisciplinary nature, which are essential for tackling
complex scientific challenges and facilitating discoveries. Beyond evaluating
performance across various disciplines using answer-only criteria, we conduct
detailed experiments and analyses from multiple perspectives. We delve into the
models' cognitive reasoning abilities, their performance across different
modalities, and their outcomes in process-level evaluations, which are vital
for tasks requiring complex reasoning with lengthy solutions. Our extensive
evaluations reveal that even advanced models like GPT-4o only achieve a 39.97%
overall accuracy, illustrating current AI limitations in complex reasoning and
multimodal integration. Through the OlympicArena, we aim to advance AI towards
superintelligence, equipping it to address more complex challenges in science
and beyond. We also provide a comprehensive set of resources to support AI
research, including a benchmark dataset, an open-source annotation platform, a
detailed evaluation tool, and a leaderboard with automatic submission features.
Safety-aligned language models often exhibit fragile and imbalanced safety
mechanisms, increasing the likelihood of generating unsafe content. In
addition, incorporating new knowledge through editing techniques to language
models can further compromise safety. To address these issues, we propose
SafeInfer, a context-adaptive, decoding-time safety alignment strategy for
generating safe responses to user queries. SafeInfer comprises two phases: the
safety amplification phase, which employs safe demonstration examples to adjust
the model's hidden states and increase the likelihood of safer outputs, and the
safety-guided decoding phase, which influences token selection based on
safety-optimized distributions, ensuring the generated content complies with
ethical guidelines. Further, we present HarmEval, a novel benchmark for
extensive safety evaluations, designed to address potential misuse scenarios in
accordance with the policies of leading AI tech giants.
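A minimal sketch of the safety-guided decoding phase as we read it; the interpolation rule and alpha below are illustrative assumptions, not SafeInfer's exact formulation.

```python
import torch

def safety_guided_logits(logits_base: torch.Tensor,
                         logits_safe: torch.Tensor,
                         alpha: float = 0.5) -> torch.Tensor:
    """logits_safe comes from the same model run with safe demonstrations
    prepended; interpolate the two next-token distributions."""
    p = (1 - alpha) * torch.softmax(logits_base, -1) \
        + alpha * torch.softmax(logits_safe, -1)
    return torch.log(p + 1e-12)

# At each decoding step (sketch):
# logits_base = model(input_ids).logits[:, -1]
# logits_safe = model(safe_demos_plus_input_ids).logits[:, -1]
# next_id = safety_guided_logits(logits_base, logits_safe).argmax(-1)
```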
Language models typically tokenize raw text into sequences of subword
identifiers from a predefined vocabulary, a process inherently sensitive to
typographical errors, length variations, and largely oblivious to the internal
structure of tokens, issues we term the curse of tokenization. In this study,
we delve into these drawbacks and demonstrate that large language models
(LLMs) remain susceptible to these problems, systematically investigating
these challenges and their impact on LLMs through three critical research
questions: (1) complex problem solving, (2) token structure probing, and (3)
resilience to typographical variation. Our findings reveal that scaling model
parameters can mitigate the issue of tokenization; however, LLMs still suffer
from biases induced by typos and other text format variations. Our experiments
show that subword regularization such as BPE-dropout can mitigate this issue.
We will release our code and data to facilitate further research.
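As a concrete example of the kind of typographical-variation probe implied above (our own illustrative perturbation, not the paper's protocol), one can swap adjacent characters in a prompt and check whether the model's answer changes.

```python
import random

def inject_typo(text: str, rng: random.Random) -> str:
    # Swap one random pair of adjacent characters.
    i = rng.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

rng = random.Random(0)
prompt = "What is the capital of France?"
print(inject_typo(prompt, rng))
# Consistency rate = fraction of prompts whose answer is unchanged under typos.
```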
Ensuring the safe alignment of large language models (LLMs) with human values
is critical as they become integral to applications like translation and
question answering. Current alignment methods struggle with dynamic user
intentions and complex objectives, making models vulnerable to generating
harmful content. We propose Safety Arithmetic, a training-free framework
enhancing LLM safety across different scenarios: Base models, Supervised
fine-tuned models (SFT), and Edited models. Safety Arithmetic involves Harm
Direction Removal to avoid harmful content and Safety Alignment to promote safe
responses. Additionally, we present NoIntentEdit, a dataset highlighting edit
instances that could compromise model safety if used unintentionally. Our
experiments show that Safety Arithmetic significantly improves safety measures,
reduces over-safety, and maintains model utility, outperforming existing
methods in ensuring safe content generation.
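Harm Direction Removal is reminiscent of task-vector arithmetic; a hedged sketch under that reading (the paper's exact procedure may differ, and lam is an illustrative coefficient) subtracts the parameter direction in which a harm-tuned model moved away from its base.

```python
def remove_harm_direction(theta, theta_base, theta_harm, lam: float = 0.5):
    """All arguments are state_dicts; returns an edited state_dict with the
    harm direction (theta_harm - theta_base) scaled out of theta."""
    return {k: theta[k] - lam * (theta_harm[k] - theta_base[k]) for k in theta}

# Usage (sketch):
# safe_sd = remove_harm_direction(model.state_dict(),
#                                 base.state_dict(), harm_ft.state_dict())
# model.load_state_dict(safe_sd)
```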
By Joao Monteiro, Pierre-Andre Noel, Etienne Marcotte, Sai Rajeswar, Valentina Zantedeschi, David Vazquez, Nicolas Chapados, Christopher Pal, Perouz Taslakian
Large Language Models (LLMs) are trained on vast amounts of data, most of
which is automatically scraped from the internet. This data includes
encyclopedic documents that harbor a vast amount of general knowledge (e.g.,
Wikipedia) but also potentially overlap with benchmark datasets used for
evaluating LLMs. Consequently, evaluating models on test splits that might have
leaked into the training set is prone to misleading conclusions. To foster
sound evaluation of language models, we introduce a new test dataset named
RepLiQA, suited for question-answering and topic retrieval tasks. RepLiQA is a
collection of five splits of test sets, four of which have not been released to
the internet or exposed to LLM APIs prior to this publication. Each sample in
RepLiQA comprises (1) a reference document crafted by a human annotator and
depicting an imaginary scenario (e.g., a news article) absent from the
internet; (2) a question about the document's topic; (3) a ground-truth answer
derived directly from the information in the document; and (4) the paragraph
extracted from the reference document containing the answer. As such, accurate
answers can only be generated if a model can find relevant content within the
provided document. We run a large-scale benchmark comprising several
state-of-the-art LLMs to uncover differences in performance across models of
various types and sizes in a context-conditional language modeling setting.
Released splits of RepLiQA can be found here:
https://huggingface.co/datasets/ServiceNow/repliqa.
The advancement of large language models (LLMs) has significantly broadened
the scope of applications in natural language processing, with multi-modal LLMs
extending these capabilities to integrate and interpret visual data. However,
existing benchmarks for visual language models (VLMs) predominantly focus on
single-image inputs, neglecting the crucial aspect of multi-image
understanding. In this paper, we introduce a Multi-Image Relational Benchmark,
MIRB, designed to evaluate VLMs' ability to compare, analyze, and reason across
multiple images. Our benchmark encompasses four categories: perception, visual
world knowledge, reasoning, and multi-hop reasoning. Through a comprehensive
evaluation of a wide range of open-source and closed-source models, we
demonstrate that while open-source VLMs were shown to approach the performance
of GPT-4V in single-image tasks, a significant performance gap remains in
multi-image reasoning tasks. Our findings also reveal that even the
state-of-the-art GPT-4V model struggles with our benchmark, underscoring the
need for further research and development in this area. We believe our
contribution of MIRB could serve as a testbed for developing the
next-generation multi-modal models.
By Panwang Pan, Zhuo Su, Chenguo Lin, Zhen Fan, Yongjie Zhang, Zeming Li, Tingting Shen, Yadong Mu, Yebin Liu
Despite recent advancements in high-fidelity human reconstruction techniques,
the requirements for densely captured images or time-consuming per-instance
optimization significantly hinder their applications in broader scenarios. To
tackle these issues, we present HumanSplat which predicts the 3D Gaussian
Splatting properties of any human from a single input image in a generalizable
manner. In particular, HumanSplat comprises a 2D multi-view diffusion model and
a latent reconstruction transformer with human structure priors that adeptly
integrate geometric priors and semantic features within a unified framework. A
hierarchical loss that incorporates human semantic information is further
designed to achieve high-fidelity texture modeling and better constrain the
estimated multiple views. Comprehensive experiments on standard benchmarks and
in-the-wild images demonstrate that HumanSplat surpasses existing
state-of-the-art methods in achieving photorealistic novel-view synthesis.
Tabular data -- structured, heterogeneous, spreadsheet-style data with rows
and columns -- is widely used in practice across many domains. However, while
recent foundation models have reduced the need for developing task-specific
datasets and predictors in domains such as language modeling and computer
vision, this transfer learning paradigm has not had similar impact in the
tabular domain. In this work, we seek to narrow this gap and present TabuLa-8B,
a language model for tabular prediction. We define a process for extracting a
large, high-quality training dataset from the TabLib corpus, proposing methods
for tabular data filtering and quality control. Using the resulting dataset,
which comprises over 1.6B rows from 3.1M unique tables, we fine-tune a Llama
3-8B large language model (LLM) for tabular data prediction (classification and
binned regression) using a novel packing and attention scheme for tabular
prediction. Through evaluation across a test suite of 329 datasets, we find
that TabuLa-8B has zero-shot accuracy on unseen tables that is over 15
percentage points (pp) higher than random guessing, a feat that is not possible
with existing state-of-the-art tabular prediction models (e.g. XGBoost,
TabPFN). In the few-shot setting (1-32 shots), without any fine-tuning on the
target datasets, TabuLa-8B is 5-15 pp more accurate than XGBoost and TabPFN
models that are explicitly trained on equal, or even up to 16x more data. We
release our model, code, and data along with the publication of this paper.
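For intuition, a toy sketch of row-to-text serialization for LLM-based tabular prediction; this is an illustrative format, not the paper's exact serialization or packing scheme.

```python
def serialize_row(row: dict, target: str) -> str:
    """Turn a table row into a text prompt so a causal LLM can predict the
    target column zero-shot."""
    features = "; ".join(f"{k}: {v}" for k, v in row.items() if k != target)
    return f"{features}. What is {target}?"

row = {"age": 42, "income": "55k", "employed": "yes", "default": "?"}
print(serialize_row(row, target="default"))
# age: 42; income: 55k; employed: yes. What is default?
```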
To evaluate knowledge in large language models (LLMs), current methods query
the model and then evaluate its generated responses. In this work, we ask
whether evaluation can be done before the model has generated any
text. Concretely, is it possible to estimate how knowledgeable a model is about
a certain entity, only from its internal computation? We study this question
with two tasks: given a subject entity, the goal is to predict (a) the ability
of the model to answer common questions about the entity, and (b) the
factuality of responses generated by the model about the entity. Experiments
with a variety of LLMs show that KEEN, a simple probe trained over internal
subject representations, succeeds at both tasks - strongly correlating with
both the QA accuracy of the model per-subject and FActScore, a recent
factuality metric in open-ended generation. Moreover, KEEN naturally aligns
with the model's hedging behavior and faithfully reflects changes in the
model's knowledge after fine-tuning. Lastly, we show a more interpretable yet
equally performant variant of KEEN, which highlights a small set of tokens that
correlates with the model's lack of knowledge. Being simple and lightweight,
KEEN can be leveraged to identify gaps and clusters of entity knowledge in
LLMs, and guide decisions such as augmenting queries with retrieval.
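A minimal sketch of a KEEN-style probe under our own assumptions (the pooling, layer choice, and ridge regressor are illustrative): fit a linear probe on subject-entity representations to predict per-subject QA accuracy.

```python
import numpy as np
from sklearn.linear_model import Ridge

# X: one pooled hidden-state vector per subject entity (e.g., a mean over the
# entity's token positions at some middle layer); y: the model's QA accuracy
# on questions about that entity. Random data stands in for real activations.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4096))
y = rng.uniform(size=500)

probe = Ridge(alpha=1.0).fit(X[:400], y[:400])
pred = probe.predict(X[400:])
print("corr:", np.corrcoef(pred, y[400:])[0, 1])
```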
By Jack Gallifant, Shan Chen, Pedro Moreira, Nikolaj Munch, Mingye Gao, Jackson Pond, Leo Anthony Celi, Hugo Aerts, Thomas Hartvigsen, Danielle Bitterman
Medical knowledge is context-dependent and requires consistent reasoning
across various natural language expressions of semantically equivalent phrases.
This is particularly crucial for drug names, where patients often use brand
names like Advil or Tylenol instead of their generic equivalents. To study
this, we create a new robustness dataset, RABBITS, to evaluate performance
differences on medical benchmarks after swapping brand and generic drug names
using physician expert annotations.
We assess both open-source and API-based LLMs on MedQA and MedMCQA, revealing
a consistent performance drop ranging from 1-10%. Furthermore, we identify a
potential source of this fragility as the contamination of test data in widely
used pre-training datasets. All code is accessible at
https://github.com/BittermanLab/RABBITS, and a HuggingFace leaderboard is
available at https://huggingface.co/spaces/AIM-Harvard/rabbits-leaderboard.
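The evaluation idea can be sketched with a toy brand-to-generic mapping (RABBITS itself uses physician-vetted annotations; the two entries below are only examples from the abstract).

```python
import re

BRAND_TO_GENERIC = {"Advil": "ibuprofen", "Tylenol": "acetaminophen"}

def swap_drug_names(question: str) -> str:
    """Replace brand names with generic equivalents in a benchmark item."""
    for brand, generic in BRAND_TO_GENERIC.items():
        question = re.sub(rf"\b{brand}\b", generic, question)
    return question

q = "A patient takes Advil daily. Which side effect is most likely?"
print(swap_drug_names(q))
# Accuracy is then compared on the original vs. swapped benchmark items.
```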
Text-to-image (T2I) diffusion models have demonstrated impressive image
generation capabilities. Still, their computational intensity prohibits
resource-constrained organizations from deploying T2I models after fine-tuning
them on their internal target data. While pruning techniques offer a potential
solution to reduce the computational burden of T2I models, static pruning
methods use the same pruned model for all input prompts, overlooking the
varying capacity requirements of different prompts. Dynamic pruning addresses
this issue by utilizing a separate sub-network for each prompt, but it prevents
batch parallelism on GPUs. To overcome these limitations, we introduce Adaptive
Prompt-Tailored Pruning (APTP), a novel prompt-based pruning method designed
for T2I diffusion models. Central to our approach is a prompt router model,
which learns to determine the required capacity for an input text prompt and
routes it to an architecture code, given a total desired compute budget for
prompts. Each architecture code represents a specialized model tailored to the
prompts assigned to it, and the number of codes is a hyperparameter. We train
the prompt router and architecture codes using contrastive learning, ensuring
that similar prompts are mapped to nearby codes. Further, we employ optimal
transport to prevent the codes from collapsing into a single one. We
demonstrate APTP's effectiveness by pruning Stable Diffusion (SD) V2.1 using
CC3M and COCO as target datasets. APTP outperforms the single-model pruning
baselines in terms of FID, CLIP, and CMMD scores. Our analysis of the clusters
learned by APTP reveals they are semantically meaningful. We also show that
APTP can automatically discover prompts previously found empirically to be
challenging for SD, e.g., prompts for generating text images, assigning them to
higher-capacity codes.
Binarization, which converts weight parameters to binary values, has emerged
as an effective strategy to reduce the size of large language models (LLMs).
However, typical binarization techniques significantly diminish the linguistic
effectiveness of LLMs. To address this issue, we introduce a novel binarization
technique called Mixture of Scales (BinaryMoS). Unlike conventional methods,
BinaryMoS employs multiple scaling experts for binary weights, dynamically
merging these experts for each token to adaptively generate scaling factors.
This token-adaptive approach boosts the representational power of binarized
LLMs by enabling contextual adjustments to the values of binary weights.
Moreover, because this adaptive process only involves the scaling factors
rather than the entire weight matrix, BinaryMoS maintains compression
efficiency similar to traditional static binarization methods. Our experimental
results reveal that BinaryMoS surpasses conventional binarization techniques in
various natural language processing tasks and even outperforms 2-bit
quantization methods, all while maintaining similar model size to static
binarization techniques.
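A hedged PyTorch sketch of a Mixture-of-Scales binary linear layer as we read the abstract (the gating form and shapes are assumptions, not the released code): the 1-bit weights are shared, and only the per-token mixture of scaling experts adapts.

```python
import torch
import torch.nn as nn

class BinaryMoSLinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, n_experts: int = 4):
        super().__init__()
        w = torch.randn(d_out, d_in)
        self.register_buffer("w_bin", torch.sign(w))              # 1-bit weights
        self.scales = nn.Parameter(torch.ones(n_experts, d_out))  # scaling experts
        self.router = nn.Linear(d_in, n_experts)                  # token-wise gate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = torch.softmax(self.router(x), dim=-1)   # (..., E) per token
        scale = gate @ self.scales                     # (..., d_out)
        return (x @ self.w_bin.t()) * scale            # scaled binary matmul

layer = BinaryMoSLinear(64, 32)
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 32])
```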
By Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Tianhao Wu, Banghua Zhu, Joseph E. Gonzalez, Ion Stoica
The rapid evolution of language models has necessitated the development of
more challenging benchmarks. Current static benchmarks often struggle to
consistently distinguish between the capabilities of different models and fail
to align with real-world user preferences. On the other hand, live
crowd-sourced platforms like the Chatbot Arena collect a wide range of natural
prompts and user feedback. However, these prompts vary in sophistication and
the feedback cannot be applied offline to new models. In order to ensure that
benchmarks keep up with the pace of LLM development, we address how one can
evaluate benchmarks on their ability to confidently separate models and their
alignment with human preference. Under these principles, we developed
BenchBuilder, a living benchmark that filters high-quality prompts from live
data sources to enable offline evaluation on fresh, challenging prompts.
BenchBuilder identifies seven indicators of a high-quality prompt, such as the
requirement for domain knowledge, and utilizes an LLM annotator to select a
high-quality subset of prompts from various topic clusters. The LLM evaluation
process employs an LLM judge to ensure a fully automated, high-quality, and
constantly updating benchmark. We apply BenchBuilder on prompts from the
Chatbot Arena to create Arena-Hard-Auto v0.1: 500 challenging user prompts from
a wide range of tasks. Arena-Hard-Auto v0.1 offers 3x tighter confidence
intervals than MT-Bench and achieves a state-of-the-art 89.1% agreement with
human preference rankings, all at a cost of only $25 and without human
labelers. The BenchBuilder pipeline enhances evaluation benchmarks and provides
a valuable tool for developers, enabling them to extract high-quality
benchmarks from extensive data with minimal effort.
Direct alignment from preferences (DAP) has emerged as a promising paradigm
for aligning large language models (LLMs) to human desiderata from
pre-collected, offline preference datasets. While recent studies indicate that
existing offline DAP methods can directly benefit from online training samples,
we highlight the need to develop specific online DAP algorithms to fully
harness the power of online training. Specifically, we identify that the
learned LLM should remain in the proximity of the behavior LLM, which collects
the training samples. To this end, we propose online Preference Optimization in
proximity to the Behavior LLM (BPO), emphasizing the importance of constructing
a proper trust region for LLM alignment.
We conduct extensive experiments to validate the effectiveness and
applicability of our approach by integrating it with various DAP methods,
resulting in significant performance improvements across a wide range of tasks
when training with the same amount of preference data. Even when only
introducing one additional data collection phase, our online BPO improves its
offline DAP baseline from 72.0% to 80.2% on TL;DR and from 82.2% to 89.1% on
Anthropic Helpfulness in terms of win rate against human reference text.
By Jing Gu, Yuwei Fang, Ivan Skorokhodov, Peter Wonka, Xinya Du, Sergey Tulyakov, Xin Eric Wang
Video editing stands as a cornerstone of digital media, from entertainment
and education to professional communication. However, previous methods often
overlook the necessity of comprehensively understanding both global and local
contexts, leading to inaccurate and inconsistent edits in the spatiotemporal
dimension, especially for long videos. In this paper, we introduce VIA, a
unified spatiotemporal VIdeo Adaptation framework for global and local video
editing, pushing the limits of consistently editing minute-long videos. First,
to ensure local consistency within individual frames, the foundation of VIA is
a novel test-time editing adaptation method, which adapts a pre-trained image
editing model for improving consistency between potential editing directions
and the text instruction, and adapts masked latent variables for precise local
control. Furthermore, to maintain global consistency over the video sequence,
we introduce spatiotemporal adaptation that adapts consistent attention
variables in key frames and strategically applies them across the whole
sequence to realize the editing effects. Extensive experiments demonstrate
that, compared to baseline methods, our VIA approach produces edits that are
more faithful to the source videos, more coherent in the spatiotemporal
context, and more precise in local control. More importantly, we show that VIA
can achieve consistent long video editing in minutes, unlocking the potential
for advanced video editing tasks over long video sequences.
By Devichand Budagam, Sankalp KJ, Ashutosh Kumar, Vinija Jain, Aman Chadha
Assessing the effectiveness of large language models (LLMs) in addressing
diverse tasks is essential for comprehending their strengths and weaknesses.
Conventional evaluation techniques typically apply a single prompting strategy
uniformly across datasets, not considering the varying degrees of task
complexity. We introduce the Hierarchical Prompting Taxonomy (HPT), a taxonomy
that employs a Hierarchical Prompt Framework (HPF) composed of five unique
prompting strategies, arranged from the simplest to the most complex, to assess
LLMs more precisely and to offer a clearer perspective. This taxonomy assigns a
score, called the Hierarchical Prompting Score (HP-Score), to datasets as well
as LLMs based on the rules of the taxonomy, providing a nuanced understanding
of their ability to solve diverse tasks and offering a universal measure of
task complexity. Additionally, we introduce the Adaptive Hierarchical Prompt
framework, which automates the selection of appropriate prompting strategies
for each task. This study compares manual and adaptive hierarchical prompt
frameworks using four instruction-tuned LLMs, namely Llama 3 8B, Phi 3 3.8B,
Mistral 7B, and Gemma 7B, across four datasets: BoolQ, CommonSenseQA (CSQA),
IWSLT-2017 en-fr (IWSLT), and SamSum. Experiments demonstrate the effectiveness
of HPT, providing a reliable way to compare different tasks and LLM
capabilities. This paper leads to the development of a universal evaluation
metric that can be used to evaluate both the complexity of the datasets and the
capabilities of LLMs. The implementation of both manual HPF and adaptive HPF is
publicly available.
By Chen Henry Wu, Jing Yu Koh, Ruslan Salakhutdinov, Daniel Fried, Aditi Raghunathan
Vision-enabled language models (VLMs) are now used to build autonomous
multimodal agents capable of taking actions in real environments. In this
paper, we show that multimodal agents raise new safety risks, even though
attacking agents is more challenging than prior attacks due to limited access
to and knowledge about the environment. Our attacks use adversarial text
strings to guide gradient-based perturbation over one trigger image in the
environment: (1) our captioner attack attacks white-box captioners if they are
used to process images into captions as additional inputs to the VLM; (2) our
CLIP attack attacks a set of CLIP models jointly, which can transfer to
proprietary VLMs. To evaluate the attacks, we curated VisualWebArena-Adv, a set
of adversarial tasks based on VisualWebArena, an environment for web-based
multimodal agent tasks. Within an L-infinity norm of 16/256 on a single
image, the captioner attack can make a captioner-augmented GPT-4V agent execute
the adversarial goals with a 75% success rate. When we remove the captioner or
use GPT-4V to generate its own captions, the CLIP attack can achieve success
rates of 21% and 43%, respectively. Experiments on agents based on other VLMs,
such as Gemini-1.5, Claude-3, and GPT-4o, show interesting differences in their
robustness. Further analysis reveals several key factors contributing to the
attack's success, and we discuss the implications for defenses.
Project page: https://chenwu.io/attack-agent Code and data:
https://github.com/ChenWu98/agent-attack
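A sketch of the CLIP-ensemble attack loop under stated assumptions: clip_models and text_feats are placeholders for an ensemble exposing encode_image() and pre-normalized adversarial text embeddings, and the step count and step size are illustrative, not the paper's settings.

```python
import torch

def clip_ensemble_attack(image, text_feats, clip_models, eps=16/256,
                         steps=100, step_size=1/255):
    """image: (1, 3, H, W) in [0, 1]. PGD on one trigger image to pull its
    embedding toward the adversarial text embedding across all CLIP models."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for model, t in zip(clip_models, text_feats):
            f = model.encode_image(image + delta)
            f = f / f.norm(dim=-1, keepdim=True)
            loss = loss + torch.cosine_similarity(f, t).mean()
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()           # gradient ascent
            delta.clamp_(-eps, eps)                          # L-infinity ball
            delta.copy_((image + delta).clamp(0, 1) - image) # valid pixels
        delta.grad.zero_()
    return (image + delta).detach()
```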
Large models for text-to-music generation have achieved significant progress,
facilitating the creation of high-quality and varied musical compositions from
provided text prompts. However, input text prompts may not precisely capture
user requirements, particularly when the objective is to generate music that
embodies a specific concept derived from a designated reference collection. In
this paper, we propose a novel method for customized text-to-music generation,
which can capture the concept from two minutes of reference music and generate a
new piece of music conforming to the concept. We achieve this by fine-tuning a
pretrained text-to-music model using the reference music. However, directly
fine-tuning all parameters leads to overfitting issues. To address this
problem, we propose a Pivotal Parameters Tuning method that enables the model
to assimilate the new concept while preserving its original generative
capabilities. Additionally, we identify a potential concept conflict when
introducing multiple concepts into the pretrained model. We present a concept
enhancement strategy to distinguish multiple concepts, enabling the fine-tuned
model to generate music incorporating either individual or multiple concepts
simultaneously. Since we are the first to work on the customized music
generation task, we also introduce a new dataset and evaluation protocol for
the new task. Our proposed Jen1-DreamStyler outperforms several baselines in
both qualitative and quantitative evaluations. Demos will be available at
https://www.jenmusic.ai/research#DreamStyler.
In this paper, we point out that suboptimal noise-data mapping leads to slow
training of diffusion models. During diffusion training, current methods
diffuse each image across the entire noise space, resulting in a mixture of all
images at every point in the noise layer. We emphasize that this random mixture
of noise-data mapping complicates the optimization of the denoising function in
diffusion models. Drawing inspiration from the immiscible phenomenon in
physics, we propose Immiscible Diffusion, a simple and effective method to
improve the random mixture of noise-data mapping. In physics, miscibility can
vary according to various intermolecular forces. Thus, immiscibility means that
the mixing of the molecular sources is distinguishable. Inspired by this, we
propose an assignment-then-diffusion training strategy. Specifically, prior to
diffusing the image data into noise, we assign diffusion target noise for the
image data by minimizing the total image-noise pair distance in a mini-batch.
The assignment functions analogously to external forces to separate the
diffuse-able areas of images, thus mitigating the inherent difficulties in
diffusion training. Our approach is remarkably simple, requiring only one line
of code to restrict the diffuse-able area for each image while preserving the
Gaussian distribution of noise. This ensures that each image is projected only
to nearby noise. To address the high complexity of the assignment algorithm, we
employ a quantized-assignment method to reduce the computational overhead to a
negligible level. Experiments demonstrate that our method achieves up to 3x
faster training for consistency models and DDIM on the CIFAR dataset, and up to
1.3x faster training on the CelebA dataset for consistency models. Besides, we
conduct a thorough analysis of Immiscible Diffusion, which sheds light on how
it improves diffusion training speed while also improving fidelity.
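The assignment step can be sketched in a few lines; the batch wiring here is illustrative, with the linear assignment carrying the method's core idea of matching each image to nearby noise within the mini-batch.

```python
import torch
from scipy.optimize import linear_sum_assignment

def assign_noise(images: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
    """Reorder a Gaussian noise batch to minimize the total image-noise pair
    distance. The marginal stays Gaussian because we only permute the batch."""
    cost = torch.cdist(images.flatten(1), noise.flatten(1))  # (B, B) distances
    _, cols = linear_sum_assignment(cost.cpu().numpy())
    return noise[cols]

images = torch.randn(8, 3, 32, 32)
noise = torch.randn(8, 3, 32, 32)
matched = assign_noise(images, noise)  # use `matched` as the diffusion targets
```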
By Wenkai Yang, Shiqi Shen, Guangyao Shen, Zhi Gong, Yankai Lin
Superalignment, where humans are weak supervisors of superhuman models, has
become an important and widely discussed issue in the current era of rapid
development of Large Language Models (LLMs). The recent work preliminarily
studies this problem by using weak models to supervise strong models. It
discovers that weakly supervised strong students can consistently outperform
weak teachers towards the alignment target, leading to a weak-to-strong
generalization phenomenon. However, we are concerned that behind such a
promising phenomenon there may lie an issue of weak-to-strong deception, where
strong models deceive weak models by appearing well-aligned in areas the weak
models know about while producing misaligned behaviors in cases the weak
models do not know. We then take an initial step towards
exploring this security issue in a specific but realistic multi-objective
alignment case, where there may be some alignment targets conflicting with each
other (e.g., helpfulness vs. harmlessness). Such a conflict is likely to cause
strong models to deceive weak models in one alignment dimension to gain high
reward in another alignment dimension. Our experiments on both the reward
modeling task and the preference optimization scenario indicate: (1) the
weak-to-strong deception exists; (2) the deception phenomenon may intensify as
the capability gap between weak and strong models increases. We also discuss
potential solutions and find bootstrapping with an intermediate model can
mitigate the deception to some extent. Our work highlights the urgent need to
pay more attention to the true reliability of superalignment.
In this paper, we introduce a subspace-inspired Low-Rank Adaptation (LoRA)
method, which is computationally efficient, easy to implement, and readily
applicable to large language, multimodal, and diffusion models. Initially, we
equivalently decompose the weights of LoRA into two subspaces, and find that
simply mixing them can enhance performance. To study such a phenomenon, we
revisit it through a fine-grained subspace lens, showing that such modification
is equivalent to employing a fixed mixer to fuse the subspaces. To be more
flexible, we jointly learn the mixer with the original LoRA weights, and term
the method Mixture-of-Subspaces LoRA (MoSLoRA). MoSLoRA consistently
outperforms LoRA on tasks in different modalities, including commonsense
reasoning, visual instruction tuning, and subject-driven text-to-image
generation, demonstrating its effectiveness and robustness. Codes are available
at https://github.com/wutaiqiang/MoSLoRA.
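A minimal MoSLoRA-style sketch (the initialization and scaling are assumptions): a learnable r x r mixer is inserted between the LoRA down- and up-projections, turning the update BA into B @ Mixer @ A.

```python
import torch
import torch.nn as nn

class MoSLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.mixer = nn.Parameter(torch.eye(r))   # fuses the r subspaces
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Equivalent to x @ (B @ mixer @ A).T on top of the frozen base layer.
        delta = x @ self.A.t() @ self.mixer.t() @ self.B.t()
        return self.base(x) + self.scaling * delta

layer = MoSLoRALinear(nn.Linear(64, 64))
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```

With the mixer fixed to the identity this reduces to vanilla LoRA; learning it jointly is what lets the subspaces mix.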