By Gagan Bhatia, El Moatez Billah Nagoudi, Hasan Cavusoglu, Muhammad Abdul-Mageed
We introduce FinTral, a suite of state-of-the-art multimodal large language
models (LLMs) built upon the Mistral-7b model and tailored for financial
analysis. FinTral integrates textual, numerical, tabular, and image data. We
enhance FinTral with domain-specific pretraining, instruction fine-tuning, and
RLAIF training by exploiting a large collection of textual and visual datasets
we curate for this work. We also introduce an extensive benchmark featuring
nine tasks and 25 datasets for evaluation, including hallucinations in the
financial domain. Our FinTral model trained with direct preference optimization
employing advanced Tools and Retrieval methods, dubbed FinTral-DPO-T&R,
demonstrates exceptional zero-shot performance. It outperforms ChatGPT-3.5
in all tasks and surpasses GPT-4 in five out of nine tasks, marking a
significant advancement in AI-driven financial technology. We also demonstrate
that FinTral has the potential to excel in real-time analysis and
decision-making in diverse financial contexts.
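FinTral-DPO-T&R's preference stage uses direct preference optimization. The following is a minimal sketch of the standard DPO objective, not FinTral's actual training code; tensor names and the beta value are illustrative:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct preference optimization loss (sketch).

    Each argument is a tensor of per-sequence log-probabilities
    (summed over tokens) of the chosen/rejected responses under the
    policy being trained and a frozen reference model.
    """
    chosen = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the implicit reward margin between chosen and rejected.
    return -F.logsigmoid(chosen - rejected).mean()
```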
By Zeyu Lu, Zidong Wang, Di Huang, Chengyue Wu, Xihui Liu, Wanli Ouyang, Lei Bai
Nature is infinitely resolution-free. In the context of this reality,
existing diffusion models, such as Diffusion Transformers, often face
challenges when processing image resolutions outside of their trained domain.
To overcome this limitation, we present the Flexible Vision Transformer (FiT),
a transformer architecture specifically designed for generating images with
unrestricted resolutions and aspect ratios. Unlike traditional methods that
perceive images as static-resolution grids, FiT conceptualizes images as
sequences of dynamically-sized tokens. This perspective enables a flexible
training strategy that effortlessly adapts to diverse aspect ratios during both
training and inference phases, thus promoting resolution generalization and
eliminating biases induced by image cropping. Enhanced by a meticulously
adjusted network structure and the integration of training-free extrapolation
techniques, FiT exhibits remarkable flexibility in resolution extrapolation
during generation. Comprehensive experiments demonstrate the exceptional performance
of FiT across a broad range of resolutions, showcasing its effectiveness both
within and beyond its training resolution distribution. Repository available at
https://github.com/whlzy/FiT.
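Since FiT views an image as a dynamically sized token sequence, arbitrary resolutions and aspect ratios reduce to variable sequence lengths. A minimal sketch of patchifying and packing such sequences with padding masks follows; the patch size and packing scheme are illustrative, not FiT's exact configuration:

```python
import torch

def patchify(img, patch=16):
    """Split a float (C, H, W) image into an (N, C*patch*patch) token
    sequence. H and W may be any multiples of `patch`, so the token
    count N varies with resolution and aspect ratio."""
    c, h, w = img.shape
    tokens = img.unfold(1, patch, patch).unfold(2, patch, patch)
    return tokens.permute(1, 2, 0, 3, 4).reshape(-1, c * patch * patch)

def pack_batch(images, max_len, patch=16):
    """Pad variable-length token sequences to a common length and
    return an attention mask so padded positions are ignored."""
    seqs = [patchify(im, patch) for im in images]
    dim = seqs[0].shape[-1]
    batch = torch.zeros(len(seqs), max_len, dim)
    mask = torch.zeros(len(seqs), max_len, dtype=torch.bool)
    for i, s in enumerate(seqs):
        n = min(len(s), max_len)
        batch[i, :n] = s[:n]
        mask[i, :n] = True
    return batch, mask
```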
By Jun Zhan, Junqi Dai, Jiasheng Ye, Yunhua Zhou, Dong Zhang, Zhigeng Liu, Xin Zhang, Ruibin Yuan, Ge Zhang, Linyang Li, Hang Yan, Jie Fu, Tao Gui, Tianxiang Sun, Yugang Jiang, Xipeng Qiu
We introduce AnyGPT, an any-to-any multimodal language model that utilizes
discrete representations for the unified processing of various modalities,
including speech, text, images, and music. AnyGPT can be trained stably without
any alterations to the current large language model (LLM) architecture or
training paradigms. Instead, it relies exclusively on data-level preprocessing,
facilitating the seamless integration of new modalities into LLMs, akin to the
incorporation of new languages. We build a multimodal text-centric dataset for
multimodal alignment pre-training. Utilizing generative models, we synthesize
the first large-scale any-to-any multimodal instruction dataset. It consists of
108k samples of multi-turn conversations that intricately interweave various
modalities, thus equipping the model to handle arbitrary combinations of
multimodal inputs and outputs. Experimental results demonstrate that AnyGPT is
capable of facilitating any-to-any multimodal conversation while achieving
performance comparable to specialized models across all modalities, proving
that discrete representations can effectively and conveniently unify multiple
modalities within a language model. Demos are shown in
https://junzhan2000.github.io/AnyGPT.github.io/
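The data-level preprocessing boils down to mapping every modality into one discrete token space. A minimal sketch of merging modality-specific codebooks into a shared vocabulary by offsetting token IDs; the codebook sizes are hypothetical, not AnyGPT's actual tokenizers:

```python
# Hypothetical codebook sizes; real discrete tokenizers (speech/image/
# music VQ models) would supply these.
TEXT_VOCAB = 32_000
CODEBOOKS = {"image": 8192, "speech": 1024, "music": 4096}

# Each modality gets a contiguous ID range after the text vocabulary.
offsets, base = {}, TEXT_VOCAB
for name, size in CODEBOOKS.items():
    offsets[name] = base
    base += size

def to_unified_ids(modality, codes):
    """Shift modality-specific discrete codes into the shared vocab."""
    off = offsets[modality]
    return [off + c for c in codes]

# e.g. image code 5 and speech code 5 map to distinct unified IDs:
assert to_unified_ids("image", [5]) != to_unified_ids("speech", [5])
```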
By Nikhil Bhendawade, Irina Belousova, Qichen Fu, Henry Mason, Mohammad Rastegari, Mahyar Najibi
Speculative decoding is a prominent technique to speed up the inference of a
large target language model based on predictions of an auxiliary draft model.
While effective, it often requires fine-tuning both the draft and target
models in application-specific settings to achieve high acceptance rates. As
the number of downstream tasks grows, these draft models add significant
complexity to inference systems. We propose Speculative Streaming, a
single-model speculative decoding method that fuses drafting into the target
model by changing the fine-tuning objective from next token prediction to
future n-gram prediction. Speculative Streaming speeds up decoding by
1.8-3.1X in a diverse set of tasks, such as Summarization, Structured Queries, and
Meaning Representation, without sacrificing generation quality. Additionally,
Speculative Streaming is parameter-efficient. It achieves on-par or higher
speed-ups than Medusa-style architectures while using ~10000X fewer extra
parameters, making it well-suited for resource-constrained devices.
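A rough sketch of the draft-and-verify loop behind single-model speculation: hypothetical n-gram heads (`draft_heads`) guess a few future tokens, which the target model then confirms greedily. A real implementation verifies all guesses in one batched forward pass; the per-token loop here is for clarity only:

```python
def speculative_decode(model, draft_heads, prompt, max_new=64):
    """Greedy speculative decoding with single-model drafting (sketch).

    `model(seq)` returns the greedy next token for `seq`, and
    `draft_heads(seq)` returns k cheap guesses for the tokens after
    that; both are assumed helpers, not the paper's API.
    """
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        seq.append(model(seq))             # one committed token
        for guess in draft_heads(seq):     # k speculative tokens
            if len(seq) - len(prompt) >= max_new:
                break
            if model(seq) == guess:        # target model agrees?
                seq.append(guess)
            else:
                break                      # reject the rest of the draft
    return seq
```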
Model quantization uses low bit-width values to represent the weight
matrices of models, a promising approach to reducing both the storage and
computational overheads of deploying LLMs. However, existing
quantization methods suffer severe performance degradation when the bit-width
is extremely reduced, and thus focus on utilizing 4-bit or 8-bit values to
quantize models. This paper boldly quantizes the weight matrices of LLMs to
1-bit, paving the way for the extremely low bit-width deployment of LLMs. For
this target, we introduce a 1-bit quantization-aware training (QAT) framework
named OneBit, including a novel 1-bit parameter representation method to better
quantize LLMs as well as an effective parameter initialization method based on
matrix decomposition to improve the convergence speed of the QAT framework.
Extensive experimental results indicate that OneBit achieves good performance
(retaining at least 83% of the non-quantized performance) with robust
training, using only 1-bit weight matrices.
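A minimal sketch of a sign-based 1-bit factorization with rank-1 value vectors, in the spirit of OneBit's matrix-decomposition initialization; the paper's exact procedure may differ:

```python
import torch

def svid(W):
    """Sign-value decomposition sketch: W ~ sign(W) * outer(a, b).

    The 1-bit part is sign(W); the rank-1 scaling factor (a, b) comes
    from the top singular vectors of |W|.
    """
    sign = torch.sign(W)
    U, S, Vh = torch.linalg.svd(torch.abs(W), full_matrices=False)
    a = U[:, 0] * S[0].sqrt()
    b = Vh[0, :] * S[0].sqrt()
    return sign, a, b

W = torch.randn(256, 512)
sign, a, b = svid(W)
W_hat = sign * torch.outer(a, b)      # 1-bit matrix + two FP16 vectors
print((W - W_hat).norm() / W.norm())  # relative reconstruction error
```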
By Byung-Kwan Lee, Beomchan Park, Chae Won Kim, Yong Man Ro
The remarkable success of Large Language Models (LLMs) and instruction tuning
drives the evolution of Vision Language Models (VLMs) towards a versatile
general-purpose model. Yet it remains unexplored whether current VLMs
genuinely possess object-level image understanding, as probed by questions
such as 'what objects are in the image?' or 'which object corresponds
to a specified bounding box?'. Our findings reveal that the image understanding
capabilities of current VLMs are strongly correlated with their zero-shot
performance on Vision Language (VL) tasks. This suggests that prioritizing
basic image understanding is crucial for VLMs to excel at VL tasks. To enhance
object-level image understanding, we propose Crayon Large Language and Vision
mOdel (CoLLaVO), which incorporates instruction tuning with the crayon prompt,
a new visual prompt tuning scheme based on panoptic color maps. Furthermore, we
present Dual QLoRA, a learning strategy that preserves object-level image
understanding without forgetting during visual instruction tuning, thereby
achieving a significant leap on numerous zero-shot VL benchmarks.
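A minimal sketch of rendering a panoptic segmentation map as a color-map visual prompt, in the spirit of the crayon prompt; the palette and alpha blending are illustrative, not CoLLaVO's exact construction:

```python
import numpy as np

def crayon_prompt(panoptic_ids, image=None, alpha=0.5, seed=0):
    """Render an (H, W) panoptic segment-ID map as a color overlay.

    Each segment gets a fixed random color; the result can be blended
    with the uint8 RGB image as an object-level visual prompt.
    """
    rng = np.random.default_rng(seed)
    color_map = np.zeros((*panoptic_ids.shape, 3), dtype=np.uint8)
    for seg_id in np.unique(panoptic_ids):
        color = rng.integers(0, 256, size=3, dtype=np.uint8)
        color_map[panoptic_ids == seg_id] = color
    if image is None:
        return color_map
    return (alpha * color_map + (1 - alpha) * image).astype(np.uint8)
```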
By Jacky Liang, Fei Xia, Wenhao Yu, Andy Zeng, Montserrat Gonzalez Arenas, Maria Attarian, Maria Bauza, Matthew Bennice, Alex Bewley, Adil Dostmohamed, Chuyuan Kelly Fu, Nimrod Gileadi, Marissa Giustina, Keerthana Gopalakrishnan, Leonard Hasenclever, Jan Humplik, Jasmine Hsu, Nikhil Joshi, Ben Jyenis, Chase Kew, Sean Kirmani, Tsang-Wei Edward Lee, Kuang-Huei Lee, Assaf Hurwitz Michaely, Joss Moore, Ken Oslund, Dushyant Rao, Allen Ren, Baruch Tabanpour, Quan Vuong, Ayzaan Wahid, Ted Xiao, Ying Xu, Vincent Zhuang, Peng Xu, Erik Frey, Ken Caluwaerts, Tingnan Zhang, Brian Ichter, Jonathan Tompson, Leila Takayama, Vincent Vanhoucke, Izhak Shafran, Maja Mataric, Dorsa Sadigh, Nicolas Heess, Kanishka Rao, Nik Stewart, Jie Tan, Carolina Parada
Large language models (LLMs) have been shown to exhibit a wide range of
capabilities, such as writing robot code from language commands -- enabling
non-experts to direct robot behaviors, modify them based on feedback, or
compose them to perform new tasks. However, these capabilities (driven by
in-context learning) are limited to short-term interactions, where users'
feedback remains relevant for only as long as it fits within the context size
of the LLM, and can be forgotten over longer interactions. In this work, we
investigate fine-tuning robot code-writing LLMs to remember their
in-context interactions and improve their teachability, i.e., how efficiently
they adapt to human inputs (measured by the average number of corrections
needed before the user considers the task successful). Our key observation is that when
human-robot interactions are formulated as a partially observable Markov
decision process (in which human language inputs are observations, and robot
code outputs are actions), then training an LLM to complete previous
interactions can be viewed as training a transition dynamics model -- that can
be combined with classic robotics techniques such as model predictive control
(MPC) to discover shorter paths to success. This gives rise to Language Model
Predictive Control (LMPC), a framework that fine-tunes PaLM 2 to improve its
teachability on 78 tasks across 5 robot embodiments -- improving non-expert
teaching success rates of unseen tasks by 26.9% while reducing the average
number of human corrections from 2.4 to 1.9. Experiments show that LMPC also
produces strong meta-learners, improving the success rate of in-context
learning new tasks on unseen robot embodiments and APIs by 31.5%. See videos,
code, and demos at: https://robot-teaching.github.io/.
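A minimal sketch of casting a teaching session as transition-model training data, with human feedback as POMDP observations and robot code as actions; the prompt format is illustrative:

```python
def to_transition_examples(session):
    """Format one teaching session as LLM fine-tuning examples.

    `session` is a list of (human_feedback, robot_code) turns. Treating
    feedback as observations and code as actions, each example asks the
    model to predict the next action given the interaction so far --
    i.e., to act as a transition-dynamics model.
    """
    examples, history = [], ""
    for feedback, code in session:
        history += f"Human: {feedback}\n"
        examples.append({"prompt": history + "Robot code:",
                         "completion": code})
        history += f"Robot code: {code}\n"
    return examples
```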
By Jun Zhao, Can Zu, Hao Xu, Yi Lu, Wei He, Yiwen Ding, Tao Gui, Qi Zhang, Xuanjing Huang
Large language models (LLMs) have demonstrated impressive performance in
understanding language and executing complex reasoning tasks. However, LLMs
with long context windows have been notorious for their expensive training
costs and high inference latency. Even the most advanced models, such as GPT-4
and Claude 2, often make mistakes when processing inputs of over 100k tokens,
a phenomenon known as 'lost in the middle'. In this paper, we propose
LongAgent, a method based on multi-agent collaboration that scales
LLMs (e.g., LLaMA) to a 128K-token context and demonstrates potential superiority
in long-text processing compared to GPT-4. In LongAgent, a leader is
responsible for understanding user intent and directing team members to acquire
information from documents. Due to members' hallucinations, it is non-trivial
for a leader to obtain accurate information from the responses of dozens to
hundreds of members. To address this, we develop an inter-member
communication mechanism to resolve response conflicts caused by hallucinations
through information sharing. Our experimental results indicate that
LongAgent offers a promising alternative for long-text processing. The
agent team instantiated with LLaMA-7B achieves significant improvements over
GPT-4 in tasks such as 128k-long text retrieval and multi-hop question
answering.
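A simplified sketch of the leader-member pattern: each member answers from one document chunk, and the leader resolves conflicts, here by majority vote as a crude stand-in for the paper's inter-member communication mechanism. `llm` is an assumed short-context completion function:

```python
from collections import Counter

def long_agent_qa(llm, document, question, chunk_size=4000):
    """Leader--member long-text QA (simplified sketch of LongAgent).

    Each member reads one chunk; the leader aggregates the answers and
    resolves hallucination-induced conflicts by majority vote.
    """
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    answers = [llm(f"Context:\n{c}\n\nQ: {question}\nA:") for c in chunks]
    votes = Counter(a for a in answers if a.strip().lower() != "not found")
    return votes.most_common(1)[0][0] if votes else "not found"
```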
The quality of finetuning data is crucial for aligning large language models
(LLMs) with human values. Current methods to improve data quality are either
labor-intensive or prone to factual errors caused by LLM hallucinations. This
paper explores elevating the quality of existing instruction data to better
align with human values, introducing a simple and effective approach named
ReAlign, which reformats the responses of instruction data into a format that
better aligns with pre-established criteria and the collated evidence. This
approach minimizes human annotation, hallucination, and scaling difficulty,
and remains orthogonal to existing alignment techniques. Experimentally,
ReAlign significantly boosts the general alignment ability, math reasoning,
factuality, and readability of the LLMs.
Encouragingly, without introducing any additional data or advanced training
techniques, and merely by reformatting the response, LLaMA-2-13B's mathematical
reasoning ability on GSM8K can be improved from 46.77% to 56.63% in accuracy.
Additionally, a mere 5% of ReAlign data yields a 67% boost in general alignment
ability measured by the Alpaca dataset. This work highlights the need for
further research into the science and mechanistic interpretability of LLMs. We
have made the associated code and data publicly accessible to support future
studies at https://github.com/GAIR-NLP/ReAlign.
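A minimal sketch of the reformatting step, assuming an `llm` rewriting function; the criteria, evidence handling, and prompt wording are illustrative rather than ReAlign's actual pipeline:

```python
def realign(llm, instruction, response, criteria, evidence=""):
    """Rewrite an existing response into a format that satisfies
    task-specific criteria, optionally grounded in retrieved evidence
    (simplified sketch)."""
    prompt = (
        f"Instruction: {instruction}\n"
        f"Original response: {response}\n"
        f"Evidence: {evidence}\n"
        f"Rewrite the response so that it follows this format:\n{criteria}\n"
        "Keep the original facts; only change the presentation."
    )
    return llm(prompt)
```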
By Alex Havrilla, Sharath Raparthy, Christoforus Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi, Eric Hambro, Roberta Raileanu
State-of-the-art language models can exhibit impressive reasoning refinement
capabilities on math, science or coding tasks. However, recent work
demonstrates that even the best models struggle to identify when and
where to refine without access to external feedback. Outcome-based Reward
Models (ORMs), trained to predict the correctness of the final answer, offer
one convenient solution for deciding when to refine. Process-Based Reward
Models (PRMs), trained to predict the correctness of intermediate steps, can
then be used to indicate where to refine, but they are expensive to train,
requiring extensive human annotation.
In this paper, we propose Stepwise ORMs (SORMs), which are trained
only on synthetic data to approximate the expected future reward of the
optimal policy, i.e., the optimal value function V*. More specifically, SORMs are trained to predict
the correctness of the final answer when sampling the current policy many times
(rather than only once as in the case of ORMs). Our experiments show that SORMs
can more accurately detect incorrect reasoning steps compared to ORMs, thus
improving downstream accuracy when doing refinements. We then train
global refinement models, which take only the question and a draft
solution as input and predict a corrected solution, and local
refinement models which also take as input a critique indicating the location
of the first reasoning error. We generate training data for both models
synthetically by reusing data used to train the SORM. We find combining global
and local refinements, using the ORM as a reranker, significantly outperforms
either one individually, as well as a best-of-three-samples baseline. With this
strategy we can improve the accuracy of a LLaMA-2 13B model (already fine-tuned
with RL) on GSM8K from 53% to 65% when greedily sampled.
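A minimal sketch of generating SORM training labels by Monte Carlo rollouts from each intermediate step; `policy` and `answer_checker` are assumed helpers, not the paper's code:

```python
def sorm_labels(policy, problem, steps, answer_checker, k=8):
    """Label each intermediate reasoning step for SORM training.

    From the state after step i, roll out the current policy k times;
    the step is labeled positive if any rollout reaches a correct
    final answer -- a Monte Carlo estimate of the optimal value V* at
    that state. `policy(prefix)` completes a solution from a partial
    one, and `answer_checker(solution)` verifies its final answer.
    """
    labels = []
    for i in range(len(steps)):
        prefix = problem + "\n" + "\n".join(steps[:i + 1])
        rollouts = [policy(prefix) for _ in range(k)]
        labels.append(int(any(answer_checker(r) for r in rollouts)))
    return labels
```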
This paper presents a novel method for exerting fine-grained lighting control
during text-driven diffusion-based image generation. While existing diffusion
models already have the ability to generate images under any lighting
condition, without additional guidance these models tend to correlate image
content and lighting. Moreover, text prompts lack the necessary expressional
power to describe detailed lighting setups. To provide the content creator with
fine-grained control over the lighting during image generation, we augment the
text-prompt with detailed lighting information in the form of radiance hints,
i.e., visualizations of the scene geometry with a homogeneous canonical
material under the target lighting. However, the scene geometry needed to
produce the radiance hints is unknown. Our key observation is that we only need
to guide the diffusion process, hence exact radiance hints are not necessary;
we only need to point the diffusion model in the right direction. Based on this
observation, we introduce a three-stage method for controlling the lighting
during image generation. In the first stage, we leverage a standard pretrained
diffusion model to generate a provisional image under uncontrolled lighting.
Next, in the second stage, we resynthesize and refine the foreground object in
the generated image by passing the target lighting to a refined diffusion
model, named DiLightNet, using radiance hints computed on a coarse shape of the
foreground object inferred from the provisional image. To retain the texture
details, we multiply the radiance hints with a neural encoding of the
provisional synthesized image before passing it to DiLightNet. Finally, in the
third stage, we resynthesize the background to be consistent with the lighting
on the foreground object. We demonstrate and validate our lighting controlled
diffusion model on a variety of text prompts and lighting conditions.
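A minimal sketch of the texture-preserving conditioning step: the provisional image passes through a small learned encoder whose output is multiplied channel-wise with the stacked radiance hints before being passed to DiLightNet; the channel count and encoder are illustrative:

```python
import torch
import torch.nn as nn

class HintConditioner(nn.Module):
    """Combine radiance hints with the provisional image (sketch)."""

    def __init__(self, channels=12):
        super().__init__()
        # Learned encoding of the provisional image carries texture cues.
        self.encode = nn.Conv2d(3, channels, kernel_size=3, padding=1)

    def forward(self, provisional_rgb, radiance_hints):
        # radiance_hints: (B, channels, H, W), e.g. 4 hint renders x RGB.
        return self.encode(provisional_rgb) * radiance_hints

cond = HintConditioner()(torch.randn(1, 3, 64, 64),
                         torch.randn(1, 12, 64, 64))
print(cond.shape)  # torch.Size([1, 12, 64, 64])
```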
By Christian Reiser, Stephan Garbin, Pratul P. Srinivasan, Dor Verbin, Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman, Andreas Geiger
While surface-based view synthesis algorithms are appealing due to their low
computational requirements, they often struggle to reproduce thin structures.
In contrast, more expensive methods that model the scene's geometry as a
volumetric density field (e.g. NeRF) excel at reconstructing fine geometric
detail. However, density fields often represent geometry in a "fuzzy" manner,
which hinders exact localization of the surface. In this work, we modify
density fields to encourage them to converge towards surfaces, without
compromising their ability to reconstruct thin structures. First, we employ a
discrete opacity grid representation instead of a continuous density field,
which allows opacity values to discontinuously transition from zero to one at
the surface. Second, we anti-alias by casting multiple rays per pixel, which
allows occlusion boundaries and subpixel structures to be modelled without
using semi-transparent voxels. Third, we minimize the binary entropy of the
opacity values, which facilitates the extraction of surface geometry by
encouraging opacity values to binarize towards the end of training. Lastly, we
develop a fusion-based meshing strategy followed by mesh simplification and
appearance model fitting. The compact meshes produced by our model can be
rendered in real-time on mobile devices and achieve significantly higher view
synthesis quality compared to existing mesh-based approaches.
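The binarization term is simply the binary entropy of each opacity value; minimizing it penalizes intermediate opacities and drives the grid toward 0/1 values. A minimal sketch (the loss weighting and schedule used in training are not shown):

```python
import torch

def binary_entropy_loss(opacity, eps=1e-6):
    """Entropy regularizer that pushes opacities toward 0 or 1.

    Minimizing -[o*log(o) + (1-o)*log(1-o)] penalizes intermediate
    values, so the opacity grid binarizes toward the end of training.
    """
    o = opacity.clamp(eps, 1 - eps)
    return -(o * o.log() + (1 - o) * (1 - o).log()).mean()
```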
Despite vision-language models' (VLMs) remarkable capabilities as versatile
visual assistants, two substantial challenges persist within existing VLM
frameworks: (1) a lack of task diversity in pretraining and visual instruction
tuning, and (2) annotation errors and bias in GPT-4 synthesized instruction
tuning data. Both challenges lead to issues such as poor generalizability,
hallucination, and catastrophic forgetting. To address these challenges, we
construct Vision-Flan, the most diverse publicly available visual instruction
tuning dataset to date, comprising 187 diverse tasks and 1,664,261 instances
sourced from academic datasets, with each task accompanied by an
expert-written instruction. In addition, we propose a two-stage instruction
tuning framework, in which VLMs are first finetuned on Vision-Flan and
then further tuned on GPT-4 synthesized data. We find this two-stage tuning
framework significantly outperforms the traditional single-stage visual
instruction tuning framework and achieves the state-of-the-art performance
across a wide range of multi-modal evaluation benchmarks. Finally, we conduct
in-depth analyses to understand visual instruction tuning and our findings
reveal that: (1) GPT-4 synthesized data does not substantially enhance VLMs'
capabilities but rather modulates the model's responses to human-preferred
formats; (2) a minimal quantity (e.g., 1,000 examples) of GPT-4 synthesized
data can effectively align VLM responses with human preferences; (3) visual instruction
tuning mainly helps large-language models (LLMs) to understand visual features.
By Xuelin Qian, Yu Wang, Simian Luo, Yinda Zhang, Ying Tai, Zhenyu Zhang, Chengjie Wang, Xiangyang Xue, Bo Zhao, Tiejun Huang, Yunsheng Wu, Yanwei Fu
Auto-regressive models have achieved impressive results in 2D image
generation by modeling joint distributions in grid space. In this paper, we
extend auto-regressive models to 3D domains and seek stronger 3D shape
generation by improving auto-regressive models in both capacity and
scalability. Firstly, we leverage an ensemble of publicly
available 3D datasets to facilitate the training of large-scale models. It
consists of a comprehensive collection of approximately 900,000 objects, with
multiple properties of meshes, points, voxels, rendered images, and text
captions. This diverse labeled dataset, termed Objaverse-Mix, empowers our
model to learn from a wide range of object variations. However, directly
applying 3D auto-regression encounters critical challenges of high
computational demands on volumetric grids and ambiguous auto-regressive order
along grid dimensions, resulting in inferior quality of 3D shapes. To this end,
we present Argus3D, a novel framework that addresses capacity. Concretely, our
approach introduces discrete representation learning based on a latent vector
instead of volumetric grids, which not only reduces computational costs but
also preserves essential geometric details by learning the joint distributions
in a more tractable order. The capacity of conditional generation can thus be
realized by simply concatenating various conditioning inputs to the latent
vector, such as point clouds, categories, images, and texts. In addition,
thanks to the simplicity of our model architecture, we naturally scale up our
approach to a larger model with an impressive 3.6 billion parameters, further
enhancing the quality of versatile 3D generation. Extensive experiments on four
generation tasks demonstrate that Argus3D can synthesize diverse and faithful
shapes across multiple categories, achieving remarkable performance.
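A minimal sketch of how conditional generation can be realized by concatenating a condition embedding to the latent token sequence before the auto-regressive transformer; the shapes and embedding dimensions are illustrative:

```python
import torch

def build_ar_input(cond_emb, latent_tokens_emb):
    """Conditional 3D auto-regression by concatenation (sketch).

    `cond_emb` embeds the condition (image, text, category, or point
    cloud) and is simply prepended to the latent-vector token
    embeddings; the transformer then predicts latent tokens left to
    right.
    """
    # cond_emb: (B, Nc, D); latent_tokens_emb: (B, Nl, D)
    return torch.cat([cond_emb, latent_tokens_emb], dim=1)

x = build_ar_input(torch.randn(2, 4, 512), torch.randn(2, 256, 512))
print(x.shape)  # torch.Size([2, 260, 512])
```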