Language models have shown effectiveness in a variety of software
applications, particularly in tasks related to workflow automation. These
models possess the crucial ability to call functions, which is essential for
creating AI agents. Despite the high performance of large-scale language
models in cloud environments, they are often associated with concerns over
privacy and cost. Current on-device models for function calling face issues
with latency and accuracy. Our research presents a new method that empowers
an on-device model with 2 billion parameters to surpass GPT-4 in both
accuracy and latency, while decreasing the context length by 95%. Compared to
Llama-7B with a RAG-based function-calling mechanism, our method improves
latency by a factor of 35. This method reduces latency to levels deemed
suitable for deployment across a variety of edge devices in production
environments, aligning with the performance requirements of real-world
applications.
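The abstract does not spell out how the 95% context reduction is achieved; the sketch below only illustrates one plausible mechanism, in which each callable function is bound to a short dedicated token so that full JSON schemas need not be repeated in every prompt. The names, the token scheme, and the output format are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch: shrinking function-calling prompts by binding each tool
# to a short dedicated token instead of repeating its full schema per request.
# The token scheme and output format below are assumptions for illustration.

FUNCTION_TOKENS = {
    "<fn_0>": "set_alarm(hour: int, minute: int)",
    "<fn_1>": "send_message(contact: str, body: str)",
}

def build_prompt(user_query: str) -> str:
    # At inference time the prompt stays short: the model has already learned
    # which function each token denotes, so no schema text is attached.
    return f"Query: {user_query}\nCall:"

def parse_call(model_output: str) -> tuple[str, str]:
    # Assumed output format: "<fn_k>(arg1, arg2)".
    token, _, args = model_output.partition("(")
    return FUNCTION_TOKENS.get(token.strip(), "unknown"), args.rstrip(")")

print(build_prompt("wake me at 7:30"))
print(parse_call("<fn_0>(7, 30)"))  # ('set_alarm(hour: int, minute: int)', '7, 30')
```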
By Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding, Xingyao Wang, Jia Deng, Boji Shan, Huimin Chen, Ruobing Xie, Yankai Lin, Zhenghao Liu, Bowen Zhou, Hao Peng, Zhiyuan Liu, Maosong Sun
We introduce Eurus, a suite of large language models (LLMs) optimized for
reasoning. Finetuned from Mistral-7B and CodeLlama-70B, Eurus models achieve
state-of-the-art results among open-source models on a diverse set of
benchmarks covering mathematics, code generation, and logical reasoning
problems. Notably, Eurus-70B beats GPT-3.5 Turbo in reasoning through
comprehensive benchmarking across 12 tests covering five tasks, and achieves
a 33.3% pass@1 accuracy on LeetCode and 32.6% on TheoremQA, two challenging
benchmarks, substantially outperforming existing open-source models by
margins of more than 13.3%. The strong performance of Eurus can be primarily
attributed to
UltraInteract, our newly-curated large-scale, high-quality alignment dataset
specifically designed for complex reasoning tasks. UltraInteract can be used in
both supervised fine-tuning and preference learning. For each instruction, it
includes a preference tree consisting of (1) reasoning chains with diverse
planning strategies in a unified format, (2) multi-turn interaction
trajectories with the environment and the critique, and (3) pairwise data to
facilitate preference learning. UltraInteract allows us to conduct an in-depth
exploration of preference learning for reasoning tasks. Our investigation
reveals that some well-established preference learning algorithms may be less
suitable for reasoning tasks compared to their effectiveness in general
conversations. Inspired by this, we derive a novel reward modeling objective
which, together with UltraInteract, leads to a strong reward model.
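Since UltraInteract pairs each correct action with a flawed one, the sketch below shows the shape such a preference pair might take, together with the conventional Bradley-Terry reward-model loss that pairwise data of this kind typically feeds into. The paper's own modified objective is not reproduced here, and the example pair is invented for illustration.

```python
# Minimal sketch, not the paper's method: one invented preference pair of the
# kind a preference tree could yield, plus the standard Bradley-Terry loss that
# pairwise reward-model training conventionally uses.
import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Push the reward of the preferred trajectory above the rejected one:
    # loss = -log sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(r_chosen - r_rejected).mean()

pair = {
    "instruction": "What is the sum of the first 10 odd numbers?",
    "chosen": "The sum of the first n odd numbers is n^2, so the answer is 100.",
    "rejected": "Adding them up gives 90.",  # flawed trajectory kept for contrast
}

# Dummy scalar rewards standing in for a reward model's outputs on the pair.
loss = bradley_terry_loss(torch.tensor([2.1]), torch.tensor([0.3]))
print(round(float(loss), 4))
```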
Large Language Models (LLMs) have made significant strides in handling long
sequences exceeding 32K tokens. However, their performance evaluation has
largely been confined to metrics like perplexity and synthetic tasks, which may
not fully capture their abilities in more nuanced, real-world scenarios. This
study introduces a specialized benchmark (LIConBench) focusing on long
in-context learning within the realm of extreme-label classification. We
meticulously selected six datasets with label ranges spanning 28 to 174
classes and input (few-shot demonstration) lengths ranging from 2K to 50K
tokens. Our benchmark requires LLMs to comprehend the entire input to
recognize the massive label space and make correct predictions. We evaluate
13 long-context LLMs on our benchmark. We find that long-context LLMs perform
relatively well below a token length of 20K and that their performance
benefits from utilizing the long context window. However, once the context
exceeds 20K tokens, the performance of most LLMs, with the exception of
GPT-4, drops dramatically. This suggests a notable gap in current LLM
capabilities for processing and understanding long, context-rich sequences.
Further analysis reveals a tendency among models to favor predictions for
labels presented toward the end of the sequence, and their ability to reason
over multiple pieces of information in a long sequence remains to be
improved. Our study shows that long-context understanding and reasoning is
still challenging for existing LLMs. We believe LIConBench can serve as a
more realistic evaluation of future long-context LLMs.
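To make the setup concrete, here is a minimal sketch of the kind of input such an evaluation assembles: a long run of labeled demonstrations drawn from a large label space, followed by a single test instance. The data, label names, and prompt template are placeholders, not LIConBench's actual format.

```python
# Illustrative sketch of a long in-context extreme-label classification prompt.
# Demonstrations, label names, and the template are placeholders.

def build_long_icl_prompt(demos: list[tuple[str, str]], test_text: str) -> str:
    blocks = [f"Text: {x}\nLabel: {y}" for x, y in demos]
    blocks.append(f"Text: {test_text}\nLabel:")
    return "\n\n".join(blocks)

# 600 demonstrations over a 150-way label space (placeholder data).
demos = [(f"example utterance number {i}", f"intent_{i % 150}") for i in range(600)]
prompt = build_long_icl_prompt(demos, "book a table for two at 7pm")
print(len(prompt.split()))  # rough length proxy; real inputs span 2K-50K tokens
```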
By Musashi Hinck, Matthew L. Olson, David Cobbley, Shao-Yen Tseng, Vasudev Lal
We train a suite of multimodal foundation models (MMFM) using the popular
LLaVA framework with the recently released Gemma family of large language
models (LLMs). Of particular interest is the 2B parameter Gemma model, which
provides opportunities to construct capable small-scale MMFMs. In line with
findings from other papers in this space, we test the effect of ablating three
design features: pretraining the connector, utilizing a more powerful image
backbone, and increasing the size of the language backbone. The resulting
models, which we call LLaVA-Gemma, exhibit moderate performance on an array
of evaluations but fail to surpass current comparably sized SOTA models.
Closer analysis of performance shows mixed effects; skipping pretraining tends
to reduce performance, larger vision models sometimes improve performance, and
increasing language model size has inconsistent effects. We publicly release
the training recipes, code, and weights for the LLaVA-Gemma models.
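A compact way to picture the three ablation axes is as fields of a training configuration, as in the sketch below; the field names and option strings are illustrative assumptions, not the authors' actual recipe or identifiers.

```python
# Illustrative config capturing the three ablation axes discussed above.
# Field names and values are assumptions, not the released training recipe.
from dataclasses import dataclass, replace

@dataclass
class LlavaGemmaConfig:
    language_backbone: str = "gemma-2b"      # vs. a larger Gemma variant
    vision_backbone: str = "clip-vit-large"  # vs. a more powerful image encoder
    pretrain_connector: bool = True          # skipping this tends to hurt

base = LlavaGemmaConfig()
ablations = [
    replace(base, pretrain_connector=False),
    replace(base, vision_backbone="stronger-vit"),
    replace(base, language_backbone="gemma-7b"),
]
for cfg in ablations:
    print(cfg)
```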
ByKang Min Yoo, Jaegeun Han, Sookyo In, Heewon Jeon, Jisu Jeong, Jaewook Kang, Hyunwook Kim, Kyung-Min Kim, Munhyong Kim, Sungju Kim, Donghyun Kwak, Hanock Kwak, Se Jung Kwon, Bado Lee, Dongsoo Lee, Gichang Lee, Jooho Lee, Baeseong Park, Seongjin Shin, Joonsang Yu, Seolki Baek, Sumin Byeon, Eungsup Cho, Dooseok Choe, Jeesung Han, Youngkyun Jin, Hyein Jun, Jaeseung Jung, Chanwoong Kim, Jinhong Kim, Jinuk Kim, Dokyeong Lee, Dongwook Park, Jeong Min Sohn, Sujung Han, Jiae Heo, Sungju Hong, Mina Jeon, Hyunhoon Jung, Jungeun Jung, Wangkyo Jung, Chungjoon Kim, Hyeri Kim, Jonghyun Kim, Min Young Kim, Soeun Lee, Joonhee Park, Jieun Shin, Sojin Yang, Jungsoon Yoon, Hwaran Lee, Sanghwan Bae, Jeehwan Cha, Donghoon Ham, Youngki Hong, Yunki Hong, Myunggeun Ji, Yeguk Jin, Chansong Jo, Shinyoung Joo, Seunghwan Jung, Hyomin Kim, Jungwhan Kim, Minkyoung Kim, Minseung Kim, Sungdong Kim, Yonghee Kim, Youngjun Kim, Donghyeon Ko, Dughyun Lee, Jaehong Lee, Jieun Lee, Jongjin Lee, Min Young Lee, Yehbin Lee, Taehong Min, Kiyoon Moon, Jaesun Park, Kyuyon Park, Seunghyun Seo, Gyubin Son, Wonjoon Yoo, Myungin You, Doheon Ahn, Homin Ahn, Joohee Ahn, Seongmin Ahn, Chanwoo An, Hyeryun An, Junho An, Sang-Min An, Boram Byun, Jongho Cha, Minji Chang, Seunggyu Chang, Haesong Cho, Youngdo Cho, Dalnim Choi, Daseul Choi, Hyoseok Choi, Minseong Choi, Sangho Choi, Seongjae Choi, Wooyong Choi, Sewhan Chun, Dong Young Go, Chiheon Ham, Danbi Han, Jaemin Han, Mihak Hong, Moonyoung Hong, Sung Bum Hong, Seongchan Hwang, Eunbin Hyun, Jinbae Im, Jaehyung Jang, Jaeni Jang, Sihyeon Jang, Sungwon Jang, Joonha Jeon, Yujin Jeon, Daun Jeong, Joonhyun Jeong, Kyeongseok Jeong, Mini Jeong, Yeji Jeong, Sol Jin, Hanbyeol Jo, Hanju Jo, Minjung Jo, Lee Jonghyun, Chaeyoon Jung, Hyungsik Jung, Jaeuk Jung, Ju Hwan Jung, Kwangsun Jung, Seungjae Jung, Soonwon Ka, Donghan Kang, Soyoung Kang, Taeho Kil, Areum Kim, Beomyoung Kim, Byeongwook Kim, Daehee Kim, Dong-Gyun Kim, Donggook Kim, Donghyun Kim, Euna Kim, Eunchul Kim, Geewook Kim, Gyu Ri Kim, Hanbyul Kim, Heesu Kim, Isaac Kim, Jeonghoon Kim, Jihye Kim, Joonghoon Kim, Minjae Kim, Minsub Kim, Pil Hwan Kim, Sammy Kim, Seokhun Kim, Seonghyeon Kim, Soojin Kim, Soong Kim, Soyoon Kim, Sunyoung Kim, Taeho Kim, Wonho Kim, Yoonsik Kim, You Jin Kim, Yuri Kim, Beomseok Kwon, Ohsung Kwon, Yoo-Hwan Kwon, Anna Lee, Byungwook Lee, Changho Lee, Daun Lee, Dongjae Lee, Ha-Ram Lee, Hodong Lee, Hwiyeong Lee, Hyunmi Lee, Injae Lee, Jaeung Lee, Jeongsang Lee, Jisoo Lee, Joongjae Lee, Juhan Lee, Jung Hyun Lee, Junghoon Lee, Junwoo Lee, Se Yun Lee, Sujin Lee, Sungjae Lee, Sungwoo Lee, Wonjae Lee, Zoo Hyun Lee, Jong Kun Lim, Kun Lim, Taemin Lim, Yuri Min, Nuri Na, Jeongyeon Nam, Kyeong-Min Nam, Yeonseog Noh, Biro Oh, Hyangnam Oh, Jung-Sik Oh, Solgil Oh, Yeontaek Oh, Boyoun Park, Cheonbok Park, Dongju Park, Hyeonjin Park, Hyun Tae Park, Hyunjung Park, Jihye Park, Jooseok Park, Junghwan Park, Jungsoo Park, Miru Park, Sang Hee Park, Seunghyun Park, Taerim Park, Wonkyeong Park, Hyunjoon Ryu, Jeonghun Ryu, Nahyeon Ryu, Soonshin Seo, Suk Min Seo, Yoonjeong Shim, Kyuyong Shin, Wonkwang Shin, Hyun Sim, Mihyun Sim, Woongseob Sim, Hyejin Soh, Bokyoung Son, Hyunjun Son, Seulah Son, Chi-Yun Song, Chiyoung Song, Ka Yeon Song, Minchul Song, Seungmin Song, Jisung Wang, Matt Yeo, Yonggoo Yeo, Myeong Yeon Yi, Moon Bin Yim, Taehwan Yoo, Youngjoon Yoo, Sungmin Yoon, Young Jin Yoon, Hangyeol Yu, Ui Seon Yu, Xingdong Zuo, Jeongin Bae, Joungeun Bae, Hyunsoo Cho, Seonghyun Cho, Yongjin Cho, Taekyoon Choi, Yera Choi, Jiwan Chung, Zhenghui Han, 
Byeongho Heo, Euisuk Hong, Taebaek Hwang, Seonyeol Im, Sumin Jegal, Sumin Jeon, Yelim Jeong, Yonghyun Jeong, Can Jiang, Juyong Jiang, Jiho Jin, Ara Jo, Younghyun Jo, Hoyoun Jung, Juyoung Jung, Dae Hee Kim, Ginam Kim, Hangyeol Kim, Heeseung Kim, Hyojin Kim, Hyojun Kim, Hyun-Ah Kim, Jeehye Kim, Jin-Hwa Kim, Jiseon Kim, Jonghak Kim, Jung Yoon Kim, Rak Yeong Kim, Seoyoon Kim, Sewon Kim, Sooyoung Kim, Sukyoung Kim, Taeyong Kim, Naeun Ko, Bonseung Koo, Heeyoung Kwak, Haena Kwon, Youngjin Kwon, Boram Lee, Bruce W. Lee, Dagyeong Lee, Erin Lee, Euijin Lee, Ha Gyeong Lee, Hyojin Lee, Hyunjeong Lee, Jeeyoon Lee, Jeonghyun Lee, Jongheok Lee, Joonhyung Lee, Junhyuk Lee, Mingu Lee, Nayeon Lee, Sangkyu Lee, Se Young Lee, Seulgi Lee, Seung Jin Lee, Suhyeon Lee, Yeonjae Lee, Yesol Lee, Youngbeom Lee, Yujin Lee, Shaodong Li, Tianyu Liu, Seong-Eun Moon, Taehong Moon, Max-Lasse Nihlenramstroem, Wonseok Oh, Yuri Oh, Hongbeen Park, Hyekyung Park, Nohil Park, Sangjin Park, Jiwon Ryu, Miru Ryu, Simo Ryu, Ahreum Seo, Hee Seo, Kangdeok Seo, Jamin Shin, Seungyoun Shin, Heetae Sin, Jiangping Wang, Lei Wang, Ning Xiang, Longxiang Xiao, Jing Xu, Seonyeong Yi, Haanju Yoo, Haneul Yoo, Hwanhee Yoo, Liang Yu, Youngjae Yu, Weijie Yuan, Bo Zeng, Qian Zhou, Kyunghyun Cho, Jung-Woo Ha, Joonsuk Park, Jihyun Hwang, Hyoung Jo Kwon, Soonyong Kwon, Jungyeon Lee, Seungho Lee, Seungho Choi, Sang-Woo Lee, Jung Hwa Lim, Nako Sung
We introduce HyperCLOVA X, a family of large language models (LLMs) tailored
to the Korean language and culture, along with competitive capabilities in
English, math, and coding. HyperCLOVA X was trained on a balanced mix of
Korean, English, and code data, followed by instruction-tuning with
high-quality human-annotated datasets while abiding by strict safety guidelines
reflecting our commitment to responsible AI. The model is evaluated across
various benchmarks, including comprehensive reasoning, knowledge, commonsense,
factuality, coding, math, chatting, instruction-following, and harmlessness, in
both Korean and English. HyperCLOVA X exhibits strong reasoning capabilities in
Korean backed by a deep understanding of the language and cultural nuances.
Further analysis of the model's inherent bilingual nature and its extension
to multilingualism highlights its cross-lingual proficiency and strong
generalization ability to untargeted languages, including machine translation
between several language pairs and cross-lingual inference tasks. We believe
that HyperCLOVA X can provide helpful guidance for regions or countries in
developing their sovereign LLMs.
By Hao He, Yinghao Xu, Yuwei Guo, Gordon Wetzstein, Bo Dai, Hongsheng Li, Ceyuan Yang
Controllability plays a crucial role in video generation since it allows
users to create desired content. However, existing models have largely
overlooked precise control of camera pose, which serves as a cinematic
language to express deeper narrative nuances. To alleviate this issue, we
introduce CameraCtrl, enabling accurate camera pose control for text-to-video
(T2V) models. After precisely parameterizing the camera trajectory, we train
a plug-and-play camera module on a T2V model, leaving the other components
untouched. Additionally, we conduct a comprehensive study on the effect of
various datasets, which suggests that videos with diverse camera
distributions and similar appearances indeed enhance controllability and
generalization. Experimental
results demonstrate the effectiveness of CameraCtrl in achieving precise and
domain-adaptive camera control, marking a step forward in the pursuit of
dynamic and customized video storytelling from textual and camera pose inputs.
Our project website is at: https://hehao13.github.io/projects-CameraCtrl/.
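The abstract does not detail the trajectory parameterization, so the sketch below only illustrates the general idea of turning per-frame camera extrinsics into a conditioning signal a plug-and-play module could consume; the [R|t] flattening shown here is an assumption, not necessarily CameraCtrl's encoding.

```python
# Hedged sketch: per-frame camera extrinsics [R|t] flattened into a conditioning
# feature for a camera module. The actual CameraCtrl parameterization may differ.
import numpy as np

def trajectory_features(extrinsics: np.ndarray) -> np.ndarray:
    """extrinsics: (num_frames, 3, 4) world-to-camera [R|t] matrices."""
    return extrinsics.reshape(extrinsics.shape[0], -1)  # -> (num_frames, 12)

# A 16-frame rightward dolly: identity rotation, translation growing along x.
poses = np.tile(np.hstack([np.eye(3), np.zeros((3, 1))]), (16, 1, 1))
poses[:, 0, 3] = np.linspace(0.0, 1.0, 16)
print(trajectory_features(poses).shape)  # (16, 12)
```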
We study the scaling properties of latent diffusion models (LDMs) with an
emphasis on their sampling efficiency. While improved network architectures
and inference algorithms have been shown to effectively boost the sampling
efficiency of diffusion models, the role of model size -- a critical
determinant of sampling
efficiency -- has not been thoroughly examined. Through empirical analysis of
established text-to-image diffusion models, we conduct an in-depth
investigation into how model size influences sampling efficiency across varying
sampling steps. Our findings unveil a surprising trend: when operating under a
given inference budget, smaller models frequently outperform their larger
equivalents in generating high-quality results. Moreover, we extend our study
to demonstrate the generalizability of these findings by applying various
diffusion samplers, exploring diverse downstream tasks, evaluating
post-distilled models, as well as comparing performance relative to training
compute. These findings open up new pathways for the development of LDM scaling
strategies which can be employed to enhance generative capabilities within
limited inference budgets.
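The core comparison is easiest to see with back-of-the-envelope numbers: under a fixed per-sample compute budget, a smaller model can afford more denoising steps than a larger one. The costs and budget in the sketch below are made up purely to illustrate the trade-off.

```python
# Toy illustration of a fixed inference budget: smaller diffusion models afford
# more sampling steps for the same compute. All numbers are invented.

def steps_within_budget(budget_gflops: float, gflops_per_step: float) -> int:
    return int(budget_gflops // gflops_per_step)

budget = 10_000.0  # hypothetical per-image compute budget in GFLOPs
for name, cost_per_step in [("ldm-small", 50.0), ("ldm-large", 400.0)]:
    print(name, steps_within_budget(budget, cost_per_step), "sampling steps")
# ldm-small 200 sampling steps
# ldm-large 25 sampling steps
```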
By Adrian Mirza, Nawaf Alampara, Sreekanth Kunchapu, Benedict Emoekabu, Aswanth Krishnan, Mara Wilhelmi, Macjonathan Okereke, Juliane Eberhardt, Amir Mohammad Elahi, Maximilian Greiner, Caroline T. Holick, Tanya Gupta, Mehrdad Asgari, Christina Glaubitz, Lea C. Klepsch, Yannik Köster, Jakob Meyer, Santiago Miret, Tim Hoffmann, Fabian Alexander Kreth, Michael Ringleb, Nicole Roesner, Ulrich S. Schubert, Leanne M. Stafast, Dinga Wonanke, Michael Pieler, Philippe Schwaller, Kevin Maik Jablonka
Large language models (LLMs) have gained widespread interest due to their
ability to process human language and perform tasks on which they have not been
explicitly trained. This is relevant for the chemical sciences, which face the
problem of small and diverse datasets that are frequently in the form of text.
LLMs have shown promise in addressing these issues and are increasingly being
harnessed to predict chemical properties, optimize reactions, and even design
and conduct experiments autonomously. However, we still have only a very
limited systematic understanding of the chemical reasoning capabilities of
LLMs, which would be required to improve models and mitigate potential harms.
Here, we introduce "ChemBench," an automated framework designed to rigorously
evaluate the chemical knowledge and reasoning abilities of state-of-the-art
LLMs against the expertise of human chemists. We curated more than 7,000
question-answer pairs for a wide array of subfields of the chemical sciences,
evaluated leading open and closed-source LLMs, and found that the best models
outperformed the best human chemists in our study on average. The models,
however, struggle with some chemical reasoning tasks that are easy for human
experts, and they provide overconfident, misleading predictions, for example
about the safety profiles of chemicals. These findings underscore the dual
reality that,
although LLMs demonstrate remarkable proficiency in chemical tasks, further
research is critical to enhancing their safety and utility in chemical
sciences. Our findings also indicate a need for adaptations to chemistry
curricula and highlight the importance of continuing to develop evaluation
frameworks to improve safe and useful LLMs.
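As a rough picture of what an automated framework of this kind has to do, the sketch below scores a model's answers against curated question-answer pairs; the `ask_model` stub, the exact-match scoring, and the sample question are placeholders, since the real framework handles many question types and answer-parsing rules.

```python
# Illustrative evaluation loop over curated question-answer pairs.
# `ask_model`, the exact-match scoring, and the sample item are placeholders.

def ask_model(question: str) -> str:
    return "B"  # stand-in for an LLM call

def evaluate(pairs: list[dict]) -> float:
    correct = sum(ask_model(p["question"]).strip() == p["answer"] for p in pairs)
    return correct / len(pairs)

sample = [{
    "question": "Which gas is evolved when zinc reacts with dilute HCl? A) O2 B) H2",
    "answer": "B",
}]
print(evaluate(sample))  # 1.0 for this single placeholder item
```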
By Risto Luukkonen, Jonathan Burdge, Elaine Zosa, Aarne Talman, Ville Komulainen, Väinö Hatanpää, Peter Sarlin, Sampo Pyysalo
The pretraining of state-of-the-art large language models now requires
trillions of words of text, which is orders of magnitude more than available
for the vast majority of languages. While including text in more than one
language is an obvious way to acquire more pretraining data, multilinguality is
often seen as a curse, and most model training efforts continue to focus
near-exclusively on individual large languages. We believe that multilinguality
can be a blessing and that it should be possible to substantially improve over
the capabilities of monolingual models for small languages through multilingual
training. In this study, we introduce Poro 34B, a 34-billion-parameter model
trained on 1 trillion tokens of Finnish, English, and programming languages,
and demonstrate that a multilingual training approach can produce a model that
not only substantially improves over the capabilities of existing models for
Finnish, but also excels in translation and is competitive in its class in
generating English and programming languages. We release the model parameters,
scripts, and data under open licenses at
https://huggingface.co/LumiOpen/Poro-34B.
By Yunzhi Zhang, Zizhang Li, Amit Raj, Andreas Engelhardt, Yuanzhen Li, Tingbo Hou, Jiajun Wu, Varun Jampani
We propose 3D Congealing, a novel problem of 3D-aware alignment for 2D images
capturing semantically similar objects. Given a collection of unlabeled
Internet images, our goal is to associate the shared semantic parts from the
inputs and aggregate the knowledge from 2D images to a shared 3D canonical
space. We introduce a general framework that tackles the task without assuming
shape templates, poses, or any camera parameters. At its core is a canonical 3D
representation that encapsulates geometric and semantic information. The
framework optimizes for the canonical representation together with the pose for
each input image, and a per-image coordinate map that warps 2D pixel
coordinates to the 3D canonical frame to account for shape matching. The
optimization procedure fuses prior knowledge from a pre-trained image
generative model and semantic information from input images. The former
provides strong knowledge guidance for this under-constrained task, while the
latter provides the necessary information to mitigate the training data bias
from the pre-trained model. Our framework can be used for various tasks such as
correspondence matching, pose estimation, and image editing, achieving strong
results on real-world image datasets under challenging illumination conditions
and on in-the-wild online image collections.
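Purely as a structural sketch of the joint optimization described above, the snippet below treats the canonical representation, the per-image poses, and the per-image coordinate maps as jointly optimized parameters, with placeholder loss terms standing in for the generative prior and the semantic matching; none of the shapes or losses reflect the actual implementation.

```python
# Structural sketch only: three jointly optimized unknowns with placeholder
# losses. Shapes, loss terms, and optimizer settings are all assumptions.
import torch

num_images = 4
canonical = torch.randn(64, requires_grad=True)              # canonical 3D representation
poses = torch.zeros(num_images, 6, requires_grad=True)       # per-image camera pose
coord_maps = torch.zeros(num_images, 2, 16, 16, requires_grad=True)  # 2D -> canonical warps

opt = torch.optim.Adam([canonical, poses, coord_maps], lr=1e-2)
for step in range(5):
    prior_loss = canonical.pow(2).mean()      # stands in for generative-prior guidance
    semantic_loss = coord_maps.pow(2).mean()  # stands in for semantic feature matching
    pose_reg = poses.pow(2).mean()
    loss = prior_loss + semantic_loss + pose_reg
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```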
By Zhiyuan He, Aashish Gottipati, Lili Qiu, Francis Y. Yan, Xufang Luo, Kenuo Xu, Yuqing Yang
We present LLM-ABR, the first system that utilizes the generative
capabilities of large language models (LLMs) to autonomously design adaptive
bitrate (ABR) algorithms tailored for diverse network characteristics.
Operating within a reinforcement learning framework, LLM-ABR empowers LLMs to
design key components such as states and neural network architectures. We
evaluate LLM-ABR across diverse network settings, including broadband,
satellite, 4G, and 5G. LLM-ABR consistently outperforms default ABR algorithms.
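For readers unfamiliar with ABR, the kind of "state" an LLM might be asked to design in such a framework looks roughly like the sketch below; the fields are common ABR signals chosen for illustration, not the representations LLM-ABR actually generates.

```python
# Illustrative ABR state of the kind an RL-based controller observes.
# Field choices are common ABR signals, not LLM-ABR's generated designs.
from dataclasses import dataclass

@dataclass
class ABRState:
    last_throughput_mbps: float           # measured rate of the previous chunk download
    buffer_seconds: float                 # current playback buffer occupancy
    last_bitrate_kbps: int                # bitrate chosen for the previous chunk
    next_chunk_sizes_kb: tuple[int, ...]  # available encodings of the next chunk

state = ABRState(12.5, 8.0, 2850, (400, 1200, 2850, 4300))
print(state)
```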