ByOpenAI, Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Mądry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, Alex Paino, Alex Renzin, Alex Tachard Passos, Alexander Kirillov, Alexi Christakis, Alexis Conneau, Ali Kamali, Allan Jabri, Allison Moyer, Allison Tam, Amadou Crookes, Amin Tootoochian, Amin Tootoonchian, Ananya Kumar, Andrea Vallone, Andrej Karpathy, Andrew Braunstein, Andrew Cann, Andrew Codispoti, Andrew Galu, Andrew Kondrich, Andrew Tulloch, Andrey Mishchenko, Angela Baek, Angela Jiang, Antoine Pelisse, Antonia Woodford, Anuj Gosalia, Arka Dhar, Ashley Pantuliano, Avi Nayak, Avital Oliver, Barret Zoph, Behrooz Ghorbani, Ben Leimberger, Ben Rossen, Ben Sokolowsky, Ben Wang, Benjamin Zweig, Beth Hoover, Blake Samic, Bob McGrew, Bobby Spero, Bogo Giertler, Bowen Cheng, Brad Lightcap, Brandon Walkin, Brendan Quinn, Brian Guarraci, Brian Hsu, Bright Kellogg, Brydon Eastman, Camillo Lugaresi, Carroll Wainwright, Cary Bassin, Cary Hudson, Casey Chu, Chad Nelson, Chak Li, Chan Jun Shern, Channing Conger, Charlotte Barette, Chelsea Voss, Chen Ding, Cheng Lu, Chong Zhang, Chris Beaumont, Chris Hallacy, Chris Koch, Christian Gibson, Christina Kim, Christine Choi, Christine McLeavey, Christopher Hesse, Claudia Fischer, Clemens Winter, Coley Czarnecki, Colin Jarvis, Colin Wei, Constantin Koumouzelis, Dane Sherburn, Daniel Kappler, Daniel Levin, Daniel Levy, David Carr, David Farhi, David Mely, David Robinson, David Sasaki, Denny Jin, Dev Valladares, Dimitris Tsipras, Doug Li, Duc Phong Nguyen, Duncan Findlay, Edede Oiwoh, Edmund Wong, Ehsan Asdar, Elizabeth Proehl, Elizabeth Yang, Eric Antonow, Eric Kramer, Eric Peterson, Eric Sigler, Eric Wallace, Eugene Brevdo, Evan Mays, Farzad Khorasani, Felipe Petroski Such, Filippo Raso, Francis Zhang, Fred von Lohmann, Freddie Sulit, Gabriel Goh, Gene Oden, Geoff Salmon, Giulio Starace, Greg Brockman, Hadi Salman, Haiming Bao, Haitang Hu, Hannah Wong, Haoyu Wang, Heather Schmidt, Heather Whitney, Heewoo Jun, Hendrik Kirchner, Henrique Ponde de Oliveira Pinto, Hongyu Ren, Huiwen Chang, Hyung Won Chung, Ian Kivlichan, Ian O'Connell, Ian O'Connell, Ian Osband, Ian Silber, Ian Sohl, Ibrahim Okuyucu, Ikai Lan, Ilya Kostrikov, Ilya Sutskever, Ingmar Kanitscheider, Ishaan Gulrajani, Jacob Coxon, Jacob Menick, Jakub Pachocki, James Aung, James Betker, James Crooks, James Lennon, Jamie Kiros, Jan Leike, Jane Park, Jason Kwon, Jason Phang, Jason Teplitz, Jason Wei, Jason Wolfe, Jay Chen, Jeff Harris, Jenia Varavva, Jessica Gan Lee, Jessica Shieh, Ji Lin, Jiahui Yu, Jiayi Weng, Jie Tang, Jieqi Yu, Joanne Jang, Joaquin Quinonero Candela, Joe Beutler, Joe Landers, Joel Parish, Johannes Heidecke, John Schulman, Jonathan Lachman, Jonathan McKay, Jonathan Uesato, Jonathan Ward, Jong Wook Kim, Joost Huizinga, Jordan Sitkin, Jos Kraaijeveld, Josh Gross, Josh Kaplan, Josh Snyder, Joshua Achiam, Joy Jiao, Joyce Lee, Juntang Zhuang, Justyn Harriman, Kai Fricke, Kai Hayashi, Karan Singhal, Katy Shi, Kavin Karthik, Kayla Wood, Kendra Rimbach, Kenny Hsu, Kenny Nguyen, Keren Gu-Lemberg, Kevin Button, Kevin Liu, Kiel Howe, Krithika Muthukumar, Kyle Luther, Lama Ahmad, Larry Kai, Lauren Itow, Lauren Workman, Leher Pathak, Leo Chen, Li Jing, Lia Guy, Liam Fedus, Liang Zhou, Lien Mamitsuka, Lilian Weng, Lindsay McCallum, Lindsey Held, Long Ouyang, Louis Feuvrier, Lu Zhang, Lukas Kondraciuk, Lukasz Kaiser, Luke Hewitt, Luke 
Metz, Lyric Doshi, Mada Aflak, Maddie Simens, Madelaine Boyd, Madeleine Thompson, Marat Dukhan, Mark Chen, Mark Gray, Mark Hudnall, Marvin Zhang, Marwan Aljubeh, Mateusz Litwin, Matthew Zeng, Max Johnson, Maya Shetty, Mayank Gupta, Meghan Shah, Mehmet Yatbaz, Meng Jia Yang, Mengchao Zhong, Mia Glaese, Mianna Chen, Michael Janner, Michael Lampe, Michael Petrov, Michael Wu, Michele Wang, Michelle Fradin, Michelle Pokrass, Miguel Castro, Miguel Oom Temudo de Castro, Mikhail Pavlov, Miles Brundage, Miles Wang, Minal Khan, Mira Murati, Mo Bavarian, Molly Lin, Murat Yesildal, Nacho Soto, Natalia Gimelshein, Natalie Cone, Natalie Staudacher, Natalie Summers, Natan LaFontaine, Neil Chowdhury, Nick Ryder, Nick Stathas, Nick Turley, Nik Tezak, Niko Felix, Nithanth Kudige, Nitish Keskar, Noah Deutsch, Noel Bundick, Nora Puckett, Ofir Nachum, Ola Okelola, Oleg Boiko, Oleg Murk, Oliver Jaffe, Olivia Watkins, Olivier Godement, Owen Campbell-Moore, Patrick Chao, Paul McMillan, Pavel Belov, Peng Su, Peter Bak, Peter Bakkum, Peter Deng, Peter Dolan, Peter Hoeschele, Peter Welinder, Phil Tillet, Philip Pronin, Philippe Tillet, Prafulla Dhariwal, Qiming Yuan, Rachel Dias, Rachel Lim, Rahul Arora, Rajan Troll, Randall Lin, Rapha Gontijo Lopes, Raul Puri, Reah Miyara, Reimar Leike, Renaud Gaubert, Reza Zamani, Ricky Wang, Rob Donnelly, Rob Honsby, Rocky Smith, Rohan Sahai, Rohit Ramchandani, Romain Huet, Rory Carmichael, Rowan Zellers, Roy Chen, Ruby Chen, Ruslan Nigmatullin, Ryan Cheu, Saachi Jain, Sam Altman, Sam Schoenholz, Sam Toizer, Samuel Miserendino, Sandhini Agarwal, Sara Culver, Scott Ethersmith, Scott Gray, Sean Grove, Sean Metzger, Shamez Hermani, Shantanu Jain, Shengjia Zhao, Sherwin Wu, Shino Jomoto, Shirong Wu, Shuaiqi, Xia, Sonia Phene, Spencer Papay, Srinivas Narayanan, Steve Coffey, Steve Lee, Stewart Hall, Suchir Balaji, Tal Broda, Tal Stramer, Tao Xu, Tarun Gogineni, Taya Christianson, Ted Sanders, Tejal Patwardhan, Thomas Cunninghman, Thomas Degry, Thomas Dimson, Thomas Raoux, Thomas Shadwell, Tianhao Zheng, Todd Underwood, Todor Markov, Toki Sherbakov, Tom Rubin, Tom Stasi, Tomer Kaftan, Tristan Heywood, Troy Peterson, Tyce Walters, Tyna Eloundou, Valerie Qi, Veit Moeller, Vinnie Monaco, Vishal Kuo, Vlad Fomenko, Wayne Chang, Weiyi Zheng, Wenda Zhou, Wesam Manassra, Will Sheu, Wojciech Zaremba, Yash Patil, Yilei Qian, Yongjik Kim, Youlong Cheng, Yu Zhang, Yuchen He, Yuchen Zhang, Yujia Jin, Yunxing Dai, Yury Malkov
GPT-4o is an autoregressive omni model that accepts as input any combination
of text, audio, image, and video, and generates any combination of text, audio,
and image outputs. It's trained end-to-end across text, vision, and audio,
meaning all inputs and outputs are processed by the same neural network. GPT-4o
can respond to audio inputs in as little as 232 milliseconds, with an average
of 320 milliseconds, which is similar to human response time in conversation.
It matches GPT-4 Turbo performance on text in English and code, with
significant improvement on text in non-English languages, while also being much
faster and 50% cheaper in the API. GPT-4o is especially better at vision and
audio understanding compared to existing models. In line with our commitment to
building AI safely and consistent with our voluntary commitments to the White
House, we are sharing the GPT-4o System Card, which includes our Preparedness
Framework evaluations. In this System Card, we provide a detailed look at
GPT-4o's capabilities, limitations, and safety evaluations across multiple
categories, focusing on speech-to-speech while also evaluating text and image
capabilities, and measures we've implemented to ensure the model is safe and
aligned. We also include third-party assessments on dangerous capabilities, as
well as discussion of potential societal impacts of GPT-4o's text and vision
capabilities.
By Krzysztof Ociepa, Łukasz Flis, Krzysztof Wróbel, Adrian Gwoździej, Remigiusz Kinas
We introduce Bielik 7B v0.1, a 7-billion-parameter generative text model for
Polish language processing. Trained on curated Polish corpora, this model
addresses key challenges in language model development through innovative
techniques. These include Weighted Instruction Cross-Entropy Loss, which
balances the learning of different instruction types, and Adaptive Learning
Rate, which dynamically adjusts the learning rate based on training progress.
To evaluate performance, we created the Open PL LLM Leaderboard and Polish
MT-Bench, novel frameworks assessing various NLP tasks and conversational
abilities. Bielik 7B v0.1 demonstrates significant improvements, achieving a 9
percentage point increase in average score compared to Mistral-7B-v0.1 on the
RAG Reader task. It also excels in the Polish MT-Bench, particularly in
Reasoning (6.15/10) and Role-playing (7.83/10) categories. This model
represents a substantial advancement in Polish language AI, offering a powerful
tool for diverse linguistic applications and setting new benchmarks in the
field.
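The abstract names a Weighted Instruction Cross-Entropy Loss without detailing it; below is a minimal PyTorch sketch of what weighting the loss by instruction type could look like. The function name, the per-type weight dictionary, and the masking convention are illustrative assumptions, not Bielik's actual implementation.

```python
# Minimal sketch of a weighted instruction cross-entropy loss (illustrative only;
# the per-instruction-type weights are assumptions, not Bielik's values).
import torch
import torch.nn.functional as F

def weighted_instruction_ce(logits, targets, instruction_types, type_weights):
    """
    logits:            (batch, seq_len, vocab) model outputs
    targets:           (batch, seq_len) token ids, -100 marks ignored positions
    instruction_types: (batch,) integer id of each example's instruction type
    type_weights:      dict mapping type id -> loss weight
    """
    batch, seq_len, vocab = logits.shape
    token_loss = F.cross_entropy(
        logits.reshape(-1, vocab), targets.reshape(-1),
        ignore_index=-100, reduction="none",
    ).reshape(batch, seq_len)
    mask = (targets != -100).float()
    per_example = (token_loss * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
    weights = torch.tensor([type_weights[int(t)] for t in instruction_types],
                           dtype=per_example.dtype, device=logits.device)
    # Up- or down-weight each example according to its instruction type.
    return (weights * per_example).sum() / weights.sum()
```

A weight map such as `{0: 1.0, 1: 1.5, 2: 0.7}` would emphasize or de-emphasize particular instruction categories during fine-tuning; the actual weights used for Bielik are not given here.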
By Chien Van Nguyen, Xuan Shen, Ryan Aponte, Yu Xia, Samyadeep Basu, Zhengmian Hu, Jian Chen, Mihir Parmar, Sasidhar Kunapuli, Joe Barrow, Junda Wu, Ashish Singh, Yu Wang, Jiuxiang Gu, Franck Dernoncourt, Nesreen K. Ahmed, Nedim Lipka, Ruiyi Zhang, Xiang Chen, Tong Yu, Sungchul Kim, Hanieh Deilamsalehy, Namyong Park, Mike Rimer, Zhehao Zhang, Huanrui Yang, Ryan A. Rossi, Thien Huu Nguyen
Small Language Models (SLMs) have become increasingly important due to their efficiency and ability to perform various language tasks with minimal computational resources, making them ideal for many settings, including on-device, mobile, and edge deployments. In this article, we present
a comprehensive survey on SLMs, focusing on their architectures, training
techniques, and model compression techniques. We propose a novel taxonomy for
categorizing the methods used to optimize SLMs, including model compression,
pruning, and quantization techniques. We summarize the benchmark datasets that
are useful for benchmarking SLMs along with the evaluation metrics commonly
used. Additionally, we highlight key open challenges that remain to be
addressed. Our survey aims to serve as a valuable resource for researchers and
practitioners interested in developing and deploying small yet efficient
language models.
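Since quantization is one of the compression families the survey covers, here is a generic, textbook-style sketch of symmetric per-tensor int8 weight quantization, included only as a concrete reference point; it is not a method proposed in the survey.

```python
import torch

def quantize_int8_symmetric(w: torch.Tensor):
    """Symmetric per-tensor int8 quantization: returns int8 weights plus a scale."""
    scale = w.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

w = torch.randn(1024, 1024)
q, s = quantize_int8_symmetric(w)
print("mean abs quantization error:", (w - dequantize(q, s)).abs().mean().item())
```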
Digital agents capable of automating complex computer tasks have attracted
considerable attention due to their immense potential to enhance human-computer
interaction. However, existing agent methods exhibit deficiencies in their
generalization and specialization capabilities, especially in handling
open-ended computer tasks in real-world environments. Inspired by the rich
functionality of the App store, we present AgentStore, a scalable platform
designed to dynamically integrate heterogeneous agents for automating computer
tasks. AgentStore empowers users to integrate third-party agents, allowing the
system to continuously enrich its capabilities and adapt to rapidly evolving
operating systems. Additionally, we propose a novel core MetaAgent
with the AgentToken strategy to efficiently manage diverse agents and
utilize their specialized and generalist abilities for both domain-specific and
system-wide tasks. Extensive experiments on three challenging benchmarks
demonstrate that AgentStore surpasses the limitations of previous systems with
narrow capabilities, particularly achieving a significant improvement from
11.21% to 23.85% on the OSWorld benchmark, more than doubling the previous
results. Comprehensive quantitative and qualitative results further demonstrate
AgentStore's ability to enhance agent systems in both generalization and
specialization, underscoring its potential for developing the specialized
generalist computer assistant. All our code will be made publicly available at https://chengyou-jia.github.io/AgentStore-Home.
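The MetaAgent with AgentToken is only named above; as a rough intuition, each registered agent can be represented by a learnable token embedding and a task routed to the best-matching token. The sketch below is a hedged simplification under that assumption; the encoder, scoring rule, and dimensions are not from the paper.

```python
# Hedged sketch of token-based agent routing in the spirit of AgentToken:
# each registered agent is a learnable embedding, and a task is dispatched to the
# agent whose token best matches the task representation (an assumption).
import torch
import torch.nn as nn

class AgentRouter(nn.Module):
    def __init__(self, num_agents: int, dim: int = 512):
        super().__init__()
        self.agent_tokens = nn.Parameter(torch.randn(num_agents, dim) * 0.02)

    def forward(self, task_embedding: torch.Tensor) -> torch.Tensor:
        # task_embedding: (batch, dim) from any text encoder (assumed external).
        scores = task_embedding @ self.agent_tokens.T     # (batch, num_agents)
        return scores.softmax(dim=-1)                     # routing distribution

router = AgentRouter(num_agents=8)
task = torch.randn(2, 512)          # placeholder task embeddings
probs = router(task)
chosen = probs.argmax(dim=-1)       # dispatch each task to its top-scoring agent
print(chosen)
```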
By Qintong Zhang, Victor Shea-Jay Huang, Bin Wang, Junyuan Zhang, Zhengren Wang, Hao Liang, Shawn Wang, Matthieu Lin, Wentao Zhang, Conghui He
Document parsing is essential for converting unstructured and semi-structured documents, such as contracts, academic papers, and invoices, into structured, machine-readable data. Document parsing extracts reliable structured data from unstructured inputs, providing great convenience for numerous applications.
Especially with recent achievements in Large Language Models, document parsing
plays an indispensable role in both knowledge base construction and training
data generation. This survey presents a comprehensive review of the current
state of document parsing, covering key methodologies, from modular pipeline
systems to end-to-end models driven by large vision-language models. Core
components such as layout detection, content extraction (including text,
tables, and mathematical expressions), and multi-modal data integration are
examined in detail. Additionally, this paper discusses the challenges faced by
modular document parsing systems and vision-language models in handling complex
layouts, integrating multiple modules, and recognizing high-density text. It
emphasizes the importance of developing larger and more diverse datasets and
outlines future research directions.
By Haozhe Liu, Shikun Liu, Zijian Zhou, Mengmeng Xu, Yanping Xie, Xiao Han, Juan C. Pérez, Ding Liu, Kumara Kahatapitiya, Menglin Jia, Jui-Chieh Wu, Sen He, Tao Xiang, Jürgen Schmidhuber, Juan-Manuel Pérez-Rúa
We introduce MarDini, a new family of video diffusion models that integrate
the advantages of masked auto-regression (MAR) into a unified diffusion model
(DM) framework. Here, MAR handles temporal planning, while DM focuses on
spatial generation in an asymmetric network design: i) a MAR-based planning
model containing most of the parameters generates planning signals for each
masked frame using low-resolution input; ii) a lightweight generation model
uses these signals to produce high-resolution frames via diffusion de-noising.
MarDini's MAR enables video generation conditioned on any number of masked
frames at any frame positions: a single model can handle video interpolation
(e.g., masking middle frames), image-to-video generation (e.g., masking from
the second frame onward), and video expansion (e.g., masking half the frames).
The efficient design allocates most of the computational resources to the
low-resolution planning model, making computationally expensive but important
spatio-temporal attention feasible at scale. MarDini sets a new
state-of-the-art for video interpolation; meanwhile, within a few inference
steps, it efficiently generates videos on par with those of much more expensive
advanced image-to-video models.
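The claim that one model covers interpolation, image-to-video generation, and expansion follows from choosing which frames are masked. The helper below only illustrates those masking patterns; the True-means-masked convention and frame counts are assumptions, not MarDini's code.

```python
import torch

def frame_mask(num_frames: int, task: str) -> torch.Tensor:
    """Boolean mask over frames: True = frame is masked (to be generated).
    The exact conventions here are illustrative, not MarDini's implementation."""
    mask = torch.zeros(num_frames, dtype=torch.bool)
    if task == "interpolation":        # keep first/last, generate the middle
        mask[1:-1] = True
    elif task == "image_to_video":     # keep only the first frame
        mask[1:] = True
    elif task == "expansion":          # keep the first half, generate the rest
        mask[num_frames // 2:] = True
    else:
        raise ValueError(task)
    return mask

for task in ("interpolation", "image_to_video", "expansion"):
    print(task, frame_mask(8, task).int().tolist())
```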
By Jiajie Zhang, Zhongni Hou, Xin Lv, Shulin Cao, Zhenyu Hou, Yilin Niu, Lei Hou, Yuxiao Dong, Ling Feng, Juanzi Li
Though significant advancements have been achieved in developing long-context
large language models (LLMs), the compromised quality of LLM-synthesized data
for supervised fine-tuning (SFT) often affects the long-context performance of
SFT models and leads to inherent limitations. In principle, reinforcement
learning (RL) with appropriate reward signals can further enhance models'
capacities. However, how to obtain reliable rewards in long-context scenarios
remains unexplored. To this end, we propose LongReward, a novel method that
utilizes an off-the-shelf LLM to provide rewards for long-context model
responses from four human-valued dimensions: helpfulness, logicality,
faithfulness, and completeness, each with a carefully designed assessment
pipeline. By combining LongReward and the offline RL algorithm DPO, we are able to
effectively improve long-context SFT models. Our experiments indicate that
LongReward not only significantly improves models' long-context performance but
also enhances their ability to follow short instructions. We also find that
long-context DPO with LongReward and conventional short-context DPO can be used
together without hurting either one's performance.
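A hedged sketch of how the four reward dimensions could be combined into a scalar reward and turned into DPO preference pairs. `judge_llm_score` is a hypothetical stand-in for the off-the-shelf LLM judge, and the equal-weight average and best-vs-worst pairing are assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: combine the four LongReward dimensions into a scalar reward and
# turn sampled responses into DPO preference pairs.
from statistics import mean

DIMENSIONS = ("helpfulness", "logicality", "faithfulness", "completeness")

def judge_llm_score(context: str, response: str, dimension: str) -> float:
    """Placeholder for an LLM-as-judge call returning a score, e.g. in [0, 10]."""
    raise NotImplementedError("plug in any off-the-shelf LLM judge here")

def long_reward(context: str, response: str) -> float:
    # Equal-weight average over the four dimensions (an assumption).
    return mean(judge_llm_score(context, response, d) for d in DIMENSIONS)

def build_dpo_pair(context: str, responses: list[str]) -> dict:
    scored = sorted(responses, key=lambda r: long_reward(context, r))
    return {"prompt": context, "chosen": scored[-1], "rejected": scored[0]}
```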
By Haocheng Xi, Han Cai, Ligeng Zhu, Yao Lu, Kurt Keutzer, Jianfei Chen, Song Han
FP8 training has emerged as a promising method for improving training
efficiency. Existing frameworks accelerate training by applying FP8 computation
to linear layers while leaving optimizer states and activations in higher
precision, which fails to fully optimize memory usage. This paper introduces
COAT (Compressing Optimizer States and Activations for FP8 Training), a novel
FP8 training framework designed to significantly reduce memory footprint when
training large models. COAT addresses current limitations through two key
innovations: (1) Dynamic Range Expansion, which aligns optimizer state
distributions more closely with the FP8 representation range, thereby reducing
quantization error, and (2) Mixed-Granularity Activation Quantization, which
optimizes activation memory using a combination of per-tensor and per-group
quantization strategies. Experiments demonstrate that COAT effectively reduces
end-to-end training memory footprint by 1.54x compared to BF16 while achieving
nearly lossless performance across various tasks, such as Large Language Model
pretraining and fine-tuning and Vision Language Model training. COAT also
achieves a 1.43x end-to-end training speedup compared to BF16, performing on
par with or surpassing TransformerEngine's speedup. COAT enables efficient
full-parameter training of large models on fewer GPUs, and facilitates doubling
the batch size in distributed training settings, providing a practical solution
for scaling large-scale model training. The code is available at
https://github.com/NVlabs/COAT.
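To make the FP8 idea concrete, here is a minimal sketch of scaling a tensor so its values occupy the float8 E4M3 range before casting. COAT's actual Dynamic Range Expansion additionally applies a nonlinear expansion to the optimizer-state distribution; the plain max-scaling below is a simplification, and the fallback branch for older PyTorch versions is an assumption.

```python
# Hedged sketch of aligning a tensor's dynamic range with FP8 E4M3 before casting.
import torch

E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def to_fp8_e4m3(x: torch.Tensor):
    scale = x.abs().max().clamp(min=1e-12) / E4M3_MAX
    x_scaled = (x / scale).clamp(-E4M3_MAX, E4M3_MAX)
    if hasattr(torch, "float8_e4m3fn"):        # available in recent PyTorch
        q = x_scaled.to(torch.float8_e4m3fn)
    else:                                      # fallback: keep as float32
        q = x_scaled
    return q, scale

def from_fp8_e4m3(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

state = torch.randn(1024) * 1e-3               # e.g. a slice of an Adam moment
q, s = to_fp8_e4m3(state)
print("max reconstruction error:", (state - from_fp8_e4m3(q, s)).abs().max().item())
```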
Image restoration (IR) in real-world scenarios presents significant
challenges due to the lack of high-capacity models and comprehensive datasets.
To tackle these issues, we present a dual strategy: GenIR, an innovative data
curation pipeline, and DreamClear, a cutting-edge Diffusion Transformer
(DiT)-based image restoration model. GenIR, our pioneering contribution, is a
dual-prompt learning pipeline that overcomes the limitations of existing
datasets, which typically comprise only a few thousand images and thus offer
limited generalizability for larger models. GenIR streamlines the process into
three stages: image-text pair construction, dual-prompt based fine-tuning, and
data generation & filtering. This approach circumvents the laborious data
crawling process, ensuring copyright compliance and providing a cost-effective,
privacy-safe solution for IR dataset construction. The result is a large-scale
dataset of one million high-quality images. Our second contribution,
DreamClear, is a DiT-based image restoration model. It utilizes the generative
priors of text-to-image (T2I) diffusion models and the robust perceptual
capabilities of multi-modal large language models (MLLMs) to achieve
photorealistic restoration. To boost the model's adaptability to diverse
real-world degradations, we introduce the Mixture of Adaptive Modulator (MoAM).
It employs token-wise degradation priors to dynamically integrate various
restoration experts, thereby expanding the range of degradations the model can
address. Our exhaustive experiments confirm DreamClear's superior performance,
underlining the efficacy of our dual strategy for real-world image restoration.
Code and pre-trained models will be available at:
https://github.com/shallowdream204/DreamClear.
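The Mixture of Adaptive Modulator is described as mixing restoration experts using token-wise degradation priors. The sketch below shows one way such token-wise gating could look; the linear experts, the gate, and all shapes are illustrative assumptions rather than DreamClear's architecture.

```python
# Hedged sketch of token-wise mixing of restoration "experts" driven by a
# degradation prior, in the spirit of MoAM.
import torch
import torch.nn as nn

class TokenwiseMixture(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)  # consumes a per-token degradation prior

    def forward(self, tokens: torch.Tensor, degradation_prior: torch.Tensor):
        # tokens, degradation_prior: (batch, num_tokens, dim)
        weights = self.gate(degradation_prior).softmax(dim=-1)            # (B, N, E)
        expert_out = torch.stack([e(tokens) for e in self.experts], -1)   # (B, N, D, E)
        return (expert_out * weights.unsqueeze(2)).sum(dim=-1)            # (B, N, D)

m = TokenwiseMixture(dim=64)
x = torch.randn(2, 16, 64)
prior = torch.randn(2, 16, 64)
print(m(x, prior).shape)   # torch.Size([2, 16, 64])
```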
We introduce a novel training-free spatial grounding technique for
text-to-image generation using Diffusion Transformers (DiT). Spatial grounding
with bounding boxes has gained attention for its simplicity and versatility,
allowing for enhanced user control in image generation. However, prior
training-free approaches often rely on updating the noisy image during the
reverse diffusion process via backpropagation from custom loss functions, which
frequently struggle to provide precise control over individual bounding boxes.
In this work, we leverage the flexibility of the Transformer architecture,
demonstrating that DiT can generate noisy patches corresponding to each
bounding box, fully encoding the target object and allowing for fine-grained
control over each region. Our approach builds on an intriguing property of DiT,
which we refer to as semantic sharing. Due to semantic sharing, when a smaller
patch is jointly denoised alongside a generatable-size image, the two become
"semantic clones". Each patch is denoised in its own branch of the generation
process and then transplanted into the corresponding region of the original
noisy image at each timestep, resulting in robust spatial grounding for each
bounding box. In our experiments on the HRS and DrawBench benchmarks, we
achieve state-of-the-art performance compared to previous training-free spatial
grounding approaches.
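The core mechanism, denoising each box's patch in its own branch and transplanting it into the full latent at every timestep, can be sketched as below. `denoise_step` is a hypothetical stand-in for one reverse step of a DiT; only the copy-back logic reflects the described idea.

```python
# Hedged sketch of "denoise a patch in its own branch, then transplant it back".
import torch

def grounded_denoise(latent, patch_latent, box, denoise_step, timesteps):
    """
    latent:        (C, H, W) noisy latent of the full image
    patch_latent:  (C, h, w) noisy latent for one bounding box
    box:           (y0, x0) top-left position of the patch inside the full latent
    denoise_step:  callable(latent, t) -> latent after one reverse step
    """
    y0, x0 = box
    _, h, w = patch_latent.shape
    for t in timesteps:
        latent = denoise_step(latent, t)                 # denoise the full image
        patch_latent = denoise_step(patch_latent, t)     # denoise the patch branch
        latent[:, y0:y0 + h, x0:x0 + w] = patch_latent   # transplant into its region
    return latent

dummy_step = lambda z, t: 0.99 * z   # placeholder denoiser for illustration only
out = grounded_denoise(torch.randn(4, 32, 32), torch.randn(4, 8, 8),
                       (4, 4), dummy_step, range(10))
print(out.shape)
```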
Search engines enable the retrieval of unknown information through text queries.
However, traditional methods fall short when it comes to understanding
unfamiliar visual content, such as identifying an object that the model has
never seen before. This challenge is particularly pronounced for large
vision-language models (VLMs): if the model has not been exposed to the object
depicted in an image, it struggles to generate reliable answers to the user's
question regarding that image. Moreover, as new objects and events continuously
emerge, frequently updating VLMs is impractical due to heavy computational
burdens. To address this limitation, we propose Vision Search Assistant, a
novel framework that facilitates collaboration between VLMs and web agents.
This approach leverages VLMs' visual understanding capabilities and web agents'
real-time information access to perform open-world Retrieval-Augmented
Generation via the web. By integrating visual and textual representations
through this collaboration, the model can provide informed responses even when
the image is novel to the system. Extensive experiments conducted on both
open-set and closed-set QA benchmarks demonstrate that the Vision Search
Assistant significantly outperforms the other models and can be widely applied
to existing VLMs.
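A hedged sketch of the VLM-plus-web-agent collaboration loop: describe the unfamiliar content, retrieve web evidence, and answer with that context. `vlm_generate` and `web_search` are hypothetical stand-ins, not the framework's actual interfaces.

```python
# Hedged sketch of open-world retrieval-augmented answering with a VLM and a web agent.
def vlm_generate(prompt: str, image=None) -> str:
    raise NotImplementedError("plug in any vision-language model here")

def web_search(query: str, top_k: int = 5) -> list[str]:
    raise NotImplementedError("plug in any web/search agent here")

def vision_search_answer(image, question: str) -> str:
    # 1) Let the VLM describe the (possibly unfamiliar) visual content.
    description = vlm_generate("Describe the key objects in this image.", image)
    # 2) Turn the description and question into a web query and retrieve evidence.
    evidence = web_search(f"{description} {question}")
    # 3) Answer with retrieval-augmented context.
    context = "\n".join(evidence)
    return vlm_generate(
        f"Using the web results below, answer the question.\n"
        f"Results:\n{context}\n\nQuestion: {question}", image)
```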
By Hanshi Sun, Momin Haider, Ruiqi Zhang, Huitao Yang, Jiahao Qiu, Ming Yin, Mengdi Wang, Peter Bartlett, Andrea Zanette
The safe and effective deployment of Large Language Models (LLMs) involves a
critical step called alignment, which ensures that the model's responses are in
accordance with human preferences. Prevalent alignment techniques, such as DPO,
PPO and their variants, align LLMs by changing the pre-trained model weights
during a phase called post-training. While predominant, these post-training
methods add substantial complexity before LLMs can be deployed. Inference-time
alignment methods avoid the complex post-training step and instead bias the
generation towards responses that are aligned with human preferences. The
best-known inference-time alignment method, called Best-of-N, is as effective
as the state-of-the-art post-training procedures. Unfortunately, Best-of-N
requires vastly more resources at inference time than standard decoding
strategies, which makes it computationally not viable. In this work, we
introduce Speculative Rejection, a computationally-viable inference-time
alignment algorithm. It generates high-scoring responses according to a given
reward model, like Best-of-N does, while being 16 to 32 times more
computationally efficient.
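As a rough illustration of the idea, the sketch below starts many candidate generations, scores the partial responses with a reward model, and keeps extending only the top-scoring fraction. `generate_chunk`, `reward_model`, and all hyperparameters are assumptions, not the paper's algorithmic details.

```python
# Hedged sketch of speculative rejection: extend many candidates in chunks and
# reject the lower-scoring ones early, instead of fully generating all N responses.
def generate_chunk(prompt: str, partial: str, num_tokens: int) -> str:
    raise NotImplementedError("plug in any LLM continuation call here")

def reward_model(prompt: str, response: str) -> float:
    raise NotImplementedError("plug in any reward model here")

def speculative_rejection(prompt: str, n_start: int = 64, chunk: int = 64,
                          rounds: int = 4, keep_frac: float = 0.5) -> str:
    candidates = ["" for _ in range(n_start)]
    for _ in range(rounds):
        candidates = [c + generate_chunk(prompt, c, chunk) for c in candidates]
        # Rank partial responses and reject the lower-scoring fraction early.
        candidates.sort(key=lambda c: reward_model(prompt, c), reverse=True)
        candidates = candidates[:max(1, int(len(candidates) * keep_frac))]
    return candidates[0]
```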
By Hanyu Wang, Saksham Suri, Yixuan Ren, Hao Chen, Abhinav Shrivastava
We present LARP, a novel video tokenizer designed to overcome limitations in
current video tokenization methods for autoregressive (AR) generative models.
Unlike traditional patchwise tokenizers that directly encode local visual
patches into discrete tokens, LARP introduces a holistic tokenization scheme
that gathers information from the visual content using a set of learned
holistic queries. This design allows LARP to capture more global and semantic
representations, rather than being limited to local patch-level information.
Furthermore, it offers flexibility by supporting an arbitrary number of
discrete tokens, enabling adaptive and efficient tokenization based on the
specific requirements of the task. To align the discrete token space with
downstream AR generation tasks, LARP integrates a lightweight AR transformer as
a training-time prior model that predicts the next token on its discrete latent
space. By incorporating the prior model during training, LARP learns a latent
space that is not only optimized for video reconstruction but is also
structured in a way that is more conducive to autoregressive generation.
Moreover, this process defines a sequential order for the discrete tokens,
progressively pushing them toward an optimal configuration during training,
ensuring smoother and more accurate AR generation at inference time.
Comprehensive experiments demonstrate LARP's strong performance, achieving
state-of-the-art FVD on the UCF101 class-conditional video generation
benchmark. LARP enhances the compatibility of AR models with videos and opens
up the potential to build unified high-fidelity multimodal large language
models (MLLMs).
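The "holistic queries" can be pictured as a set of learned vectors that cross-attend to patch features and emit that many tokens. The sketch below shows only this pooling step under assumed shapes; LARP's quantizer and training-time AR prior are omitted.

```python
# Hedged sketch of holistic-query tokenization: learned queries cross-attend to
# patch features, so the number of output tokens is set by the number of queries.
import torch
import torch.nn as nn

class HolisticTokenizer(nn.Module):
    def __init__(self, dim: int = 256, num_queries: int = 64, heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, dim) from any visual encoder.
        b = patch_features.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        tokens, _ = self.attn(q, patch_features, patch_features)
        return tokens                                   # (batch, num_queries, dim)

tok = HolisticTokenizer()
print(tok(torch.randn(2, 1024, 256)).shape)             # torch.Size([2, 64, 256])
```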
By Shih-Yang Liu, Huck Yang, Chein-Yi Wang, Nai Chit Fung, Hongxu Yin, Charbel Sakr, Saurav Muralidharan, Kwang-Ting Cheng, Jan Kautz, Yu-Chiang Frank Wang, Pavlo Molchanov, Min-Hung Chen
In this work, we re-formulate the model compression problem into the
customized compensation problem: Given a compressed model, we aim to introduce
residual low-rank paths to compensate for compression errors under customized
requirements from users (e.g., tasks, compression ratios), resulting in greater
flexibility in adjusting overall capacity without being constrained by specific
compression formats. However, naively applying SVD to derive residual paths
causes suboptimal utilization of the low-rank representation capacity. Instead,
we propose Training-free Eigenspace Low-Rank Approximation (EoRA), a method
that directly minimizes compression-induced errors without requiring
gradient-based training, achieving fast optimization in minutes using a small
amount of calibration data. EoRA projects compression errors into the
eigenspace of input activations, leveraging eigenvalues to effectively
prioritize the reconstruction of high-importance error components. Moreover,
EoRA can be seamlessly integrated with fine-tuning and quantization to further
improve effectiveness and efficiency. EoRA consistently outperforms previous
methods in compensating errors for compressed LLaMA2/3 models on various tasks,
such as language generation, commonsense reasoning, and math reasoning tasks
(e.g., 31.31%/12.88% and 9.69% improvements on ARC-Easy/ARC-Challenge and
MathQA when compensating LLaMA3-8B that is quantized to 4-bit and pruned to 2:4
sparsity). EoRA offers a scalable, training-free solution to compensate for
compression errors, making it a powerful tool for deploying LLMs under various capacity and efficiency requirements.
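A hedged sketch of the eigenspace-weighted low-rank idea: weight the compression error by the eigenstructure of the calibration activations, truncate with an SVD, and undo the weighting to obtain a residual path B·A. The normalization, damping epsilon, and toy "compressed" weights are assumptions, not EoRA's exact recipe.

```python
# Hedged sketch of an eigenspace-weighted low-rank residual in the spirit of EoRA.
import torch

def eigenspace_lowrank_residual(w, w_compressed, acts, rank, eps=1e-6):
    """
    w, w_compressed: (out_features, in_features)
    acts:            (num_samples, in_features) calibration activations
    Returns B (out, rank), A (rank, in) so that w_compressed + B @ A ≈ w,
    with the error weighted by the activation eigenspace.
    """
    delta = w - w_compressed
    cov = acts.T @ acts / acts.shape[0]                 # (in, in) activation covariance
    eigvals, eigvecs = torch.linalg.eigh(cov)
    sqrt_vals = eigvals.clamp(min=eps).sqrt()
    # Move into the eigenspace, scaled by sqrt-eigenvalues (importance weighting).
    delta_w = delta @ eigvecs * sqrt_vals               # (out, in)
    u, s, vh = torch.linalg.svd(delta_w, full_matrices=False)
    b = u[:, :rank] * s[:rank]                          # (out, rank)
    a = (vh[:rank] / sqrt_vals) @ eigvecs.T             # undo weighting, back to input space
    return b, a

w = torch.randn(128, 64)
w_c = torch.round(w * 4) / 4                            # toy "compressed" weights
x = torch.randn(256, 64)
B, A = eigenspace_lowrank_residual(w, w_c, x, rank=8)
print(((w_c + B @ A - w) @ x.T).norm(), ((w_c - w) @ x.T).norm())
```

On the calibration activations, the compensated weights should give a smaller output error than the raw compressed weights, which is the quantity the low-rank residual is chosen to minimize.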
By Lawrence Jang, Yinheng Li, Charles Ding, Justin Lin, Paul Pu Liang, Dan Zhao, Rogerio Bonatti, Kazuhito Koishida
Videos are often used to learn or extract the necessary information to
complete tasks in ways different than what text and static imagery alone can
provide. However, many existing agent benchmarks neglect long-context video
understanding, instead focusing on text or static image inputs. To bridge this
gap, we introduce VideoWebArena (VideoWA), a benchmark for evaluating the
capabilities of long-context multimodal agents for video understanding. VideoWA
consists of 2,021 web agent tasks based on manually crafted video tutorials,
which total almost four hours of content. For our benchmark, we define a
taxonomy of long-context video-based agent tasks with two main areas of focus:
skill retention and factual retention. While skill retention tasks evaluate
whether an agent can use a given human demonstration to complete a task
efficiently, the factual retention task evaluates whether an agent can retrieve
instruction-relevant information from a video to complete a task. We find that
the best model achieves 13.3% success on factual retention tasks and 45.8% on
factual retention QA pairs, far below human performance at 73.9% and 79.3%,
respectively. On skill retention tasks, long-context models perform worse with
tutorials than without, exhibiting a 5% performance decrease in WebArena tasks
and a 10.3% decrease in VisualWebArena tasks. Our work highlights the need to
improve the agentic abilities of long-context multimodal models and provides a
testbed for future development with long-context video agents.
By Sangmin Bae, Adam Fisch, Hrayr Harutyunyan, Ziwei Ji, Seungyeon Kim, Tal Schuster
Large language models (LLMs) are expensive to deploy. Parameter sharing
offers a possible path towards reducing their size and cost, but its
effectiveness in modern LLMs remains fairly limited. In this work, we revisit
"layer tying" as form of parameter sharing in Transformers, and introduce novel
methods for converting existing LLMs into smaller "Recursive Transformers" that
share parameters across layers, with minimal loss of performance. Here, our
Recursive Transformers are efficiently initialized from standard pretrained
Transformers, but only use a single block of unique layers that is then
repeated multiple times in a loop. We further improve performance by
introducing Relaxed Recursive Transformers that add flexibility to the layer
tying constraint via depth-wise low-rank adaptation (LoRA) modules, yet still
preserve the compactness of the overall model. We show that our recursive
models (e.g., recursive Gemma 1B) outperform both similar-sized vanilla
pretrained models (such as TinyLlama 1.1B and Pythia 1B) and knowledge
distillation baselines -- and can even recover most of the performance of the
original "full-size" model (e.g., Gemma 2B with no shared parameters). Finally,
we propose Continuous Depth-wise Batching, a promising new inference paradigm
enabled by the Recursive Transformer when paired with early exiting. In a
theoretical analysis, we show that this has the potential to lead to
significant (2-3x) gains in inference throughput.
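A minimal sketch of the looped-block idea: one shared Transformer layer reused at every depth, relaxed by a small per-depth LoRA adapter. Note that the paper attaches LoRA to the tied weight matrices themselves; here, for brevity and as an assumption, the adapter is added at the block output, and the layer/dimension choices are illustrative.

```python
# Hedged sketch of a relaxed recursive block: one shared layer looped over depth,
# with a zero-initialized per-depth LoRA adapter relaxing the tying.
import torch
import torch.nn as nn

class DepthLoRA(nn.Module):
    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)          # start as an exact recursive model

    def forward(self, x):
        return self.up(self.down(x))

class RelaxedRecursiveEncoder(nn.Module):
    def __init__(self, dim: int = 256, depth: int = 12, rank: int = 8):
        super().__init__()
        self.shared_block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=8, batch_first=True)
        self.loras = nn.ModuleList(DepthLoRA(dim, rank) for _ in range(depth))

    def forward(self, x):
        for lora in self.loras:                 # same block, looped `depth` times
            x = self.shared_block(x) + lora(x)  # depth-specific low-rank correction
        return x

model = RelaxedRecursiveEncoder()
print(model(torch.randn(2, 16, 256)).shape)     # torch.Size([2, 16, 256])
```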
By Sergio Burdisso, Srikanth Madikeri, Petr Motlicek
Efficiently deriving structured workflows from unannotated dialogs remains an
underexplored and formidable challenge in computational linguistics. Automating
this process could significantly accelerate the manual design of workflows in
new domains and enable the grounding of large language models in
domain-specific flowcharts, enhancing transparency and controllability. In this
paper, we introduce Dialog2Flow (D2F) embeddings, which differ from
conventional sentence embeddings by mapping utterances to a latent space where
they are grouped according to their communicative and informative functions
(i.e., the actions they represent). D2F allows for modeling dialogs as
continuous trajectories in a latent space with distinct action-related regions.
By clustering D2F embeddings, the latent space is quantized, and dialogs can be
converted into sequences of region/action IDs, facilitating the extraction of
the underlying workflow. To pre-train D2F, we build a comprehensive dataset by
unifying twenty task-oriented dialog datasets with normalized per-turn action
annotations. We also introduce a novel soft contrastive loss that leverages the
semantic information of these actions to guide the representation learning
process, showing superior performance compared to standard supervised
contrastive loss. Evaluation against various sentence embeddings, including
dialog-specific ones, demonstrates that D2F yields superior qualitative and
quantitative results across diverse domains.
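A hedged sketch of a soft contrastive objective in this spirit: the target distribution over in-batch pairs comes from the semantic similarity of the utterances' action labels rather than from hard positives. The temperatures, normalization, and frozen action encoder are assumptions, not the paper's exact loss.

```python
# Hedged sketch of a soft contrastive loss with action-similarity soft targets.
import torch
import torch.nn.functional as F

def soft_contrastive_loss(utt_emb, action_emb, temp=0.1, target_temp=0.1):
    """
    utt_emb:    (batch, d)  utterance embeddings from the model being trained
    action_emb: (batch, d') embeddings of each utterance's action label (frozen)
    """
    z = F.normalize(utt_emb, dim=-1)
    a = F.normalize(action_emb, dim=-1)
    logits = z @ z.T / temp                                 # model's pairwise similarities
    targets = F.softmax(a @ a.T / target_temp, dim=-1)      # soft labels from action semantics
    # Mask self-similarity so each utterance is not its own trivial positive.
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(eye, -1e9)
    targets = targets.masked_fill(eye, 0.0)
    targets = targets / targets.sum(dim=-1, keepdim=True)
    return F.cross_entropy(logits, targets)                 # soft-target cross-entropy

print(soft_contrastive_loss(torch.randn(8, 128), torch.randn(8, 32)))
```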
By Muhammad Zubair Irshad, Mauro Comi, Yen-Chen Lin, Nick Heppert, Abhinav Valada, Rares Ambrus, Zsolt Kira, Jonathan Tremblay
Neural Fields have emerged as a transformative approach for 3D scene
representation in computer vision and robotics, enabling accurate inference of
geometry, 3D semantics, and dynamics from posed 2D data. Leveraging
differentiable rendering, Neural Fields encompass both continuous implicit and
explicit neural representations enabling high-fidelity 3D reconstruction,
integration of multi-modal sensor data, and generation of novel viewpoints.
This survey explores their applications in robotics, emphasizing their
potential to enhance perception, planning, and control. Their compactness,
memory efficiency, and differentiability, along with seamless integration with
foundation and generative models, make them ideal for real-time applications,
improving robot adaptability and decision-making. This paper provides a
thorough review of Neural Fields in robotics, categorizing applications across
various domains and evaluating their strengths and limitations, based on over
200 papers. First, we present four key Neural Fields frameworks: Occupancy
Networks, Signed Distance Fields, Neural Radiance Fields, and Gaussian
Splatting. Second, we detail Neural Fields' applications in five major robotics
domains: pose estimation, manipulation, navigation, physics, and autonomous
driving, highlighting key works and discussing takeaways and open challenges.
Finally, we outline the current limitations of Neural Fields in robotics and
propose promising directions for future research. Project page:
https://robonerf.github.io
This research tests the role of Large Language Models (LLMs) as formal second
opinion tools in professional decision-making, particularly focusing on complex
medical cases where even experienced physicians seek peer consultation. The
work analyzed 183 challenging medical cases from Medscape over a 20-month
period, testing multiple LLMs' performance against crowd-sourced physician
responses. A key finding was the high overall accuracy achievable with the latest foundational models (>80% agreement with the consensus opinion), which exceeds most human metrics reported on the same clinical cases (450 pages of patient profiles and test results). The study also notes a marked performance disparity between straightforward cases (>81% accuracy) and complex scenarios (43% accuracy), particularly in the cases that generated substantial debate among human physicians. The research demonstrates that LLMs may be valuable as
generators of comprehensive differential diagnoses rather than as primary
diagnostic tools, potentially helping to counter cognitive biases in clinical
decision-making, reduce cognitive loads, and thus remove some sources of
medical error. The inclusion of a second, comparative legal dataset (Supreme Court cases, N=21) provides added empirical context for using AI to generate second opinions, though these legal challenges proved considerably easier for LLMs to analyze. In addition to contributing empirical evidence on LLM accuracy, the research aggregated a novel benchmark that others can use to score answer reliability on highly contested questions, both across LLMs and among disagreeing human practitioners. These results suggest that the optimal
deployment of LLMs in professional settings may differ substantially from
current approaches that emphasize automation of routine tasks.
Given the high cost of collecting robotic data in the real world, sample
efficiency is a consistently compelling pursuit in robotics. In this paper, we
introduce SGRv2, an imitation learning framework that enhances sample
efficiency through improved visual and action representations. Central to the
design of SGRv2 is the incorporation of a critical inductive bias, action locality, which posits that a robot's actions are predominantly influenced by the
target object and its interactions with the local environment. Extensive
experiments in both simulated and real-world settings demonstrate that action
locality is essential for boosting sample efficiency. SGRv2 excels in RLBench
tasks with keyframe control using merely 5 demonstrations and surpasses the RVT
baseline in 23 of 26 tasks. Furthermore, when evaluated on ManiSkill2 and
MimicGen using dense control, SGRv2's success rate is 2.54 times that of SGR.
In real-world environments, with only eight demonstrations, SGRv2 can perform a
variety of tasks at a markedly higher success rate compared to baseline models.
Project website: http://sgrv2-robot.github.io
By Wenshuai Zhao, Yi Zhao, Joni Pajarinen, Michael Muehlebach
Imitation learning from human motion capture (MoCap) data provides a
promising way to train humanoid robots. However, due to differences in
morphology, such as varying degrees of joint freedom and force limits, exact
replication of human behaviors may not be feasible for humanoid robots.
Consequently, incorporating physically infeasible MoCap data in training
datasets can adversely affect the performance of the robot policy. To address
this issue, we propose a bi-level optimization-based imitation learning
framework that alternates between optimizing both the robot policy and the
target MoCap data. Specifically, we first develop a generative latent dynamics
model using a novel self-consistent auto-encoder, which learns sparse and
structured motion representations while capturing desired motion patterns in
the dataset. The dynamics model is then utilized to generate reference motions
while the latent representation regularizes the bi-level motion imitation
process. Simulations conducted with a realistic model of a humanoid robot
demonstrate that our method enhances the robot policy by modifying reference
motions to be physically consistent.
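A hedged sketch of the alternating bi-level loop: the inner level trains the policy to track the currently decoded reference motion, while the outer level nudges the motion latent to remain close to the MoCap data yet be physically trackable. Every component passed in (decoder, tracking loss, optimizers, learning rates) is a hypothetical stand-in for the paper's learned modules, not its actual training code.

```python
# Hedged sketch of alternating bi-level imitation: optimize the policy against the
# current reference, then adjust the reference-motion latent.
import torch

def bilevel_motion_imitation(policy, decoder, z_init, mocap_latent, tracking_loss,
                             policy_opt, n_outer=100, n_inner=10,
                             reg_weight=1.0, z_lr=1e-2):
    z = z_init.clone().requires_grad_(True)
    z_opt = torch.optim.Adam([z], lr=z_lr)
    for _ in range(n_outer):
        reference = decoder(z).detach()
        for _ in range(n_inner):                       # inner level: improve the policy
            loss = tracking_loss(policy, reference)
            policy_opt.zero_grad(); loss.backward(); policy_opt.step()
        # Outer level: make the reference trackable while staying near the MoCap latent.
        ref = decoder(z)
        outer = tracking_loss(policy, ref) + reg_weight * (z - mocap_latent).pow(2).mean()
        z_opt.zero_grad(); outer.backward(); z_opt.step()
    return policy, decoder(z).detach()
```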