By Yushi Bai, Jiajie Zhang, Xin Lv, Linzhi Zheng, Siqi Zhu, Lei Hou, Yuxiao Dong, Jie Tang, Juanzi Li
Current long context large language models (LLMs) can process inputs up to
100,000 tokens, yet struggle to generate outputs exceeding even a modest length
of 2,000 words. Through controlled experiments, we find that a model's
effective generation length is inherently bounded by the samples it has seen
during supervised fine-tuning (SFT). In other words, the output limitation
stems from the scarcity of long-output examples in existing SFT datasets. To
address this, we introduce AgentWrite, an agent-based pipeline that decomposes
ultra-long generation tasks into subtasks, enabling off-the-shelf LLMs to
generate coherent outputs exceeding 20,000 words. Leveraging AgentWrite, we
construct LongWriter-6k, a dataset of 6,000 SFT examples with output
lengths ranging from 2k to 32k words. By incorporating this dataset into model
training, we successfully scale the output length of existing models to over
10,000 words while maintaining output quality. We also develop LongBench-Write,
a comprehensive benchmark for evaluating ultra-long generation capabilities.
Our 9B parameter model, further improved through DPO, achieves state-of-the-art
performance on this benchmark, surpassing even much larger proprietary models.
In general, our work demonstrates that existing long-context LLMs already
possess the potential for a larger output window; all you need is data with
extended outputs during model alignment to unlock this capability. Our code &
models are at: https://github.com/THUDM/LongWriter.
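As a rough illustration of the divide-and-conquer idea behind AgentWrite, the sketch below first plans paragraph-level subtasks and then writes them sequentially. `call_llm` and both prompts are placeholders, not the paper's exact pipeline.

```python
# Minimal sketch of an AgentWrite-style plan-then-write pipeline.
# `call_llm` stands in for any chat-completion API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM client here")

def agent_write(instruction: str, n_paragraphs: int = 20) -> str:
    # Step I: plan -- decompose the task into per-paragraph subtasks
    plan_prompt = (
        f"Break the writing task below into {n_paragraphs} numbered "
        f"paragraph plans, each with a one-line summary and a word budget.\n"
        f"Task: {instruction}"
    )
    plan = call_llm(plan_prompt).splitlines()

    # Step II: write -- generate paragraphs one by one, conditioning on
    # the full plan and everything written so far to stay coherent
    text = ""
    for step in plan:
        if not step.strip():
            continue
        text += call_llm(
            f"Task: {instruction}\nFull plan:\n" + "\n".join(plan) +
            f"\nText so far:\n{text}\n"
            f"Now write only the paragraph for this plan item: {step}"
        ) + "\n\n"
    return text
```

Because each subtask stays within an ordinary output budget, the concatenated result can run far beyond what a single generation call produces.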
By Imagen-Team-Google, Jason Baldridge, Jakob Bauer, Mukul Bhutani, Nicole Brichtova, Andrew Bunner, Kelvin Chan, Yichang Chen, Sander Dieleman, Yuqing Du, Zach Eaton-Rosen, Hongliang Fei, Nando de Freitas, Yilin Gao, Evgeny Gladchenko, Sergio Gómez Colmenarejo, Mandy Guo, Alex Haig, Will Hawkins, Hexiang Hu, Huilian Huang, Tobenna Peter Igwe, Christos Kaplanis, Siavash Khodadadeh, Yelin Kim, Ksenia Konyushkova, Karol Langner, Eric Lau, Shixin Luo, Soňa Mokrá, Henna Nandwani, Yasumasa Onoe, Aäron van den Oord, Zarana Parekh, Jordi Pont-Tuset, Hang Qi, Rui Qian, Deepak Ramachandran, Poorva Rane, Abdullah Rashwan, Ali Razavi, Robert Riachi, Hansa Srinivasan, Srivatsan Srinivasan, Robin Strudel, Benigno Uria, Oliver Wang, Su Wang, Austin Waters, Chris Wolff, Auriel Wright, Zhisheng Xiao, Hao Xiong, Keyang Xu, Marc van Zee, Junlin Zhang, Katie Zhang, Wenlei Zhou, Konrad Zolna, Ola Aboubakar, Canfer Akbulut, Oscar Akerlund, Isabela Albuquerque, Nina Anderson, Marco Andreetto, Lora Aroyo, Ben Bariach, David Barker, Sherry Ben, Dana Berman, Courtney Biles, Irina Blok, Pankil Botadra, Jenny Brennan, Karla Brown, John Buckley, Rudy Bunel, Elie Bursztein, Christina Butterfield, Ben Caine, Viral Carpenter, Norman Casagrande, Ming-Wei Chang, Solomon Chang, Shamik Chaudhuri, Tony Chen, John Choi, Dmitry Churbanau, Nathan Clement, Matan Cohen, Forrester Cole, Mikhail Dektiarev, Vincent Du, Praneet Dutta, Tom Eccles, Ndidi Elue, Ashley Feden, Shlomi Fruchter, Frankie Garcia, Roopal Garg, Weina Ge, Ahmed Ghazy, Bryant Gipson, Andrew Goodman, Dawid Górny, Sven Gowal, Khyatti Gupta, Yoni Halpern, Yena Han, Susan Hao, Jamie Hayes, Amir Hertz, Ed Hirst, Tingbo Hou, Heidi Howard, Mohamed Ibrahim, Dirichi Ike-Njoku, Joana Iljazi, Vlad Ionescu, William Isaac, Reena Jana, Gemma Jennings, Donovon Jenson, Xuhui Jia, Kerry Jones, Xiaoen Ju, Ivana Kajic, Christos Kaplanis, Burcu Karagol Ayan, Jacob Kelly, Suraj Kothawade, Christina Kouridi, Ira Ktena, Jolanda Kumakaw, Dana Kurniawan, Dmitry Lagun, Lily Lavitas, Jason Lee, Tao Li, Marco Liang, Maggie Li-Calis, Yuchi Liu, Javier Lopez Alberca, Peggy Lu, Kristian Lum, Yukun Ma, Chase Malik, John Mellor, Inbar Mosseri, Tom Murray, Aida Nematzadeh, Paul Nicholas, João Gabriel Oliveira, Guillermo Ortiz-Jimenez, Michela Paganini, Tom Le Paine, Roni Paiss, Alicia Parrish, Anne Peckham, Vikas Peswani, Igor Petrovski, Tobias Pfaff, Alex Pirozhenko, Ryan Poplin, Utsav Prabhu, Yuan Qi, Matthew Rahtz, Cyrus Rashtchian, Charvi Rastogi, Amit Raul, Ali Razavi, Sylvestre-Alvise Rebuffi, Susanna Ricco, Felix Riedel, Dirk Robinson, Pankaj Rohatgi, Bill Rosgen, Sarah Rumbley, Moonkyung Ryu, Anthony Salgado, Sahil Singla, Florian Schroff, Candice Schumann, Tanmay Shah, Brendan Shillingford, Kaushik Shivakumar, Dennis Shtatnov, Zach Singer, Evgeny Sluzhaev, Valerii Sokolov, Thibault Sottiaux, Florian Stimberg, Brad Stone, David Stutz, Yu-Chuan Su, Eric Tabellion, Shuai Tang, David Tao, Kurt Thomas, Gregory Thornton, Andeep Toor, Cristian Udrescu, Aayush Upadhyay, Cristina Vasconcelos, Alex Vasiloff, Andrey Voynov, Amanda Walker, Luyu Wang, Miaosen Wang, Simon Wang, Stanley Wang, Qifei Wang, Yuxiao Wang, Ágoston Weisz, Olivia Wiles, Chenxia Wu, Xingyu Federico Xu, Andrew Xue, Jianbo Yang, Luo Yu, Mete Yurtoglu, Ali Zand, Han Zhang, Jiageng Zhang, Catherine Zhao, Adilet Zhaxybay, Miao Zhou, Shengqi Zhu, Zhenkai Zhu, Dawn Bloxwich, Mahyar Bordbar, Luis C.
Cobo, Eli Collins, Shengyang Dai, Tulsee Doshi, Anca Dragan, Douglas Eck, Demis Hassabis, Sissie Hsiao, Tom Hume, Koray Kavukcuoglu, Helen King, Jack Krawczyk, Yeqing Li, Kathy Meier-Hellstern, Andras Orban, Yury Pinsky, Amar Subramanya, Oriol Vinyals, Ting Yu, Yori Zwols
We introduce Imagen 3, a latent diffusion model that generates high-quality
images from text prompts. We describe our quality and responsibility
evaluations. Imagen 3 is preferred over other state-of-the-art (SOTA) models at
the time of evaluation. In addition, we discuss issues around safety and
representation, as well as methods we used to minimize the potential harm of
our models.
Large language model (LLM) agents have shown great potential in solving
real-world software engineering (SWE) problems. The most advanced open-source
SWE agent can resolve over 27% of real GitHub issues in SWE-Bench Lite.
However, these sophisticated agent frameworks exhibit varying strengths,
excelling in certain tasks while underperforming in others. To fully harness
the diversity of these agents, we propose DEI (Diversity Empowered
Intelligence), a framework that leverages their unique expertise. DEI functions
as a meta-module atop existing SWE agent frameworks, managing agent collectives
for enhanced problem-solving. Experimental results show that a DEI-guided
committee of agents is able to surpass the best individual agent's performance
by a large margin. For instance, a group of open-source SWE agents, with a
maximum individual resolve rate of 27.3% on SWE-Bench Lite, can achieve a 34.3%
resolve rate with DEI, a 25% relative improvement that beats most closed-source
solutions. Our best-performing group excels with a 55% resolve rate, securing
the highest ranking on SWE-Bench Lite. Our findings contribute to the growing
body of research on collaborative AI systems and their potential to solve
complex software engineering challenges.
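A minimal sketch of how such a meta-module might operate, assuming hypothetical `Agent.solve` and LLM-based `score_patch` interfaces (the paper's actual committee mechanism may differ):

```python
# DEI-style committee: each member agent proposes a candidate patch for
# an issue, a reviewer model scores the candidates, and the committee
# submits the highest-scoring one.
from dataclasses import dataclass

@dataclass
class Candidate:
    agent_name: str
    patch: str          # unified diff proposed by the agent
    score: float = 0.0  # reviewer-assigned quality score

def score_patch(issue: str, patch: str) -> float:
    """Ask an LLM reviewer how likely the patch resolves the issue."""
    raise NotImplementedError

def dei_committee(issue: str, agents) -> str:
    candidates = [Candidate(a.name, a.solve(issue)) for a in agents]
    for c in candidates:
        c.score = score_patch(issue, c.patch)
    # Exploit diversity: different agents win on different issues, so
    # per-issue selection can beat every individual agent on average.
    return max(candidates, key=lambda c: c.score).patch
```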
The rapid growth of scientific literature poses significant challenges for
researchers striving to stay abreast of the latest advances in their fields
and to explore new areas. We introduce OpenResearcher, an innovative
platform that leverages Artificial Intelligence (AI) techniques to accelerate
the research process by answering diverse questions from researchers.
OpenResearcher is built on Retrieval-Augmented Generation (RAG) to
integrate Large Language Models (LLMs) with up-to-date, domain-specific
knowledge. Moreover, we develop various tools for OpenResearcher to understand
researchers' queries, search the scientific literature, filter retrieved
information, provide accurate and comprehensive answers, and self-refine these
answers. OpenResearcher can flexibly use these tools to balance efficiency and
effectiveness. As a result, OpenResearcher enables researchers to save time and
increase their potential to discover new insights and drive scientific
breakthroughs. Demo, video, and code are available at:
https://github.com/GAIR-NLP/OpenResearcher.
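A schematic of the retrieve-filter-answer-refine loop described above; all helper names are placeholders rather than the platform's real tool APIs:

```python
# Illustrative RAG loop: retrieve passages, filter for relevance,
# draft an answer, then self-refine it.

def call_llm(prompt: str) -> str:
    raise NotImplementedError

def retrieve(query: str, k: int = 10) -> list[str]:
    """Fetch top-k passages from a scientific-literature index."""
    raise NotImplementedError

def answer_question(question: str) -> str:
    passages = retrieve(question)
    # Filter: keep only passages the model judges relevant
    kept = [p for p in passages
            if "yes" in call_llm(f"Is this relevant to '{question}'? "
                                 f"Answer yes/no.\n{p}").lower()]
    draft = call_llm("Answer using only these sources:\n" +
                     "\n---\n".join(kept) + f"\nQuestion: {question}")
    # Self-refine: ask the model to check and improve its own draft
    return call_llm(f"Question: {question}\nDraft answer:\n{draft}\n"
                    "Revise the draft to fix errors and add missing detail.")
```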
By Zihan Qiu, Zeyu Huang, Shuang Cheng, Yizhi Zhou, Zili Wang, Ivan Titov, Jie Fu
The scaling of large language models (LLMs) has revolutionized their
capabilities in various tasks, yet this growth must be matched with efficient
computational strategies. The Mixture-of-Experts (MoE) architecture stands out
for its ability to scale model size without significantly increasing training
costs. Despite their advantages, current MoE models often display parameter
inefficiency. For instance, a pre-trained MoE-based LLM with 52 billion
parameters might perform comparably to a standard model with 6.7 billion
parameters. As a crucial component of MoE, the routers in current models
assign tokens at each layer independently, without leveraging historical
routing information, which can lead to suboptimal token-expert combinations
and the parameter-inefficiency problem. To alleviate this issue, we introduce the Layerwise
Recurrent Router for Mixture-of-Experts (RMoE). RMoE leverages a Gated
Recurrent Unit (GRU) to establish dependencies between routing decisions across
consecutive layers. Such layerwise recurrence can be computed efficiently in
parallel across input tokens and introduces only negligible cost. Our extensive
empirical evaluations demonstrate that RMoE-based language models consistently
outperform a spectrum of baseline models. Furthermore, RMoE integrates a novel
computation stage orthogonal to existing methods, allowing seamless
compatibility with other MoE architectures. Our analyses attribute RMoE's gains
to its effective cross-layer information sharing, which also improves expert
selection and diversity. Our code is at https://github.com/qiuzh20/RMoE.
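A minimal PyTorch sketch of the layerwise recurrent routing idea, with assumed hidden sizes and wiring (consult the released code for the actual implementation):

```python
# Layerwise recurrent router: a GRU carries routing state from layer
# l-1 to layer l, so expert choices can depend on routing history.
import torch
import torch.nn as nn

class RecurrentRouter(nn.Module):
    def __init__(self, d_model: int, d_router: int, n_experts: int):
        super().__init__()
        self.gru = nn.GRUCell(d_model, d_router)    # shared recurrence
        self.gate = nn.Linear(d_router, n_experts)  # per-layer gate

    def forward(self, x, h_prev):
        # x: (tokens, d_model) hidden states entering this MoE layer
        # h_prev: (tokens, d_router) router state from the previous
        #         layer (zeros at the first MoE layer)
        h = self.gru(x, h_prev)           # fold in this layer's input
        weights = self.gate(h).softmax(dim=-1)
        topk = weights.topk(2, dim=-1)    # standard top-k expert routing
        return topk.indices, topk.values, h  # pass h to the next layer
```

Because the recurrence runs across layers rather than across tokens, each layer's GRU step is a single batched operation over all tokens, which is consistent with the negligible extra cost noted above.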
The development of large language models has led to the formation of a
pre-train-then-align paradigm, in which a model is typically pre-trained on a
large text corpus and undergoes a tuning stage to align the model with human
preference or downstream tasks. In this work, we investigate the relationship
between pre-training and fine-tuning by fine-tuning multiple intermediate
pre-trained model checkpoints. Our results on 18 datasets suggest that i)
continual pre-training improves the model in a latent way that is revealed only
after fine-tuning; ii) with extra fine-tuning, the datasets on which the model
shows no capability during pre-training gain much more than those on which the
model already performs well; iii) although the model benefits significantly
from supervised fine-tuning, it may forget previously known domain knowledge
and tasks not seen during fine-tuning; and iv) the model exhibits high
sensitivity to evaluation prompts after supervised fine-tuning, but this
sensitivity can be alleviated by further pre-training.
The ability to distill object-centric abstractions from intricate visual
scenes underpins human-level generalization. Despite the significant progress
in object-centric learning methods, learning object-centric representations in
the 3D physical world remains a crucial challenge. In this work, we propose
SlotLifter, a novel object-centric radiance model addressing scene
reconstruction and decomposition jointly via slot-guided feature lifting. Such
a design unites object-centric learning representations and image-based
rendering methods, offering state-of-the-art performance in scene decomposition
and novel-view synthesis on four challenging synthetic and four complex
real-world datasets, outperforming existing 3D object-centric learning methods
by a large margin. Through extensive ablative studies, we showcase the efficacy
of the designs in SlotLifter, revealing key insights for potential future
directions.
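A very rough sketch of what slot-guided lifting could look like: 2D features sampled at projected 3D points are bound to a set of learned slots via attention, and the resulting assignments can drive per-object volume rendering. This is purely illustrative, not SlotLifter's exact modules.

```python
# Toy slot-guided feature lifting: slots attend over per-point image
# features, yielding a soft assignment of 3D points to objects.
import torch
import torch.nn as nn

class SlotGuidedLifting(nn.Module):
    def __init__(self, d_feat: int = 64, n_slots: int = 7):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(n_slots, d_feat))
        self.attn = nn.MultiheadAttention(d_feat, 1, batch_first=True)

    def forward(self, point_feats):
        # point_feats: (B, N, d) 2D features sampled at projected 3D points
        slots = self.slots.expand(point_feats.shape[0], -1, -1)
        # Slots compete to explain the lifted point features (object binding)
        slot_out, alloc = self.attn(slots, point_feats, point_feats)
        # alloc: (B, n_slots, N) soft point-to-object assignments that a
        # radiance model could reuse to compose per-object densities/colors
        return slot_out, alloc
```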
By Nursena Koprucu, Meher Shashwat Nigam, Shicheng Xu, Biruk Abere, Gabriele Dominici, Andrew Rodriguez, Sharvaree Vadgam, Berfin Inal, Alberto Tono
Inspired by Geoffrey Hinton's emphasis on generative modeling, "To recognize
shapes, first learn to generate them," we explore the use of 3D diffusion models
for object classification. Leveraging the density estimates from these models,
our approach, the Diffusion Classifier for 3D Objects (DC3DO), enables
zero-shot classification of 3D shapes without additional training. On average,
our method achieves a 12.5 percent improvement compared to its multiview
counterparts, demonstrating superior multimodal reasoning over discriminative
approaches. DC3DO employs a class-conditional diffusion model trained on
ShapeNet, and we run inferences on point clouds of chairs and cars. This work
highlights the potential of generative models in 3D object classification.
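The density-based decision rule can be sketched as follows: the predicted class is the condition under which the diffusion model denoises the input best. The `model` interface below (`q_sample`, `eps`, `num_timesteps`) is an assumption for illustration, not DC3DO's actual API.

```python
# Diffusion-based zero-shot classification: accumulate class-conditional
# denoising error and pick the class with the lowest error.
import torch

@torch.no_grad()
def diffusion_classify(model, x0, class_ids, n_trials: int = 32):
    """x0: a 3D shape encoding (e.g., a point-cloud latent)."""
    errors = {c: 0.0 for c in class_ids}
    for _ in range(n_trials):
        t = torch.randint(0, model.num_timesteps, (1,))
        noise = torch.randn_like(x0)
        x_t = model.q_sample(x0, t, noise)     # forward-diffuse the input
        for c in class_ids:
            pred = model.eps(x_t, t, cond=c)   # class-conditional denoiser
            errors[c] += (pred - noise).pow(2).mean().item()
    # Lower accumulated denoising error ~ higher class-conditional density
    return min(errors, key=errors.get)
```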
Large language models (LLMs) have demonstrated prowess in a wide range of
tasks. However, many LLMs exhibit significant performance discrepancies between
high- and low-resource languages. To mitigate this challenge, we present
FuxiTranyu, an open-source multilingual LLM, which is designed to satisfy the
needs of the research community for balanced and high-performing multilingual
capabilities. FuxiTranyu-8B, the base model with 8 billion parameters, is
trained from scratch on a meticulously balanced multilingual data repository
that contains 600 billion tokens covering 43 natural languages and 16
programming languages. In addition to the base model, we also develop two
instruction-tuned models: FuxiTranyu-8B-SFT that is fine-tuned on a diverse
multilingual instruction dataset, and FuxiTranyu-8B-DPO that is further refined
with DPO on a preference dataset for enhanced alignment ability. Extensive
experiments on a wide range of multilingual benchmarks demonstrate the
competitive performance of FuxiTranyu against existing multilingual LLMs, e.g.,
BLOOM-7B, PolyLM-13B, Llama-2-Chat-7B and Mistral-7B-Instruct. Interpretability
analyses at both the neuron and representation levels suggest that FuxiTranyu is
able to learn consistent multilingual representations across different
languages. To promote further research into multilingual LLMs and their working
mechanisms, we release both the base and instruction-tuned FuxiTranyu models
together with 58 pretraining checkpoints on Hugging Face and GitHub.
By Zhengtong Xu, Raghava Uppuluri, Xinwei Zhang, Cael Fitch, Philip Glen Crandall, Wan Shou, Dongyi Wang, Yu She
UniT is a novel approach to tactile representation learning that uses a
VQ-VAE to learn a compact latent space serving as the tactile representation.
The representation is trained on tactile images obtained from a single simple
object, yet it is transferable and generalizable. This tactile representation can be
zero-shot transferred to various downstream tasks, including perception tasks
and manipulation policy learning. Our benchmarking on an in-hand 3D pose
estimation task shows that UniT outperforms existing visual and tactile
representation learning methods. Additionally, UniT's effectiveness in policy
learning is demonstrated across three real-world tasks involving diverse
manipulated objects and complex robot-object-environment interactions. Through
extensive experimentation, UniT is shown to be a simple-to-train,
plug-and-play, yet widely effective method for tactile representation learning.
For more details, please refer to our open-source repository
https://github.com/ZhengtongXu/UniT and the project website
https://zhengtongxu.github.io/unifiedtactile.github.io/.
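A toy sketch of the VQ-VAE encoding step at the core of such a representation; shapes and sizes are illustrative, and training details (commitment loss, straight-through gradients) are omitted:

```python
# Tactile VQ-VAE encoder: encode a tactile image, snap each spatial cell
# to its nearest codebook vector, and use the quantized latent as the
# downstream tactile feature.
import torch
import torch.nn as nn

class TactileVQVAE(nn.Module):
    def __init__(self, n_codes: int = 512, d_code: int = 64):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
                                 nn.Conv2d(64, d_code, 4, 2, 1))
        self.codebook = nn.Embedding(n_codes, d_code)

    def encode(self, img):
        z = self.enc(img)                         # (B, d, H, W)
        z = z.permute(0, 2, 3, 1)                 # (B, H, W, d)
        dist = torch.cdist(z.flatten(0, 2), self.codebook.weight)
        idx = dist.argmin(dim=-1)                 # nearest code per cell
        zq = self.codebook(idx).view_as(z)        # quantized latent
        return zq  # frozen encoder output serves as the tactile feature
```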
Movie screenplay summarization is challenging, as it requires an
understanding of long input contexts and various elements unique to movies.
Large language models have shown significant advancements in document
summarization, but they often struggle with processing long input contexts.
Furthermore, while television transcripts have received attention in recent
studies, movie screenplay summarization remains underexplored. To stimulate
research in this area, we present a new dataset, MovieSum, for abstractive
summarization of movie screenplays. This dataset comprises 2200 movie
screenplays accompanied by their Wikipedia plot summaries. We manually
formatted the movie screenplays to represent their structural elements.
Compared to existing datasets, MovieSum possesses several distinctive features:
(1) It includes movie screenplays, which are longer than scripts of TV
episodes. (2) It is twice the size of previous movie screenplay datasets. (3)
It provides metadata with IMDb IDs to facilitate access to additional external
knowledge. We also report the results of recently released large language
models applied to our dataset to provide detailed baselines.
By Kamyar Zeinalipour, Neda Jamshidi, Monica Bianchini, Marco Maggini, Marco Gori
Pre-trained LLMs have demonstrated substantial capabilities across a range of
conventional natural language processing (NLP) tasks, such as summarization and
entity recognition. In this paper, we explore the application of LLMs in the
generation of high-quality protein sequences. Specifically, we adopt a suite of
pre-trained LLMs, including Mistral-7B, Llama-2-7B, Llama-3-8B, and
Gemma-7B, to produce valid protein sequences. All of these models are publicly
available. Unlike previous work in this field, our approach utilizes a
relatively small dataset comprising 42,000 distinct human protein sequences. We
retrain these models to process protein-related data, ensuring the generation
of biologically feasible protein structures. Our findings demonstrate that even
with limited data, the adapted models exhibit efficiency comparable to
established protein-focused models such as ProGen varieties, ProtGPT2, and
ProLLaMA, which were trained on millions of protein sequences. To validate and
quantify the performance of our models, we conduct comparative analyses
employing standard metrics such as pLDDT, RMSD, TM-score, and REU. Furthermore,
we commit to making the trained versions of all four models publicly available,
fostering greater transparency and collaboration in the field of computational
biology.
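The adaptation recipe amounts to standard causal-LM fine-tuning on raw sequences; here is a hedged sketch using Hugging Face transformers, where the model choice, file name, and hyperparameters are placeholders rather than the paper's settings:

```python
# Fine-tune a pre-trained LLM on protein sequences via next-token
# prediction, treating each sequence as a line of text.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tok.pad_token = tok.eos_token  # Mistral's tokenizer defines no pad token
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# One protein sequence per line, e.g. "MKTAYIAKQRQISFVK..."
data = load_dataset("text", data_files="human_proteins.txt")
train = data["train"].map(
    lambda b: tok(b["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments("protein-llm", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=train,
    # mlm=False yields standard causal-LM (next-token) labels
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```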
By Iretiayo Akinola, Jie Xu, Jan Carius, Dieter Fox, Yashraj Narang
For both humans and robots, the sense of touch, known as tactile sensing, is
critical for performing contact-rich manipulation tasks. Three key challenges
in robotic tactile sensing are 1) interpreting sensor signals, 2) generating
sensor signals in novel scenarios, and 3) learning sensor-based policies. For
visuotactile sensors, interpretation has been facilitated by their close
relationship with vision sensors (e.g., RGB cameras). However, generation is
still difficult, as visuotactile sensors typically involve contact,
deformation, illumination, and imaging, all of which are expensive to simulate;
in turn, policy learning has been challenging, as simulation cannot be
leveraged for large-scale data collection. We present TacSL
(taxel), a library for GPU-based visuotactile sensor simulation and
learning. TacSL can be used to simulate visuotactile images and
extract contact-force distributions over 200 times faster than the prior
state-of-the-art, all within the widely-used Isaac Gym simulator. Furthermore,
TacSL provides a learning toolkit containing multiple sensor models,
contact-intensive training environments, and online/offline algorithms that can
facilitate policy learning for sim-to-real applications. On the algorithmic
side, we introduce a novel online reinforcement-learning algorithm, asymmetric
actor-critic distillation, designed to effectively and
efficiently learn tactile-based policies in simulation that can transfer to the
real world. Finally, we demonstrate the utility of our library and algorithms
by evaluating the benefits of distillation and multimodal sensing for
contact-rich manipulation tasks, and most critically, performing sim-to-real
transfer. Supplementary videos and results are at
https://iakinola23.github.io/tacsl/.
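A rough sketch of the asymmetric distillation step, assuming generic teacher/student networks rather than TacSL's actual classes: a teacher trained with privileged simulator state supervises a student that sees only deployable observations such as tactile images.

```python
# Asymmetric distillation: regress student actions (from deployable
# observations) onto teacher actions (from privileged sim state).
import torch
import torch.nn as nn

def distill_step(teacher, student, opt, batch):
    # batch["priv"]: privileged state available only in simulation
    # batch["obs"]:  tactile/proprioceptive inputs available on the robot
    with torch.no_grad():
        target = teacher(batch["priv"])   # teacher action (RL-trained)
    pred = student(batch["obs"])          # student sees real-world inputs
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```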
Diffusion-based text-to-image generation models have significantly advanced
the field of art content synthesis. However, current portrait stylization
methods generally require either model fine-tuning based on examples or the
employment of DDIM Inversion to revert images to noise space, both of which
substantially decelerate the image generation process. To overcome these
limitations, this paper presents an inversion-free portrait stylization
framework based on diffusion models that accomplishes content and style feature
fusion in merely four sampling steps. We observed that Latent Consistency
Models employing consistency distillation can effectively extract
representative Consistency Features from noisy images. To blend the Consistency
Features extracted from both content and style images, we introduce a Style
Enhancement Attention Control technique that meticulously merges content and
style features within the attention space of the target image. Moreover, we
propose a feature merging strategy to amalgamate redundant features in
Consistency Features, thereby reducing the computational load of attention
control. Extensive experiments have validated the effectiveness of our proposed
framework in enhancing stylization efficiency and fidelity. The code is
available at https://github.com/liujin112/ZePo.
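One generic way to realize such attention-space fusion is to let the target image's queries attend over keys and values gathered from both the content and style branches. The sketch below illustrates that idea only; it is not the exact Style Enhancement Attention Control module.

```python
# Attention-space content/style fusion: concatenate content and style
# tokens so the target draws structure from one and texture from the other.
import torch

def fused_attention(q_tgt, k_content, v_content, k_style, v_style):
    k = torch.cat([k_content, k_style], dim=1)  # (B, Nc+Ns, d)
    v = torch.cat([v_content, v_style], dim=1)
    attn = (q_tgt @ k.transpose(1, 2)) / k.shape[-1] ** 0.5
    return attn.softmax(dim=-1) @ v             # (B, Nt, d)
```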
A general disentanglement-based speaker anonymization system typically
separates speech into content, speaker, and prosody features using individual
encoders. This paper explores how to adapt such a system when a new speech
attribute, for example, emotion, needs to be preserved to a greater extent.
While existing systems are good at anonymizing speaker embeddings, they are not
designed to preserve emotion. Two strategies for this are examined. First, we
show that integrating emotion embeddings from a pre-trained emotion encoder can
help preserve emotional cues, even though this approach slightly compromises
privacy protection. Alternatively, we propose an emotion compensation strategy
as a post-processing step applied to anonymized speaker embeddings. This
conceals the original speaker's identity and reintroduces the emotional traits
lost during speaker embedding anonymization. Specifically, we model the emotion
attribute using support vector machines to learn separate boundaries for each
emotion. During inference, the original speaker embedding is processed in two
ways: first, by an emotion indicator that predicts the emotion and selects the
matching SVM; and second, by a speaker anonymizer that conceals speaker
characteristics. The anonymized speaker embedding is then shifted along the
corresponding SVM boundary in an emotion-enhancing direction to restore the
emotional cues. The proposed strategies are also expected to be useful
for adapting a general disentanglement-based speaker anonymization system to
preserve other target paralinguistic attributes, with potential for a range of
downstream tasks.
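A minimal sketch of the compensation step under stated assumptions: linear one-vs-rest SVMs model each emotion, and the anonymized embedding is nudged along the chosen boundary's normal vector. Function names and the step size are illustrative.

```python
# Per-emotion linear SVMs; compensation moves the anonymized embedding
# toward the emotion-positive side of the matching decision boundary.
import numpy as np
from sklearn.svm import LinearSVC

def fit_emotion_svms(embeddings, labels, emotions):
    # embeddings: (n, d) speaker embeddings; labels: (n,) emotion strings
    svms = {}
    for emo in emotions:  # one-vs-rest boundary per emotion
        y = (labels == emo).astype(int)
        svms[emo] = LinearSVC().fit(embeddings, y)
    return svms

def compensate(anon_emb, predicted_emotion, svms, step: float = 0.5):
    w = svms[predicted_emotion].coef_[0]  # hyperplane normal
    # Shift along the normal to reintroduce the lost emotional traits
    return anon_emb + step * w / np.linalg.norm(w)
```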