Inference with Transformer-based Large Language Models (LLMs) on long
sequences is both costly and slow due to the quadratic complexity of the
self-attention mechanism. We introduce Star Attention, a two-phase block-sparse
approximation that improves computational efficiency by sharding attention
across multiple hosts while minimizing communication overhead. In the first
phase, the context is processed using blockwise-local attention across hosts,
in parallel. In the second phase, query and response tokens attend to all prior
cached tokens through sequence-global attention. Star Attention integrates
seamlessly with most Transformer-based LLMs trained with global attention,
reducing memory requirements and inference time by up to 11x while preserving
95-100% of accuracy.
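To make the two phases concrete, here is a minimal single-device sketch in PyTorch, assuming pre-computed query/key/value projections; the block size, tensor names, and the omission of multi-host sharding are my simplifications, not the paper's implementation.

```python
# Hedged sketch of two-phase block-sparse attention (single device).
import torch
import torch.nn.functional as F

def star_attention_sketch(q_ctx, k_ctx, v_ctx, q_query, block=1024):
    """q_ctx/k_ctx/v_ctx: [B, H, T, D] context projections; q_query: [B, H, Tq, D]."""
    outs, k_cache, v_cache = [], [], []
    T = q_ctx.shape[2]
    # Phase 1: blockwise-local attention over the context (run in parallel across hosts in the paper).
    for s in range(0, T, block):
        q_b, k_b, v_b = (x[:, :, s:s + block] for x in (q_ctx, k_ctx, v_ctx))
        outs.append(F.scaled_dot_product_attention(q_b, k_b, v_b, is_causal=True))
        k_cache.append(k_b)
        v_cache.append(v_b)
    ctx_out = torch.cat(outs, dim=2)
    # Phase 2: query/response tokens attend globally to all cached context tokens.
    k_all, v_all = torch.cat(k_cache, dim=2), torch.cat(v_cache, dim=2)
    query_out = F.scaled_dot_product_attention(q_query, k_all, v_all)
    return ctx_out, query_out
```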
This paper presents a critical examination of current approaches to
replicating OpenAI's O1 model capabilities, with particular focus on the
widespread but often undisclosed use of knowledge distillation techniques.
While our previous work explored the fundamental technical path to O1
replication, this study reveals how simple distillation from O1's API, combined
with supervised fine-tuning, can achieve superior performance on complex
mathematical reasoning tasks. Through extensive experiments, we show that a
base model fine-tuned on merely tens of thousands of samples of O1-distilled
long-thought chains outperforms O1-preview on the American Invitational
Mathematics Examination (AIME) with minimal technical complexity. Moreover, our
investigation extends beyond mathematical reasoning to explore the
generalization capabilities of O1-distilled models across diverse tasks:
hallucination, safety and open-domain QA. Notably, despite training only on
mathematical problem-solving data, our models demonstrated strong
generalization to open-ended QA tasks and became significantly less susceptible
to sycophancy after fine-tuning. We deliberately make this finding public to
promote transparency in AI research and to challenge the current trend of
obscured technical claims in the field. Our work includes: (1) A detailed
technical exposition of the distillation process and its effectiveness, (2) A
comprehensive benchmark framework for evaluating and categorizing O1
replication attempts based on their technical transparency and reproducibility,
(3) A critical discussion of the limitations and potential risks of
over-relying on distillation approaches. Our analysis culminates in a crucial
bitter lesson: while the pursuit of more capable AI systems is important, the
development of researchers grounded in first-principles thinking is paramount.
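As a rough illustration of the distill-then-SFT recipe described above, the sketch below collects long reasoning traces from a teacher API and writes them as chat-style fine-tuning records; `query_teacher_api` and the JSONL format are hypothetical placeholders rather than the paper's pipeline.

```python
# Hedged sketch: build a supervised fine-tuning dataset from distilled long-thought chains.
import json

def build_sft_record(problem: str, query_teacher_api) -> dict:
    long_thought = query_teacher_api(problem)  # distilled long-thought chain (hypothetical call)
    return {
        "messages": [
            {"role": "user", "content": problem},
            {"role": "assistant", "content": long_thought},
        ]
    }

def write_sft_dataset(problems, query_teacher_api, path="distilled_sft.jsonl"):
    with open(path, "w") as f:
        for p in problems:
            f.write(json.dumps(build_sft_record(p, query_teacher_api)) + "\n")
```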
We present Material Anything, a fully-automated, unified diffusion framework
designed to generate physically-based materials for 3D objects. Unlike existing
methods that rely on complex pipelines or case-specific optimizations, Material
Anything offers a robust, end-to-end solution adaptable to objects under
diverse lighting conditions. Our approach leverages a pre-trained image
diffusion model, enhanced with a triple-head architecture and rendering loss to
improve stability and material quality. Additionally, we introduce confidence
masks as a dynamic switcher within the diffusion model, enabling it to
effectively handle both textured and texture-less objects across varying
lighting conditions. By employing a progressive material generation strategy
guided by these confidence masks, along with a UV-space material refiner, our
method ensures consistent, UV-ready material outputs. Extensive experiments
demonstrate that our approach outperforms existing methods across a wide range of
object categories and lighting conditions.
By Dawei Li, Bohan Jiang, Liangjie Huang, Alimohammad Beigi, Chengshuai Zhao, Zhen Tan, Amrita Bhattacharjee, Yuxuan Jiang, Canyu Chen, Tianhao Wu, Kai Shu, Lu Cheng, Huan Liu
Assessment and evaluation have long been critical challenges in artificial
intelligence (AI) and natural language processing (NLP). However, traditional
methods, whether matching-based or embedding-based, often fall short of judging
subtle attributes and delivering satisfactory results. Recent advancements in
Large Language Models (LLMs) inspire the "LLM-as-a-judge" paradigm, where LLMs
are leveraged to perform scoring, ranking, or selection across various tasks
and applications. This paper provides a comprehensive survey of LLM-based
judgment and assessment, offering an in-depth overview to advance this emerging
field. We begin by giving detailed definitions from both input and output
perspectives. Then we introduce a comprehensive taxonomy to explore
LLM-as-a-judge from three dimensions: what to judge, how to judge and where to
judge. Finally, we compile benchmarks for evaluating LLM-as-a-judge and
highlight key challenges and promising directions, aiming to provide valuable
insights and inspire future research in this promising research area. Paper
list and more resources about LLM-as-a-judge can be found at
https://github.com/llm-as-a-judge/Awesome-LLM-as-a-judge and
https://llm-as-a-judge.github.io.
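For a concrete flavor of the scoring setting surveyed here, the following minimal sketch asks a judge model for a 1-10 score and parses it; the prompt wording and the `call_llm` stub are hypothetical and not taken from the survey.

```python
# Hedged sketch of pointwise LLM-as-a-judge scoring; `call_llm` is a stand-in
# for whatever chat-completion client you use.
import re

JUDGE_PROMPT = (
    "You are an impartial judge. Rate the response to the question on a 1-10 "
    "scale for helpfulness and correctness.\n"
    "Question: {question}\nResponse: {response}\n"
    "Reply with 'Score: <number>' and a one-sentence justification."
)

def judge_score(question: str, response: str, call_llm) -> int:
    """Ask the judge model for a score and parse it; returns -1 if unparsable."""
    reply = call_llm(JUDGE_PROMPT.format(question=question, response=response))
    match = re.search(r"Score:\s*(\d+)", reply)
    return int(match.group(1)) if match else -1
```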
By Tianbin Li, Yanzhou Su, Wei Li, Bin Fu, Zhe Chen, Ziyan Huang, Guoan Wang, Chenglong Ma, Ying Chen, Ming Hu, Yanjun Li, Pengcheng Chen, Xiaowei Hu, Zhongying Deng, Yuanfeng Ji, Jin Ye, Yu Qiao, Junjun He
Despite significant advancements in general artificial intelligence models such as
GPT-4, their effectiveness in the medical domain (general medical AI, GMAI)
remains constrained due to the absence of specialized medical knowledge. To
address this challenge, we present GMAI-VL-5.5M, a comprehensive multimodal
medical dataset created by converting hundreds of specialized medical datasets
into meticulously constructed image-text pairs. This dataset features
comprehensive task coverage, diverse modalities, and high-quality image-text
data. Building upon this multimodal dataset, we propose GMAI-VL, a general
medical vision-language model trained with a progressive three-stage strategy.
This approach significantly enhances the model's integration of visual and
textual information, thereby improving its ability to
process multimodal data and support accurate diagnosis and clinical
decision-making. Experimental evaluations demonstrate that GMAI-VL achieves
state-of-the-art results across a wide range of multimodal medical tasks, such
as visual question answering and medical image diagnosis. Our contributions
include the development of the GMAI-VL-5.5M dataset, the introduction of the
GMAI-VL model, and the establishment of new benchmarks in multiple medical
domains. Code and dataset will be released at
https://github.com/uni-medical/GMAI-VL.
By Chaehun Shin, Jooyoung Choi, Heeseung Kim, Sungroh Yoon
Subject-driven text-to-image generation aims to produce images of a new
subject within a desired context by accurately capturing both the visual
characteristics of the subject and the semantic content of a text prompt.
Traditional methods rely on time- and resource-intensive fine-tuning for
subject alignment, while recent zero-shot approaches leverage on-the-fly image
prompting, often sacrificing subject alignment. In this paper, we introduce
Diptych Prompting, a novel zero-shot approach that reinterprets subject-driven generation as an
inpainting task with precise subject alignment by leveraging the emergent
property of diptych generation in large-scale text-to-image models. Diptych
Prompting arranges an incomplete diptych with the reference image in the left
panel, and performs text-conditioned inpainting on the right panel. We further
prevent unwanted content leakage by removing the background in the reference
image and improve fine-grained details in the generated subject by enhancing
attention weights between the panels during inpainting. Experimental results
confirm that our approach significantly outperforms zero-shot image prompting
methods, resulting in images that are visually preferred by users.
Additionally, our method supports not only subject-driven generation but also
stylized image generation and subject-driven image editing, demonstrating
versatility across diverse image generation applications. Project page:
https://diptychprompting.github.io/
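To illustrate the diptych arrangement, here is a hedged sketch that builds the two-panel canvas and the right-panel inpainting mask; the panel size is arbitrary, and the paper's background removal and attention re-weighting steps are omitted.

```python
# Hedged sketch: reference image on the left panel, inpainting mask over the right panel.
from PIL import Image

def make_diptych_inputs(reference: Image.Image, panel_size=(512, 512)):
    w, h = panel_size
    canvas = Image.new("RGB", (2 * w, h), "white")
    canvas.paste(reference.resize((w, h)), (0, 0))   # left panel: reference subject
    mask = Image.new("L", (2 * w, h), 0)
    mask.paste(255, (w, 0, 2 * w, h))                # right panel: region to inpaint
    return canvas, mask  # feed to a text-conditioned inpainting pipeline
```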
By Yoel Zimmermann, Adib Bazgir, Zartashia Afzal, Fariha Agbere, Qianxiang Ai, Nawaf Alampara, Alexander Al-Feghali, Mehrad Ansari, Dmytro Antypov, Amro Aswad, Jiaru Bai, Viktoriia Baibakova, Devi Dutta Biswajeet, Erik Bitzek, Joshua D. Bocarsly, Anna Borisova, Andres M Bran, L. Catherine Brinson, Marcel Moran Calderon, Alessandro Canalicchio, Victor Chen, Yuan Chiang, Defne Circi, Benjamin Charmes, Vikrant Chaudhary, Zizhang Chen, Min-Hsueh Chiu, Judith Clymo, Kedar Dabhadkar, Nathan Daelman, Archit Datar, Matthew L. Evans, Maryam Ghazizade Fard, Giuseppe Fisicaro, Abhijeet Sadashiv Gangan, Janine George, Jose D. Cojal Gonzalez, Michael Götte, Ankur K. Gupta, Hassan Harb, Pengyu Hong, Abdelrahman Ibrahim, Ahmed Ilyas, Alishba Imran, Kevin Ishimwe, Ramsey Issa, Kevin Maik Jablonka, Colin Jones, Tyler R. Josephson, Greg Juhasz, Sarthak Kapoor, Rongda Kang, Ghazal Khalighinejad, Sartaaj Khan, Sascha Klawohn, Suneel Kuman, Alvin Noe Ladines, Sarom Leang, Magdalena Lederbauer, Sheng-Lun Mark Liao, Hao Liu, Xuefeng Liu, Stanley Lo, Sandeep Madireddy, Piyush Ranjan Maharana, Shagun Maheshwari, Soroush Mahjoubi, José A. Márquez, Rob Mills, Trupti Mohanty, Bernadette Mohr, Seyed Mohamad Moosavi, Alexander Moßhammer, Amirhossein D. Naghdi, Aakash Naik, Oleksandr Narykov, Hampus Näsström, Xuan Vu Nguyen, Xinyi Ni, Dana O'Connor, Teslim Olayiwola, Federico Ottomano, Aleyna Beste Ozhan, Sebastian Pagel, Chiku Parida, Jaehee Park, Vraj Patel, Elena Patyukova, Martin Hoffmann Petersen, Luis Pinto, José M. Pizarro, Dieter Plessers, Tapashree Pradhan, Utkarsh Pratiush, Charishma Puli, Andrew Qin, Mahyar Rajabi, Francesco Ricci, Elliot Risch, Martiño Ríos-García, Aritra Roy, Tehseen Rug, Hasan M Sayeed, Markus Scheidgen, Mara Schilling-Wilhelmi, Marcel Schloz, Fabian Schöppach, Julia Schumann, Philippe Schwaller, Marcus Schwarting, Samiha Sharlin, Kevin Shen, Jiale Shi, Pradip Si, Jennifer D'Souza, Taylor Sparks, Suraj Sudhakar, Leopold Talirz, Dandan Tang, Olga Taran, Carla Terboven, Mark Tropin, Anastasiia Tsymbal, Katharina Ueltzen, Pablo Andres Unzueta, Archit Vasan, Tirtha Vinchurkar, Trung Vo, Gabriel Vogel, Christoph Völker, Jan Weinreich, Faradawn Yang, Mohd Zaki, Chi Zhang, Sylvester Zhang, Weijie Zhang, Ruijie Zhu, Shang Zhu, Jan Janssen, Ian Foster, Ben Blaiszik
Here, we present the outcomes from the second Large Language Model (LLM)
Hackathon for Applications in Materials Science and Chemistry, which engaged
participants across global hybrid locations, resulting in 34 team submissions.
The submissions spanned seven key application areas and demonstrated the
diverse utility of LLMs for applications in (1) molecular and material property
prediction; (2) molecular and material design; (3) automation and novel
interfaces; (4) scientific communication and education; (5) research data
management and automation; (6) hypothesis generation and evaluation; and (7)
knowledge extraction and reasoning from scientific literature. Each team
submission is presented in a summary table with links to the code and as brief
papers in the appendix. Beyond team results, we discuss the hackathon event and
its hybrid format, which included physical hubs in Toronto, Montreal, San
Francisco, Berlin, Lausanne, and Tokyo, alongside a global online hub to enable
local and virtual collaboration. Overall, the event highlighted significant
improvements in LLM capabilities since the previous year's hackathon,
suggesting continued expansion of LLMs for applications in materials science
and chemistry research. These outcomes demonstrate the dual utility of LLMs as
both multipurpose models for diverse machine learning tasks and as platforms for
rapidly prototyping custom applications in scientific research.
By Duong H. Le, Tuan Pham, Sangho Lee, Christopher Clark, Aniruddha Kembhavi, Stephan Mandt, Ranjay Krishna, Jiasen Lu
We introduce OneDiffusion, a versatile, large-scale diffusion model that
seamlessly supports bidirectional image synthesis and understanding across
diverse tasks. It enables conditional generation from inputs such as text,
depth, pose, layout, and semantic maps, while also handling tasks like image
deblurring, upscaling, and reverse processes such as depth estimation and
segmentation. Additionally, OneDiffusion allows for multi-view generation,
camera pose estimation, and instant personalization using sequential image
inputs. Our model takes a straightforward yet effective approach by treating
all tasks as frame sequences with varying noise scales during training,
allowing any frame to act as a conditioning image at inference time. Our
unified training framework removes the need for specialized architectures,
supports scalable multi-task training, and adapts smoothly to any resolution,
enhancing both generalization and scalability. Experimental results demonstrate
competitive performance across tasks in both generation and prediction such as
text-to-image, multiview generation, ID preservation, depth estimation and
camera pose estimation, despite a relatively small training dataset. Our code and
checkpoint are freely available at https://github.com/lehduong/OneDiffusion
Multi-Head Mixture-of-Experts (MH-MoE) demonstrates superior performance by
using the multi-head mechanism to collectively attend to information from
various representation spaces within different experts. In this paper, we
present a novel implementation of MH-MoE that maintains both FLOPs and
parameter parity with sparse Mixture of Experts models. Experimental results on
language models show that the new implementation yields quality improvements
over both vanilla MoE and fine-grained MoE models. Additionally, our
experiments demonstrate that MH-MoE is compatible with 1-bit Large Language
Models (LLMs) such as BitNet.
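As a simplified illustration of routing per-head sub-tokens to experts, here is a hedged sketch; the dimensions, top-1 routing, and expert MLPs are illustrative and do not reproduce the paper's FLOPs- and parameter-matched implementation.

```python
# Hedged sketch of a multi-head-routed MoE layer.
import torch
import torch.nn as nn

class TinyMHMoE(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_experts=8):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.router = nn.Linear(self.d_head, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(self.d_head, 2 * self.d_head), nn.GELU(),
                          nn.Linear(2 * self.d_head, self.d_head))
            for _ in range(n_experts))
        self.merge = nn.Linear(d_model, d_model)

    def forward(self, x):                                   # x: [batch, seq, d_model]
        b, t, d = x.shape
        h = x.view(b, t, self.n_heads, self.d_head)         # split each token into heads
        idx = self.router(h).argmax(-1)                     # top-1 expert per head
        out = torch.zeros_like(h)
        for e, expert in enumerate(self.experts):
            sel = idx == e
            if sel.any():
                out[sel] = expert(h[sel])                   # route selected sub-tokens
        return self.merge(out.view(b, t, d))                # merge heads back
```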
By Junlong Cheng, Bin Fu, Jin Ye, Guoan Wang, Tianbin Li, Haoyu Wang, Ruoyu Li, He Yao, Junren Chen, JingWen Li, Yanzhou Su, Min Zhu, Junjun He
Interactive Medical Image Segmentation (IMIS) has long been constrained by
the limited availability of large-scale, diverse, and densely annotated
datasets, which hinders model generalization and consistent evaluation across
different models. In this paper, we introduce the IMed-361M benchmark dataset,
a significant advancement in general IMIS research. First, we collect and
standardize over 6.4 million medical images and their corresponding ground
truth masks from multiple data sources. Then, leveraging the strong object
recognition capabilities of a vision foundation model, we automatically
generate dense interactive masks for each image and ensure their quality
through rigorous quality control and granularity management. Unlike previous
datasets, which are limited by specific modalities or sparse annotations,
IMed-361M spans 14 modalities and 204 segmentation targets, totaling 361
million masks, an average of 56 masks per image. Finally, we develop an IMIS
baseline network on this dataset that supports high-quality mask generation
through interactive inputs, including clicks, bounding boxes, text prompts, and
their combinations. We evaluate its performance on medical image segmentation
tasks from multiple perspectives, demonstrating superior accuracy and
scalability compared to existing interactive segmentation models. To facilitate
research on foundational models in medical computer vision, we release the
IMed-361M and model at https://github.com/uni-medical/IMIS-Bench.
By Jin Ye, Ying Chen, Yanjun Li, Haoyu Wang, Zhongying Deng, Ziyan Huang, Yanzhou Su, Chenglong Ma, Yuanfeng Ji, Junjun He
Computed Tomography (CT) is one of the most popular modalities for medical
imaging. To date, CT images have contributed to the largest publicly available
datasets for volumetric medical segmentation tasks, covering full-body
anatomical structures. Large amounts of full-body CT images provide the
opportunity to pre-train powerful models, e.g., STU-Net pre-trained in a
supervised fashion, to segment numerous anatomical structures. However, it
remains unclear under which conditions these pre-trained models can be
transferred to various downstream medical segmentation tasks, particularly for
segmenting other modalities and diverse targets. To address this problem, a large-scale
benchmark for comprehensive evaluation is crucial for finding these conditions.
Thus, we collected 87 public datasets varying in modality, target, and sample
size to evaluate the transfer ability of full-body CT pre-trained models. We
then employed a representative model, STU-Net with multiple model scales, to
conduct transfer learning across modalities and targets. Our experimental
results show that (1) there may be a bottleneck effect concerning the dataset
size during fine-tuning, with larger improvements on both small- and large-scale
datasets than on medium-sized ones. (2) Models pre-trained on full-body CT
demonstrate effective modality transfer, adapting well to other modalities such
as MRI. (3) Pre-training on the full-body CT not only supports strong
performance in structure detection but also shows efficacy in lesion detection,
showcasing adaptability across target tasks. We hope that this large-scale open
evaluation of transfer learning can direct future research in volumetric
medical image segmentation.
Visual tokenizers are fundamental to image generation. They convert visual
data into discrete tokens, enabling transformer-based models to excel at image
generation. Despite their success, VQ-based tokenizers like VQGAN face
significant limitations due to constrained vocabulary sizes. Simply expanding
the codebook often leads to training instability and diminishing performance
gains, making scalability a critical challenge. In this work, we introduce
Factorized Quantization (FQ), a novel approach that revitalizes VQ-based
tokenizers by decomposing a large codebook into multiple independent
sub-codebooks. This factorization reduces the lookup complexity of large
codebooks, enabling more efficient and scalable visual tokenization. To ensure
each sub-codebook captures distinct and complementary information, we propose a
disentanglement regularization that explicitly reduces redundancy, promoting
diversity across the sub-codebooks. Furthermore, we integrate representation
learning into the training process, leveraging pretrained vision models like
CLIP and DINO to infuse semantic richness into the learned representations.
This design ensures our tokenizer captures diverse semantic levels, leading to
more expressive and disentangled representations. Experiments show that the
proposed FQGAN model substantially improves the reconstruction quality of
visual tokenizers, achieving state-of-the-art performance. We further
demonstrate that this tokenizer can be effectively adapted to auto-regressive
image generation. https://showlab.github.io/FQGAN
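A hedged sketch of the factorization idea: split each latent into chunks and quantize each chunk against its own sub-codebook. The disentanglement regularization and representation-learning objectives are not shown, and all sizes are illustrative.

```python
# Hedged sketch of factorized quantization with independent sub-codebooks.
import torch
import torch.nn as nn

class FactorizedQuantizer(nn.Module):
    def __init__(self, dim=256, n_sub=2, codes_per_sub=8192):
        super().__init__()
        assert dim % n_sub == 0
        self.n_sub, self.d_sub = n_sub, dim // n_sub
        self.codebooks = nn.ParameterList(
            nn.Parameter(torch.randn(codes_per_sub, self.d_sub) * 0.02)
            for _ in range(n_sub))

    def forward(self, z):                                   # z: [N, dim] flattened latents
        quantized, indices = [], []
        for chunk, book in zip(z.chunk(self.n_sub, dim=-1), self.codebooks):
            idx = torch.cdist(chunk, book).argmin(dim=-1)   # nearest code per chunk
            quantized.append(book[idx])
            indices.append(idx)
        return torch.cat(quantized, dim=-1), torch.stack(indices, dim=-1)
```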
By Zun Wang, Jialu Li, Han Lin, Jaehong Yoon, Mohit Bansal
Storytelling video generation (SVG) has recently emerged as a task to create
long, multi-motion, multi-scene videos that consistently represent the story
described in the input text script. SVG holds great potential for diverse
content creation in media and entertainment; however, it also presents
significant challenges: (1) objects must exhibit a range of fine-grained,
complex motions, (2) multiple objects need to appear consistently across
scenes, and (3) subjects may require multiple motions with seamless transitions
within a single scene. To address these challenges, we propose DreamRunner, a
novel story-to-video generation method: First, we structure the input script
using a large language model (LLM) to facilitate both coarse-grained scene
planning and fine-grained object-level layout and motion planning. Next,
DreamRunner presents retrieval-augmented test-time adaptation to capture target
motion priors for objects in each scene, supporting diverse motion
customization based on retrieved videos, thus facilitating the generation of
new videos with complex, scripted motions. Lastly, we propose a novel
spatial-temporal region-based 3D attention and prior injection module SR3AI for
fine-grained object-motion binding and frame-by-frame semantic control. We
compare DreamRunner with various SVG baselines, demonstrating state-of-the-art
performance in character consistency, text alignment, and smooth transitions.
Additionally, DreamRunner exhibits strong fine-grained condition-following
ability in compositional text-to-video generation, significantly outperforming
baselines on T2V-CompBench. Finally, we validate DreamRunner's robust ability to
generate multi-object interactions with qualitative examples.
AdamW has been the default optimizer for transformer pretraining. For many
years, our community has searched for faster and more stable optimizers, with only
constrained positive outcomes. In this work, we propose a single-line
modification in PyTorch to any momentum-based optimizer, which we rename the
Cautious Optimizer, e.g., C-AdamW and C-Lion. Our theoretical result shows that
this modification preserves Adam's Hamiltonian function and does not break
the convergence guarantee under the Lyapunov analysis. In addition, a whole new
family of optimizers is revealed by our theoretical insight. Among them, we
pick the simplest one for empirical experiments, showing speed-ups of up to
1.47x on Llama and MAE pretraining. Code is available at
https://github.com/kyleliang919/C-Optim
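As I read the abstract, the single-line change masks out update coordinates whose sign disagrees with the current gradient before the parameter step. The sketch below is a hedged reconstruction of that idea; the rescaling detail may differ from the released C-AdamW/C-Lion code.

```python
# Hedged sketch of a "cautious" masked parameter step for a generic momentum-style update.
import torch

def cautious_step(p: torch.Tensor, update: torch.Tensor, grad: torch.Tensor, lr: float):
    mask = (update * grad > 0).to(update.dtype)         # keep coordinates that agree in sign
    mask = mask * (mask.numel() / (mask.sum() + 1e-8))  # rescale so the step magnitude is preserved (my assumption)
    p.data.add_(update * mask, alpha=-lr)               # apply the masked update
```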
By Wang Bill Zhu, Deqing Fu, Kai Sun, Yi Lu, Zhaojiang Lin, Seungwhan Moon, Kanika Narang, Mustafa Canim, Yue Liu, Anuj Kumar, Xin Luna Dong
We hypothesize that a user's visual history, with images reflecting their
daily life, offers valuable insights into their interests and preferences, and
can be leveraged for personalization. Among the many challenges in achieving this
goal, the foremost is the diversity and noise in the visual history, which
contains images not necessarily related to a recommendation task, not
necessarily reflecting the user's interest, or even not necessarily
preference-relevant. Existing recommendation systems either rely on
task-specific user interaction logs, such as online shopping history for
shopping recommendations, or focus on text signals. We propose a novel
approach, VisualLens, that extracts, filters, and refines image
representations, and leverages these signals for personalization. We created
two new benchmarks with task-agnostic visual histories, and show that our
method improves over state-of-the-art recommendations by 5-10% on Hit@3, and
improves over GPT-4o by 2-5%. Our approach paves the way for personalized
recommendations in scenarios where traditional methods fail.
While high-quality texture maps are essential for realistic 3D asset
rendering, few studies have explored learning directly in the texture space,
especially on large-scale datasets. In this work, we depart from the
conventional approach of relying on pre-trained 2D diffusion models for
test-time optimization of 3D textures. Instead, we focus on the fundamental
problem of learning in the UV texture space itself. For the first time, we
train a large diffusion model capable of directly generating high-resolution
texture maps in a feed-forward manner. To facilitate efficient learning in
high-resolution UV spaces, we propose a scalable network architecture that
interleaves convolutions on UV maps with attention layers on point clouds.
Leveraging this architectural design, we train a 700 million parameter
diffusion model that can generate UV texture maps guided by text prompts and
single-view images. Once trained, our model naturally supports various extended
applications, including text-guided texture inpainting, sparse-view texture
completion, and text-driven texture synthesis. Project page is at
http://cvmi-lab.github.io/TEXGen/.
By Carlo Alberto Barbano, Luca Molinaro, Emanuele Aiello, Marco Grangetto
We present a way to learn novel concepts using only their textual
description. We call this method Knowledge Transfer. Similarly to human
perception, we leverage cross-modal interaction to introduce new concepts. We
hypothesize that in a pre-trained visual encoder there are enough low-level
features already learned (e.g. shape, appearance, color) that can be used to
describe previously unknown high-level concepts. Provided with a textual
description of the novel concept, our method works by aligning the known
low-level features of the visual encoder to its high-level textual description.
We show that Knowledge Transfer can successfully introduce novel concepts in
multimodal models in a very efficient manner, requiring only a single
description of the target concept. Our approach is compatible with both
separate textual and visual encoders (e.g. CLIP) and shared parameters across
modalities. We also show that, following the same principle, Knowledge Transfer
can improve concepts already known by the model. Leveraging Knowledge Transfer
we improve zero-shot performance across different tasks such as classification,
segmentation, image-text retrieval, and captioning.
The transition from x86 to ARM architecture is becoming increasingly common
across various domains, primarily driven by ARM's energy efficiency and
improved performance across traditional sectors. However, this ISA shift poses
significant challenges, mainly due to the extensive legacy ecosystem of x86
software and lack of portability across proprietary ecosystems and software
stacks. This paper introduces CRT, a lightweight LLM-based transpiler that
automatically converts x86 assembly to ARM assembly. Our approach bridges the
fundamental architectural gap between x86's CISC-based and ARM's RISC-based
computing paradigms while preserving program semantics and optimizing
performance. We evaluate CRT on diverse real-world applications, achieving
79.25% translation accuracy from x86 to ARMv5 on our comprehensive test suite,
and an 88.68% accuracy from x86 to RISC-V. In practical deployments on Apple M2
hardware (ARMv8), our transpiled code achieves a 1.73x speedup compared to
Apple's Rosetta 2 virtualization engine, while delivering 2.41x memory
efficiency and 1.47x better energy consumption. Through testing and
analysis, we show that CRT successfully navigates the CISC/RISC divide and
generates correctly executable RISC code despite machine "language" barriers.
We release our code, models, training datasets, and benchmarks at:
https://ahmedheakl.github.io/asm2asm/.
By Hyojun Go, Byeongjun Park, Jiho Jang, Jin-Young Kim, Soonwoo Kwon, Changick Kim
Text-based generation and editing of 3D scenes hold significant potential for
streamlining content creation through intuitive user interactions. While recent
advances leverage 3D Gaussian Splatting (3DGS) for high-fidelity and real-time
rendering, existing methods are often specialized and task-focused, lacking a
unified framework for both generation and editing. In this paper, we introduce
SplatFlow, a comprehensive framework that addresses this gap by enabling direct
3DGS generation and editing. SplatFlow comprises two main components: a
multi-view rectified flow (RF) model and a Gaussian Splatting Decoder
(GSDecoder). The multi-view RF model operates in latent space, generating
multi-view images, depths, and camera poses simultaneously, conditioned on text
prompts, thus addressing challenges like diverse scene scales and complex
camera trajectories in real-world settings. Then, the GSDecoder efficiently
translates these latent outputs into 3DGS representations through a
feed-forward 3DGS method. Leveraging training-free inversion and inpainting
techniques, SplatFlow enables seamless 3DGS editing and supports a broad range
of 3D tasks, including object editing, novel view synthesis, and camera pose
estimation, within a unified framework without requiring additional complex
pipelines. We validate SplatFlow's capabilities on the MVImgNet and DL3DV-7K
datasets, demonstrating its versatility and effectiveness in various 3D
generation, editing, and inpainting-based tasks.
By Ashmal Vayani, Dinura Dissanayake, Hasindri Watawana, Noor Ahsan, Nevasini Sasikumar, Omkar Thawakar, Henok Biadglign Ademtew, Yahya Hmaiti, Amandeep Kumar, Kartik Kuckreja, Mykola Maslych, Wafa Al Ghallabi, Mihail Mihaylov, Chao Qin, Abdelrahman M Shaker, Mike Zhang, Mahardika Krisna Ihsani, Amiel Esplana, Monil Gokani, Shachar Mirkin, Harsh Singh, Ashay Srivastava, Endre Hamerlik, Fathinah Asma Izzati, Fadillah Adamsyah Maani, Sebastian Cavada, Jenny Chim, Rohit Gupta, Sanjay Manjunath, Kamila Zhumakhanova, Feno Heriniaina Rabevohitra, Azril Amirudin, Muhammad Ridzuan, Daniya Kareem, Ketan More, Kunyang Li, Pramesh Shakya, Muhammad Saad, Amirpouya Ghasemaghaei, Amirbek Djanibekov, Dilshod Azizov, Branislava Jankovic, Naman Bhatia, Alvaro Cabrera, Johan Obando-Ceron, Olympiah Otieno, Fabian Farestam, Muztoba Rabbani, Sanoojan Baliah, Santosh Sanjeev, Abduragim Shtanchaev, Maheen Fatima, Thao Nguyen, Amrin Kareem, Toluwani Aremu, Nathan Xavier, Amit Bhatkal, Hawau Toyin, Aman Chadha, Hisham Cholakkal, Rao Muhammad Anwer, Michael Felsberg, Jorma Laaksonen, Thamar Solorio, Monojit Choudhury, Ivan Laptev, Mubarak Shah, Salman Khan, Fahad Khan
Existing Large Multimodal Models (LMMs) generally focus on only a few regions
and languages. As LMMs continue to improve, it is increasingly important to
ensure they understand cultural contexts, respect local sensitivities, and
support low-resource languages, all while effectively integrating corresponding
visual cues. In pursuit of culturally diverse global multimodal models, our
proposed All Languages Matter Benchmark (ALM-bench) represents the largest and
most comprehensive effort to date for evaluating LMMs across 100 languages.
ALM-bench challenges existing models by testing their ability to understand and
reason about culturally diverse images paired with text in various languages,
including many low-resource languages traditionally underrepresented in LMM
research. The benchmark offers a robust and nuanced evaluation framework
featuring various question formats, including true/false, multiple choice, and
open-ended questions, which are further divided into short and long-answer
categories. The ALM-bench design ensures a comprehensive assessment of a model's
ability to handle varied levels of difficulty in visual and linguistic
reasoning. To capture the rich tapestry of global cultures, ALM-bench carefully
curates content from 13 distinct cultural aspects, ranging from traditions and
rituals to famous personalities and celebrations. Through this, ALM-bench not
only provides a rigorous testing ground for state-of-the-art open and
closed-source LMMs but also highlights the importance of cultural and
linguistic inclusivity, encouraging the development of models that can serve
diverse global populations effectively. Our benchmark is publicly available.
It is well known that Chain-of-Thought (CoT) prompting can remarkably enhance LLMs'
performance on complex tasks. However, because it also introduces slower
inference and higher computational costs, many studies have attempted to use
implicit CoT, which does not require LLMs to explicitly generate the
intermediate steps. Yet a gap remains between the efficacy of implicit CoT and
that of typical explicit CoT methods, which raises the question: is implicit
CoT really equivalent to explicit CoT? In this study, we address this question
through experiments. We probe the information of intermediate steps from the
model's hidden states when it is performing implicit CoT. The results
surprisingly indicate that LLMs hardly think about intermediate steps,
suggesting they may just rely on experience rather than strict step-by-step
reasoning. Moreover, we find that LLMs' implicit reasoning capabilities are
fragile and unstable, reaffirming the necessity of explicit CoT to
effectively support complex tasks.
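A hedged sketch of the probing setup in spirit: fit a simple classifier on a layer's hidden states to predict an intermediate-step value. The arrays below are random placeholders standing in for extracted hidden states and labels, not data from the paper.

```python
# Hedged sketch of a linear probe over hidden states for an intermediate-step value.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

hidden_states = np.random.randn(2000, 768)     # [examples, hidden_dim], placeholder for extracted states
labels = np.random.randint(0, 10, size=2000)   # intermediate-step value per example, placeholder

X_tr, X_te, y_tr, y_te = train_test_split(hidden_states, labels, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("probe accuracy on intermediate step:", probe.score(X_te, y_te))
```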
Modern sequence models (e.g., Transformers, linear RNNs, etc.) emerged as
dominant backbones of recent deep learning frameworks, mainly due to their
efficiency, representational power, and/or ability to capture long-range
dependencies. Adopting these sequence models for graph-structured data has
recently gained popularity as the alternative to Message Passing Neural
Networks (MPNNs). There is, however, a lack of a common foundation about what
constitutes a good graph sequence model, and a mathematical description of the
benefits and deficiencies in adopting different sequence models for learning on
graphs. To this end, we first present Graph Sequence Model (GSM), a unifying
framework for adopting sequence models for graphs, consisting of three main
steps: (1) Tokenization, which translates the graph into a set of sequences;
(2) Local Encoding, which encodes local neighborhoods around each node; and (3)
Global Encoding, which employs a scalable sequence model to capture long-range
dependencies within the sequences. This framework allows us to understand,
evaluate, and compare the power of different sequence model backbones in graph
tasks. Our theoretical evaluations of the representation power of Transformers
and modern recurrent models through the lens of global and local graph tasks
show that there are both negative and positive sides for both types of models.
Building on this observation, we present GSM++, a fast hybrid model that uses
the Hierarchical Affinity Clustering (HAC) algorithm to tokenize the graph into
hierarchical sequences, and then employs a hybrid Transformer-based architecture
to encode these sequences. Our theoretical and experimental results support the
design of GSM++, showing that GSM++ outperforms baselines in most benchmark
evaluations.
This research introduces a novel evaluation framework designed to assess
large language models' (LLMs) ability to acknowledge uncertainty on 675
fundamentally unsolvable problems. Using a curated dataset of graduate-level
grand challenge questions with intentionally unknowable answers, we evaluated
twelve state-of-the-art LLMs, including both open and closed-source models, on
their propensity to admit ignorance rather than generate plausible but
incorrect responses. The best models achieved 62-68% accuracy in admitting that
a problem's solution was unknown, across fields ranging from biology to
philosophy and mathematics. We observed an inverse relationship between problem
difficulty and model accuracy, with GPT-4 demonstrating higher rates of
uncertainty acknowledgment on more challenging problems (35.8%) compared to
simpler ones (20.0%). This pattern indicates that models may be more prone to
generating speculative answers when problems appear more tractable. The study
also revealed significant variations across problem categories, with models
showing difficulty in acknowledging uncertainty in invention and NP-hard
problems while performing relatively better on philosophical and psychological
challenges. These results contribute to the growing body of research on
artificial general intelligence (AGI) assessment by highlighting the importance
of uncertainty recognition as a critical component of future machine
intelligence evaluation. This impossibility test thus extends previous
theoretical frameworks for universal intelligence testing by providing
empirical evidence of current limitations in LLMs' ability to recognize their
own knowledge boundaries, suggesting new directions for improving model
training architectures and evaluation approaches.
By Charlie Snell, Eric Wallace, Dan Klein, Sergey Levine
A fundamental open challenge in modern LLM scaling is the lack of
understanding around emergent capabilities. In particular, language model
pretraining loss is known to be highly predictable as a function of compute.
However, downstream capabilities are far less predictable -- sometimes even
exhibiting emergent jumps -- which makes it challenging to anticipate the
capabilities of future models. In this work, we first pose the task of
emergence prediction: given access to current LLMs that have random few-shot
accuracy on a task, can we predict whether future models (GPT-N+1) will have
non-trivial accuracy on that task? We then discover a simple insight for this
problem: finetuning LLMs on a given task can shift the point in scaling at
which emergence occurs towards less capable models. To operationalize this
insight, we can finetune LLMs with varying amounts of data and fit a parametric
function that predicts when emergence will occur (i.e., "emergence laws"). We
validate this approach using four standard NLP benchmarks where large-scale
open-source LLMs already demonstrate emergence (MMLU, GSM8K, CommonsenseQA, and
CoLA). Using only small-scale LLMs, we find that, in some cases, we can
accurately predict whether models trained with up to 4x more compute have
emerged. Finally, we present a case study of two realistic uses for emergence
prediction.
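To illustrate fitting an "emergence law", the sketch below fits a generic sigmoid of log-compute to a handful of accuracies and extrapolates to a larger budget; the functional form and all numbers are hypothetical placeholders, not the paper's parameterization or results.

```python
# Hedged sketch: fit a sigmoid of log-compute to task accuracy and extrapolate.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_c, lo, hi, mid, scale):
    return lo + (hi - lo) / (1.0 + np.exp(-(log_c - mid) / scale))

compute = np.array([1e19, 3e19, 1e20, 3e20, 1e21])   # hypothetical FLOP budgets
accuracy = np.array([0.25, 0.26, 0.31, 0.45, 0.62])  # hypothetical task accuracies
params, _ = curve_fit(sigmoid, np.log10(compute), accuracy,
                      p0=[0.25, 0.9, 20.5, 0.5], maxfev=10000)
print("predicted accuracy at 4x more compute:",
      sigmoid(np.log10(4 * compute[-1]), *params))
```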
We study open-world part segmentation in 3D: segmenting any part in any
object based on any text query. Prior methods are limited in object categories
and part vocabularies. Recent advances in AI have demonstrated effective
open-world recognition capabilities in 2D. Inspired by this progress, we
propose an open-world, direct-prediction model for 3D part segmentation that
can be applied zero-shot to any object. Our approach, called Find3D, trains a
general-category point embedding model on large-scale 3D assets from the
internet without any human annotation. It combines a data engine, powered by
foundation models for annotating data, with a contrastive training method. We
achieve strong performance and generalization across multiple datasets, with up
to a 3x improvement in mIoU over the next best method. Our model is 6x to over
300x faster than existing baselines. To encourage research in general-category
open-world 3D part segmentation, we also release a benchmark for general
objects and parts. Project website: https://ziqi-ma.github.io/find3dsite/
By Yicheng Yang, Pengxiang Li, Lu Zhang, Liqian Ma, Ping Hu, Siyu Du, Yunzhi Zhuge, Xu Jia, Huchuan Lu
Subject-driven image inpainting has emerged as a popular task in image
editing alongside recent advancements in diffusion models. Previous methods
primarily focus on identity preservation but struggle to maintain the
editability of inserted objects. In response, this paper introduces DreamMix, a
diffusion-based generative model adept at inserting target objects into given
scenes at user-specified locations while concurrently enabling arbitrary
text-driven modifications to their attributes. In particular, we leverage
advanced foundational inpainting models and introduce a disentangled
local-global inpainting framework to balance precise local object insertion
with effective global visual coherence. Additionally, we propose an Attribute
Decoupling Mechanism (ADM) and a Textual Attribute Substitution (TAS) module to
improve the diversity and discriminative capability of the text-based attribute
guidance, respectively. Extensive experiments demonstrate that DreamMix
effectively balances identity preservation and attribute editability across
various application scenarios, including object insertion, attribute editing,
and small object inpainting. Our code is publicly available at
https://github.com/mycfhs/DreamMix.
Category-Agnostic Pose Estimation (CAPE) localizes keypoints across diverse
object categories with a single model, using one or a few annotated support
images. Recent works have shown that using a pose graph (i.e., treating
keypoints as nodes in a graph rather than isolated points) helps handle
occlusions and break symmetry. However, these methods assume a static pose
graph with equal-weight edges, leading to suboptimal results. We introduce
EdgeCape, a novel framework that overcomes these limitations by predicting the
graph's edge weights, thereby improving localization. To further leverage
structural priors, we propose integrating Markovian Structural Bias, which
modulates the self-attention interaction between nodes based on the number of
hops between them. We show that this improves the model's ability to capture
global spatial dependencies. Evaluated on the MP-100 benchmark, which includes
100 categories and over 20K images, EdgeCape achieves state-of-the-art results
in the 1-shot setting and leads among similar-sized methods in the 5-shot
setting, significantly improving keypoint localization accuracy. Our code is
publicly available.
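A hedged sketch of a hop-count attention bias in the spirit of the Markovian Structural Bias described above: attention logits between keypoint nodes are shifted by a learned per-hop-distance bias. The parameterization here is illustrative and may differ from EdgeCape's.

```python
# Hedged sketch: self-attention over keypoint nodes with a learned hop-distance bias.
import torch
import torch.nn as nn

class HopBiasAttention(nn.Module):
    def __init__(self, dim=128, max_hops=8):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.hop_bias = nn.Embedding(max_hops + 1, 1)   # one scalar bias per hop count

    def forward(self, x, hops):
        # x: [B, N, dim] node features; hops: [B, N, N] integer hop distances from BFS over the pose graph
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        clamped = hops.clamp(max=self.hop_bias.num_embeddings - 1)
        logits = logits + self.hop_bias(clamped).squeeze(-1)   # modulate by hop distance
        return torch.softmax(logits, dim=-1) @ v
```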