Long Phan, Alice Gatti, Ziwen Han, Nathaniel Li, Josephina Hu, Hugh Zhang, Sean Shi, Michael Choi, Anish Agrawal, Arnav Chopra, Adam Khoja, Ryan Kim, Jason Hausenloy, Oliver Zhang, Mantas Mazeika, Daron Anderson, Tung Nguyen, Mobeen Mahmood, Fiona Feng, Steven Y. Feng, Haoran Zhao, Michael Yu, Varun Gangal, Chelsea Zou, Zihan Wang, Jessica P. Wang, Pawan Kumar, Oleksandr Pokutnyi, Robert Gerbicz, Serguei Popov, John-Clark Levin, Mstyslav Kazakov, Johannes Schmitt, Geoff Galgon, Alvaro Sanchez, Yongki Lee, Will Yeadon, Scott Sauers, Marc Roth, Chidozie Agu, Søren Riis, Fabian Giska, Saiteja Utpala, Zachary Giboney, Gashaw M. Goshu, Joan of Arc Xavier, Sarah-Jane Crowson, Mohinder Maheshbhai Naiya, Noah Burns, Lennart Finke, Zerui Cheng, Hyunwoo Park, Francesco Fournier-Facio, John Wydallis, Mark Nandor, Ankit Singh, Tim Gehrunger, Jiaqi Cai, Ben McCarty, Darling Duclosel, Jungbae Nam, Jennifer Zampese, Ryan G. Hoerr, Aras Bacho, Gautier Abou Loume, Abdallah Galal, Hangrui Cao, Alexis C Garretson, Damien Sileo, Qiuyu Ren, Doru Cojoc, Pavel Arkhipov, Usman Qazi, Lianghui Li, Sumeet Motwani, Christian Schroeder de Witt, Edwin Taylor, Johannes Veith, Eric Singer, Taylor D. Hartman, Paolo Rissone, Jaehyeok Jin, Jack Wei Lun Shi, Chris G. Willcocks, Joshua Robinson, Aleksandar Mikov, Ameya Prabhu, Longke Tang, Xavier Alapont, Justine Leon Uro, Kevin Zhou, Emily de Oliveira Santos, Andrey Pupasov Maksimov, Edward Vendrow, Kengo Zenitani, Julien Guillod, Yuqi Li, Joshua Vendrow, Vladyslav Kuchkin, Ng Ze-An, Pierre Marion, Denis Efremov, Jayson Lynch, Kaiqu Liang, Andrew Gritsevskiy, Dakotah Martinez, Ben Pageler, Nick Crispino, Dimitri Zvonkine, Natanael Wildner Fraga, Saeed Soori, Ori Press, Henry Tang, Julian Salazar, Sean R. Green, Lina Brüssel, Moon Twayana, Aymeric Dieuleveut, T. 
Ryan Rogers, Wenjin Zhang, Bikun Li, Jinzhou Yang, Arun Rao, Gabriel Loiseau, Mikhail Kalinin, Marco Lukas, Ciprian Manolescu, Subrata Mishra, Ariel Ghislain Kemogne Kamdoum, Tobias Kreiman, Tad Hogg, Alvin Jin, Carlo Bosio, Gongbo Sun, Brian P Coppola, Tim Tarver, Haline Heidinger, Rafael Sayous, Stefan Ivanov, Joseph M Cavanagh, Jiawei Shen, Joseph Marvin Imperial, Philippe Schwaller, Shaipranesh Senthilkuma, Andres M Bran, Ali Dehghan, Andres Algaba, Brecht Verbeken, David Noever, Ragavendran P V, Lisa Schut, Ilia Sucholutsky, Evgenii Zheltonozhskii, Derek Lim, Richard Stanley, Shankar Sivarajan, Tong Yang, John Maar, Julian Wykowski, Martí Oller, Jennifer Sandlin, Anmol Sahu, Yuzheng Hu, Sara Fish, Nasser Heydari, Archimedes Apronti, Kaivalya Rawal, Tobias Garcia Vilchis, Yuexuan Zu, Martin Lackner, James Koppel, Jeremy Nguyen, Daniil S. Antonenko, Steffi Chern, Bingchen Zhao, Pierrot Arsene, Alan Goldfarb, Sergey Ivanov, Rafał Poświata, Chenguang Wang, Daofeng Li, Donato Crisostomi, Andrea Achilleos, Benjamin Myklebust, Archan Sen, David Perrella, Nurdin Kaparov, Mark H Inlow, Allen Zang, Elliott Thornley, Daniil Orel, Vladislav Poritski, Shalev Ben-David, Zachary Berger, Parker Whitfill, Michael Foster, Daniel Munro, Linh Ho, Dan Bar Hava, Aleksey Kuchkin, Robert Lauff, David Holmes, Frank Sommerhage, Keith Schneider, Zakayo Kazibwe, Nate Stambaugh, Mukhwinder Singh, Ilias Magoulas, Don Clarke, Dae Hyun Kim, Felipe Meneguitti Dias, Veit Elser, Kanu Priya Agarwal, Victor Efren Guadarrama Vilchis, Immo Klose, Christoph Demian, Ujjwala Anantheswaran, Adam Zweiger, Guglielmo Albani, Jeffery Li, Nicolas Daans, Maksim Radionov, Václav Rozhoň, Ziqiao Ma, Christian Stump, Mohammed Berkani, Jacob Platnick, Volodymyr Nevirkovets, Luke Basler, Marco Piccardo, Ferenc Jeanplong, Niv Cohen, Josef Tkadlec, Paul Rosu, Piotr Padlewski, Stanislaw Barzowski, Kyle Montgomery, Aline Menezes, Arkil Patel, Zixuan Wang, Jamie Tucker-Foltz, Jack Stade, Tom Goertzen, Fereshteh Kazemi, 
Jeremiah Milbauer, John Arnold Ambay, Abhishek Shukla, Yan Carlos Leyva Labrador, Alan Givré, Hew Wolff, Vivien Rossbach, Muhammad Fayez Aziz, Younesse Kaddar, Yanxu Chen, Robin Zhang, Jiayi Pan, Antonio Terpin, Niklas Muennighoff, Hailey Schoelkopf, Eric Zheng, Avishy Carmi, Adam Jones, Jainam Shah, Ethan D. L. Brown, Kelin Zhu, Max Bartolo, Richard Wheeler, Andrew Ho, Shaul Barkan, Jiaqi Wang, Martin Stehberger, Egor Kretov, Kaustubh Sridhar, Zienab EL-Wasif, Anji Zhang, Daniel Pyda, Joanna Tam, David M. Cunningham, Vladimir Goryachev, Demosthenes Patramanis, Michael Krause, Andrew Redenti, Daniel Bugas, David Aldous, Jesyin Lai, Shannon Coleman, Mohsen Bahaloo, Jiangnan Xu, Sangwon Lee, Sandy Zhao, Ning Tang, Michael K. Cohen, Micah Carroll, Orr Paradise, Jan Hendrik Kirchner, Stefan Steinerberger, Maksym Ovchynnikov, Jason O. Matos, Adithya Shenoy, Benedito Alves de Oliveira Junior, Michael Wang, Yuzhou Nie, Paolo Giordano, Philipp Petersen, Anna Sztyber-Betley, Priti Shukla, Jonathan Crozier, Antonella Pinto, Shreyas Verma, Prashant Joshi, Zheng-Xin Yong, Allison Tee, Jérémy Andréoletti, Orion Weller, Raghav Singhal, Gang Zhang, Alexander Ivanov, Seri Khoury, Hamid Mostaghimi, Kunvar Thaman, Qijia Chen, Tran Quoc Khánh, Jacob Loader, Stefano Cavalleri, Hannah Szlyk, Zachary Brown, Jonathan Roberts, William Alley, Kunyang Sun, Ryan Stendall, Max Lamparth, Anka Reuel, Ting Wang, Hanmeng Xu, Sreenivas Goud Raparthi, Pablo Hernández-Cámara, Freddie Martin, Dmitry Malishev, Thomas Preu, Tomek Korbak, Marcus Abramovitch, Dominic Williamson, Ziye Chen, Biró Bálint, M Saiful Bari, Peyman Kassani, Zihao Wang, Behzad Ansarinejad, Laxman Prasad Goswami, Yewen Sun, Hossam Elgnainy, Daniel Tordera, George Balabanian, Earth Anderson, Lynna Kvistad, Alejandro José Moyano, Rajat Maheshwari, Ahmad Sakor, Murat Eron, Isaac C. McAlister, Javier Gimenez, Innocent Enyekwe, Andrew Favre D. 
O., Shailesh Shah, Xiaoxiang Zhou, Firuz Kamalov, Ronald Clark, Sherwin Abdoli, Tim Santens, Khalida Meer, Harrison K Wang, Kalyan Ramakrishnan, Evan Chen, Alessandro Tomasiello, G. Bruno De Luca, Shi-Zhuo Looi, Vinh-Kha Le, Noam Kolt, Niels Mündler, Avi Semler, Emma Rodman, Jacob Drori, Carl J Fossum, Milind Jagota, Ronak Pradeep, Honglu Fan, Tej Shah, Jonathan Eicher, Michael Chen, Kushal Thaman, William Merrill, Carter Harris, Jason Gross, Ilya Gusev, Asankhaya Sharma, Shashank Agnihotri, Pavel Zhelnov, Siranut Usawasutsakorn, Mohammadreza Mofayezi, Sergei Bogdanov, Alexander Piperski, Marc Carauleanu, David K. Zhang, Dylan Ler, Roman Leventov, Ignat Soroko, Thorben Jansen, Pascal Lauer, Joshua Duersch, Vage Taamazyan, Wiktor Morak, Wenjie Ma, William Held, Tran Đuc Huy, Ruicheng Xian, Armel Randy Zebaze, Mohanad Mohamed, Julian Noah Leser, Michelle X Yuan, Laila Yacar, Johannes Lengler, Hossein Shahrtash, Edson Oliveira, Joseph W. Jackson, Daniel Espinosa Gonzalez, Andy Zou, Muthu Chidambaram, Timothy Manik, Hector Haffenden, Dashiell Stander, Ali Dasouqi, Alexander Shen, Emilien Duc, Bita Golshani, David Stap, Mikalai Uzhou, Alina Borisovna Zhidkovskaya, Lukas Lewark, Mátyás Vincze, Dustin Wehr, Colin Tang, Zaki Hossain, Shaun Phillips, Jiang Muzhen, Fredrik Ekström, Angela Hammon, Oam Patel, Nicolas Remy, Faraz Farhidi, George Medley, Forough Mohammadzadeh, Madellene Peñaflor, Haile Kassahun, Alena Friedrich, Claire Sparrow, Taom Sakal, Omkar Dhamane, Ali Khajegili Mirabadi, Eric Hallman, Mike Battaglia, Mohammad Maghsoudimehrabani, Hieu Hoang, Alon Amit, Dave Hulbert, Roberto Pereira, Simon Weber, Stephen Mensah, Nathan Andre, Anton Peristyy, Chris Harjadi, Himanshu Gupta, Stephen Malina, Samuel Albanie, Will Cai, Mustafa Mehkary, Frank Reidegeld, Anna-Katharina Dick, Cary Friday, Jasdeep Sidhu, Wanyoung Kim, Mariana Costa, Hubeyb Gurdogan, Brian Weber, Harsh Kumar, Tong Jiang, Arunim Agarwal, Chiara Ceconello, Warren S. 
Vaz, Chao Zhuang, Haon Park, Andrew R. Tawfeek, Daattavya Aggarwal, Michael Kirchhof, Linjie Dai, Evan Kim, Johan Ferret, Yuzhou Wang, Minghao Yan, Krzysztof Burdzy, Lixin Zhang, Antonio Franca, Diana T. Pham, Kang Yong Loh, Joshua Robinson, Shreen Gul, Gunjan Chhablani, Zhehang Du, Adrian Cosma, Colin White, Robin Riblet, Prajvi Saxena, Jacob Votava, Vladimir Vinnikov, Ethan Delaney, Shiv Halasyamani, Syed M. Shahid, Jean-Christophe Mourrat, Lavr Vetoshkin, Renas Bacho, Vincent Ginis, Aleksandr Maksapetyan, Florencia de la Rosa, Xiuyu Li, Guillaume Malod, Leon Lang, Julien Laurendeau, Fatimah Adesanya, Julien Portier, Lawrence Hollom, Victor Souza, Yuchen Anna Zhou, Yiğit Yalın, Gbenga Daniel Obikoya, Luca Arnaboldi, Rai, Filippo Bigi, Kaniuar Bacho, Pierre Clavier, Gabriel Recchia, Mara Popescu, Nikita Shulga, Ngefor Mildred Tanwie, Thomas C. H. Lux, Ben Rank, Colin Ni, Alesia Yakimchyk, Huanxu Liu, Olle Häggström, Emil Verkama, Himanshu Narayan, Hans Gundlach, Leonor Brito-Santana, Brian Amaro, Vivek Vajipey, Rynaa Grover, Yiyang Fan, Gabriel Poesia Reis e Silva, Linwei Xin, Yosi Kratish, Jakub Łucki, Wen-Ding Li, Justin Xu, Kevin Joseph Scaria, Freddie Vargus, Farzad Habibi, Long Lian, Emanuele Rodolà, Jules Robins, Vincent Cheng, Declan Grabb, Ida Bosio, Tony Fruhauff, Ido Akov, Eve J. Y. Lo, Hao Qi, Xi Jiang, Ben Segev, Jingxuan Fan, Sarah Martinson, Erik Y. Wang, Kaylie Hausknecht, Michael P. Brenner, Mao Mao, Yibo Jiang, Xinyu Zhang, David Avagian, Eshawn Jessica Scipio, Muhammad Rehan Siddiqi, Alon Ragoler, Justin Tan, Deepakkumar Patil, Rebeka Plecnik, Aaron Kirtland, Roselynn Grace Montecillo, Stephane Durand, Omer Faruk Bodur, Zahra Adoul, Mohamed Zekry, Guillaume Douville, Ali Karakoc, Tania C. B.
Santos, Samir Shamseldeen, Loukmane Karim, Anna Liakhovitskaia, Nate Resman, Nicholas Farina, Juan Carlos Gonzalez, Gabe Maayan, Sarah Hoback, Rodrigo De Oliveira Pena, Glen Sherman, Hodjat Mariji, Rasoul Pouriamanesh, Wentao Wu, Gözdenur Demir, Sandra Mendoza, Ismail Alarab, Joshua Cole, Danyelle Ferreira, Bryan Johnson, Hsiaoyun Milliron, Mohammad Safdari, Liangti Dai, Siriphan Arthornthurasuk, Alexey Pronin, Jing Fan, Angel Ramirez-Trinidad, Ashley Cartwright, Daphiny Pottmaier, Omid Taheri, David Outevsky, Stanley Stepanic, Samuel Perry, Luke Askew, Raúl Adrián Huerta Rodríguez, Abdelkader Dendane, Sam Ali, Ricardo Lorena, Krishnamurthy Iyer, Sk Md Salauddin, Murat Islam, Juan Gonzalez, Josh Ducey, Russell Campbell, Maja Somrak, Vasilios Mavroudis, Eric Vergo, Juehang Qin, Benjámin Borbás, Eric Chu, Jack Lindsey, Anil Radhakrishnan, Antoine Jallon, I. M. J. McInnis, Alex Hoover, Sören Möller, Song Bian, John Lai, Tejal Patwardhan, Summer Yue, Alexandr Wang, Dan Hendrycks
Benchmarks are important tools for tracking the rapid advancements in large
language model (LLM) capabilities. However, benchmarks are not keeping pace in
difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like
MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In
response, we introduce Humanity's Last Exam (HLE), a multi-modal benchmark at
the frontier of human knowledge, designed to be the final closed-ended academic
benchmark of its kind with broad subject coverage. HLE consists of 3,000
questions across dozens of subjects, including mathematics, humanities, and the
natural sciences. HLE is developed globally by subject-matter experts and
consists of multiple-choice and short-answer questions suitable for automated
grading. Each question has a known solution that is unambiguous and easily
verifiable, but cannot be quickly answered via internet retrieval.
State-of-the-art LLMs demonstrate low accuracy and poor calibration on HLE,
highlighting a significant gap between current LLM capabilities and the expert
human frontier on closed-ended academic questions. To inform research and
policymaking upon a clear understanding of model capabilities, we publicly
release HLE at https://lastexam.ai.
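The calibration result above can be made concrete with a standard binned calibration metric. The sketch below is illustrative only; the bin count and RMS weighting are assumptions, not HLE's exact protocol:

```python
import numpy as np

def rms_calibration_error(confidences, correct, bins=10):
    """RMS gap between mean confidence and accuracy inside equal-width
    confidence bins; a common way to quantify miscalibration."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, bins + 1)
    total_sq = 0.0
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        in_bin = (confidences >= lo) & (confidences < hi)
        if i == bins - 1:                       # include 1.0 in the last bin
            in_bin |= confidences == 1.0
        if in_bin.any():
            gap = confidences[in_bin].mean() - correct[in_bin].mean()
            total_sq += (in_bin.sum() / len(confidences)) * gap ** 2
    return float(np.sqrt(total_sq))
```

A model that reports 55% confidence but is right only half the time gets an error of 0.05 under this metric; a maximally overconfident model scores close to 1.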
This paper introduces an approach for training o1-like RAG models that
retrieve and reason over relevant information step by step before generating
the final answer. Conventional RAG methods usually perform a single retrieval
step before the generation process, which limits their effectiveness in
addressing complex queries due to imperfect retrieval results. In contrast, our
proposed method, CoRAG (Chain-of-Retrieval Augmented Generation), allows the
model to dynamically reformulate the query based on the evolving state. To
train CoRAG effectively, we utilize rejection sampling to automatically
generate intermediate retrieval chains, thereby augmenting existing RAG
datasets that only provide the correct final answer. At test time, we propose
various decoding strategies to scale the model's test-time compute by
controlling the length and number of sampled retrieval chains. Experimental
results across multiple benchmarks validate the efficacy of CoRAG, particularly
in multi-hop question answering tasks, where we observe improvements of more
than 10 points in EM score compared to strong baselines. On the KILT benchmark,
CoRAG establishes a new state-of-the-art performance across a diverse range of
knowledge-intensive tasks. Furthermore, we offer comprehensive analyses to
understand the scaling behavior of CoRAG, laying the groundwork for future
research aimed at developing factual and grounded foundation models.
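The chain-of-retrieval idea above can be sketched as a retrieve-then-reformulate loop. Everything below is a toy stand-in (a word-overlap retriever and caller-supplied `reformulate`/`answer` callables), not CoRAG's trained models:

```python
def retrieve(query, corpus):
    # Toy retriever: pick the document sharing the most words with the query.
    q = set(query.lower().split())
    return max(corpus, key=lambda d: len(q & set(d.lower().split())))

def chain_of_retrieval(query, corpus, reformulate, answer, max_hops=4):
    """Alternate retrieval and query reformulation until the model answers,
    keeping the evolving evidence chain as state."""
    evidence = []
    for _ in range(max_hops):
        evidence.append(retrieve(query, corpus))
        result = answer(query, evidence)
        if result is not None:                  # confident enough to stop
            return result, evidence
        query = reformulate(query, evidence)    # next sub-query in the chain
    return None, evidence
```

The `max_hops` cap corresponds to the test-time knob mentioned above: lengthening the allowed chain trades compute for a better chance of resolving multi-hop queries.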
Critiques are important for enhancing the performance of Large Language
Models (LLMs), enabling both self-improvement and constructive feedback for
others by identifying flaws and suggesting improvements. However, evaluating
the critique capabilities of LLMs presents a significant challenge due to the
open-ended nature of the task. In this work, we introduce a new benchmark
designed to assess the critique capabilities of LLMs. Unlike existing
benchmarks, which typically function in an open-loop fashion, our approach
employs a closed-loop methodology that evaluates the quality of corrections
generated from critiques. Moreover, the benchmark incorporates features such as
self-critique, cross-critique, and iterative critique, which are crucial for
distinguishing the abilities of advanced reasoning models from more classical
ones. We implement this benchmark using eight challenging reasoning tasks. We
have several interesting findings. First, despite demonstrating comparable
performance in direct chain-of-thought generation, classical LLMs significantly
lag behind the advanced reasoning-based model o1-mini across all critique
scenarios. Second, in self-critique and iterative critique settings, classical
LLMs may even underperform relative to their baseline capabilities. We hope
that this benchmark will serve as a valuable resource to guide future
advancements. The code and data are available at
https://github.com/tangzhy/RealCritic.
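The closed-loop methodology can be illustrated with a minimal harness that grades the correction a critique induces rather than the critique text itself; `model` and `check` are hypothetical callables, not the released benchmark code:

```python
def closed_loop_score(model, problems, check):
    """Fraction of problems whose critique-driven correction is right:
    the critique is graded by the correction it produces, not directly."""
    correct = 0
    for prob in problems:
        draft = model("solve", prob, None)
        critique = model("critique", prob, draft)            # self-critique
        revised = model("correct", prob, (draft, critique))  # apply it
        correct += bool(check(prob, revised))
    return correct / len(problems)
```

Swapping in a second model for the "critique" call gives the cross-critique setting, and feeding the revised answer back as a new draft gives the iterative setting.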
With the rapid iteration of Multi-modality Large Language Models (MLLMs) and
the evolving demands of the field, the number of benchmarks produced annually
has surged into the hundreds. The rapid growth has inevitably led to
significant redundancy among benchmarks. Therefore, it is crucial to take a
step back and critically assess the current state of redundancy and propose
targeted principles for constructing effective MLLM benchmarks. In this paper,
we focus on redundancy from three key perspectives: 1) Redundancy of benchmark
capability dimensions, 2) Redundancy in the number of test questions, and 3)
Cross-benchmark redundancy within specific domains. Through a comprehensive
analysis of hundreds of MLLMs' performance across more than 20 benchmarks, we
aim to quantitatively measure the level of redundancy in existing MLLM
evaluations, provide valuable insights to guide the future development of MLLM
benchmarks, and offer strategies to refine and address redundancy issues
effectively.
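One simple way to quantify cross-benchmark redundancy, shown here purely for illustration, is the rank correlation between two benchmarks' orderings of the same model pool (this sketch assumes untied scores):

```python
import numpy as np

def rank_correlation(scores_a, scores_b):
    """Spearman rank correlation between two benchmarks' scores for the
    same pool of models (assumes no tied scores)."""
    ra = np.argsort(np.argsort(scores_a)).astype(float)
    rb = np.argsort(np.argsort(scores_b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))
```

A correlation near 1 means the second benchmark ranks models almost identically to the first and therefore adds little discriminative information.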
What if artificial intelligence could not only solve problems for which it
was trained but also learn to teach itself to solve new problems (i.e.,
meta-learn)? In this study, we demonstrate that a pre-trained transformer
fine-tuned with reinforcement learning over multiple episodes develops the
ability to solve problems that it has never encountered before - an emergent
ability called In-Context Reinforcement Learning (ICRL). This powerful
meta-learner not only excels in solving unseen in-distribution environments
with remarkable sample efficiency, but also shows strong performance in
out-of-distribution environments. In addition, we show that it exhibits
robustness to the quality of its training data, seamlessly stitches together
behaviors from its context, and adapts to non-stationary environments. These
behaviors demonstrate that an RL-trained transformer can iteratively improve
upon its own solutions, making it an excellent general-purpose problem solver.
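The cross-episode adaptation described above can be sketched as an evaluation loop in which a frozen policy conditions on its full in-context history; `env_reset`, `env_step`, and `policy` are toy stand-ins, not the paper's transformer:

```python
def icrl_rollout(env_reset, env_step, policy, episodes, horizon):
    """Run several episodes in one environment, feeding the whole
    cross-episode (obs, action, reward) history back to the policy."""
    context, returns = [], []
    for _ in range(episodes):
        obs, total = env_reset(), 0.0
        for _ in range(horizon):
            action = policy(context, obs)
            obs, reward, done = env_step(action)
            context.append((obs, action, reward))
            total += reward
            if done:
                break
        returns.append(total)
    return returns
```

With a meta-learned policy, per-episode returns should rise across episodes even though no weights are updated, since all adaptation happens in the accumulated context.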
Shaofei Wang, Tomas Simon, Igor Santesteban, Timur Bagautdinov, Junxuan Li, Vasu Agrawal, Fabian Prada, Shoou-I Yu, Pace Nalbone, Matt Gramlich, Roman Lubachersky, Chenglei Wu, Javier Romero, Jason Saragih, Michael Zollhoefer, Andreas Geiger, Siyu Tang, Shunsuke Saito
We propose Relightable Full-Body Gaussian Codec Avatars, a new approach for
modeling relightable full-body avatars with fine-grained details including face
and hands. The unique challenge for relighting full-body avatars lies in the
large deformations caused by body articulation and the resulting impact on
appearance caused by light transport. Changes in body pose can dramatically
change the orientation of body surfaces with respect to lights, resulting in
both local appearance changes due to changes in local light transport
functions and non-local changes due to occlusion between body parts. To
address this, we decompose the light transport into local and non-local
effects. Local appearance changes are modeled using learnable zonal harmonics
for diffuse radiance transfer. Unlike spherical harmonics, zonal harmonics are
highly efficient to rotate under articulation. This allows us to learn diffuse
radiance transfer in a local coordinate frame, which disentangles the local
radiance transfer from the articulation of the body. To account for non-local
appearance changes, we introduce a shadow network that predicts shadows given
precomputed incoming irradiance on a base mesh. This facilitates the learning
of non-local shadowing between the body parts. Finally, we use a deferred
shading approach to model specular radiance transfer and better capture
reflections and highlights such as eye glints. We demonstrate that our approach
successfully models both the local and non-local light transport required for
relightable full-body avatars, with a superior generalization ability under
novel illumination conditions and unseen poses.
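The claim that zonal harmonics are cheap to rotate can be seen in a degree-1 example: a zonal lobe depends only on the angle to its axis, so rotating the lobe amounts to rotating a single axis vector, with no spherical-harmonic rotation matrices. This is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def zh1(axis, d):
    """Degree-1 zonal harmonic lobe (up to constant factors) oriented along
    unit `axis`, evaluated at unit direction `d`: depends only on the angle."""
    return float(np.dot(axis, d))

def rotate_zh1(axis, R):
    """Rotating the lobe by R is nothing more than rotating its axis."""
    return R @ axis
```

This is why learning diffuse transfer in a local frame works under articulation: per-bone rotations carry the lobes along at the cost of one matrix-vector product each.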
Healthcare systems continuously generate vast amounts of electronic health
records (EHRs), commonly stored in the Fast Healthcare Interoperability
Resources (FHIR) standard. Despite the wealth of information in these records,
their complexity and volume make it difficult for users to retrieve and
interpret crucial health insights. Recent advances in Large Language Models
(LLMs) offer a solution, enabling semantic question answering (QA) over medical
data, allowing users to interact with their health records more effectively.
However, ensuring privacy and compliance requires edge and private deployments
of LLMs.
This paper proposes a novel approach to semantic QA over EHRs by first
identifying the most relevant FHIR resources for a user query (Task1) and
subsequently answering the query based on these resources (Task2). We explore
the performance of privately hosted, fine-tuned LLMs, evaluating them against
benchmark models such as GPT-4 and GPT-4o. Our results demonstrate that
fine-tuned LLMs, while 250x smaller, outperform GPT-4 family models by
0.55% in F1 score on Task1 and by 42% in METEOR score on Task2. Additionally, we
examine advanced aspects of LLM usage, including sequential fine-tuning, model
self-evaluation (narcissistic evaluation), and the impact of training data size
on performance. The models and datasets are available here:
https://huggingface.co/genloop
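The two-task pipeline can be sketched as follows, with a keyword-overlap scorer standing in for the fine-tuned Task1 model and a caller-supplied `llm` for Task2; all names and the resource shape here are illustrative, not the released models:

```python
def select_resources(query, resources, top_k=2):
    """Task1 stand-in: rank FHIR resources by keyword overlap with the query."""
    q = set(query.lower().split())
    return sorted(resources,
                  key=lambda r: len(q & set(r["text"].lower().split())),
                  reverse=True)[:top_k]

def answer_query(query, resources, llm):
    """Task2: answer the query from the Task1-selected resources only."""
    return llm(query, select_resources(query, resources))
```

Separating selection from answering keeps the Task2 prompt small, which matters when the answering model is a compact, privately hosted LLM.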
Akashah Shabbir, Mohammed Zumri, Mohammed Bennamoun, Fahad S. Khan, Salman Khan
Recent advances in large multimodal models (LMMs) have recognized
fine-grained grounding as an essential factor in visual understanding and
dialogue. However, the benefits of such representations in LMMs are limited to
the natural image domain, and these models perform poorly for remote sensing
(RS). The distinct overhead viewpoint, scale variation, and presence of small
objects in high-resolution RS imagery present a unique challenge in
region-level comprehension. Moreover, the development of the grounding
conversation capability of LMMs within RS is hindered by the lack of granular,
RS domain-specific grounded data. Addressing these limitations, we propose
GeoPixel - the first end-to-end high resolution RS-LMM that supports
pixel-level grounding. This capability allows fine-grained visual perception by
generating interleaved masks in conversation. GeoPixel supports up to 4K HD
resolution in any aspect ratio, ideal for high-precision RS image analysis. To
support the grounded conversation generation (GCG) in RS imagery, we curate a
visually grounded dataset GeoPixelD through a semi-automated pipeline that
utilizes set-of-marks prompting and spatial priors tailored for RS data to
methodically control the data generation process. GeoPixel demonstrates
superior performance in pixel-level comprehension, surpassing existing LMMs in
both single-target and multi-target segmentation tasks. Our methodological
ablation studies validate the effectiveness of each component in the overall
architecture. Our code and data will be publicly released.
Yang You, Yixin Li, Congyue Deng, Yue Wang, Leonidas Guibas
Vision foundation models, particularly the ViT family, have revolutionized
image understanding by providing rich semantic features. However, despite their
success in 2D comprehension, their ability to grasp 3D spatial
relationships remains unclear. In this work, we evaluate and enhance the 3D
awareness of ViT-based models. We begin by systematically assessing their
ability to learn 3D equivariant features, specifically examining the
consistency of semantic embeddings across different viewpoints. Our findings
indicate that improved 3D equivariance leads to better performance on various
downstream tasks, including pose estimation, tracking, and semantic transfer.
Building on this insight, we propose a simple yet effective finetuning strategy
based on 3D correspondences, which significantly enhances the 3D correspondence
understanding of existing vision models. Remarkably, even finetuning on a
single object for just one iteration results in substantial performance gains.
All code and resources will be made publicly available to support further
advancements in 3D-aware vision models. Our code is available at
https://github.com/qq456cvb/3DCorrEnhance.
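The viewpoint-consistency evaluation can be illustrated with a small metric over matched correspondences; the features below are placeholders for real ViT embeddings, and the metric choice is an assumption:

```python
import numpy as np

def multiview_consistency(feats_a, feats_b):
    """Mean cosine similarity between embeddings of matched 3D points
    rendered from two viewpoints (row i of each array corresponds)."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))
```

A perfectly 3D-equivariant backbone would score 1.0, since the same surface point gets the same embedding regardless of viewpoint.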
Virtual try-on (VTON) technology has gained attention due to its potential to
transform online retail by enabling realistic clothing visualization in images
and videos. However, most existing methods struggle to achieve high-quality
results across image and video try-on tasks, especially in long video
scenarios. In this work, we introduce CatV2TON, a simple and effective
vision-based virtual try-on (V2TON) method that supports both image and video
try-on tasks with a single diffusion transformer model. By temporally
concatenating garment and person inputs and training on a mix of image and
video datasets, CatV2TON achieves robust try-on performance across static and
dynamic settings. For efficient long-video generation, we propose an
overlapping clip-based inference strategy that uses sequential frame guidance
and Adaptive Clip Normalization (AdaCN) to maintain temporal consistency with
reduced resource demands. We also present ViViD-S, a refined video try-on
dataset, achieved by filtering back-facing frames and applying 3D mask
smoothing for enhanced temporal consistency. Comprehensive experiments
demonstrate that CatV2TON outperforms existing methods in both image and video
try-on tasks, offering a versatile and reliable solution for realistic virtual
try-ons across diverse scenarios.
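The overlapping clip-based schedule can be sketched as an index computation: the first frames of each clip repeat the tail of the previous clip to serve as sequential guidance. `clip_len` and `overlap` are illustrative parameters, and AdaCN itself is omitted:

```python
def overlapping_clips(num_frames, clip_len, overlap):
    """Split [0, num_frames) into clips of clip_len frames, each reusing the
    last `overlap` frames of the previous clip as guidance frames."""
    assert 0 <= overlap < clip_len
    clips, start = [], 0
    while True:
        end = min(start + clip_len, num_frames)
        clips.append((start, end))
        if end == num_frames:
            return clips
        start = end - overlap
```

Because each clip is generated independently given its overlap frames, peak memory is bounded by `clip_len` rather than the full video length.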
In the image acquisition process, various forms of degradation, including
noise, haze, and rain, are frequently introduced. These degradations typically
arise from the inherent limitations of cameras or unfavorable ambient
conditions. To recover clean images from degraded versions, numerous
specialized restoration methods have been developed, each targeting a specific
type of degradation. Recently, all-in-one algorithms have garnered significant
attention by addressing different types of degradations within a single model
without requiring prior knowledge of the input degradation type. However,
these methods operate purely in the spatial domain and do not delve into the
distinct frequency variations inherent to different degradation types. To
address this gap, we propose an adaptive all-in-one image restoration network
based on frequency mining and modulation. Our approach is motivated by the
observation that different degradation types impact the image content on
different frequency subbands, thereby requiring different treatments for each
restoration task. Specifically, we first mine low- and high-frequency
information from the input features, guided by the adaptively decoupled spectra
of the degraded image. The extracted features are then modulated by a
bidirectional operator to facilitate interactions between different frequency
components. Finally, the modulated features are merged into the original input
for a progressively guided restoration. With this approach, the model achieves
adaptive reconstruction by accentuating the informative frequency subbands
according to different input degradations. Extensive experiments demonstrate
that the proposed method achieves state-of-the-art performance on different
image restoration tasks, including denoising, dehazing, deraining, motion
deblurring, and low-light image enhancement. Our code is available at
https://github.com/c-yn/AdaIR.
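The low-/high-frequency mining step can be illustrated with an FFT radial split; the fixed cutoff below is a stand-in for the paper's adaptively decoupled spectra:

```python
import numpy as np

def split_frequencies(img, cutoff):
    """Return (low, high) with low + high == img, where `low` keeps only the
    spectrum within `cutoff` of the DC component."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    mask = np.hypot(yy - h // 2, xx - w // 2) <= cutoff
    low = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
    return low, img - low
```

Haze mostly perturbs the low band while noise and rain streaks live in the high band, which is why treating the two components separately suits an all-in-one model.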
Kang Liao, Zongsheng Yue, Zhouxia Wang, Chen Change Loy
Although learning-based image restoration methods have made significant
progress, they still struggle with limited generalization to real-world
scenarios due to the substantial domain gap caused by training on synthetic
data. Existing methods address this issue by improving data synthesis
pipelines, estimating degradation kernels, employing deep internal learning,
and performing domain adaptation and regularization. Previous domain adaptation
methods have sought to bridge the domain gap by learning domain-invariant
knowledge in either feature or pixel space. However, these techniques often
struggle to extend to low-level vision tasks within a stable and compact
framework. In this paper, we show that it is possible to perform domain
adaptation via the noise space using diffusion models. In particular, by
leveraging the unique property of how auxiliary conditional inputs influence
the multi-step denoising process, we derive a meaningful diffusion loss that
guides the restoration model in progressively aligning both restored synthetic
and real-world outputs with a target clean distribution. We refer to this
method as denoising as adaptation. To prevent shortcuts during joint training,
we present crucial strategies such as channel-shuffling layer and
residual-swapping contrastive learning in the diffusion model. They implicitly
blur the boundaries between conditioned synthetic and real data and prevent the
model from relying on easily distinguishable features. Experimental results
on three classical image restoration tasks, namely denoising, deblurring, and
deraining, demonstrate the effectiveness of the proposed method.
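One plausible reading of the channel-shuffling idea can be sketched as randomly drawing each channel from the paired synthetic or real branch, so the discriminating signal cannot ride on any fixed channel. This is an assumption-laden illustration, not the paper's exact layer:

```python
import numpy as np

def channel_shuffle(synthetic, real, rng):
    """Given paired (C, H, W) feature maps from the synthetic and real
    branches, draw each output channel from one branch at random."""
    take_real = rng.random(synthetic.shape[0]) < 0.5
    return np.where(take_real[:, None, None], real, synthetic)
```

Because the mixing pattern changes every step, a network conditioned on the shuffled features cannot shortcut by memorizing which branch a channel came from.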