ChatPaper.ai
© 2025 chatpaper.ai All rights reserved.

AI Research Papers Daily

Daily curated AI research papers, with translations

Depth Anything V2

Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao•Jun 13, 2024•10314

An Image is Worth More Than 16x16 Patches: Exploring Transformers on Individual Pixels

Duy-Kien Nguyen, Mahmoud Assran, Unnat Jain, Martin R. Oswald, Cees G. M. Snoek, Xinlei Chen•Jun 13, 2024•522

Transformers meet Neural Algorithmic Reasoners

Wilfried Bounsi, Borja Ibarz, Andrew Dudzik, Jessica B. Hamrick, Larisa Markeeva, Alex Vitvitskyi, Razvan Pascanu, Petar Veličković•Jun 13, 2024•451

OpenVLA: An Open-Source Vision-Language-Action Model

Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, Quan Vuong, Thomas Kollar, Benjamin Burchfiel, Russ Tedrake, Dorsa Sadigh, Sergey Levine, Percy Liang, Chelsea Finn•Jun 13, 2024•401

Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling

Liliang Ren, Yang Liu, Yadong Lu, Yelong Shen, Chen Liang, Weizhu Chen•Jun 11, 2024•394

Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models

Qihao Liu, Zhanpeng Zeng, Ju He, Qihang Yu, Xiaohui Shen, Liang-Chieh Chen•Jun 13, 2024•301

Test of Time: A Benchmark for Evaluating LLMs on Temporal Reasoning

Bahare Fatemi, Mehran Kazemi, Anton Tsitsulin, Karishma Malkan, Jinyeong Yim, John Palowitch, Sungyong Seo, Jonathan Halcrow, Bryan Perozzi•Jun 13, 2024•281

DiTFastAttn: Attention Compression for Diffusion Transformer Models

Zhihang Yuan, Pu Lu, Hanling Zhang, Xuefei Ning, Linfeng Zhang, Tianchen Zhao, Shengen Yan, Guohao Dai, Yu Wang•Jun 12, 2024•261

Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models

Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, Ranjay Krishna•Jun 13, 2024•221

Interpreting the Weight Space of Customized Diffusion Models

Amil Dravid, Yossi Gandelsman, Kuan-Chieh Wang, Rameen Abdal, Gordon Wetzstein, Alexei A. Efros, Kfir Aberman•Jun 13, 2024•201

MuirBench: A Comprehensive Benchmark for Robust Multi-image Understanding

Fei Wang, Xingyu Fu, James Y. Huang, Zekun Li, Qin Liu, Xiaogeng Liu, Mingyu Derek Ma, Nan Xu, Wenxuan Zhou, Kai Zhang, Tianyi Lorena Yan, Wenjie Jacky Mo, Hsiang-Hui Liu, Pan Lu, Chunyuan Li, Chaowei Xiao, Kai-Wei Chang, Dan Roth, Sheng Zhang, Hoifung Poon, Muhao Chen•Jun 13, 2024•202

HelpSteer2: Open-source dataset for training top-performing reward models

Zhilin Wang, Yi Dong, Olivier Delalleau, Jiaqi Zeng, Gerald Shen, Daniel Egert, Jimmy J. Zhang, Makesh Narsimhan Sreedhar, Oleksii Kuchaiev•Jun 12, 2024•193

mOSCAR: A Large-scale Multilingual and Multimodal Document-level Corpus

Matthieu Futeral, Armel Zebaze, Pedro Ortiz Suarez, Julien Abadji, Rémi Lacroix, Cordelia Schmid, Rachel Bawden, Benoît Sagot•Jun 13, 2024•164

CS-Bench: A Comprehensive Benchmark for Large Language Models towards Computer Science Mastery

Xiaoshuai Song, Muxi Diao, Guanting Dong, Zhengyang Wang, Yujia Fu, Runqi Qiao, Zhexu Wang, Dayuan Fu, Huangxuan Wu, Bin Liang, Weihao Zeng, Yejie Wang, Zhuoma GongQue, Jianing Yu, Qiuna Tan, Weiran Xu•Jun 12, 2024•164

4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities

Roman Bachmann, Oğuzhan Fatih Kar, David Mizrahi, Ali Garjani, Mingfei Gao, David Griffiths, Jiaming Hu, Afshin Dehghan, Amir Zamir•Jun 13, 2024•152

EMMA: Your Text-to-Image Diffusion Model Can Secretly Accept Multi-Modal Prompts

Yucheng Han, Rui Wang, Chi Zhang, Juntao Hu, Pei Cheng, Bin Fu, Hanwang Zhang•Jun 13, 2024•143

Explore the Limits of Omni-modal Pretraining at Scale

Yiyuan Zhang, Handong Li, Jing Liu, Xiangyu Yue•Jun 13, 2024•113

Cognitively Inspired Energy-Based World Models

Alexi Gladstone, Ganesh Nanduru, Md Mofijul Islam, Aman Chadha, Jundong Li, Tariq Iqbal•Jun 13, 2024•107

Mistral-C2F: Coarse to Fine Actor for Analytical and Reasoning Enhancement in RLHF and Effective-Merged LLMs

Chen Zheng, Ke Sun, Xun Zhou•Jun 12, 2024•102

Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense?

Xingyu Fu, Muyu He, Yujie Lu, William Yang Wang, Dan Roth•Jun 11, 2024•91

TC-Bench: Benchmarking Temporal Compositionality in Text-to-Video and Image-to-Video Generation

Weixi Feng, Jiachen Li, Michael Saxon, Tsu-jui Fu, Wenhu Chen, William Yang Wang•Jun 12, 2024•81

Real3D: Scaling Up Large Reconstruction Models with Real-World Images

Hanwen Jiang, Qixing Huang, Georgios Pavlakos•Jun 12, 2024•71

Estimating the Hallucination Rate of Generative AI

Andrew Jesson, Nicolas Beltran-Velez, Quentin Chu, Sweta Karlekar, Jannik Kossen, Yarin Gal, John P. Cunningham, David Blei•Jun 11, 2024•71

MLKV: Multi-Layer Key-Value Heads for Memory Efficient Transformer Decoding

Zayd Muhammad Kawakibi Zuhri, Muhammad Farid Adilazuarda, Ayu Purwarianti, Alham Fikri Aji•Jun 13, 2024•62

Language Model Council: Benchmarking Foundation Models on Highly Subjective Tasks by Consensus

Justin Zhao, Flor Miriam Plaza-del-Arco, Amanda Cercas Curry•Jun 12, 2024•61

CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark

David Romero, Chenyang Lyu, Haryo Akbarianto Wibowo, Teresa Lynn, Injy Hamed, Aditya Nanda Kishore, Aishik Mandal, Alina Dragonetti, Artem Abzaliev, Atnafu Lambebo Tonja, Bontu Fufa Balcha, Chenxi Whitehouse, Christian Salamea, Dan John Velasco, David Ifeoluwa Adelani, David Le Meur, Emilio Villa-Cueva, Fajri Koto, Fauzan Farooqui, Frederico Belcavello, Ganzorig Batnasan, Gisela Vallejo, Grainne Caulfield, Guido Ivetta, Haiyue Song, Henok Biadglign Ademtew, Hernán Maina, Holy Lovenia, Israel Abebe Azime, Jan Christian Blaise Cruz, Jay Gala, Jiahui Geng, Jesus-German Ortiz-Barajas, Jinheon Baek, Jocelyn Dunstan, Laura Alonso Alemany, Kumaranage Ravindu Yasas Nagasinghe, Luciana Benotti, Luis Fernando D'Haro, Marcelo Viridiano, Marcos Estecha-Garitagoitia, Maria Camila Buitrago Cabrera, Mario Rodríguez-Cantelar, Mélanie Jouitteau, Mihail Mihaylov, Mohamed Fazli Mohamed Imam, Muhammad Farid Adilazuarda, Munkhjargal Gochoo, Munkh-Erdene Otgonbold, Naome Etori, Olivier Niyomugisha, Paula Mónica Silva, Pranjal Chitale, Raj Dabre, Rendi Chevi, Ruochen Zhang, Ryandito Diandaru, Samuel Cahyawijaya, Santiago Góngora, Soyeong Jeong, Sukannya Purkayastha, Tatsuki Kuribayashi, Thanmay Jayakumar, Tiago Timponi Torrent, Toqeer Ehsan, Vladimir Araujo, Yova Kementchedjhieva, Zara Burzo, Zheng Wei Lim, Zheng Xin Yong, Oana Ignat, Joan Nwatu, Rada Mihalcea, Thamar Solorio, Alham Fikri Aji•Jun 10, 2024•61

LRM-Zero: Training Large Reconstruction Models with Synthesized Data

Desai Xie, Sai Bi, Zhixin Shu, Kai Zhang, Zexiang Xu, Yi Zhou, Sören Pirk, Arie Kaufman, Xin Sun, Hao Tan•Jun 13, 2024•51

Understanding Hallucinations in Diffusion Models through Mode Interpolation

Sumukh K Aithal, Pratyush Maini, Zachary C. Lipton, J. Zico Kolter•Jun 13, 2024•51

CMC-Bench: Towards a New Paradigm of Visual Signal Compression

Chunyi Li, Xiele Wu, Haoning Wu, Donghui Feng, Zicheng Zhang, Guo Lu, Xiongkuo Min, Xiaohong Liu, Guangtao Zhai, Weisi Lin•Jun 13, 2024•52

Toffee: Efficient Million-Scale Dataset Construction for Subject-Driven Text-to-Image Generation

Yufan Zhou, Ruiyi Zhang, Kaizhi Zheng, Nanxuan Zhao, Jiuxiang Gu, Zichao Wang, Xin Eric Wang, Tong Sun•Jun 13, 2024•52