
AI Research Papers Daily

Daily curated AI research papers with translations

LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding

Mostafa Elhoushi, Akshat Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, Liangzhen Lai, Anas Mahmoud, Bilge Acun, Saurabh Agarwal, Ahmed Roman, Ahmed A Aly, Beidi Chen, Carole-Jean Wu • Apr 25, 2024 • 8012

How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites

Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi Hu, Jiapeng Luo, Zheng Ma, Ji Ma, Jiaqi Wang, Xiaoyi Dong, Hang Yan, Hewei Guo, Conghui He, Zhenjiang Jin, Chao Xu, Bin Wang, Xingjian Wei, Wei Li, Wenjian Zhang, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao • Apr 25, 2024 • 585

Make Your LLM Fully Utilize the Context

Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou • Apr 25, 2024 • 552

Interactive3D: Create What You Want by Interactive 3D Generation

Shaocong Dong, Lihe Ding, Zhanpeng Huang, Zibin Wang, Tianfan Xue, Dan Xu • Apr 25, 2024 • 211

ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving

Jiehui Huang, Xiao Dong, Wenhui Song, Hanhui Li, Jun Zhou, Yuhao Cheng, Shutao Liao, Long Chen, Yiqiang Yan, Shengcai Liao, Xiaodan Liang • Apr 25, 2024 • 201

Tele-FLM Technical Report

Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Chao Wang, Xinzhang Liu, Zihan Wang, Yu Zhao, Xin Wang, Yuyao Huang, Shuangyong Song, Yongxiang Li, Zheng Zhang, Bo Zhao, Aixin Sun, Yequan Wang, Zhongjiang He, Zhongyuan Wang, Xuelong Li, Tiejun Huang • Apr 25, 2024 • 181

List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs

An Yan, Zhengyuan Yang, Junda Wu, Wanrong Zhu, Jianwei Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Julian McAuley, Jianfeng Gao, Lijuan Wang • Apr 25, 2024 • 182

Revisiting Text-to-Image Evaluation with Gecko: On Metrics, Prompts, and Human Ratings

Olivia Wiles, Chuhan Zhang, Isabela Albuquerque, Ivana Kajić, Su Wang, Emanuele Bugliarello, Yasumasa Onoe, Chris Knutsen, Cyrus Rashtchian, Jordi Pont-Tuset, Aida Nematzadeh • Apr 25, 2024 • 172

NeRF-XL: Scaling NeRFs with Multiple GPUs

Ruilong Li, Sanja Fidler, Angjoo Kanazawa, Francis Williams • Apr 24, 2024 • 151

SEED-Bench-2-Plus: Benchmarking Multimodal Large Language Models with Text-Rich Visual Comprehension

Bohao Li, Yuying Ge, Yi Chen, Yixiao Ge, Ruimao Zhang, Ying Shan • Apr 25, 2024 • 91