

Lumos : Empowering Multimodal LLMs with Scene Text Recognition

February 12, 2024
作者: Ashish Shenoy, Yichao Lu, Srihari Jayakumar, Debojeet Chatterjee, Mohsen Moslehpour, Pierce Chuang, Abhay Harpale, Vikas Bhardwaj, Di Xu, Shicong Zhao, Longfang Zhao, Ankit Ramchandani, Xin Luna Dong, Anuj Kumar
cs.AI

Abstract

We introduce Lumos, the first end-to-end multimodal question-answering system with text understanding capabilities. At the core of Lumos is a Scene Text Recognition (STR) component that extracts text from first-person point-of-view images, the output of which is used to augment input to a Multimodal Large Language Model (MM-LLM). While building Lumos, we encountered numerous challenges related to STR quality, overall latency, and model inference. In this paper, we delve into those challenges and discuss the system architecture, design choices, and modeling techniques employed to overcome these obstacles. We also provide a comprehensive evaluation of each component, showcasing high quality and efficiency.
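The pipeline the abstract describes — an STR component whose output augments the prompt sent to an MM-LLM — can be sketched as follows. This is a minimal illustration only; all names here (`run_str`, `build_prompt`) are hypothetical and the paper's actual interfaces are not given in this abstract.

```python
# Hypothetical sketch of the Lumos-style pipeline from the abstract:
# scene text is extracted from an image, then prepended to the user's
# question before the combined prompt is handed to a multimodal LLM.

def run_str(image_bytes: bytes) -> list[str]:
    """Placeholder scene-text recognizer: returns text lines found in the image.

    A real system would run text detection and recognition models here;
    this stub just returns fixed example lines.
    """
    return ["SALE 50% OFF", "Main St Bakery"]


def build_prompt(question: str, scene_text: list[str]) -> str:
    """Augment the user question with recognized scene text."""
    text_block = "\n".join(f"- {line}" for line in scene_text)
    return (
        f"Scene text extracted from the image:\n{text_block}\n\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    prompt = build_prompt("What store is this?", run_str(b"...image bytes..."))
    print(prompt)
```

The augmented prompt would then be passed to the MM-LLM alongside the image itself; how the STR output is fused with the visual tokens is a design choice the paper discusses.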

