

VisIT-Bench: A Benchmark for Vision-Language Instruction Following Inspired by Real-World Use

August 12, 2023
Authors: Yonatan Bitton, Hritik Bansal, Jack Hessel, Rulin Shao, Wanrong Zhu, Anas Awadalla, Josh Gardner, Rohan Taori, Ludwig Schmidt
cs.AI

Abstract

We introduce VisIT-Bench (Visual InsTruction Benchmark), a benchmark for evaluating instruction-following vision-language models in real-world use. Our starting point is a curated set of 70 'instruction families' that we envision instruction-tuned vision-language models should be able to address. Extending beyond evaluations like VQAv2 and COCO, the tasks range from basic recognition to game playing and creative generation. Following curation, our dataset comprises 592 test queries, each with a human-authored instruction-conditioned caption. These descriptions surface instruction-specific factors; e.g., for an instruction asking about the accessibility of a storefront for wheelchair users, the instruction-conditioned caption describes ramps and potential obstacles. These descriptions enable 1) collecting human-verified reference outputs for each instance; and 2) automatic evaluation of candidate multimodal generations using a text-only LLM, in agreement with human judgment. We quantify quality gaps between models and references using both human and automatic evaluations; e.g., the top-performing instruction-following model wins against the GPT-4 reference in just 27% of comparisons. VisIT-Bench is a dynamic benchmark: to participate, practitioners simply submit their model's responses on the project website. Data, code, and the leaderboard are available at visit-bench.github.io.
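
To make the automatic-evaluation protocol concrete, here is a minimal Python sketch of how a text-only LLM judge could compare a candidate response against the human-verified reference using only the instruction and the instruction-conditioned caption (no image input). The `Example` class, the `query_llm` stub, and the prompt wording are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of VisIT-Bench-style automatic evaluation: a
# text-only LLM judges responses using the instruction and the
# human-written instruction-conditioned caption in place of the image.

from dataclasses import dataclass

@dataclass
class Example:
    instruction: str          # e.g. "Is this storefront wheelchair accessible?"
    conditioned_caption: str  # human-authored caption surfacing ramps/obstacles etc.
    reference: str            # human-verified reference output

JUDGE_PROMPT = """You are judging responses to a visual instruction.
Image description: {caption}
Instruction: {instruction}
Response A: {a}
Response B: {b}
Which response follows the instruction better? Answer 'A' or 'B'."""

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any text-only LLM (e.g. GPT-4)."""
    raise NotImplementedError

def wins_against_reference(ex: Example, candidate: str) -> bool:
    """Return True if the judge prefers the candidate over the reference."""
    verdict = query_llm(JUDGE_PROMPT.format(
        caption=ex.conditioned_caption,
        instruction=ex.instruction,
        a=candidate,
        b=ex.reference,
    ))
    return verdict.strip().upper().startswith("A")

def win_rate(examples: list[Example], model_outputs: list[str]) -> float:
    """Fraction of instances where the model beats the reference,
    mirroring the paper's win-rate-vs-GPT-4-reference metric."""
    wins = sum(
        wins_against_reference(ex, out)
        for ex, out in zip(examples, model_outputs)
    )
    return wins / len(examples)
```

In practice one would also swap the A/B presentation order across repeated judgments to control for the position bias that LLM judges are known to exhibit.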