Modeling Collaborator: Enabling Subjective Vision Classification With Minimal Human Effort via LLM Tool-Use
March 5, 2024
Authors: Imad Eddine Toubal, Aditya Avinash, Neil Gordon Alldrin, Jan Dlabal, Wenlei Zhou, Enming Luo, Otilia Stretcu, Hao Xiong, Chun-Ta Lu, Howard Zhou, Ranjay Krishna, Ariel Fuxman, Tom Duerig
cs.AI
Abstract
From content moderation to wildlife conservation, the number of applications
that require models to recognize nuanced or subjective visual concepts is
growing. Traditionally, developing classifiers for such concepts requires
substantial manual effort measured in hours, days, or even months to identify
and annotate data needed for training. Even with recently proposed Agile
Modeling techniques, which enable rapid bootstrapping of image classifiers,
users are still required to spend 30 minutes or more of monotonous, repetitive
data labeling just to train a single classifier. Drawing on Fiske's Cognitive
Miser theory, we propose a new framework that alleviates manual effort by
replacing human labeling with natural language interactions, reducing the total
effort required to define a concept by an order of magnitude: from labeling
2,000 images to only 100 plus some natural language interactions. Our framework
leverages recent advances in foundation models, both large language models and
vision-language models, to carve out the concept space through conversation and
by automatically labeling training data points. Most importantly, our framework
eliminates the need for crowd-sourced annotations. Moreover, our framework
ultimately produces lightweight classification models that are deployable in
cost-sensitive scenarios. Across 15 subjective concepts and 2 public
image classification datasets, our trained models outperform traditional Agile
Modeling as well as state-of-the-art zero-shot classification models like
ALIGN, CLIP, CuPL, and large visual question-answering models like PaLI-X.
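The workflow the abstract describes can be sketched in miniature: a vision-language model stands in for the human annotator, its yes/no judgments label an unlabeled image pool, and those labels train a lightweight model cheap enough to deploy. This is a hypothetical illustration, not the paper's implementation: `vlm_annotate` is a stub for a real VLM call, the "embeddings" are mock 2-d vectors, and the lightweight classifier is a simple nearest-centroid rule chosen for brevity.

```python
def vlm_annotate(embedding: list[float]) -> int:
    """Stub for a vision-language model answering a yes/no question about
    a subjective concept (e.g. "is this image gourmet food?").
    A fixed rule stands in for the real model's judgment here."""
    return 1 if sum(embedding) > 0 else 0

def train_lightweight_classifier(embeddings, labels):
    """Nearest-centroid classifier: tiny and cheap at deployment time."""
    pos = [e for e, y in zip(embeddings, labels) if y == 1]
    neg = [e for e, y in zip(embeddings, labels) if y == 0]
    centroid = lambda xs: [sum(c) / len(xs) for c in zip(*xs)]
    c_pos, c_neg = centroid(pos), centroid(neg)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return lambda e: 1 if dist(e, c_pos) < dist(e, c_neg) else 0

# Unlabeled pool of mock 2-d image embeddings.
pool = [[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]]
labels = [vlm_annotate(e) for e in pool]   # auto-labeling: no human raters
classify = train_lightweight_classifier(pool, labels)
print([classify(e) for e in [[1.5, 1.5], [-1.5, -1.5]]])  # → [1, 0]
```

The point of the sketch is the division of labor: the expensive foundation model is used only once, at labeling time, while the deployed artifact is a small standalone classifier.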