What Users Leave Unsaid: Under-Specified Queries Limit Vision-Language Models
January 7, 2026
Authors: Dasol Choi, Guijin Son, Hanwool Lee, Minhyuk Kim, Hyunwoo Ko, Teabin Lim, Ahn Eungyeol, Jungwhan Kim, Seunghyeok Hong, Youngsook Song
cs.AI
Abstract
Current vision-language benchmarks predominantly feature well-structured questions with clear, explicit prompts. However, real user queries are often informal and under-specified: users naturally leave much unsaid, relying on images to convey context. We introduce HAERAE-Vision, a benchmark of 653 real-world visual questions drawn from Korean online communities (a 0.76% survival rate from 86K candidates), each paired with a human-written explicit rewrite, for 1,306 query variants in total. Evaluating 39 VLMs, we find that even state-of-the-art models (GPT-5, Gemini 2.5 Pro) achieve under 50% accuracy on the original queries. Crucially, query explicitation alone yields 8- to 22-point improvements, with smaller models benefiting most. We further show that even with web search, under-specified queries underperform explicit queries without search, revealing that current retrieval cannot compensate for what users leave unsaid. Our findings demonstrate that a substantial portion of VLM difficulty stems from natural query under-specification rather than from limits in model capability, highlighting a critical gap between benchmark evaluation and real-world deployment.
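The paired original/explicit design described above can be evaluated with a simple accuracy-delta comparison. The sketch below is illustrative only: the record fields, the ask_vlm callable, and the exact-match scoring are all hypothetical assumptions, not HAERAE-Vision's actual data format or evaluation harness.

```python
# Minimal sketch of paired-query evaluation (original vs. explicit rewrite).
# All names here are hypothetical; exact-match scoring is an assumption.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PairedItem:
    image_path: str       # image the user attached
    original_query: str   # informal, under-specified question as posted
    explicit_query: str   # human-written rewrite stating the omitted context
    answer: str           # reference answer

def accuracy(ask_vlm: Callable[[str, str], str],
             items: List[PairedItem],
             use_explicit: bool) -> float:
    """Fraction of items answered correctly with the chosen query variant."""
    correct = 0
    for item in items:
        query = item.explicit_query if use_explicit else item.original_query
        prediction = ask_vlm(item.image_path, query)
        correct += int(prediction.strip().lower() == item.answer.strip().lower())
    return correct / len(items)

def explicitation_gain(ask_vlm, items) -> float:
    """Accuracy improvement attributable to query explicitation alone."""
    return (accuracy(ask_vlm, items, use_explicit=True)
            - accuracy(ask_vlm, items, use_explicit=False))
```

Under this setup, the 8- to 22-point improvements reported in the abstract correspond to explicitation_gain values of roughly 0.08 to 0.22, with the model and its inputs otherwise held fixed.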