

TAGS: A Test-Time Generalist-Specialist Framework with Retrieval-Augmented Reasoning and Verification

May 23, 2025
作者: Jianghao Wu, Feilong Tang, Yulong Li, Ming Hu, Haochen Xue, Shoaib Jameel, Yutong Xie, Imran Razzak
cs.AI

Abstract

Recent advances such as Chain-of-Thought prompting have significantly improved large language models (LLMs) in zero-shot medical reasoning. However, prompting-based methods often remain shallow and unstable, while fine-tuned medical LLMs suffer from poor generalization under distribution shifts and limited adaptability to unseen clinical scenarios. To address these limitations, we present TAGS, a test-time framework that combines a broadly capable generalist with a domain-specific specialist to offer complementary perspectives without any model fine-tuning or parameter updates. To support this generalist-specialist reasoning process, we introduce two auxiliary modules: a hierarchical retrieval mechanism that provides multi-scale exemplars by selecting examples based on both semantic and rationale-level similarity, and a reliability scorer that evaluates reasoning consistency to guide final answer aggregation. TAGS achieves strong performance across nine MedQA benchmarks, boosting GPT-4o accuracy by 13.8%, DeepSeek-R1 by 16.8%, and improving a vanilla 7B model from 14.1% to 23.9%. These results surpass several fine-tuned medical LLMs, without any parameter updates. The code will be available at https://github.com/JianghaoWu/TAGS.
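The two auxiliary modules described above could be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the linear blend of semantic- and rationale-level similarity, and the additive reliability-weighted vote are all assumptions for illustration.

```python
import math
from collections import defaultdict

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve_exemplars(query_sem, query_rat, pool, k=2, alpha=0.5):
    """Hierarchical retrieval (sketch): rank exemplars by a blend of
    semantic-level and rationale-level similarity, then take the top k."""
    scored = [
        (alpha * cosine(query_sem, ex["sem"])
         + (1 - alpha) * cosine(query_rat, ex["rat"]), ex)
        for ex in pool
    ]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [ex for _, ex in scored[:k]]

def aggregate_answers(candidates):
    """Reliability-guided aggregation (sketch): sum each candidate answer's
    reliability score across generalist/specialist outputs, pick the argmax."""
    totals = defaultdict(float)
    for answer, reliability in candidates:
        totals[answer] += reliability
    return max(totals, key=totals.get)
```

For example, `aggregate_answers([("A", 0.9), ("B", 0.4), ("B", 0.3)])` returns `"A"`, since the single high-reliability rationale outweighs two low-consistency ones.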

