
SymptomAI: Towards a Conversational AI Agent for Everyday Symptom Assessment

May 5, 2026
作者: Joseph Breda, Fadi Yousif, Beszel Hawkins, Marinela Cotoi, Miao Liu, Ray Luo, Po-Hsuan Cameron Chen, Mike Schaekermann, Samuel Schmidgall, Xin Liu, Girish Narayanswamy, Samuel Solomon, Maxwell A. Xu, Xiaoran Fan, Longfei Shangguan, Anran Wang, Bhavna Daryani, Buddy Herkenham, Cara Tan, Mark Malhotra, Shwetak Patel, John B. Hernandez, Quang Duong, Yun Liu, Zach Wasson, Dimitrios Antos, Bob Lou, Matthew Thompson, Jonathan Richina, Anupam Pathak, Nichole Young-Lin, Jake Sunshine, Daniel McDuff
cs.AI

Abstract

Language models excel at diagnostic assessments on curated medical case studies and vignettes, performing on par with, or better than, clinical professionals. However, existing studies focus on complex scenarios with rich context, making it difficult to draw conclusions about how these systems perform for patients reporting symptoms in everyday life. We deployed SymptomAI, a set of conversational AI agents for end-to-end patient interviewing and differential diagnosis (DDx), via the Fitbit app in a study that randomized participants (N=13,917) to interact with five AI agents. This corpus captures diverse communication styles and a realistic distribution of illnesses from a real-world population. A subset of 1,228 participants reported a clinician-provided diagnosis, and 517 of these were further evaluated by a panel of clinicians during over 250 hours of annotation. In a blinded randomized comparison, SymptomAI's DDx were significantly more accurate (OR = 2.47, p < 0.001) than those from independent clinicians given the same dialogue. Moreover, agentic strategies that conduct a dedicated symptom interview, eliciting additional symptom information before providing a diagnosis, perform substantially better than baseline, user-guided conversations (p < 0.001). An auxiliary analysis of 1,509 conversations from a general US population panel validated that these results generalize beyond wearable device users. We used SymptomAI diagnoses as labels for all 13,917 participants to analyze over 500,000 days of wearable metrics across nearly 400 unique conditions, and identified strong associations between acute infections and physiological shifts (e.g., OR > 7 for influenza). While limited by self-reported ground truth, these results demonstrate the benefits of a dedicated and complete symptom interview compared to a user-guided symptom discussion, the default of most consumer LLMs.
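The headline comparison (OR = 2.47) is an odds ratio of diagnostic accuracy between the agent and independent clinicians reviewing the same dialogues. As a minimal sketch of how such a figure is derived from a 2x2 contingency table of correct versus incorrect DDx, using made-up counts (the study's actual per-arm counts are not given in the abstract):

```python
def odds_ratio(a_correct: int, a_wrong: int, b_correct: int, b_wrong: int) -> float:
    """Odds ratio of group A being correct relative to group B,
    computed from a 2x2 contingency table of (correct, incorrect) counts."""
    return (a_correct / a_wrong) / (b_correct / b_wrong)

# Hypothetical counts for illustration only -- not the study's real numbers.
ai = (350, 167)          # (correct, incorrect) DDx for the AI agent
clinician = (280, 237)   # (correct, incorrect) DDx for independent clinicians

print(f"odds ratio = {odds_ratio(ai[0], ai[1], clinician[0], clinician[1]):.2f}")
```

In practice the paper would pair such an estimate with a significance test (it reports p < 0.001); the counts above are placeholders and do not reproduce the published OR of 2.47.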