Proactive Assistant Dialogue Generation from Streaming Egocentric Videos

June 6, 2025
Authors: Yichi Zhang, Xin Luna Dong, Zhaojiang Lin, Andrea Madotto, Anuj Kumar, Babak Damavandi, Joyce Chai, Seungwhan Moon
cs.AI

Abstract

Recent advances in conversational AI have been substantial, but developing real-time systems for perceptual task guidance remains challenging. These systems must provide interactive, proactive assistance based on streaming visual inputs, yet their development is constrained by the costly and labor-intensive process of data collection and system evaluation. To address these limitations, we present a comprehensive framework with three key contributions. First, we introduce a novel data curation pipeline that synthesizes dialogues from annotated egocentric videos, resulting in \dataset, a large-scale synthetic dialogue dataset spanning multiple domains. Second, we develop a suite of automatic evaluation metrics, validated through extensive human studies. Third, we propose an end-to-end model that processes streaming video inputs to generate contextually appropriate responses, incorporating novel techniques for handling data imbalance and long-duration videos. This work lays the foundation for developing real-time, proactive AI assistants capable of guiding users through diverse tasks. Project page: https://pro-assist.github.io/
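The abstract describes the proposed end-to-end model only at a high level: it consumes a streaming egocentric video and must decide, frame by frame, whether to stay silent or proactively offer guidance. As a purely illustrative sketch of that streaming setup, and not the authors' implementation, the loop below shows how such a decision step might be structured; all names (StreamingContext, ProactiveAssistant, should_respond, generate_response) are hypothetical placeholders.

```python
# Illustrative sketch only -- not the paper's implementation.
# All class and method names are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import Any, List, Optional


@dataclass
class StreamingContext:
    """Rolling context accumulated over an egocentric video stream."""
    frame_features: List[Any] = field(default_factory=list)
    dialogue_history: List[str] = field(default_factory=list)


class ProactiveAssistant:
    """Conceptual proactive assistant: at each incoming frame it either
    stays silent or emits a guidance utterance."""

    def __init__(self, model: Any) -> None:
        # `model` stands in for a multimodal language model exposing
        # should_respond() and generate_response(); both are assumptions.
        self.model = model

    def step(self, ctx: StreamingContext, frame_feature: Any) -> Optional[str]:
        ctx.frame_features.append(frame_feature)
        # The core proactive decision: most frames warrant no response,
        # which is the data-imbalance issue the abstract alludes to.
        if not self.model.should_respond(ctx):
            return None
        utterance = self.model.generate_response(ctx)
        ctx.dialogue_history.append(utterance)
        return utterance
```

In such a design, the "respond or stay silent" gate runs at every frame while the heavier response generation runs only when triggered, which is one plausible way to keep a streaming assistant responsive over long videos.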