Video-Skill-CoT: Skill-based Chain-of-Thoughts for Domain-Adaptive Video Reasoning

June 4, 2025
Authors: Daeun Lee, Jaehong Yoon, Jaemin Cho, Mohit Bansal
cs.AI

Abstract

Recent advances in Chain-of-Thought (CoT) reasoning have improved complex video understanding, but existing methods often struggle to adapt to domain-specific skills (e.g., event detection, spatial relation understanding, emotion understanding) across diverse video content. To address this, we propose Video-Skill-CoT (a.k.a. Video-SKoT), a framework that automatically constructs and leverages skill-aware CoT supervision for domain-adaptive video reasoning. First, we construct skill-based CoT annotations: we extract domain-relevant reasoning skills from training questions, cluster them into a shared skill taxonomy, and create a detailed multi-step CoT rationale tailored to each video-question pair for training. Second, we introduce a skill-specific expert learning framework, in which each expert module specializes in a subset of reasoning skills and is trained with lightweight adapters on the collected CoT supervision. We demonstrate the effectiveness of the proposed approach on three video understanding benchmarks, where Video-SKoT consistently outperforms strong baselines. We also provide in-depth analyses comparing different CoT annotation pipelines and the skills learned across multiple video domains.
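
To make the first stage concrete, below is a minimal sketch of one plausible way to cluster extracted skill phrases into a shared taxonomy. The embedding model, the K-means clusterer, the helper name build_skill_taxonomy, and the example phrases are all illustrative assumptions; the abstract specifies only that domain-relevant skills are extracted from training questions and clustered.

```python
# A minimal sketch of the skill-taxonomy step, assuming sentence-transformers
# and scikit-learn are available. Model name, cluster count, and phrases are
# illustrative assumptions, not the authors' actual pipeline.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def build_skill_taxonomy(skill_phrases, num_skills=8):
    """Cluster free-form skill phrases (e.g., one per training question,
    extracted by an LLM) into a shared taxonomy of reasoning skills."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = encoder.encode(skill_phrases)             # (N, d) array
    kmeans = KMeans(n_clusters=num_skills, n_init=10).fit(embeddings)
    taxonomy = {k: [] for k in range(num_skills)}          # skill id -> phrases
    for phrase, label in zip(skill_phrases, kmeans.labels_):
        taxonomy[int(label)].append(phrase)
    return taxonomy, kmeans

# Hypothetical skill phrases extracted from training questions.
phrases = ["detect the key event in the clip",
           "reason about spatial relations between objects",
           "infer the character's emotional state",
           "track object motion across frames"]
taxonomy, _ = build_skill_taxonomy(phrases, num_skills=2)
```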
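Similarly, a minimal sketch of the second stage: one lightweight low-rank adapter per skill, attached to a frozen base layer and selected by the question's skill label. The LoRA-style parameterization, the dimensions, and routing by a precomputed skill id are assumptions for illustration; the abstract states only that each expert module is trained with lightweight adapters on its subset of skills.

```python
# An illustrative PyTorch module with one low-rank adapter per skill on top
# of a frozen base projection; not the paper's implementation.
import torch
import torch.nn as nn

class SkillExpertAdapter(nn.Module):
    """Frozen base linear layer plus one low-rank adapter per skill;
    the active adapter is chosen by the question's skill label."""
    def __init__(self, dim: int, num_skills: int, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.weight.requires_grad_(False)    # keep base weights frozen
        self.base.bias.requires_grad_(False)
        self.down = nn.ModuleList(nn.Linear(dim, rank, bias=False)
                                  for _ in range(num_skills))
        self.up = nn.ModuleList(nn.Linear(rank, dim, bias=False)
                                for _ in range(num_skills))

    def forward(self, x: torch.Tensor, skill_id: int) -> torch.Tensor:
        # Base output plus the low-rank update from the selected expert.
        return self.base(x) + self.up[skill_id](self.down[skill_id](x))

layer = SkillExpertAdapter(dim=768, num_skills=8)
tokens = torch.randn(1, 16, 768)                  # dummy video-token features
out = layer(tokens, skill_id=3)                   # route to expert 3
```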