AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?
July 31, 2023
Authors: Qi Zhao, Ce Zhang, Shijie Wang, Changcheng Fu, Nakul Agarwal, Kwonjoon Lee, Chen Sun
cs.AI
Abstract
Can we better anticipate an actor's future actions (e.g. mix eggs) by knowing
what commonly happens after his/her current action (e.g. crack eggs)? What if
we also know the longer-term goal of the actor (e.g. making egg fried rice)?
The long-term action anticipation (LTA) task aims to predict an actor's future
behavior from video observations in the form of verb and noun sequences, and it
is crucial for human-machine interaction. We propose to formulate the LTA task
from two perspectives: a bottom-up approach that predicts the next actions
autoregressively by modeling temporal dynamics; and a top-down approach that
infers the goal of the actor and plans the needed procedure to accomplish the
goal. We hypothesize that large language models (LLMs), which have been
pretrained on procedure text data (e.g. recipes, how-tos), have the potential
to help LTA from both perspectives: they can provide prior knowledge about
likely next actions, and they can infer the goal given the observed part of a
procedure. To leverage LLMs, we propose a two-stage
framework, AntGPT. It first recognizes the actions already performed in the
observed videos and then asks an LLM to predict the future actions via
conditioned generation, or to infer the goal and plan the whole procedure by
chain-of-thought prompting. Empirical results on the Ego4D LTA v1 and v2
benchmarks, EPIC-Kitchens-55, as well as EGTEA GAZE+ demonstrate the
effectiveness of our proposed approach. AntGPT achieves state-of-the-art
performance on all of the above benchmarks and, as qualitative analysis shows,
can successfully infer the goal and thus perform goal-conditioned
"counterfactual" prediction. Code and models will be released at
https://brown-palm.github.io/AntGPT
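The two-stage framework described above (recognize performed actions, then prompt an LLM either for bottom-up conditioned generation or for top-down goal inference via chain-of-thought) can be illustrated with a minimal sketch. All names below (`build_prompt`, the prompt wording, the example action pairs) are hypothetical illustrations, not the authors' actual implementation or prompts:

```python
# Hypothetical sketch of AntGPT's two-stage pipeline. Stage 1 (not shown)
# would recognize (verb, noun) action pairs from the observed video; stage 2
# formats those pairs into an LLM prompt. Names and prompt text are
# illustrative assumptions, not the paper's actual code.

def build_prompt(observed_actions, use_chain_of_thought=False):
    """Format recognized (verb, noun) pairs into an LLM prompt.

    use_chain_of_thought=False -> bottom-up: conditioned generation of
    the next actions from the observed history.
    use_chain_of_thought=True  -> top-down: ask the LLM to infer the
    actor's goal first, then plan the remaining steps toward it.
    """
    history = ", ".join(f"{verb} {noun}" for verb, noun in observed_actions)
    if use_chain_of_thought:
        return (
            f"Observed actions so far: {history}.\n"
            "First infer the actor's long-term goal, "
            "then list the remaining steps needed to accomplish it."
        )
    return (
        f"Observed actions so far: {history}.\n"
        "Predict the next actions as verb-noun pairs."
    )

# Example: a partially observed egg-fried-rice procedure.
observed = [("crack", "egg"), ("mix", "egg")]
print(build_prompt(observed))                            # bottom-up prompt
print(build_prompt(observed, use_chain_of_thought=True)) # top-down prompt
```

The resulting string would be sent to an LLM; the bottom-up variant corresponds to autoregressive next-action prediction, while the chain-of-thought variant corresponds to inferring the goal and planning the whole procedure.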