Generating Fine-Grained Human Motions Using ChatGPT-Refined Descriptions
December 5, 2023
Authors: Xu Shi, Chuanchen Luo, Junran Peng, Hongwen Zhang, Yunlian Sun
cs.AI
Abstract
Recently, significant progress has been made in text-based motion generation,
enabling the generation of diverse and high-quality human motions that conform
to textual descriptions. However, it remains challenging to generate
fine-grained or stylized motions due to the lack of datasets annotated with
detailed textual descriptions. By adopting a divide-and-conquer strategy, we
propose a new framework named Fine-Grained Human Motion Diffusion Model
(FG-MDM) for human motion generation. Specifically, we first parse the previously
vague textual annotations into fine-grained descriptions of different body parts
by leveraging a large language model (GPT-3.5). We then use these fine-grained
descriptions to guide a transformer-based diffusion model. FG-MDM can generate
fine-grained and stylized motions even outside of the distribution of the
training data. Our experimental results demonstrate the superiority of FG-MDM
over previous methods, especially its strong generalization capability. We will
release our fine-grained textual annotations for HumanML3D and KIT.
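
To illustrate the annotation-refinement step described in the abstract, below is a minimal sketch that asks GPT-3.5 to expand a coarse motion caption into per-body-part descriptions. The prompt wording, output format, body-part list, and use of the openai Python client with gpt-3.5-turbo are assumptions for illustration only; the paper's actual prompts and tooling are not specified in the abstract.

```python
# Illustrative sketch (not the authors' code): rewrite a vague motion caption
# as fine-grained, per-body-part descriptions using GPT-3.5.
# Prompt text and body-part list are assumptions made for this example.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

BODY_PARTS = ["head", "torso", "left arm", "right arm", "left leg", "right leg"]

def refine_caption(caption: str) -> str:
    """Ask GPT-3.5 to produce one short sentence per body part for a caption."""
    prompt = (
        "Rewrite the following human motion description as one short sentence "
        f"for each of these body parts: {', '.join(BODY_PARTS)}.\n"
        f"Description: {caption}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # keep rewrites close to the original meaning
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Example of refining a HumanML3D-style caption into fine-grained text.
    print(refine_caption("a person walks forward slowly and waves."))
```

The resulting fine-grained text would then serve as conditioning input for the transformer-based diffusion model, in place of the original coarse caption.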