EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions
February 27, 2024
Authors: Linrui Tian, Qi Wang, Bang Zhang, Liefeng Bo
cs.AI
Abstract
In this work, we tackle the challenge of enhancing the realism and
expressiveness in talking head video generation by focusing on the dynamic and
nuanced relationship between audio cues and facial movements. We identify the
limitations of traditional techniques that often fail to capture the full
spectrum of human expressions and the uniqueness of individual facial styles.
To address these issues, we propose EMO, a novel framework that utilizes a
direct audio-to-video synthesis approach, bypassing the need for intermediate
3D models or facial landmarks. Our method ensures seamless frame transitions
and consistent identity preservation throughout the video, resulting in highly
expressive and lifelike animations. Experimental results demonstrate that EMO is
able to produce not only convincing speaking videos but also singing videos in
various styles, significantly outperforming existing state-of-the-art
methodologies in terms of expressiveness and realism.
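The core idea the abstract describes — conditioning a video diffusion model directly on audio features, with no intermediate 3D model or facial landmarks — can be caricatured as below. This is a minimal, hypothetical sketch: every function and dimension here is a toy stand-in, not the authors' architecture (EMO uses large attention-based diffusion networks).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real networks.
def audio_encoder(waveform):
    """Map raw audio to a fixed-size conditioning vector."""
    return np.tanh(waveform.reshape(4, -1).mean(axis=1))

def denoiser(noisy_latent, t, audio_emb, identity_emb):
    """Predict the noise in a frame latent, conditioned directly on
    audio and identity embeddings -- no 3D model or landmarks."""
    cond = np.concatenate([audio_emb, identity_emb]).mean()
    return noisy_latent * (0.1 * t) + cond * 0.01

def sample_frame(audio_emb, identity_emb, steps=10, dim=8):
    """Crude DDPM-style reverse process for a single frame latent."""
    x = rng.standard_normal(dim)            # start from pure noise
    for t in range(steps, 0, -1):
        eps_hat = denoiser(x, t / steps, audio_emb, identity_emb)
        x = x - eps_hat / steps             # simplified denoising update
    return x

waveform = rng.standard_normal(64)          # one clip of driving audio
identity = rng.standard_normal(4)           # from a reference portrait image
frame = sample_frame(audio_encoder(waveform), identity)
print(frame.shape)
```

The point of the sketch is the data flow: the same identity embedding conditions every frame (identity preservation), while the audio embedding varies over time and drives the motion, which is what lets the model capture expressive dynamics that landmark-based intermediates tend to flatten.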