PixelSmile: Toward Fine-Grained Facial Expression Editing
March 26, 2026
Authors: Jiabin Hua, Hengyuan Xu, Aojie Li, Wei Cheng, Gang Yu, Xingjun Ma, Yu-Gang Jiang
cs.AI
Abstract
Fine-grained facial expression editing has long been limited by intrinsic semantic overlap. To address this, we construct the Flex Facial Expression (FFE) dataset with continuous affective annotations and establish FFE-Bench to evaluate structural confusion, editing accuracy, linear controllability, and the trade-off between expression editing and identity preservation. We propose PixelSmile, a diffusion framework that disentangles expression semantics via fully symmetric joint training. PixelSmile combines intensity supervision with contrastive learning to produce stronger and more distinguishable expressions, achieving precise and stable linear expression control through textual latent interpolation. Extensive experiments demonstrate that PixelSmile achieves superior disentanglement and robust identity preservation, confirming its effectiveness for continuous, controllable, and fine-grained expression editing, while naturally supporting smooth expression blending.
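The "precise and stable linear expression control through textual latent interpolation" described above can be sketched as a simple blend between two text-encoder embeddings. This is an illustrative sketch only: the function name, the use of plain linear interpolation, and the toy embeddings are assumptions, not the paper's exact implementation.

```python
import numpy as np

def interpolate_text_latents(z_neutral: np.ndarray,
                             z_target: np.ndarray,
                             alpha: float) -> np.ndarray:
    """Linearly blend two text-embedding vectors.

    alpha = 0.0 -> neutral-expression prompt embedding
    alpha = 1.0 -> full-intensity target-expression embedding
    Intermediate alphas yield continuous expression intensities,
    which a conditioned diffusion model can then render.
    """
    return (1.0 - alpha) * z_neutral + alpha * z_target

# Toy 4-d vectors standing in for real text-encoder outputs.
z_neutral = np.array([0.0, 0.0, 1.0, 0.0])
z_smile   = np.array([1.0, 0.0, 0.0, 0.0])

# Sweeping alpha produces a smooth trajectory from neutral to smile;
# each point would condition one edited image.
trajectory = [interpolate_text_latents(z_neutral, z_smile, a)
              for a in np.linspace(0.0, 1.0, 5)]
```

Blending between the embeddings of two different target expressions (rather than neutral and one target) would give the smooth expression blending the abstract mentions.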
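The combination of intensity supervision with contrastive learning could take the form of a supervised-contrastive term (pulling same-expression samples together and pushing different expressions apart, which targets the semantic-overlap problem) plus an intensity-regression term. The loss form, names, and weighting below are assumptions for illustration, not the paper's actual objective.

```python
import numpy as np

def supcon_intensity_loss(feats: np.ndarray,
                          labels: np.ndarray,
                          pred_intensity: np.ndarray,
                          true_intensity: np.ndarray,
                          tau: float = 0.1) -> float:
    """Supervised-contrastive loss over expression features,
    plus an MSE term supervising predicted expression intensity.
    Illustrative sketch; equal weighting of the two terms is an assumption."""
    # Cosine-similarity logits between all pairs in the batch.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / tau
    np.fill_diagonal(sim, -np.inf)  # exclude self-pairs

    # Log-softmax over each row's similarities.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    # Positives are other samples with the same expression label.
    pos = labels[None, :] == labels[:, None]
    np.fill_diagonal(pos, False)
    per_sample = -np.where(pos, log_prob, 0.0).sum(1) / np.maximum(pos.sum(1), 1)

    intensity_mse = np.mean((pred_intensity - true_intensity) ** 2)
    return float(per_sample.mean() + intensity_mse)
```

The contrastive term sharpens the decision boundary between overlapping expression semantics, while the intensity term anchors the continuous affective annotations of the FFE dataset.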