GazeGen: Gaze-Driven User Interaction for Visual Content Generation
November 7, 2024
Authors: He-Yen Hsieh, Ziyun Li, Sai Qian Zhang, Wei-Te Mark Ting, Kao-Den Chang, Barbara De Salvo, Chiao Liu, H. T. Kung
cs.AI
Abstract
We present GazeGen, a user interaction system that generates visual content
(images and videos) for locations indicated by the user's eye gaze. GazeGen
allows intuitive manipulation of visual content by targeting regions of
interest with gaze. Using advanced techniques in object detection and
generative AI, GazeGen performs gaze-controlled image adding/deleting,
repositioning, and surface material changes of image objects, and converts
static images into videos. Central to GazeGen is the DFT Gaze (Distilled and
Fine-Tuned Gaze) agent, an ultra-lightweight model with only 281K parameters,
performing accurate real-time gaze predictions tailored to individual users'
eyes on small edge devices. GazeGen is the first system to combine visual
content generation with real-time gaze estimation, made possible exclusively by
DFT Gaze. This real-time gaze estimation enables various visual content
generation tasks, all controlled by the user's gaze. The inputs to DFT Gaze are
the user's eye images, while the inputs for visual content generation are the
user's view and the predicted gaze point from DFT Gaze. To achieve efficient
gaze predictions, we derive the small model from a large model (10x larger) via
novel knowledge distillation and personal adaptation techniques. We integrate
knowledge distillation with a masked autoencoder, developing a compact yet
powerful gaze estimation model. This model is further fine-tuned with Adapters,
enabling highly accurate and personalized gaze predictions with minimal user
input. DFT Gaze ensures low-latency and precise gaze tracking, supporting a
wide range of gaze-driven tasks. We validate the performance of DFT Gaze on AEA
and OpenEDS2020 benchmarks, demonstrating low angular gaze error and low
latency on the edge device (Raspberry Pi 4). Furthermore, we describe
applications of GazeGen, illustrating its versatility and effectiveness in
various usage scenarios.
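
The abstract highlights two techniques behind DFT Gaze: distilling a compact student gaze model from a roughly 10x larger teacher, and personalizing the distilled model with lightweight Adapters using minimal user input. The PyTorch sketch below is only an illustration of how such pieces could fit together; the module names, layer sizes, loss weighting, and calibration loop are our assumptions, not the paper's actual 281K-parameter DFT Gaze architecture or its masked-autoencoder-based distillation recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add."""

    def __init__(self, dim, bottleneck=16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(F.relu(self.down(x)))


class TinyGazeNet(nn.Module):
    """Small CNN mapping a grayscale eye crop to a 2D gaze angle (yaw, pitch).

    Hypothetical student model; not the paper's DFT Gaze architecture.
    """

    def __init__(self, width=16, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width * 2, width * 4, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(width * 4, feat_dim), nn.ReLU(),
        )
        self.adapter = Adapter(feat_dim)  # the only part updated during personalization
        self.head = nn.Linear(feat_dim, 2)

    def forward(self, eye_img):
        return self.head(self.adapter(self.backbone(eye_img)))


def distillation_loss(student_gaze, teacher_gaze, gaze_gt, alpha=0.5):
    """Blend a match-the-teacher term with a supervised gaze regression term."""
    return alpha * F.mse_loss(student_gaze, teacher_gaze) + \
        (1 - alpha) * F.l1_loss(student_gaze, gaze_gt)


def personalize(model, calib_loader, epochs=3, lr=1e-3):
    """Freeze the distilled weights and fit only the adapter on a few user samples."""
    for p in model.parameters():
        p.requires_grad = False
    for p in model.adapter.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(model.adapter.parameters(), lr=lr)
    for _ in range(epochs):
        for eye_img, gaze_gt in calib_loader:
            loss = F.l1_loss(model(eye_img), gaze_gt)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

In a setup of this kind, only the small adapter is updated during per-user calibration while the distilled backbone stays frozen, which keeps personalization cheap enough to run on an edge device such as a Raspberry Pi 4.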