GPT-4V(ision) is a Human-Aligned Evaluator for Text-to-3D Generation
January 8, 2024
Authors: Tong Wu, Guandao Yang, Zhibing Li, Kai Zhang, Ziwei Liu, Leonidas Guibas, Dahua Lin, Gordon Wetzstein
cs.AI
Abstract
Despite recent advances in text-to-3D generative methods, there is a notable
absence of reliable evaluation metrics. Existing metrics usually focus on a
single criterion each, such as how well the asset aligns with the input text.
These metrics lack the flexibility to generalize to different evaluation
criteria and might not align well with human preferences. Conducting user
preference studies is an alternative that offers both adaptability and
human-aligned results. User studies, however, can be very expensive to scale.
This paper presents an automatic, versatile, and human-aligned evaluation
metric for text-to-3D generative models. To this end, we first develop a prompt
generator using GPT-4V to produce evaluation prompts, which serve as input to
compare text-to-3D models. We further design a method that instructs GPT-4V to
compare two 3D assets according to user-defined criteria. Finally, we use these
pairwise comparison results to assign these models Elo ratings. Experimental
results suggest that our metric aligns strongly with human preferences across
different evaluation criteria.
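The final step described above — turning pairwise comparison outcomes into Elo ratings — can be sketched with the standard Elo update rule. This is a minimal illustration, not the paper's exact procedure; the model names, the K-factor of 32, and the base rating of 1000 are assumptions for the example.

```python
def update_elo(r_a, r_b, score_a, k=32):
    """One Elo update; score_a is 1.0 if A wins, 0.0 if B wins, 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

def elo_from_comparisons(models, comparisons, base=1000.0, k=32):
    """comparisons: list of (model_a, model_b, score_a) tuples, e.g. produced
    by asking GPT-4V which of two generated 3D assets better meets a criterion."""
    ratings = {m: base for m in models}
    for a, b, score_a in comparisons:
        ratings[a], ratings[b] = update_elo(ratings[a], ratings[b], score_a, k)
    return ratings

# Hypothetical example: "model_x" wins both judged comparisons against "model_y".
ratings = elo_from_comparisons(
    ["model_x", "model_y"],
    [("model_x", "model_y", 1.0), ("model_x", "model_y", 1.0)],
)
```

Because Elo updates are order-dependent and comparisons from a judge can be noisy, aggregating many pairwise judgments per model pair yields more stable rankings.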