GlyphControl: Glyph Conditional Control for Visual Text Generation
May 29, 2023
Authors: Yukang Yang, Dongnan Gui, Yuhui Yuan, Haisong Ding, Han Hu, Kai Chen
cs.AI
Abstract
Recently, there has been a growing interest in developing diffusion-based
text-to-image generative models capable of generating coherent and well-formed
visual text. In this paper, we propose a novel and efficient approach called
GlyphControl to address this task. Unlike existing methods that rely on
character-aware text encoders like ByT5 and require retraining of text-to-image
models, our approach leverages additional glyph conditional information to
enhance the performance of the off-the-shelf Stable-Diffusion model in
generating accurate visual text. By incorporating glyph instructions, users can
customize the content, location, and size of the generated text according to
their specific requirements. To facilitate further research in visual text
generation, we construct a training benchmark dataset called LAION-Glyph. We
evaluate the effectiveness of our approach by measuring OCR-based metrics and
CLIP scores of the generated visual text. Our empirical evaluations demonstrate
that GlyphControl outperforms the recent DeepFloyd IF approach in terms of OCR
accuracy and CLIP scores, highlighting the efficacy of our method.