

GlyphControl: Glyph Conditional Control for Visual Text Generation

May 29, 2023
Authors: Yukang Yang, Dongnan Gui, Yuhui Yuan, Haisong Ding, Han Hu, Kai Chen
cs.AI

Abstract

Recently, there has been a growing interest in developing diffusion-based text-to-image generative models capable of generating coherent and well-formed visual text. In this paper, we propose a novel and efficient approach called GlyphControl to address this task. Unlike existing methods that rely on character-aware text encoders like ByT5 and require retraining of text-to-image models, our approach leverages additional glyph conditional information to enhance the performance of the off-the-shelf Stable-Diffusion model in generating accurate visual text. By incorporating glyph instructions, users can customize the content, location, and size of the generated text according to their specific requirements. To facilitate further research in visual text generation, we construct a training benchmark dataset called LAION-Glyph. We evaluate the effectiveness of our approach by measuring OCR-based metrics and CLIP scores of the generated visual text. Our empirical evaluations demonstrate that GlyphControl outperforms the recent DeepFloyd IF approach in terms of OCR accuracy and CLIP scores, highlighting the efficacy of our method.
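The abstract reports evaluation via OCR-based metrics on the generated visual text. A minimal sketch of one such metric, exact-match OCR accuracy, is shown below; the function name and the exact matching rules (whitespace stripping, case folding) are illustrative assumptions, and the paper's precise metric definitions may differ.

```python
def ocr_exact_match_accuracy(ocr_results, ground_truths, case_sensitive=False):
    """Fraction of generated images whose OCR-recognized text exactly
    matches the text the glyph instruction requested.

    ocr_results   -- strings read back from the generated images by an OCR engine
    ground_truths -- the target strings specified in the glyph instructions
    """
    assert len(ocr_results) == len(ground_truths), "one OCR result per target"
    hits = 0
    for recognized, target in zip(ocr_results, ground_truths):
        a, b = recognized.strip(), target.strip()
        if not case_sensitive:
            a, b = a.lower(), b.lower()
        hits += (a == b)  # count only perfect matches
    return hits / len(ground_truths)

# Example: one of two generated images carries the requested text verbatim.
acc = ocr_exact_match_accuracy(["Hello", "wrld"], ["hello", "world"])
# acc == 0.5
```

A stricter variant could additionally check the detected text's position and size against the glyph instruction, since GlyphControl lets users specify both.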