DialectGen: Benchmarking and Improving Dialect Robustness in Multimodal Generation
October 16, 2025
Authors: Yu Zhou, Sohyun An, Haikang Deng, Da Yin, Clark Peng, Cho-Jui Hsieh, Kai-Wei Chang, Nanyun Peng
cs.AI
Abstract
Contact languages like English exhibit rich regional variation in the form of dialects, which dialect speakers often use when interacting with generative models. However, can multimodal generative models effectively produce content given dialectal text input? In this work, we study this question by constructing a new large-scale benchmark spanning six common English dialects. We work with dialect speakers to collect and verify over 4,200 unique prompts and evaluate 17 image and video generation models. Our automatic and human evaluation results show that current state-of-the-art multimodal generative models suffer a 32.26% to 48.17% performance degradation when a single dialect word is used in the prompt. Common mitigation methods such as fine-tuning and prompt rewriting improve dialect performance only by small margins (< 7%), while potentially incurring significant performance degradation on Standard American English (SAE). To address this, we design a general encoder-based mitigation strategy for multimodal generative models. Our method teaches the model to recognize new dialect features while preserving SAE performance. Experiments on models such as Stable Diffusion 1.5 show that our method simultaneously raises performance on five dialects to be on par with SAE (+34.4%), while incurring near-zero cost to SAE performance.
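The reported 32.26%–48.17% figures can be read as a relative drop between a model's score on an SAE prompt and on its dialect counterpart. A minimal sketch of that arithmetic is below; the function name and the sample scores are illustrative (the abstract does not specify the underlying metric, so the 0.62/0.40 pair is a hypothetical alignment-score example, not a result from the paper):

```python
def relative_degradation(sae_score: float, dialect_score: float) -> float:
    """Relative performance drop (%) when a dialect prompt
    replaces its SAE-equivalent prompt."""
    return (sae_score - dialect_score) / sae_score * 100.0

# Hypothetical per-prompt scores: SAE prompt scores 0.62,
# the same prompt with one dialect word swapped in scores 0.40.
drop = relative_degradation(0.62, 0.40)
print(f"{drop:.2f}% degradation")  # → 35.48% degradation
```

Under this reading, a mitigation that "raises performance to be on par with SAE" drives this quantity toward zero for dialect prompts while leaving SAE scores unchanged.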