DialectGen: Benchmarking and Improving Dialect Robustness in Multimodal Generation
October 16, 2025
Authors: Yu Zhou, Sohyun An, Haikang Deng, Da Yin, Clark Peng, Cho-Jui Hsieh, Kai-Wei Chang, Nanyun Peng
cs.AI
Abstract
Contact languages like English exhibit rich regional variations in the form of dialects, which are often used by dialect speakers interacting with generative models. However, can multimodal generative models effectively produce content given dialectal textual input? In this work, we study this question by constructing a new large-scale benchmark spanning six common English dialects. We work with dialect speakers to collect and verify over 4200 unique prompts, and we evaluate 17 image and video generative models. Our automatic and human evaluation results show that current state-of-the-art multimodal generative models exhibit 32.26% to 48.17% performance degradation when a single dialect word is used in the prompt. Common mitigation methods such as fine-tuning and prompt rewriting can only improve dialect performance by small margins (< 7%), while potentially incurring significant performance degradation in Standard American English (SAE). To this end, we design a general encoder-based mitigation strategy for multimodal generative models. Our method teaches the model to recognize new dialect features while preserving SAE performance. Experiments on models such as Stable Diffusion 1.5 show that our method is able to simultaneously raise performance on five dialects to be on par with SAE (+34.4%), while incurring near-zero cost to SAE performance.