Attributes as Textual Genes: Leveraging LLMs as Genetic Algorithm Simulators for Conditional Synthetic Data Generation
September 2, 2025
Authors: Guangzeng Han, Weisi Liu, Xiaolei Huang
cs.AI
Abstract
Large Language Models (LLMs) excel at generating synthetic data, but ensuring
its quality and diversity remains challenging. We propose Genetic Prompt, a
novel framework that combines genetic algorithms with LLMs to augment synthetic
data generation. Our approach treats semantic text attributes as gene sequences
and leverages the LLM to simulate crossover and mutation operations. This
genetic process enhances data quality and diversity by creating novel attribute
combinations, yielding synthetic distributions closer to real-world data. To
optimize parent selection, we also integrate an active learning scheme that
expands the offspring search space. Our experiments on multiple NLP tasks
reveal several key findings: Genetic Prompt not only significantly outperforms
state-of-the-art baselines but also shows robust performance across various
generator model sizes and scales. Moreover, we demonstrate that fusing our
synthetic data with the original training set significantly boosts downstream
model performance, particularly for class-imbalanced scenarios. Our findings
validate that Genetic Prompt is an effective method for producing high-quality
synthetic data for a wide range of NLP applications.
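The genetic loop described above can be made concrete with a short sketch. The snippet below is a minimal, hypothetical illustration only: `call_llm` stands in for any chat-completion client, the prompt wording is invented, and the uncertainty-based parent selector is a simplified proxy for the paper's active learning scheme, not its released implementation.

```python
# Sketch of an LLM-driven genetic loop over textual attribute "genes".
# Assumptions: `call_llm` is a placeholder API; prompts and the
# low-confidence parent selection are illustrative, not the paper's exact method.
import random

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., a chat-completion endpoint)."""
    raise NotImplementedError

def crossover_and_mutate(parent_a: dict, parent_b: dict) -> dict:
    """Ask the LLM to recombine two parents' attributes and mutate one."""
    prompt = (
        "Treat these attribute sets as gene sequences.\n"
        f"Parent A: {parent_a}\nParent B: {parent_b}\n"
        "Perform crossover (mix attributes from both parents) and mutate one "
        "attribute to a plausible new value. Return the child as key: value lines."
    )
    raw = call_llm(prompt)
    return dict(line.split(": ", 1) for line in raw.splitlines() if ": " in line)

def generate_sample(child: dict, task: str) -> str:
    """Condition the generator on the child's attributes to yield one example."""
    return call_llm(f"Write one {task} example with these attributes: {child}")

def select_parents(population: list, scores: list):
    """Toy active-learning stand-in: prefer attribute sets whose samples the
    downstream model is least confident about, expanding the search space."""
    ranked = sorted(zip(scores, population), key=lambda pair: pair[0])
    return ranked[0][1], ranked[1][1]

def genetic_prompt(seed_population: list, task: str,
                   generations: int = 5, uncertainty_fn=None) -> list:
    """Run the genetic loop; returns the accumulated synthetic corpus."""
    population, corpus = list(seed_population), []
    for _ in range(generations):
        scores = [uncertainty_fn(p) if uncertainty_fn else random.random()
                  for p in population]
        parent_a, parent_b = select_parents(population, scores)
        child = crossover_and_mutate(parent_a, parent_b)
        corpus.append(generate_sample(child, task))
        population.append(child)  # children become candidate parents
    return corpus
```

Even in this reduced form, the sketch captures the abstract's core idea: selection pressure operates on attribute combinations rather than raw text, and the LLM serves both as the genetic operator (crossover, mutation) and as the conditional generator that turns each child attribute set into a synthetic example.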