AnalogRetriever: Learning Cross-Modal Representations for Analog Circuit Retrieval
April 25, 2026
Authors: Yihan Wang, Lei Li, Yao Lai, Jing Wang, Yan Lu
cs.AI
Abstract
Analog circuit design relies heavily on reusing existing intellectual property (IP), yet searching across heterogeneous representations such as SPICE netlists, schematics, and functional descriptions remains challenging. Existing methods are largely limited to exact matching within a single modality and fail to capture cross-modal semantic relationships. To bridge this gap, we present AnalogRetriever, a unified tri-modal retrieval framework for analog circuit search. We first build a high-quality dataset on top of Masala-CHAI through a two-stage repair pipeline that raises the netlist compilation success rate from 22% to 100%. On this foundation, AnalogRetriever encodes schematics and descriptions with a vision-language model and netlists with a port-aware relational graph convolutional network, mapping all three modalities into a shared embedding space via curriculum contrastive learning. Experiments show that AnalogRetriever achieves an average Recall@1 of 75.2% across all six cross-modal retrieval directions, significantly outperforming existing baselines. When integrated into the AnalogCoder agentic framework as a retrieval-augmented generation module, it consistently improves functional pass rates and enables previously unsolved tasks to be completed. Our code and dataset will be released.
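The abstract describes aligning three modalities in a shared embedding space via contrastive learning. As an illustration only (the paper's actual loss, curriculum schedule, and hyperparameters are not given here), the following is a minimal NumPy sketch of the symmetric InfoNCE term that such cross-modal alignment schemes typically build on, applied to one pair of modalities; the function names and the temperature value are assumptions, not from the paper:

```python
import numpy as np

def l2_normalize(x, eps=1e-9):
    """Normalize each row to unit length so dot products are cosine similarities."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings.

    a, b: (batch, dim) embeddings of the same circuits in two modalities
    (e.g. netlist-graph vs. schematic-image). Row i of `a` and row i of `b`
    form a positive pair; all other rows in the batch act as negatives.
    """
    a, b = l2_normalize(a), l2_normalize(b)
    logits = a @ b.T / temperature  # (batch, batch) similarity matrix

    def cross_entropy_diag(l):
        # log-softmax along each row; the positive sits on the diagonal
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # average the a->b and b->a retrieval directions
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

A tri-modal setup would sum this term over the modality pairs (netlist/schematic, netlist/description, schematic/description); correctly paired batches yield a much lower loss than mismatched ones, which is what drives the embeddings together.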