AnalogRetriever: Learning Cross-Modal Representations for Analog Circuit Retrieval
April 25, 2026
Authors: Yihan Wang, Lei Li, Yao Lai, Jing Wang, Yan Lu
cs.AI
Abstract
Analog circuit design relies heavily on reusing existing intellectual property (IP), yet searching across heterogeneous representations such as SPICE netlists, schematics, and functional descriptions remains challenging. Existing methods are largely limited to exact matching within a single modality, failing to capture cross-modal semantic relationships. To bridge this gap, we present AnalogRetriever, a unified tri-modal retrieval framework for analog circuit search. We first build a high-quality dataset on top of Masala-CHAI through a two-stage repair pipeline that raises the netlist compile rate from 22% to 100%. Built on this foundation, AnalogRetriever encodes schematics and descriptions with a vision-language model and netlists with a port-aware relational graph convolutional network, mapping all three modalities into a shared embedding space via curriculum contrastive learning. Experiments show that AnalogRetriever achieves an average Recall@1 of 75.2% across all six cross-modal retrieval directions, significantly outperforming existing baselines. When integrated into the AnalogCoder agentic framework as a retrieval-augmented generation module, it consistently improves functional pass rates and enables previously unsolved tasks to be completed. Our code and dataset will be released.
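The abstract does not spell out the training objective, but the phrase "six cross-modal retrieval directions" suggests a symmetric contrastive loss over every ordered pair of the three modalities. The sketch below illustrates one plausible form of such an objective: InfoNCE averaged over all six directions between schematic, description, and netlist embeddings. The function names, the embedding variables (`z_sch`, `z_desc`, `z_net`), and the temperature value are illustrative assumptions, not the paper's actual implementation, and the curriculum schedule is not modeled here.

```python
import numpy as np

def info_nce(a, b, tau=0.07):
    """One-directional InfoNCE loss: row i of `a` should retrieve row i of `b`.

    `a`, `b`: (batch, dim) arrays of paired embeddings from two modalities.
    """
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)   # L2-normalize
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    logits = (a @ b.T) / tau                            # cosine similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                 # diagonal = positive pairs

def tri_modal_loss(z_sch, z_desc, z_net, tau=0.07):
    """Average InfoNCE over all six cross-modal directions:
    schematic<->description, schematic<->netlist, description<->netlist."""
    pairs = [(z_sch, z_desc), (z_sch, z_net), (z_desc, z_net)]
    return sum(info_nce(a, b, tau) + info_nce(b, a, tau) for a, b in pairs) / 6
```

A curriculum variant of this loss would typically reweight or reorder training pairs from easy to hard (e.g. by similarity margin) over the course of training.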