NER Retriever: Zero-Shot Named Entity Retrieval with Type-Aware Embeddings
September 4, 2025
Authors: Or Shachar, Uri Katz, Yoav Goldberg, Oren Glickman
cs.AI
Abstract
We present NER Retriever, a zero-shot retrieval framework for ad-hoc Named Entity Retrieval, a variant of Named Entity Recognition (NER) in which the types of interest are not provided in advance and a user-defined type description is instead used to retrieve documents mentioning entities of that type. Instead of relying on fixed schemas or fine-tuned models, our method builds on internal representations of large language models (LLMs) to embed both entity mentions and user-provided open-ended type descriptions into a shared semantic space. We show that internal representations, specifically the value vectors from mid-layer transformer blocks, encode fine-grained type information more effectively than commonly used top-layer embeddings.
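As a concrete illustration, the sketch below captures per-token value vectors from one attention block via a forward hook and mean-pools them over a mention span. It assumes a LLaMA-style Hugging Face decoder; the model name, layer index, and pooling choice are illustrative, not the paper's reported configuration.

```python
# Hedged sketch: capture mid-layer attention value vectors with a forward
# hook. Model, layer index, and mention pooling are assumptions for the demo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B"  # assumed; any decoder exposing .model.layers works
LAYER = 16                         # "mid-layer" is illustrative

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

captured = {}

def save_values(module, inputs, output):
    # v_proj output: (batch, seq_len, hidden) value vectors, before head split
    captured["v"] = output.detach()

model.model.layers[LAYER].self_attn.v_proj.register_forward_hook(save_values)

def embed_mention(sentence: str, mention: str) -> torch.Tensor:
    """Mean-pool the captured value vectors over the mention's token span."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        model(**enc)
    # map the mention's character span to token indices via the offset mapping
    start = sentence.index(mention)
    end = start + len(mention)
    offsets = tok(sentence, return_offsets_mapping=True)["offset_mapping"]
    span = [i for i, (s, e) in enumerate(offsets) if s < end and e > start]
    return captured["v"][0, span].float().mean(dim=0)

vec = embed_mention("Lionel Messi joined Inter Miami in 2023.", "Inter Miami")
print(vec.shape)  # torch.Size([hidden_size])
```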
To refine these representations, we train a lightweight contrastive projection network that aligns type-compatible entities while separating unrelated types.
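A minimal sketch of such a projection network follows, assuming a two-layer MLP trained with a triplet-style loss on unit-normalized outputs; the dimensions and the exact objective are stand-ins, since the abstract does not pin them down.

```python
# Hedged sketch: a small MLP projects raw value-vector embeddings into a
# compact space where same-type mentions attract and unrelated types repel.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionNet(nn.Module):
    def __init__(self, in_dim: int = 4096, out_dim: int = 256):  # dims assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # unit vectors for cosine kNN

def triplet_loss(anchor, positive, negative, margin: float = 0.2):
    """Pull type-compatible pairs together, push unrelated types apart."""
    pos = (anchor * positive).sum(-1)  # cosine similarity (inputs normalized)
    neg = (anchor * negative).sum(-1)
    return F.relu(margin + neg - pos).mean()

proj = ProjectionNet()
opt = torch.optim.AdamW(proj.parameters(), lr=1e-4)

# one illustrative training step on random stand-in embeddings
a, p, n = (torch.randn(32, 4096) for _ in range(3))
opt.zero_grad()
loss = triplet_loss(proj(a), proj(p), proj(n))
loss.backward()
opt.step()
print(float(loss))
```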
The resulting entity embeddings are compact, type-aware, and well-suited for nearest-neighbor search.
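For the retrieval side, the sketch below indexes projected mention embeddings and queries them with an embedded type description, using FAISS exact inner-product search as one possible backend; the random arrays are stand-ins for real embeddings, assumed L2-normalized so inner product equals cosine similarity.

```python
# Hedged sketch: cosine kNN over projected entity embeddings with FAISS.
import numpy as np
import faiss

dim = 256
corpus = np.random.randn(10_000, dim).astype("float32")  # stand-in mention embeddings
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

index = faiss.IndexFlatIP(dim)  # exact inner-product search
index.add(corpus)

query = np.random.randn(1, dim).astype("float32")        # embedded type description
query /= np.linalg.norm(query)

scores, ids = index.search(query, 10)                    # top-10 type-compatible mentions
print(ids[0], scores[0])
```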
Evaluated on three benchmarks, NER Retriever significantly outperforms both lexical and dense sentence-level retrieval baselines. Our findings provide empirical support for representation selection within LLMs and demonstrate a practical solution for scalable, schema-free entity retrieval. The NER Retriever codebase is publicly available at https://github.com/ShacharOr100/ner_retriever.