Narrowing the Knowledge Evaluation Gap: Open-Domain Question Answering with Multi-Granularity Answers
January 9, 2024
Authors: Gal Yona, Roee Aharoni, Mor Geva
cs.AI
Abstract
Factual questions can typically be answered correctly at different levels of granularity. For example, both "August 4, 1961" and "1961" are correct answers to the question "When was Barack Obama born?". Standard question answering (QA) evaluation protocols, however, do not explicitly take this into account and compare a predicted answer against answers of a single granularity level. In this work, we propose GRANOLA QA, a novel evaluation setting where a predicted answer is evaluated in terms of both accuracy and informativeness against a set of multi-granularity answers. We present a simple methodology for enriching existing datasets with multi-granularity answers, and create GRANOLA-EQ, a multi-granularity version of the EntityQuestions dataset. We evaluate a range of decoding methods on GRANOLA-EQ, including a new algorithm, Decoding with Response Aggregation (DRAG), that is geared towards aligning the response granularity with the model's uncertainty. Our experiments show that large language models with standard decoding tend to generate specific answers, which are often incorrect. In contrast, when evaluated on multi-granularity answers, DRAG yields a nearly 20-point increase in accuracy on average, which further increases for rare entities. Overall, this reveals that standard evaluation and decoding schemes may significantly underestimate the knowledge encapsulated in LMs.
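
To make the multi-granularity evaluation setting concrete, here is a minimal Python sketch of GRANOLA-style scoring. It assumes a simple substring match and an illustrative geometric discount for coarser answers; the paper's exact matching and scoring definitions may differ.

```python
# Hypothetical sketch of GRANOLA-style QA scoring. The matching rule and
# the discount factor are illustrative assumptions, not the paper's
# exact definitions.

def matches(prediction: str, gold: str) -> bool:
    """Loose string match: the gold answer appears in the prediction."""
    return gold.lower() in prediction.lower()

def granola_scores(prediction: str, answers: list[str], discount: float = 0.5):
    """Score a prediction against multi-granularity gold answers.

    `answers` is ordered from most specific (e.g. "August 4, 1961")
    to coarsest (e.g. "1961"). Accuracy is 1.0 if any level matches;
    informativeness is discounted when only a coarser level matches.
    """
    for level, gold in enumerate(answers):
        if matches(prediction, gold):
            return 1.0, discount ** level  # (accuracy, informativeness)
    return 0.0, 0.0

# A coarse but correct answer gets full accuracy credit,
# but reduced informativeness.
print(granola_scores("1961", ["August 4, 1961", "August 1961", "1961"]))
# -> (1.0, 0.25)
```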
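Similarly, one plausible rendering of the aggregation idea behind DRAG is sketched below: sample several answers, commit to a specific answer only when the samples agree, and otherwise fall back to a coarser answer supported by the samples. The helpers `sample_answer` and `aggregate_with_lm` are hypothetical stand-ins for model calls; the paper's actual prompting and aggregation details may differ.

```python
# Simplified sketch of Decoding with Response Aggregation (DRAG):
# align the granularity of the output with the model's uncertainty,
# as reflected by agreement across sampled answers.

from collections import Counter

def drag(question: str, sample_answer, aggregate_with_lm,
         k: int = 5, threshold: float = 0.6) -> str:
    """Return a high-agreement specific answer, or an aggregated coarser one."""
    samples = [sample_answer(question) for _ in range(k)]
    answer, count = Counter(samples).most_common(1)[0]
    if count / k >= threshold:
        # Samples mostly agree: commit to the specific answer.
        return answer
    # Samples disagree: ask the model for the most specific statement
    # that is still consistent with (most of) the sampled answers,
    # e.g. "1961" when samples name different days in that year.
    return aggregate_with_lm(question, samples)
```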