Scaling Up LLM Reviews for Google Ads Content Moderation
February 7, 2024
作者: Wei Qiao, Tushar Dogra, Otilia Stretcu, Yu-Han Lyu, Tiantian Fang, Dongjin Kwon, Chun-Ta Lu, Enming Luo, Yuan Wang, Chih-Chun Chia, Ariel Fuxman, Fangzhou Wang, Ranjay Krishna, Mehmet Tek
cs.AI
Abstract
Large language models (LLMs) are powerful tools for content moderation, but
their inference costs and latency make them prohibitive for casual use on large
datasets, such as the Google Ads repository. This study proposes a method for
scaling up LLM reviews for content moderation in Google Ads. First, we use
heuristics to select candidates via filtering and duplicate removal, and create
clusters of ads for which we select one representative ad per cluster. We then
use LLMs to review only the representative ads. Finally, we propagate the LLM
decisions for the representative ads back to their clusters. This method
reduces the number of reviews by more than 3 orders of magnitude while
achieving a 2x recall compared to a baseline non-LLM model. The success of this
approach is a strong function of the representations used in clustering and
label propagation; we found that cross-modal similarity representations yield
better results than uni-modal representations.
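The cluster-then-propagate idea can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the names `cluster_and_propagate`, `review_fn`, and `threshold` are hypothetical, the greedy cosine-similarity clustering stands in for whatever clustering the authors use, and the embeddings here are placeholders for the cross-modal similarity representations the abstract describes.

```python
import numpy as np

def cluster_and_propagate(embeddings, threshold, review_fn):
    """Greedy similarity clustering over ad embeddings.

    Each ad joins the first cluster whose representative has cosine
    similarity >= `threshold`; otherwise it becomes the representative
    of a new cluster. Only representatives are sent to `review_fn`
    (standing in for the LLM review), and each decision is propagated
    back to every member of that representative's cluster.
    """
    # Normalize rows so a dot product equals cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    reps = []        # indices of representative ads, one per cluster
    assignment = []  # cluster id for each ad
    for i, v in enumerate(normed):
        for cid, r in enumerate(reps):
            if float(v @ normed[r]) >= threshold:
                assignment.append(cid)
                break
        else:
            assignment.append(len(reps))
            reps.append(i)
    # One review per cluster instead of one per ad.
    decisions = [review_fn(r) for r in reps]
    # Propagate each representative's decision to its whole cluster.
    return [decisions[cid] for cid in assignment]

# Toy usage: ads 0-1 are near-duplicates, as are ads 2-3, so only
# two reviews are issued for four ads.
ads = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.05, 0.99]])
labels = cluster_and_propagate(ads, 0.9,
                               lambda i: "violates" if i == 2 else "ok")
# labels == ["ok", "ok", "violates", "violates"]
```

With real ad corpora the review savings come from cluster sizes being large; the abstract's 3-orders-of-magnitude reduction corresponds to clusters averaging over a thousand ads per representative.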