
LoopCTR: Unlocking the Loop Scaling Power for Click-Through Rate Prediction

April 21, 2026
作者: Jiakai Tang, Runfeng Zhang, Weiqiu Wang, Yifei Liu, Chuan Wang, Xu Chen, Yeqiu Yang, Jian Wu, Yuning Jiang, Bo Zheng
cs.AI

Abstract

Scaling Transformer-based click-through rate (CTR) models by stacking more parameters brings growing computational and storage overhead, creating a widening gap between scaling ambitions and stringent industrial deployment constraints. We propose LoopCTR, which introduces a loop scaling paradigm that increases training-time computation through recursive reuse of shared model layers, decoupling computation from parameter growth. LoopCTR adopts a sandwich architecture enhanced with Hyper-Connected Residuals and Mixture-of-Experts, and employs process supervision at every loop depth to encode multi-loop benefits into the shared parameters. This enables a train-multi-loop, infer-zero-loop strategy in which a single forward pass without any loop already outperforms all baselines. Experiments on three public benchmarks and one industrial dataset demonstrate state-of-the-art performance. Oracle analysis further reveals 0.02--0.04 AUC of untapped headroom, with models trained with fewer loops exhibiting higher oracle ceilings, pointing to a promising frontier for adaptive inference.
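The loop-scaling idea described in the abstract can be sketched in a few lines: a single shared block is applied recursively, and a prediction (and hence a supervision target) is read out at every loop depth, so extra compute reuses the same parameters. This is a minimal numpy illustration, not the paper's implementation; the function names, shapes, and the tanh-residual block are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W = rng.normal(scale=0.1, size=(d, d))   # shared block parameters, reused at every loop
w_out = rng.normal(scale=0.1, size=d)    # shared prediction head

def shared_block(h):
    # One shared layer with a residual connection; loop scaling means
    # applying this same block repeatedly instead of stacking new layers.
    return np.tanh(h @ W) + h

def forward_with_loops(x, n_loops):
    """Return a CTR logit at every loop depth 0..n_loops.

    Depth 0 is the plain forward pass (the "infer-zero-loop" case);
    deeper entries recursively reuse the same W, so training-time
    computation grows while the parameter count stays fixed.
    """
    h = x
    logits = [float(h @ w_out)]          # depth-0 prediction
    for _ in range(n_loops):
        h = shared_block(h)              # recursive reuse of the shared layer
        logits.append(float(h @ w_out))  # one supervision target per depth
    return logits

x = rng.normal(size=d)
logits = forward_with_loops(x, n_loops=3)
# Training would attach a loss to all four depth-wise logits ("process
# supervision"), distilling multi-loop benefits into the shared W; at
# serving time only logits[0] (zero loops) needs to be computed.
print(len(logits))
```

Under this sketch, the train-multi-loop, infer-zero-loop strategy amounts to summing the per-depth losses during training but evaluating only the depth-0 logit at inference.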