ChatPaper.ai

Codified Foreshadowing-Payoff Text Generation

January 11, 2026
作者: Longfei Yun, Kun Zhou, Yupeng Hou, Letian Peng, Jingbo Shang
cs.AI

Abstract

Foreshadowing and payoff are ubiquitous narrative devices through which authors introduce commitments early in a story and resolve them through concrete, observable outcomes. However, despite advances in story generation, large language models (LLMs) frequently fail to bridge these long-range narrative dependencies, often leaving "Chekhov's guns" unfired even when the necessary context is present. Existing evaluations largely overlook this structural failure, focusing on surface-level coherence rather than the logical fulfillment of narrative setups. In this paper, we introduce Codified Foreshadowing-Payoff Generation (CFPG), a novel framework that reframes narrative quality through the lens of payoff realization. Recognizing that LLMs struggle to intuitively grasp the "triggering mechanism" of a foreshadowed event, CFPG transforms narrative continuity into a set of executable causal predicates. By mining and encoding Foreshadow-Trigger-Payoff triples from the BookSum corpus, we provide structured supervision that ensures foreshadowed commitments are not only mentioned but also temporally and logically fulfilled. Experiments demonstrate that CFPG significantly outperforms standard prompting baselines in payoff accuracy and narrative alignment. Our findings suggest that explicitly codifying narrative mechanics is essential for moving LLMs from surface-level fluency to genuine narrative competence.
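The core idea of encoding narrative continuity as executable causal predicates can be illustrated with a minimal sketch. The data structure, names, and story-state representation below are illustrative assumptions for exposition only, not the paper's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable

# A story state is modeled here, for illustration, as a set of
# observed event descriptions.
StoryState = set

@dataclass
class FTPTriple:
    """Hypothetical Foreshadow-Trigger-Payoff triple, in the spirit
    of CFPG: the trigger is an executable predicate over story state."""
    foreshadow: str                         # the early narrative commitment
    trigger: Callable[[StoryState], bool]   # condition under which the payoff should fire
    payoff: str                             # concrete, observable outcome

    def is_fulfilled(self, state: StoryState) -> bool:
        # The commitment counts as fulfilled only if the trigger
        # condition holds AND the payoff event appears in the story.
        return self.trigger(state) and self.payoff in state

# Chekhov's gun as a toy example.
gun = FTPTriple(
    foreshadow="a rifle hangs on the wall in act one",
    trigger=lambda s: "the duel is announced" in s,
    payoff="the rifle is fired",
)

story = {"a rifle hangs on the wall in act one", "the duel is announced"}
print(gun.is_fulfilled(story))                           # trigger met, payoff missing -> False
print(gun.is_fulfilled(story | {"the rifle is fired"}))  # -> True
```

Checking triples this way makes an unfired "Chekhov's gun" machine-detectable: the trigger predicate succeeds but the payoff event is absent from the generated story.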