

Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior

September 1, 2023
作者: Ashmit Khandelwal, Aditya Agrawal, Aanisha Bhattacharyya, Yaman K Singla, Somesh Singh, Uttaran Bhattacharya, Ishita Dasgupta, Stefano Petrangeli, Rajiv Ratn Shah, Changyou Chen, Balaji Krishnamurthy
cs.AI

Abstract

Shannon, in his seminal paper introducing information theory, divided communication into three levels: technical, semantic, and effectiveness. While the technical level is concerned with the accurate reconstruction of transmitted symbols, the semantic and effectiveness levels deal with the inferred meaning and its effect on the receiver. Thanks to telecommunications, solving the first-level problem has produced great advances like the internet. Large Language Models (LLMs) make some progress toward the second goal, but the third level remains largely untouched. The third problem concerns predicting and optimizing communication for desired receiver behavior. LLMs, despite showing wide generalization capabilities across a range of tasks, are unable to solve it. One reason for this underperformance could be the lack of "behavior tokens" in LLMs' training corpora. Behavior tokens define receiver behavior over a communication, such as shares, likes, clicks, purchases, and retweets. While preprocessing data for LLM training, behavior tokens are often removed from the corpora as noise. In this paper, we therefore make some initial progress toward reintroducing behavior tokens in LLM training. The trained models, besides showing performance similar to LLMs on content-understanding tasks, show generalization capabilities on behavior simulation, content simulation, behavior understanding, and behavior domain adaptation. Using a wide range of tasks on two corpora, we show results on all these capabilities. We call these models Large Content and Behavior Models (LCBMs). Further, to spur more research on LCBMs, we release our new Content Behavior Corpus (CBC), a repository containing communicators, messages, and corresponding receiver behaviors.
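To make the idea of "behavior tokens" concrete, here is a minimal sketch of how a (message, receiver-behavior) pair might be serialized into a single training string so that engagement signals survive preprocessing instead of being stripped as noise. The tag format and field names here are illustrative assumptions, not the paper's actual serialization scheme.

```python
# Hypothetical sketch: keep behavior tokens (likes, shares, clicks, ...)
# alongside the content during corpus preprocessing, rather than discarding
# them. The <content>/<behavior> tag format is an assumption for illustration.

def serialize_with_behavior(message: str, behavior: dict) -> str:
    """Serialize a message and its receiver behavior into one training string."""
    # Sort keys so the token order is deterministic across samples.
    behavior_str = " ".join(f"<{k}={v}>" for k, v in sorted(behavior.items()))
    return f"<content> {message} </content> <behavior> {behavior_str} </behavior>"

sample = serialize_with_behavior(
    "Check out our new product launch!",
    {"likes": 1200, "shares": 340, "clicks": 5600},
)
print(sample)
# → <content> Check out our new product launch! </content> <behavior> <clicks=5600> <likes=1200> <shares=340> </behavior>
```

A corpus of such strings could then be fed to a standard language-model training pipeline, letting the model learn the joint distribution of content and receiver behavior.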