
DeAL: Decoding-time Alignment for Large Language Models

February 5, 2024
Authors: James Y. Huang, Sailik Sengupta, Daniele Bonadiman, Yi-an Lai, Arshit Gupta, Nikolaos Pappas, Saab Mansour, Katrin Kirchhoff, Dan Roth
cs.AI

Abstract

Large Language Models (LLMs) are nowadays expected to generate content aligned with human preferences. Current work focuses on alignment at model training time, through techniques such as Reinforcement Learning with Human Feedback (RLHF). However, it is unclear if such methods are an effective choice to teach alignment objectives to the model. First, the inability to incorporate multiple, custom rewards and reliance on a model developer's view of universal and static principles are key limitations. Second, the residual gaps in model training and the reliability of such approaches are also questionable (e.g. susceptibility to jail-breaking even after safety training). To address these, we propose DeAL, a framework that allows the user to customize reward functions and enables Decoding-time Alignment of LLMs (DeAL). At its core, we view decoding as a heuristic-guided search process and facilitate the use of a wide variety of alignment objectives. Our experiments with programmatic constraints such as keyword and length constraints (studied widely in the pre-LLM era) and abstract objectives such as harmlessness and helpfulness (proposed in the post-LLM era) show that we can DeAL with fine-grained trade-offs, improve adherence to alignment objectives, and address residual gaps in LLMs. Lastly, while DeAL can be effectively paired with RLHF and prompting techniques, its generality makes decoding slower, an optimization we leave for future work.
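To make the core idea concrete, here is a minimal sketch of decoding as a heuristic-guided beam search, where each partial hypothesis is ranked by its language-model log-probability plus a weighted alignment heuristic. The stub language model, the keyword-presence heuristic, and the trade-off weight `lam` are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import math

# Toy vocabulary and a stub "language model" that returns uniform
# next-token log-probabilities. A real system would query an LLM here.
VOCAB = ["<eos>", "the", "cat", "sat", "safely", "down"]

def lm_logprobs(prefix):
    # Stub: uniform distribution regardless of the prefix.
    return {tok: math.log(1.0 / len(VOCAB)) for tok in VOCAB}

def keyword_heuristic(tokens, keywords=("safely",)):
    # Programmatic alignment objective (hypothetical example):
    # fraction of required keywords already present in the hypothesis.
    return sum(kw in tokens for kw in keywords) / len(keywords)

def guided_beam_search(prompt, beam_size=3, max_len=6, lam=2.0):
    # Each beam entry stores (cumulative LM log-prob, token list).
    beams = [(0.0, list(prompt))]
    for _ in range(max_len):
        candidates = []
        for lm_score, tokens in beams:
            if tokens[-1] == "<eos>":
                candidates.append((lm_score, tokens))  # finished hypothesis
                continue
            for tok, lp in lm_logprobs(tokens).items():
                candidates.append((lm_score + lp, tokens + [tok]))
        # Rank by LM score plus the weighted alignment heuristic,
        # then keep the top beam_size hypotheses.
        candidates.sort(key=lambda c: c[0] + lam * keyword_heuristic(c[1]),
                        reverse=True)
        beams = candidates[:beam_size]
    return beams[0][1]

print(guided_beam_search(["the"]))  # steered toward including "safely"
```

The weight `lam` controls the trade-off between fluency (LM score) and constraint satisfaction, mirroring the fine-grained trade-offs the abstract describes; because every candidate continuation must be scored by the heuristic at each step, this also illustrates why decoding-time alignment is slower than unconstrained decoding.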