

Branch-Solve-Merge Improves Large Language Model Evaluation and Generation

October 23, 2023
Authors: Swarnadeep Saha, Omer Levy, Asli Celikyilmaz, Mohit Bansal, Jason Weston, Xian Li
cs.AI

Abstract

Large Language Models (LLMs) are frequently used for multi-faceted language generation and evaluation tasks that involve satisfying intricate user constraints or taking into account multiple aspects and criteria. However, their performance can fall short, due to the model's lack of coherence and inability to plan and decompose the problem. We propose Branch-Solve-Merge (BSM), a Large Language Model program (Schlag et al., 2023) for tackling such challenging natural language tasks. It consists of branch, solve, and merge modules that are parameterized with specific prompts to the base LLM. These three modules plan a decomposition of the task into multiple parallel sub-tasks, independently solve them, and fuse the solutions to the sub-tasks. We apply our method to the tasks of LLM response evaluation and constrained text generation and evaluate its effectiveness with multiple LLMs, including Vicuna, LLaMA-2-chat, and GPT-4. BSM improves the evaluation correctness and consistency for each LLM by enhancing human-LLM agreement by up to 26%, reducing length and pairwise position biases by up to 50%, and allowing LLaMA-2-chat to match or outperform GPT-4 on most domains. On the constrained story generation task, BSM improves the coherence of the stories while also improving constraint satisfaction by 12%.
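To make the branch-solve-merge control flow described in the abstract concrete, here is a minimal Python sketch. It assumes a generic `llm` callable that maps a prompt string to a completion (standing in for Vicuna, LLaMA-2-chat, or GPT-4); the module prompts shown are illustrative placeholders, not the paper's actual task-specific prompts.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

# Sketch of the Branch-Solve-Merge (BSM) program structure: three modules
# (branch, solve, merge), each a prompted call to the same base LLM.
# The `llm` interface and prompt wordings below are assumptions.

def branch(llm: Callable[[str], str], task: str) -> List[str]:
    """Plan a decomposition of the task into parallel sub-tasks."""
    plan = llm(
        "Decompose the following task into independent sub-tasks, "
        f"one per line:\n{task}"
    )
    return [line.strip() for line in plan.splitlines() if line.strip()]

def solve(llm: Callable[[str], str], sub_task: str) -> str:
    """Solve a single sub-task independently of the others."""
    return llm(f"Solve the following sub-task:\n{sub_task}")

def merge(llm: Callable[[str], str], task: str, solutions: List[str]) -> str:
    """Fuse the sub-task solutions into a final answer to the original task."""
    joined = "\n\n".join(solutions)
    return llm(
        f"Task: {task}\n\n"
        f"Combine these sub-task solutions into one final answer:\n{joined}"
    )

def branch_solve_merge(llm: Callable[[str], str], task: str) -> str:
    sub_tasks = branch(llm, task)
    # The branch module plans sub-tasks that are independent by construction,
    # so the solve stage can run them concurrently.
    with ThreadPoolExecutor() as pool:
        solutions = list(pool.map(lambda st: solve(llm, st), sub_tasks))
    return merge(llm, task, solutions)
```

Because only the merge step depends on all sub-task outputs, the solve stage parallelizes naturally; for the evaluation task, each branch would correspond to one evaluation criterion, and for constrained generation, to a subset of the user's constraints.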