Automatic Calibration and Error Correction for Large Language Models via Pareto Optimal Self-Supervision

June 28, 2023
作者: Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon
cs.AI

Abstract

Large language models (LLMs) have demonstrated remarkable capabilities out of the box for a wide range of applications, yet accuracy remains a major growth area, especially in mission-critical domains such as biomedicine. An effective method to calibrate the confidence level of LLM responses is essential to automatically detect errors and facilitate human-in-the-loop verification. An important source of calibration signals stems from expert-stipulated programmatic supervision, which is often available at low cost but has its own limitations such as noise and coverage. In this paper, we introduce a Pareto optimal self-supervision framework that can leverage available programmatic supervision to systematically calibrate LLM responses by producing a risk score for every response, without any additional manual effort. This is accomplished by learning a harmonizer model to align LLM output with other available supervision sources, which assigns higher risk scores to more uncertain LLM responses and facilitates error correction. Experiments on standard relation extraction tasks in biomedical and general domains demonstrate the promise of this approach, with our proposed risk scores highly correlated with the real error rate of LLMs. For the most uncertain test instances, dynamic prompting based on our proposed risk scores results in significant accuracy improvement for off-the-shelf LLMs, boosting GPT-3 results past state-of-the-art (SOTA) weak supervision and GPT-4 results past SOTA supervised results on challenging evaluation datasets.
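The abstract's core recipe, learning a harmonizer that aligns the LLM's answers with programmatic supervision sources and reading a per-response risk score off their disagreement, can be illustrated with a short sketch. The code below is not the authors' implementation: the function names, the logistic-regression harmonizer, and the fixed per-source `weights` (a crude stand-in for the paper's Pareto-optimal combination of per-source losses) are all assumptions made for illustration.

```python
# Illustrative sketch of a harmonizer for LLM calibration (assumptions noted above).
# Idea: fit one model to agree with the LLM *and* each programmatic labeling
# source, then treat the harmonizer's disagreement with the LLM as a risk score.
import numpy as np
from sklearn.linear_model import LogisticRegression


def harmonizer_risk_scores(X, llm_labels, weak_label_matrix, weights=None):
    """Return a per-example risk score for the LLM's answers.

    X                 : (n, d) feature matrix for the inputs
    llm_labels        : (n,)  class predictions from the LLM
    weak_label_matrix : (n, k) labels from k programmatic sources (-1 = abstain)
    weights           : optional (k+1,) per-source weights; a fixed stand-in for
                        the paper's Pareto-optimal loss combination (assumption)
    """
    n, k = weak_label_matrix.shape
    weights = np.ones(k + 1) if weights is None else np.asarray(weights)

    # Stack the LLM as source 0 alongside the k weak sources; every
    # (example, source-label) pair becomes a weighted training vote.
    sources = np.column_stack([llm_labels, weak_label_matrix])
    rows, targets, sample_w = [], [], []
    for j in range(k + 1):
        mask = sources[:, j] != -1  # drop abstentions
        rows.append(X[mask])
        targets.append(sources[mask, j])
        sample_w.append(np.full(mask.sum(), weights[j]))

    harmonizer = LogisticRegression(max_iter=1000)
    harmonizer.fit(np.vstack(rows), np.concatenate(targets),
                   sample_weight=np.concatenate(sample_w))

    # Risk score: 1 minus the harmonizer's probability of the LLM's own answer,
    # so responses the consensus model doubts get the highest risk.
    proba = harmonizer.predict_proba(X)
    class_index = {c: i for i, c in enumerate(harmonizer.classes_)}
    idx = np.array([class_index[y] for y in llm_labels])
    return 1.0 - proba[np.arange(n), idx]
```

Downstream, the highest-risk responses would be the ones routed to dynamic prompting (e.g., re-querying the LLM with additional context) or human review, which is the mechanism behind the GPT-3 and GPT-4 accuracy gains the abstract reports.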