PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization
June 8, 2023
Authors: Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
cs.AI
Abstract
Instruction tuning large language models (LLMs) remains a challenging task, owing to the complexity of hyperparameter selection and the difficulty of evaluating the tuned models. To determine the optimal hyperparameters, an automatic, robust, and reliable evaluation benchmark is essential. However, establishing such a benchmark is not trivial, given the challenges of evaluation accuracy and privacy protection. In response, we introduce PandaLM, a judge large language model trained to identify the superior model among several candidate LLMs. PandaLM's focus extends beyond the objective correctness of responses, which is the main concern of traditional evaluation datasets; it also addresses vital subjective factors such as relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality. To ensure PandaLM's reliability, we collect a diverse human-annotated test dataset in which all contexts are generated by humans and labels are aligned with human preferences. Our results indicate that PandaLM-7B achieves 93.75% of GPT-3.5's evaluation ability and 88.28% of GPT-4's, in terms of F1-score on our test dataset. PandaLM makes LLM evaluation fairer and less costly, as evidenced by the significant improvements of models tuned with PandaLM over counterparts trained with Alpaca's default hyperparameters. In addition, PandaLM does not depend on API-based evaluations, thus avoiding potential data leakage. All resources of PandaLM are released at https://github.com/WeOpenML/PandaLM.
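
In practice, a released judge model of this kind can be driven like any other causal LM. The sketch below illustrates pairwise judging of two responses; note that the Hugging Face model identifier, the prompt template, and the generation settings are illustrative assumptions rather than the exact format used by PandaLM, whose official pipeline is documented in the repository linked above.

```python
# Minimal sketch: using a judge LLM to compare two responses to one instruction.
# Assumptions (not taken from the paper or repo): the model id, prompt template,
# and generation settings below are placeholders; see
# https://github.com/WeOpenML/PandaLM for the released weights and exact usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "WeOpenML/PandaLM-7B-v1"  # assumed identifier for the released judge

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")


def judge(instruction: str, context: str, response_1: str, response_2: str) -> str:
    """Ask the judge model which of two responses better follows the instruction."""
    prompt = (
        "Below are two responses to the same task. Decide which response is "
        "better, considering correctness as well as conciseness, clarity, "
        "adherence to the instruction, comprehensiveness, and formality.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{context}\n\n"
        f"### Response 1:\n{response_1}\n\n"
        f"### Response 2:\n{response_2}\n\n"
        "### Evaluation:\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Keep only the newly generated tokens: the verdict and its rationale.
    generated = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True)


if __name__ == "__main__":
    verdict = judge(
        instruction="Summarize the following paragraph in one sentence.",
        context="Instruction tuning LLMs requires careful hyperparameter selection ...",
        response_1="Tuning LLMs needs careful hyperparameter choices and reliable evaluation.",
        response_2="It is about LLMs.",
    )
    print(verdict)
```

Greedy decoding (`do_sample=False`) is used here so that repeated judgments of the same response pair are deterministic, which matters when such verdicts are used to rank hyperparameter configurations.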