
Language Complexity Measurement as a Noisy Zero-Shot Proxy for Evaluating LLM Performance

February 17, 2025
Authors: Birger Moell, Johan Boye
cs.AI

Abstract

Large Language Models (LLMs) have made significant strides in natural language generation but often face challenges in tasks requiring precise calculations and structural analysis. This paper investigates the performance of state-of-the-art LLMs on language complexity measurement tasks through the computation of the LIX readability metric and Average Dependency Distance (ADD). Using Swedish high school and university-level essays, we evaluate the models' abilities to compute LIX scores and perform dependency parsing, comparing their results against established ground truths. Our findings reveal that while all models demonstrate some capacity for these tasks, ChatGPT-o1-mini performs most consistently, achieving the highest accuracy in both LIX computation and dependency parsing. Additionally, we observe a strong, significant negative correlation (r = -0.875, p = 0.026, N = 6) between the models' accuracy in computing LIX and their overall performance on the Massive Multitask Language Understanding (MMLU) benchmark. These results suggest that language complexity measurement ability can serve as a noisy zero-shot proxy for assessing the general capabilities of LLMs, providing a practical method for model evaluation without the need for extensive benchmarking datasets.
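
For reference, both metrics reduce to counting and arithmetic over a tokenized or parsed sentence, which is what makes them useful probes of precise computation. Below is a minimal Python sketch, assuming the standard LIX definition (a long word has more than six characters) and ADD as the mean absolute token-to-head distance over non-root arcs; the regex tokenization and the toy hand-made parse are illustrative assumptions, not the paper's exact pipeline.

```python
import re

def lix(text: str) -> float:
    """LIX readability: words per sentence + 100 * share of long words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    long_words = [w for w in words if len(w) > 6]  # long word: > 6 characters
    return len(words) / len(sentences) + 100 * len(long_words) / len(words)

def avg_dependency_distance(heads: list[int]) -> float:
    """ADD for one sentence: mean |dependent position - head position|.

    heads[i] is the 1-based position of token i+1's head, 0 for the root
    (CoNLL-U convention); root arcs are skipped.
    """
    dists = [abs(h - (i + 1)) for i, h in enumerate(heads) if h != 0]
    return sum(dists) / len(dists)

# Toy usage: a short Swedish snippet and a hand-made 4-token parse.
print(round(lix("Det var en gång. En mycket komplicerad undersökning genomfördes."), 1))  # 37.8
print(avg_dependency_distance([2, 0, 2, 3]))  # 1.0
```

Because neither metric requires world knowledge, an LLM's error when asked to compute them isolates its capacity for exact counting and structural analysis rather than fluency, which is the property the paper exploits as a proxy signal.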

