

Evaluating Open Language Models Across Task Types, Application Domains, and Reasoning Types: An In-Depth Experimental Analysis

June 17, 2024
Authors: Neelabh Sinha, Vinija Jain, Aman Chadha
cs.AI

Abstract

The rapid rise of Language Models (LMs) has expanded their use in several applications. Yet, due to constraints of model size, associated cost, or proprietary restrictions, utilizing state-of-the-art (SOTA) LLMs is not always feasible. With open, smaller LMs emerging, more applications can leverage their capabilities, but selecting the right LM can be challenging. This work conducts an in-depth experimental analysis of the semantic correctness of outputs of 10 smaller, open LMs across three aspects: task types, application domains, and reasoning types, using diverse prompt styles. We demonstrate that the most effective models and prompt styles vary depending on the specific requirements. Our analysis provides a comparative assessment of LMs and prompt styles using a proposed three-tier schema of aspects for their strategic selection based on use case and other constraints. We also show that, if utilized appropriately, these LMs can compete with, and sometimes outperform, SOTA LLMs like DeepSeek-v2, GPT-3.5-Turbo, and GPT-4o.
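
The abstract describes, but does not include code for, an evaluation loop over smaller open LMs, prompt styles, and aspect tags. The sketch below is a minimal illustration of that kind of setup, not the paper's actual harness: the model IDs, prompt templates, toy evaluation item, and the embedding-cosine-similarity proxy for semantic correctness are assumptions introduced here purely for illustration.

```python
# Minimal sketch (illustrative, not the paper's harness): prompt a few smaller,
# open LMs with different prompt styles and score semantic correctness of the
# outputs against reference answers using sentence-embedding similarity.

from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

# Illustrative open models small enough to run locally; not drawn from the paper.
MODEL_IDS = ["Qwen/Qwen2-1.5B-Instruct", "microsoft/Phi-3-mini-4k-instruct"]

# Illustrative prompt styles; the paper evaluates a more diverse set.
PROMPT_STYLES = {
    "zero_shot": "{question}",
    "instructed": "Answer the question concisely and accurately.\n"
                  "Question: {question}\nAnswer:",
}

# Toy evaluation set: (question, reference answer, aspect tags).
EVAL_SET = [
    ("What is the capital of France?", "Paris",
     {"task": "QA", "domain": "geography", "reasoning": "factual recall"}),
]

scorer = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_score(prediction: str, reference: str) -> float:
    """Cosine similarity of sentence embeddings as a rough proxy for
    semantic correctness (one of several possible metrics)."""
    embeddings = scorer.encode([prediction, reference], convert_to_tensor=True)
    return util.cos_sim(embeddings[0], embeddings[1]).item()

results = []
for model_id in MODEL_IDS:
    generator = pipeline("text-generation", model=model_id, device_map="auto")
    for style_name, template in PROMPT_STYLES.items():
        for question, reference, aspects in EVAL_SET:
            prompt = template.format(question=question)
            output = generator(prompt, max_new_tokens=64, do_sample=False,
                               return_full_text=False)[0]["generated_text"]
            results.append({
                "model": model_id,
                "prompt_style": style_name,
                **aspects,
                "score": semantic_score(output, reference),
            })

for row in results:
    print(row)
```

In practice, such scores would be aggregated per model, prompt style, task type, application domain, and reasoning type, mirroring the aspect-based selection the abstract proposes, so that the strongest configuration can be chosen for a given use case.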
