
PM-LLM-Benchmark: Evaluating Large Language Models on Process Mining Tasks

July 18, 2024
Authors: Alessandro Berti, Humam Kourani, Wil M. P. van der Aalst
cs.AI

Abstract

Large Language Models (LLMs) have the potential to semi-automate some process mining (PM) analyses. While commercial models are already adequate for many analytics tasks, the competitive level of open-source LLMs on PM tasks is unknown. In this paper, we propose PM-LLM-Benchmark, the first comprehensive benchmark for PM, focusing on domain knowledge (both process-mining-specific and process-specific) and on different implementation strategies. We also discuss the challenges in creating such a benchmark, related to the public availability of the data and to evaluation biases introduced by using LLMs as judges. Overall, we observe that most of the considered LLMs can perform some process mining tasks at a satisfactory level, but tiny models that would run on edge devices are still inadequate. We also conclude that while the proposed benchmark is useful for identifying LLMs that are adequate for process mining tasks, further research is needed to overcome the evaluation biases and perform a more thorough ranking of the competitive LLMs.
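The evaluation setup described above, where an LLM's answers to process-mining prompts are graded by another (judge) LLM, can be sketched as follows. This is a minimal illustrative sketch, not the actual PM-LLM-Benchmark implementation; the function names, the 1.0–10.0 grading scale, and the injected `answer_fn`/`judge_fn` callables are all assumptions made for illustration.

```python
import re

def parse_score(judge_reply: str) -> float:
    """Extract the first numeric grade from a judge LLM's free-text reply.

    Returns 0.0 if no number is found (a conservative fallback).
    """
    match = re.search(r"\d+(?:\.\d+)?", judge_reply)
    return float(match.group()) if match else 0.0

def run_benchmark(answer_fn, judge_fn, prompts):
    """Score a model on a list of prompts using an LLM-as-judge scheme.

    answer_fn: callable prompt -> answer (the model under evaluation).
    judge_fn:  callable (prompt, answer) -> judge reply containing a grade.
    Returns the mean grade across all prompts.
    """
    scores = []
    for prompt in prompts:
        answer = answer_fn(prompt)
        scores.append(parse_score(judge_fn(prompt, answer)))
    return sum(scores) / len(scores)
```

Because the judge is itself an LLM, its grades can be systematically skewed (for example, toward verbose answers or answers resembling its own style), which is one source of the evaluation bias the paper highlights.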

