PM-LLM-Benchmark: Evaluating Large Language Models on Process Mining Tasks

July 18, 2024
Authors: Alessandro Berti, Humam Kourani, Wil M. P. van der Aalst
cs.AI

Abstract

Large Language Models (LLMs) have the potential to semi-automate some process mining (PM) analyses. While commercial models are already adequate for many analytics tasks, the competitive level of open-source LLMs in PM tasks is unknown. In this paper, we propose PM-LLM-Benchmark, the first comprehensive benchmark for PM, focusing on domain knowledge (process-mining-specific and process-specific) and on different implementation strategies. We also focus on the challenges in creating such a benchmark, related to the public availability of the data and to evaluation biases introduced by the LLMs. Overall, we observe that most of the considered LLMs can perform some process mining tasks at a satisfactory level, but tiny models that would run on edge devices are still inadequate. We also conclude that while the proposed benchmark is useful for identifying LLMs that are adequate for process mining tasks, further research is needed to overcome the evaluation biases and perform a more thorough ranking of the competitive LLMs.
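As an illustration of the kind of evaluation loop the abstract alludes to (an LLM answering a process-mining question, with a second LLM acting as grader, which is where judge-side evaluation bias can enter), here is a minimal sketch. It is not the authors' benchmark code: the OpenAI-style client, the model names, and both prompts are illustrative assumptions.

```python
# Hypothetical sketch of an LLM-as-a-judge evaluation step for a process-mining task.
# Model names and prompts are placeholders, not taken from PM-LLM-Benchmark.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TASK_PROMPT = (
    "Given an event log where the activity 'Approve Invoice' sometimes occurs "
    "before 'Check Invoice', describe the potential conformance issue."
)

JUDGE_PROMPT_TEMPLATE = (
    "You are grading an answer to a process mining question.\n"
    "Question: {question}\n"
    "Answer: {answer}\n"
    "Assign a score from 1.0 (poor) to 10.0 (excellent). Reply with the number only."
)

def ask(model: str, prompt: str) -> str:
    """Send a single-turn prompt to a chat model and return the text of its reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# 1. The evaluated model answers the process-mining question.
answer = ask("gpt-4o-mini", TASK_PROMPT)

# 2. A (typically stronger) judge model scores the answer; biases of the judge
#    at this step are one of the evaluation challenges the paper discusses.
score = ask("gpt-4o", JUDGE_PROMPT_TEMPLATE.format(question=TASK_PROMPT, answer=answer))
print(f"Judge score: {score}")
```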
