RKEFino1: A Regulation Knowledge-Enhanced Large Language Model
June 6, 2025
Authors: Yan Wang, Yueru He, Ruoyu Xiang, Jeff Zhao
cs.AI
Abstract
Recent advances in large language models (LLMs) hold great promise for
financial applications but introduce critical accuracy and compliance
challenges in Digital Regulatory Reporting (DRR). To address these issues, we
propose RKEFino1, a regulation knowledge-enhanced financial reasoning model
built upon Fino1, fine-tuned with domain knowledge from XBRL, CDM, and MOF. We
formulate two QA tasks (knowledge-based and mathematical reasoning) and introduce
a novel Numerical NER task covering financial entities in both sentences and
tables. Experimental results demonstrate the effectiveness and generalization
capacity of RKEFino1 in compliance-critical financial tasks. We have released
our model on Hugging Face.
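
As a minimal sketch of how the released checkpoint might be queried, assuming the standard Hugging Face Transformers causal-LM API; the repository name "example-org/RKEFino1" and the sample prompt are hypothetical placeholders, not confirmed by the abstract, so consult the authors' Hugging Face page for the actual model id.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical repository name used for illustration only; replace it with
    # the id the authors actually published on Hugging Face.
    model_id = "example-org/RKEFino1"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Illustrative knowledge-based question in the XBRL/DRR domain.
    prompt = "What does the XBRL element us-gaap:Revenues report?"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))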