Improved Iterative Refinement for Chart-to-Code Generation via Structured Instruction
June 15, 2025
Authors: Chengzhi Xu, Yuyang Wang, Lai Wei, Lichao Sun, Weiran Huang
cs.AI
Abstract
Recently, multimodal large language models (MLLMs) have attracted increasing
research attention due to their powerful visual understanding capabilities.
While they have achieved impressive results on various vision tasks, their
performance on chart-to-code generation remains suboptimal. This task requires
MLLMs to generate executable code that can reproduce a given chart, demanding
not only precise visual understanding but also accurate translation of visual
elements into structured code. Directly prompting MLLMs to perform this complex
task often yields unsatisfactory results. To address this challenge, we propose
{ChartIR}, an iterative refinement method based on structured instruction.
First, we decompose the task into two components: visual understanding and code translation. To
accomplish the visual understanding component, we design two types of
structured instructions: description and difference. The description
instruction captures the visual elements of the reference chart, while the
difference instruction characterizes the discrepancies between the reference
chart and the generated chart. These instructions effectively transform visual
features into language representations, thereby facilitating the subsequent
code translation process. Second, we decompose the overall chart generation
pipeline into two stages: initial code generation and iterative refinement,
enabling progressive enhancement of the final output. Experimental results show
that, compared to other methods, our method achieves superior performance on
both the open-source model Qwen2-VL and the closed-source model GPT-4o.
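To make the two-stage pipeline concrete, below is a minimal Python sketch of a ChartIR-style loop. It is illustrative only, not the authors' implementation: the query_mllm and render helpers are hypothetical placeholders for an MLLM backend (e.g., GPT-4o or Qwen2-VL) and a code-execution step, the prompt strings are paraphrases of the description and difference instructions, and matplotlib is assumed as the target plotting library.

```python
"""Minimal sketch of a ChartIR-style pipeline (illustrative, not the paper's code)."""

def query_mllm(images, prompt):
    """Hypothetical wrapper around an MLLM backend (GPT-4o, Qwen2-VL, ...)."""
    raise NotImplementedError("plug in your MLLM API call here")

def render(code):
    """Hypothetical helper: execute plotting code and return the chart image."""
    raise NotImplementedError("execute the generated code and capture the figure")

def chartir(reference_chart, max_rounds=3):
    # Stage 1: initial code generation, guided by a description instruction
    # that verbalizes the reference chart's visual elements.
    description = query_mllm(
        [reference_chart],
        "Describe this chart's type, data, axes, labels, colors, and legend.",
    )
    code = query_mllm(
        [reference_chart],
        f"Chart description:\n{description}\n"
        "Write executable matplotlib code that reproduces this chart.",
    )

    # Stage 2: iterative refinement, guided by a difference instruction
    # that verbalizes discrepancies between reference and generated charts.
    for _ in range(max_rounds):
        generated_chart = render(code)
        difference = query_mllm(
            [reference_chart, generated_chart],
            "List the visual differences between the reference chart (first image) "
            "and the generated chart (second image).",
        )
        code = query_mllm(
            [reference_chart, generated_chart],
            f"Current code:\n{code}\nObserved differences:\n{difference}\n"
            "Revise the code so the generated chart matches the reference.",
        )
    return code
```

The key design point the sketch reflects is that both stages communicate through language: visual content is first converted into structured text (description, then difference), so the code-translation step never has to reason over raw pixels directly.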