Enhancing Chart-to-Code Generation in Multimodal Large Language Models via Iterative Dual Preference Learning
Chart-to-code generation, the process of converting chart images into executable plotting scripts, provides a lossless representation of chart information, requiring models to accurately capture and summarize all visual and structural elements. However, this remains a significant challenge for multimodal large language models (MLLMs), which are not inherently well-aligned with code generation tasks. To bridge this gap, we introduce Chart2Code, a novel iterative dual preference learning framework designed to enhance MLLMs' chart-to-code generation capabilities through structured code variant generation and fine-grained dual reward signals. We validate Chart2Code across three MLLMs and find that iterative preference learning consistently improves out-of-distribution chart-to-code generation quality. Throughout this process, our dual scoring method, which evaluates both the textual code structure and its visual representation, leads to greater performance improvements, even with a reduced preference dataset size. Further analysis explores the key components of our framework and highlights the interplay between chart-to-code generation and broader chart reasoning, paving the way for future advancements in chart comprehension.
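The dual scoring method described above can be illustrated with a minimal sketch. Everything here is hypothetical and not from the paper: `code_structure_score` stands in for the textual reward (approximated with `difflib` similarity), and `visual_score` stands in for the image-level reward (here a toy pixel comparison instead of actually rendering the generated script).

```python
import difflib

def code_structure_score(pred_code: str, ref_code: str) -> float:
    """Textual similarity between generated and reference plotting code (0..1).
    A stand-in for the paper's code-structure reward; the real metric is
    not specified here."""
    return difflib.SequenceMatcher(None, pred_code, ref_code).ratio()

def visual_score(pred_pixels, ref_pixels) -> float:
    """Fraction of matching pixels between two rendered charts (0..1).
    A real pipeline would execute the code, render both charts, and compare
    the images; flat pixel lists are used here purely for illustration."""
    if len(pred_pixels) != len(ref_pixels) or not ref_pixels:
        return 0.0
    matches = sum(p == r for p, r in zip(pred_pixels, ref_pixels))
    return matches / len(ref_pixels)

def dual_reward(pred_code, ref_code, pred_pixels, ref_pixels, alpha=0.5):
    """Weighted combination of the textual and visual scores.
    The weighting scheme (alpha) is an assumption for this sketch."""
    return (alpha * code_structure_score(pred_code, ref_code)
            + (1 - alpha) * visual_score(pred_pixels, ref_pixels))
```

Under this sketch, a candidate script whose code matches the reference but whose rendered output diverges (or vice versa) receives a reduced reward, which is the intuition behind scoring both modalities when ranking code variants for preference learning.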
Zhihan Zhang, Yixin Cao, Lizi Liao
Computing Technology, Computer Technology
Zhihan Zhang, Yixin Cao, Lizi Liao. Enhancing Chart-to-Code Generation in Multimodal Large Language Models via Iterative Dual Preference Learning [EB/OL]. (2025-04-03) [2025-05-09]. https://arxiv.org/abs/2504.02906.