VizGenie: Toward Self-Refining, Domain-Aware Workflows for Next-Generation Scientific Visualization
We present VizGenie, a self-improving, agentic framework that advances scientific visualization through large language models (LLMs), orchestrating a collection of domain-specific and dynamically generated modules. Users initially access core functionalities--such as threshold-based filtering, slice extraction, and statistical analysis--through pre-existing tools. For tasks beyond this baseline, VizGenie autonomously employs LLMs to generate new visualization scripts (e.g., VTK Python code), expanding its capabilities on demand. Each generated script undergoes automated backend validation and is seamlessly integrated upon successful testing, continuously enhancing the system's adaptability and robustness. A distinctive feature of VizGenie is its intuitive natural language interface, which allows users to issue high-level, feature-based queries (e.g., "visualize the skull"). The system leverages image-based analysis and visual question answering (VQA) via fine-tuned vision models to interpret these queries precisely, bridging domain expertise and technical implementation. Additionally, users can interactively query generated visualizations through VQA, facilitating deeper exploration. Reliability and reproducibility are further strengthened by retrieval-augmented generation (RAG), which provides context-driven responses while maintaining comprehensive provenance records. Evaluations on complex volumetric datasets demonstrate significant reductions in cognitive overhead for iterative visualization tasks. By integrating curated domain-specific tools with LLM-driven flexibility, VizGenie not only accelerates insight generation but also establishes a sustainable, continuously evolving visualization practice. The resulting platform dynamically learns from user interactions, consistently enhancing support for feature-centric exploration and reproducible research in scientific visualization.
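The validate-then-integrate loop described above can be sketched in a few lines of Python. This is a hypothetical illustration, not VizGenie's actual API: the names `validate_script`, `TOOL_REGISTRY`, and the sample generated script are all assumptions, and a real deployment would execute LLM-generated VTK Python in a sandboxed backend rather than in-process.

```python
# Hypothetical sketch of the abstract's validate-then-register loop.
# All names here are illustrative, not VizGenie's real interface.

TOOL_REGISTRY = {}  # name -> source of scripts that passed validation


def validate_script(name: str, source: str) -> bool:
    """Compile and smoke-test a generated script; register it on success."""
    try:
        code = compile(source, filename=name, mode="exec")
        namespace = {}
        exec(code, namespace)           # backend smoke test (sandboxed in practice)
        if "run" not in namespace:      # assumed convention: tools expose run()
            return False
    except Exception:
        return False                    # reject scripts that fail validation
    TOOL_REGISTRY[name] = source        # integrate the validated tool
    return True


# Stand-in for LLM-generated code; real output would build a VTK
# pipeline (e.g., vtkThreshold feeding a mapper and renderer).
generated = """
def run(volume, lower, upper):
    # placeholder for threshold-based filtering of a scalar volume
    return [v for v in volume if lower <= v <= upper]
"""

validate_script("threshold_filter", generated)
```

Once registered, a tool becomes part of the baseline available to later queries, which is what lets the system's capabilities grow on demand.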
Nathan DeBardeleben, Earl Lawrence, Ayan Biswas, Terece L. Turton, Nishath Rajiv Ranasinghe, Shawn Jones, Bradley Love, William Jones, Aric Hagberg, Han-Wei Shen
Subjects: Computing Technology, Computer Technology
Nathan DeBardeleben, Earl Lawrence, Ayan Biswas, Terece L. Turton, Nishath Rajiv Ranasinghe, Shawn Jones, Bradley Love, William Jones, Aric Hagberg, Han-Wei Shen. VizGenie: Toward Self-Refining, Domain-Aware Workflows for Next-Generation Scientific Visualization [EB/OL]. (2025-07-18) [2025-08-11]. https://arxiv.org/abs/2507.21124.