Integrating Large Language Model for Improved Causal Discovery
Recovering the structure of causal graphical models from observational data is an essential yet challenging task for causal discovery in scientific scenarios. Domain-specific causal discovery usually relies on expert validation or prior analysis to improve the reliability of recovered causality, which is limited by the scarcity of expert resources. Recently, Large Language Models (LLMs) have been used for causal analysis across various domain-specific scenarios, suggesting their potential to serve as autonomous experts guiding data-based structure learning. However, integrating LLMs into causal discovery is challenging because LLM-based reasoning about the actual causal structure can be inaccurate. To address this challenge, we propose an error-tolerant LLM-driven causal discovery framework. The error-tolerant mechanism has three components, each designed with potential inaccuracies in mind. In the LLM-based reasoning process, an accuracy-oriented prompting strategy restricts causal analysis to a reliable range. Next, a knowledge-to-structure transition aligns LLM-derived causal statements with structural causal interactions. In the structure learning process, goodness-of-fit to data is balanced against adherence to LLM-derived priors to further accommodate prior inaccuracies. Evaluation on eight real-world causal structures demonstrates the efficacy of our LLM-driven approach in improving data-based causal discovery, along with its robustness to inaccurate LLM-derived priors. Code is available at https://github.com/tyMadara/LLM-CD.
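The balance between data fit and LLM-derived priors described above can be illustrated with a penalized scoring function. The following is a minimal sketch, not the authors' implementation: the Gaussian BIC node score, the soft-penalty form, and the weight `lam` are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's actual method): score a candidate DAG
# by its Gaussian BIC data fit, minus a soft penalty for disagreeing with
# LLM-derived prior statements (required or forbidden edges).
import numpy as np

def node_bic(X, child, parents):
    """Gaussian BIC contribution of one node given its parent set."""
    n = X.shape[0]
    y = X[:, child]
    if parents:
        P = np.column_stack([X[:, p] for p in parents] + [np.ones(n)])
    else:
        P = np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(P, y, rcond=None)
    resid = y - P @ beta
    sigma2 = max(resid @ resid / n, 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * P.shape[1] * np.log(n)

def prior_penalty(dag, required, forbidden):
    """Count disagreements between the candidate DAG and LLM-derived statements."""
    edges = {(p, c) for c, ps in dag.items() for p in ps}
    return sum(e not in edges for e in required) + sum(e in edges for e in forbidden)

def penalized_score(X, dag, required, forbidden, lam=5.0):
    """Data fit (BIC) minus a soft penalty for violating LLM-derived priors."""
    fit = sum(node_bic(X, c, ps) for c, ps in dag.items())
    return fit - lam * prior_penalty(dag, required, forbidden)

# Toy usage: three variables generated from the chain 0 -> 1 -> 2.
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = 0.8 * x0 + rng.normal(size=500)
x2 = 0.8 * x1 + rng.normal(size=500)
X = np.column_stack([x0, x1, x2])
dag = {0: [], 1: [0], 2: [1]}        # candidate structure: parent list per node
required = {(0, 1)}                  # hypothetical LLM statement: 0 causes 1
forbidden = {(2, 0)}                 # hypothetical LLM statement: 2 does not cause 0
print(penalized_score(X, dag, required, forbidden))
```

Because the prior enters only as a soft penalty, a candidate structure that fits the data well can still be preferred even when it contradicts some LLM-derived statements, which is one way to tolerate inaccurate priors.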
Taiyu Ban, Lyuzhou Chen, Derui Lyu, Xiangyu Wang, Qinrui Zhu, Qiang Tu, Huanhuan Chen
Subject areas: Research methods in natural sciences; Systems science and systems technology; Information science and information technology
Taiyu Ban, Lyuzhou Chen, Derui Lyu, Xiangyu Wang, Qinrui Zhu, Qiang Tu, Huanhuan Chen. Integrating Large Language Model for Improved Causal Discovery [EB/OL]. (2025-08-26) [2025-09-05]. https://arxiv.org/abs/2306.16902.