ExpliCa: Evaluating Explicit Causal Reasoning in Large Language Models
Large Language Models (LLMs) are increasingly used in tasks requiring interpretive and inferential accuracy. In this paper, we introduce ExpliCa, a new dataset for evaluating LLMs in explicit causal reasoning. ExpliCa uniquely integrates both causal and temporal relations presented in different linguistic orders and explicitly expressed by linguistic connectives. The dataset is enriched with crowdsourced human acceptability ratings. We tested LLMs on ExpliCa through prompting and perplexity-based metrics. We assessed seven commercial and open-source LLMs, revealing that even top models struggle to reach 0.80 accuracy. Interestingly, models tend to confound temporal relations with causal ones, and their performance is also strongly influenced by the linguistic order of the events. Finally, perplexity-based scores and prompting performance are differently affected by model size.
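As an illustrative sketch only (not the authors' code or exact protocol), a perplexity-based acceptability score of the kind mentioned in the abstract can be obtained by comparing the perplexity a language model assigns to the same event pair linked by a causal versus a temporal connective. The model name and the example sentences below are assumptions chosen for demonstration.

```python
# Illustrative sketch (not the ExpliCa evaluation code): scoring sentences with
# causal vs. temporal connectives via language-model perplexity.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model; the paper evaluates much larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(sentence: str) -> float:
    """Return the perplexity of `sentence` under the language model."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids returns the mean cross-entropy loss,
        # whose exponential is the sentence perplexity.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Hypothetical item: the lower-perplexity variant is the connective
# the model treats as more acceptable for this event pair.
causal = "The glass fell off the table, so it shattered."
temporal = "The glass fell off the table, then it shattered."
print(perplexity(causal), perplexity(temporal))
```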
Serena Auriemma, Alessandro Bondielli, Alessandro Lenci, Martina Miliani, Emmanuele Chersoni, Lucia Passaro, Irene Sucameli
Subject: Computing Technology, Computer Technology
Serena Auriemma, Alessandro Bondielli, Alessandro Lenci, Martina Miliani, Emmanuele Chersoni, Lucia Passaro, Irene Sucameli. ExpliCa: Evaluating Explicit Causal Reasoning in Large Language Models [EB/OL]. (2025-07-24) [2025-08-17]. https://arxiv.org/abs/2502.15487