Representations of Fact, Fiction and Forecast in Large Language Models: Epistemics and Attitudes
Rational speakers are expected to know what they know and what they do not know, and to generate expressions that match the strength of their evidence. In contrast, it remains a challenge for current large language models to assess facts and confidence in an uncertain real-world environment and to generate correspondingly hedged utterances. While estimating and calibrating the confidence of LLMs through verbalized uncertainty has recently become popular, what is lacking is a careful examination of the linguistic knowledge of uncertainty encoded in the latent space of LLMs. In this paper, we draw on typological frameworks of epistemic expressions to evaluate LLMs' knowledge of epistemic modality, using controlled stories. Our experiments show that the performance of LLMs in generating epistemic expressions is limited and not robust, and that the expressions of uncertainty generated by LLMs are therefore not always reliable. To build uncertainty-aware LLMs, it is necessary to enrich the semantic knowledge of epistemic modality in LLMs.
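To make the controlled-story paradigm concrete, the following is a minimal, hypothetical sketch of this kind of probe: a short story fixes the evidential situation, and a causal LM's log-probabilities over candidate epistemic modals are compared. The model name, story, and candidate set are illustrative assumptions, not the paper's actual materials.

# Hypothetical controlled-story probe for epistemic modals.
# Assumptions (not from the paper): the model, the story, and the candidates.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in; the paper's evaluated models are not specified here
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

# Controlled story: the evidence makes the conclusion certain, so a rational
# speaker should prefer a strong modal ("must") over a weak one ("might").
story = (
    "All three keys were on the table this morning. "
    "Nobody has entered the room since. The keys"
)
candidates = [" must", " might", " may", " cannot"]

def continuation_logprob(prefix: str, continuation: str) -> float:
    """Sum the model's log-probabilities of `continuation` given `prefix`."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    full_ids = tokenizer(prefix + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # Score only the continuation tokens, each predicted from its left context.
    for pos in range(prefix_ids.shape[1], full_ids.shape[1]):
        token_id = full_ids[0, pos]
        total += log_probs[0, pos - 1, token_id].item()
    return total

scores = {c.strip(): continuation_logprob(story, c) for c in candidates}
for modal, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{modal:>8}: {score:.2f}")

A model whose latent space encodes the relevant epistemic distinctions should rank the modal that matches the story's evidential strength above the others; systematic failures of this ranking across stories would indicate the kind of limitation the paper reports.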
Meng Li, Michael Vrazitulis, David Schlangen
Linguistics
Meng Li, Michael Vrazitulis, David Schlangen. Representations of Fact, Fiction and Forecast in Large Language Models: Epistemics and Attitudes [EB/OL]. (2025-06-02) [2025-06-27]. https://arxiv.org/abs/2506.01512.