
StoryBench: A Dynamic Benchmark for Evaluating Long-Term Memory with Multi Turns

Source: arXiv
Abstract

Long-term memory (LTM) is essential for large language models (LLMs) to achieve autonomous intelligence in complex, evolving environments. Despite increasing efforts in memory-augmented and retrieval-based architectures, there remains a lack of standardized benchmarks to systematically evaluate LLMs' long-term memory abilities. Existing benchmarks still face challenges in evaluating knowledge retention and dynamic sequential reasoning, and often lack flexibility, all of which limit their effectiveness in assessing models' LTM capabilities. To address these gaps, we propose a novel benchmark framework based on interactive fiction games, featuring dynamically branching storylines with complex reasoning structures. These structures simulate real-world scenarios by requiring LLMs to navigate hierarchical decision trees, where each choice triggers cascading dependencies across multi-turn interactions. Our benchmark emphasizes two distinct settings to test reasoning complexity: one with immediate feedback upon incorrect decisions, and the other requiring models to independently trace back and revise earlier choices after failure. As part of this benchmark, we also construct a new dataset designed to test LLMs' LTM within narrative-driven environments. We further validate the effectiveness of our approach through detailed experiments. Experimental results demonstrate the benchmark's ability to robustly and reliably assess LTM in LLMs.
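The two evaluation settings described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): a `StoryNode` tree stands in for the branching storyline, `run_immediate_feedback` models the setting where wrong choices are flagged at once, and `run_trace_back` models the setting where the model only learns that a full run failed and must revise earlier choices on its own. All names and the tree structure are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple

@dataclass
class StoryNode:
    """One decision point in the branching storyline (illustrative only)."""
    prompt: str
    choices: Dict[str, Optional["StoryNode"]]  # label -> next node; None = an ending
    correct: str                               # the label that advances the story

def run_immediate_feedback(root: StoryNode,
                           pick: Callable[[str, List[str], list], str],
                           max_steps: int = 20) -> Tuple[bool, int]:
    """Setting 1: a wrong choice is flagged immediately; the model retries in place."""
    node, errors, history = root, 0, []
    for _ in range(max_steps):
        choice = pick(node.prompt, sorted(node.choices), history)
        ok = choice == node.correct
        history.append((node.prompt, choice, ok))  # feedback is visible to the model
        if not ok:
            errors += 1
            continue                               # retry the same decision point
        nxt = node.choices[choice]
        if nxt is None:
            return True, errors                    # storyline completed
        node = nxt
    return False, errors

def run_trace_back(root: StoryNode,
                   propose: Callable[[List[List[str]]], List[str]],
                   max_attempts: int = 5) -> Tuple[bool, int]:
    """Setting 2: no per-step feedback; after a failed run the model must
    locate and revise an earlier choice by itself."""
    failures: List[List[str]] = []
    for attempt in range(max_attempts):
        path, node, done = propose(failures), root, False
        for choice in path:
            if choice != node.correct:
                break                              # silent dead end, no step feedback
            nxt = node.choices[choice]
            if nxt is None:
                done = True
                break
            node = nxt
        if done:
            return True, attempt
        failures.append(path)                      # model sees only the failed paths
    return False, max_attempts
```

In this sketch, the first setting scores how many corrected mistakes a model accrues, while the second scores how many whole-path attempts it needs, which is one plausible way to operationalize the "trace back and revise" requirement.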

Luanbo Wan, Weizhi Ma

Computing Technology; Computer Technology

Luanbo Wan, Weizhi Ma. StoryBench: A Dynamic Benchmark for Evaluating Long-Term Memory with Multi Turns [EB/OL]. (2025-06-16) [2025-07-01]. https://arxiv.org/abs/2506.13356.
