
NAMET: Robust Massive Model Editing via Noise-Aware Memory Optimization

Source: arXiv
Abstract

Model editing techniques are essential for efficiently updating knowledge in large language models (LLMs). However, the effectiveness of existing approaches degrades in massive editing scenarios, particularly when evaluated with practical metrics or in context-rich settings. We attribute these failures to embedding collisions among knowledge items, which undermine editing reliability at scale. To address this, we propose NAMET (Noise-aware Model Editing in Transformers), a simple yet effective method that introduces noise during memory extraction via a one-line modification to MEMIT. Extensive experiments across six LLMs and three datasets demonstrate that NAMET consistently outperforms existing methods when editing thousands of facts.
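The abstract names the mechanism but gives no detail, so the following is a minimal sketch of what injecting noise during memory extraction in a MEMIT-style pipeline could look like. Everything beyond the abstract's claim is an assumption for illustration: the noise distribution (Gaussian), its scale, and the names extract_memory_target, noise_scale, and hidden are hypothetical and do not come from the paper.

import torch

def extract_memory_target(hidden: torch.Tensor, noise_scale: float = 0.1) -> torch.Tensor:
    """Hypothetical sketch of noise-aware memory extraction.

    In a MEMIT-style editor, `hidden` stands for the residual-stream
    representation at the edited layer from which a target memory vector
    is derived. NAMET's stated change is to inject noise at this step;
    the Gaussian form and the scale are assumptions, not the authors' code.
    """
    # The "one-line modification": perturb the representation so that
    # memories extracted for different facts are less likely to collide.
    return hidden + noise_scale * torch.randn_like(hidden)

# Usage on a dummy hidden state (2 tokens, model dimension 16):
h = torch.randn(2, 16)
z = extract_memory_target(h)

On this reading, the noise acts as a regularizer that spreads target memories apart in embedding space, which matches the abstract's diagnosis that embedding collisions among knowledge items are what undermine editing at scale.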

Yanbo Dai, Zhenlan Ji, Zongjie Li, Shuai Wang

Subjects: Computing Technology, Computer Technology

Yanbo Dai, Zhenlan Ji, Zongjie Li, Shuai Wang. NAMET: Robust Massive Model Editing via Noise-Aware Memory Optimization [EB/OL]. (2025-05-17) [2025-07-16]. https://arxiv.org/abs/2505.11876.
