Enhancing LLM Agent Safety via Causal Influence Prompting

Source: arXiv
Abstract

As autonomous agents powered by large language models (LLMs) continue to demonstrate potential across various assistive tasks, ensuring their safe and reliable behavior is crucial for preventing unintended consequences. In this work, we introduce CIP, a novel technique that leverages causal influence diagrams (CIDs) to identify and mitigate risks arising from agent decision-making. CIDs provide a structured representation of cause-and-effect relationships, enabling agents to anticipate harmful outcomes and make safer decisions. Our approach consists of three key steps: (1) initializing a CID based on task specifications to outline the decision-making process, (2) guiding agent interactions with the environment using the CID, and (3) iteratively refining the CID based on observed behaviors and outcomes. Experimental results demonstrate that our method effectively enhances safety in both code execution and mobile device control tasks.
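The three-step loop described in the abstract can be pictured as a simple control flow around an LLM call. The sketch below is a minimal, hypothetical illustration rather than the authors' implementation: the `query_llm` callable, the prompt wording, and the plain-text node/edge representation of the CID are all assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CID:
    """Causal influence diagram kept as plain-text nodes and edges for prompting."""
    nodes: List[str] = field(default_factory=list)   # decision, chance, and utility nodes
    edges: List[str] = field(default_factory=list)   # "cause -> effect" strings

    def render(self) -> str:
        return "Nodes:\n" + "\n".join(self.nodes) + "\nEdges:\n" + "\n".join(self.edges)

def init_cid(task_spec: str, query_llm: Callable[[str], str]) -> CID:
    """Step 1: draft a CID from the task specification."""
    draft = query_llm(
        "List decision/chance/utility nodes and causal edges ('cause -> effect', "
        f"one per line) for this task:\n{task_spec}"
    )
    lines = [l.strip() for l in draft.splitlines() if l.strip()]
    return CID(nodes=[l for l in lines if "->" not in l],
               edges=[l for l in lines if "->" in l])

def act_with_cid(observation: str, cid: CID, query_llm: Callable[[str], str]) -> str:
    """Step 2: condition each action choice on the current CID."""
    prompt = (f"{cid.render()}\n\nObservation: {observation}\n"
              "Choose the next action, avoiding any path that leads to a harmful outcome node.")
    return query_llm(prompt)

def refine_cid(cid: CID, observation: str, action: str,
               query_llm: Callable[[str], str]) -> CID:
    """Step 3: update the CID when the environment reveals new cause-effect structure."""
    update = query_llm(f"{cid.render()}\n\nAfter taking '{action}' we observed: {observation}\n"
                       "List any new causal edges as 'cause -> effect', one per line.")
    cid.edges += [l.strip() for l in update.splitlines() if "->" in l.strip()]
    return cid
```

In this reading, steps 2 and 3 would alternate inside the agent's interaction loop, so the diagram that guides each action reflects everything observed so far.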

Dongyoon Hahm, Woogyeol Jin, June Suk Choi, Sungsoo Ahn, Kimin Lee

Subject classification: Fundamental Theory of Automation; Computing and Computer Technology

Dongyoon Hahm, Woogyeol Jin, June Suk Choi, Sungsoo Ahn, Kimin Lee. Enhancing LLM Agent Safety via Causal Influence Prompting [EB/OL]. (2025-07-01) [2025-07-25]. https://arxiv.org/abs/2507.00979.
