Seven Security Challenges That Must be Solved in Cross-domain Multi-agent LLM Systems
Large language models (LLMs) are rapidly evolving into autonomous agents that cooperate across organizational boundaries, enabling joint disaster response, supply-chain optimization, and other tasks that demand decentralized expertise without surrendering data ownership. Yet, cross-domain collaboration shatters the unified trust assumptions behind current alignment and containment techniques. An agent benign in isolation may, when receiving messages from an untrusted peer, leak secrets or violate policy, producing risks driven by emergent multi-agent dynamics rather than classical software bugs. This position paper maps the security agenda for cross-domain multi-agent LLM systems. We introduce seven categories of novel security challenges, for each of which we also present plausible attacks, security evaluation metrics, and future research guidelines.
Jiseong Jeong, Ronny Ko, Shuyuan Zheng, Chuan Xiao, Tae-Wan Kim, Makoto Onizuka, Won-Yong Shin
Subject: Computing Technology, Computer Technology
Jiseong Jeong, Ronny Ko, Shuyuan Zheng, Chuan Xiao, Tae-Wan Kim, Makoto Onizuka, Won-Yong Shin. Seven Security Challenges That Must be Solved in Cross-domain Multi-agent LLM Systems [EB/OL]. (2025-07-15) [2025-08-02]. https://arxiv.org/abs/2505.23847.