Securing Generative AI Agentic Workflows: Risks, Mitigation, and a Proposed Firewall Architecture
Securing Generative AI Agentic Workflows: Risks, Mitigation, and a Proposed Firewall Architecture
Generative Artificial Intelligence (GenAI) presents significant advancements but also introduces novel security challenges, particularly within agentic workflows where AI agents operate autonomously. These risks escalate in multi-agent systems due to increased interaction complexity. This paper outlines critical security vulnerabilities inherent in GenAI agentic workflows, including data privacy breaches, model manipulation, and issues related to agent autonomy and system integration. It discusses key mitigation strategies such as data encryption, access control, prompt engineering, model monitoring, agent sandboxing, and security audits. Furthermore, it details a proposed "GenAI Security Firewall" architecture designed to provide comprehensive, adaptable, and efficient protection for these systems by integrating various security services and leveraging GenAI itself for enhanced defense. Addressing these security concerns is paramount for the responsible and safe deployment of this transformative technology.
Sunil Kumar Jang Bahadur、Gopala Dhar
安全科学计算技术、计算机技术
Sunil Kumar Jang Bahadur,Gopala Dhar.Securing Generative AI Agentic Workflows: Risks, Mitigation, and a Proposed Firewall Architecture[EB/OL].(2025-06-10)[2025-07-16].https://arxiv.org/abs/2506.17266.点此复制
评论