Safety in Large Reasoning Models: A Survey

Source: arXiv
Abstract

Large Reasoning Models (LRMs) have exhibited extraordinary prowess in tasks like mathematics and coding, leveraging their advanced reasoning capabilities. Nevertheless, as these capabilities progress, significant concerns regarding their vulnerabilities and safety have arisen, which can pose challenges to their deployment and application in real-world settings. This paper presents a comprehensive survey of LRMs, meticulously exploring and summarizing the newly emerged safety risks, attacks, and defense strategies. By organizing these elements into a detailed taxonomy, this work aims to offer a clear and structured understanding of the current safety landscape of LRMs, facilitating future research and development to enhance the security and reliability of these powerful models.

Cheng Wang, Yue Liu, Baolong Li, Duzhen Zhang, Zhongzhi Li, Junfeng Fang

Subjects: Computing Technology; Computer Technology

Cheng Wang, Yue Liu, Baolong Li, Duzhen Zhang, Zhongzhi Li, Junfeng Fang. Safety in Large Reasoning Models: A Survey [EB/OL]. (2025-04-24) [2025-05-07]. https://arxiv.org/abs/2504.17704.