|国家预印本平台
| 注册
首页|Consistency of Large Reasoning Models Under Multi-Turn Attacks

Consistency of Large Reasoning Models Under Multi-Turn Attacks

Yubo Li Ramayya Krishnan Rema Padman

Arxiv_logoArxiv

Consistency of Large Reasoning Models Under Multi-Turn Attacks

Yubo Li Ramayya Krishnan Rema Padman

作者信息

Abstract

Large reasoning models with reasoning capabilities achieve state-of-the-art performance on complex tasks, but their robustness under multi-turn adversarial pressure remains underexplored. We evaluate nine frontier reasoning models under adversarial attacks. Our findings reveal that reasoning confers meaningful but incomplete robustness: most reasoning models studied significantly outperform instruction-tuned baselines, yet all exhibit distinct vulnerability profiles, with misleading suggestions universally effective and social pressure showing model-specific efficacy. Through trajectory analysis, we identify five failure modes (Self-Doubt, Social Conformity, Suggestion Hijacking, Emotional Susceptibility, and Reasoning Fatigue) with the first two accounting for 50% of failures. We further demonstrate that Confidence-Aware Response Generation (CARG), effective for standard LLMs, fails for reasoning models due to overconfidence induced by extended reasoning traces; counterintuitively, random confidence embedding outperforms targeted extraction. Our results highlight that reasoning capabilities do not automatically confer adversarial robustness and that confidence-based defenses require fundamental redesign for reasoning models.

引用本文复制引用

Yubo Li,Ramayya Krishnan,Rema Padman.Consistency of Large Reasoning Models Under Multi-Turn Attacks[EB/OL].(2026-02-16)[2026-02-19].https://arxiv.org/abs/2602.13093.

学科分类

计算技术、计算机技术

评论

首发时间 2026-02-16
下载量:0
|
点击量:7
段落导航相关论文