Efficient Reasoning via Chain of Unconscious Thought
Large Reasoning Models (LRMs) achieve promising performance but compromise token efficiency due to verbose reasoning processes. Unconscious Thought Theory (UTT) posits that complex problems can be solved more efficiently through internalized cognitive processes. Inspired by UTT, we propose a new reasoning paradigm, termed Chain of Unconscious Thought (CoUT), to improve the token efficiency of LRMs by guiding them to mimic human unconscious thought and internalize their reasoning processes. Concretely, we first prompt the model to internalize its reasoning by thinking in the hidden layers. Then, we design a bag of token-efficient strategies to further help the model reduce unnecessary tokens while preserving performance. Our work reveals that models may possess beneficial unconscious thought, enabling improved efficiency without sacrificing performance. Extensive experiments demonstrate the effectiveness of CoUT. Remarkably, it surpasses CoT by reducing token usage by 47.62% while maintaining comparable accuracy, as shown in Figure 1. The code of CoUT is available at https://github.com/Rohan-GRH/CoUT
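The abstract describes CoUT as a prompting paradigm: instead of eliciting an explicit step-by-step chain, the model is instructed to reason internally and emit only the answer. The sketch below contrasts a standard CoT prompt with a hypothetical CoUT-style prompt; the exact instruction wording and the additional token-efficient strategies are in the linked repository, so the templates here are illustrative assumptions, not the authors' prompts.

```python
# Illustrative contrast between a verbose CoT prompt and a CoUT-style prompt.
# The wording of COUT_PROMPT is a hypothetical approximation; the authors'
# actual prompts are in https://github.com/Rohan-GRH/CoUT

COT_PROMPT = (
    "Question: {question}\n"
    "Let's think step by step."
)

COUT_PROMPT = (
    "Question: {question}\n"
    "Solve this problem internally, without writing out intermediate "
    "reasoning steps, then output only the final answer."
)

def build_prompt(question: str, mode: str = "cout") -> str:
    """Format a question with either the verbose CoT instruction or the
    token-efficient CoUT-style instruction (hypothetical wording)."""
    template = COUT_PROMPT if mode == "cout" else COT_PROMPT
    return template.format(question=question)
```

Under this framing, the token savings reported in the paper come from the model answering without verbalizing its intermediate steps, rather than from any change to the model weights.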
Ruihan Gong, Yue Liu, Wenjie Qu, Mingzhe Du, Yufei He, Yingwei Ma, Yulin Chen, Xiang Liu, Yi Wen, Xinfeng Li, Ruidong Wang, Xinzhong Zhu, Bryan Hooi, Jiaheng Zhang
Subjects: Computing Technology; Computer Technology
Ruihan Gong, Yue Liu, Wenjie Qu, Mingzhe Du, Yufei He, Yingwei Ma, Yulin Chen, Xiang Liu, Yi Wen, Xinfeng Li, Ruidong Wang, Xinzhong Zhu, Bryan Hooi, Jiaheng Zhang. Efficient Reasoning via Chain of Unconscious Thought [EB/OL]. (2025-05-26) [2025-06-18]. https://arxiv.org/abs/2505.19756