Towards Reliable Large Audio Language Model
Recent advancements in large audio language models (LALMs) have demonstrated impressive results and promising prospects in universal understanding and reasoning across speech, music, and general sound. However, these models still lack the ability to recognize their knowledge boundaries and proactively refuse to answer questions they do not know. While there have been successful attempts to enhance the reliability of LLMs, reliable LALMs remain largely unexplored. In this paper, we systematically investigate various approaches towards reliable LALMs, including training-free methods such as multi-modal chain-of-thought (MCoT) and training-based methods such as supervised fine-tuning (SFT). In addition, we identify the limitations of previous evaluation metrics and propose a new metric, the Reliability Gain Index (RGI), to assess the effectiveness of different reliability methods. Our findings suggest that both training-free and training-based methods enhance the reliability of LALMs to different extents. Moreover, we find that awareness of reliability is a "meta ability" that can be transferred across different audio modalities, even though significant structural and content differences exist among sound, music, and speech.
Ziyang Ma, Xiquan Li, Yakun Song, Wenxi Chen, Chenpeng Du, Jian Wu, Yuanzhe Chen, Zhuo Chen, Yuping Wang, Yuxuan Wang, Xie Chen
Computing Technology, Computer Technology
Ziyang Ma, Xiquan Li, Yakun Song, Wenxi Chen, Chenpeng Du, Jian Wu, Yuanzhe Chen, Zhuo Chen, Yuping Wang, Yuxuan Wang, Xie Chen. Towards Reliable Large Audio Language Model [EB/OL]. (2025-05-25) [2025-06-27]. https://arxiv.org/abs/2505.19294.