National Preprint Platform

AC/DC: LLM-based Audio Comprehension via Dialogue Continuation

Source: arXiv
Abstract

We propose an instruction-following audio comprehension model that leverages the dialogue continuation ability of large language models (LLMs). Instead of training the model to directly generate the target captions in the training data, the proposed method trains it to produce responses as if the input caption had triggered a dialogue. This dialogue continuation training mitigates the caption variation problem: learning to continue a dialogue captures the caption's meaning beyond its surface-level wording. As a result, our model achieves zero-shot instruction-following capability without multitask instruction tuning, even when trained solely on audio captioning datasets. Experiments on the AudioCaps, WavCaps, and Clotho datasets with AudioBench audio-scene question-answering tests demonstrate our model's ability to follow various unseen instructions.
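The core idea in the abstract, replacing the caption itself as the training target with a dialogue response that the caption would elicit, can be sketched as follows. This is a minimal illustration assuming a text LLM is available to generate continuations; the function names (`text_llm_continue`, `build_example`) and the canned response are hypothetical, not from the paper.

```python
def text_llm_continue(caption: str) -> str:
    """Stand-in for a frozen text LLM producing the next dialogue turn.

    In the described method, a text LLM would generate a natural response
    to the caption; here we return a canned reply for illustration only.
    """
    return f"It sounds like this: {caption.lower()}"


def build_example(caption: str) -> dict:
    """Build one training pair: the audio (represented here by its caption)
    is framed as a user turn, and the target is the dialogue continuation
    rather than the raw caption."""
    dialogue_prefix = f"User: {caption}\nAssistant:"
    target = text_llm_continue(caption)
    return {"prompt": dialogue_prefix, "target": target}


example = build_example("A dog barks while traffic passes by")
print(example["target"])
```

Because the target is a response rather than the caption string, two differently worded captions of the same sound can map to semantically similar responses, which is how the abstract frames the mitigation of caption variation.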

Yusuke Fujita, Tomoya Mizumoto, Atsushi Kojima, Lianbo Liu, Yui Sudo

Subject: Computing Technology, Computer Technology

Yusuke Fujita, Tomoya Mizumoto, Atsushi Kojima, Lianbo Liu, Yui Sudo. AC/DC: LLM-based Audio Comprehension via Dialogue Continuation [EB/OL]. (2025-06-11) [2025-07-16]. https://arxiv.org/abs/2506.10312.
