
Semantic-Aware Interpretable Multimodal Music Auto-Tagging

Source: arXiv
Abstract

Music auto-tagging is essential for organizing and discovering music in extensive digital libraries. While foundation models achieve exceptional performance in this domain, their outputs often lack interpretability, limiting trust and usability for researchers and end-users alike. In this work, we present an interpretable framework for music auto-tagging that leverages groups of musically meaningful multimodal features, derived from signal processing, deep learning, ontology engineering, and natural language processing. To enhance interpretability, we cluster features semantically and employ an expectation-maximization (EM) algorithm, assigning distinct weights to each group based on its contribution to the tagging process. Our method achieves competitive tagging performance while offering a deeper understanding of the decision-making process, paving the way for more transparent and user-centric music tagging systems.
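The abstract describes the group-weighting step only at a high level, and no implementation is published here. The Python sketch below is one plausible reading of it, assuming each feature group yields per-tag probabilities and the final prediction is a weighted mixture whose weights are fit by EM; the function name, data shapes, and the exact objective are illustrative assumptions, not the authors' method.

import numpy as np

def em_group_weights(group_probs, labels, n_iter=50, eps=1e-9):
    """Fit mixture weights over feature groups with EM (illustrative sketch).

    group_probs: (G, N, T) tag probabilities from each of G feature groups,
                 for N tracks and T binary tags.
    labels:      (N, T) ground-truth binary tag matrix.

    Assumes the final prediction is sum_g w_g * p_g and estimates w by
    maximum likelihood; the paper's actual objective may differ.
    """
    G, N, _ = group_probs.shape
    mask = labels[None].astype(bool)  # broadcast against (G, N, T)
    # Per-group Bernoulli log-likelihood of the observed tags, per track.
    log_lik = np.log(np.where(mask, group_probs, 1.0 - group_probs) + eps).sum(axis=2)  # (G, N)
    w = np.full(G, 1.0 / G)  # start from uniform group weights
    for _ in range(n_iter):
        # E-step: posterior responsibility of each group for each track.
        log_joint = np.log(w + eps)[:, None] + log_lik            # (G, N)
        resp = np.exp(log_joint - np.logaddexp.reduce(log_joint, axis=0))
        # M-step: weights proportional to total responsibility.
        w = resp.sum(axis=1) / N
    return w

# Toy usage with random stand-ins for real per-group classifier outputs.
rng = np.random.default_rng(0)
probs = rng.uniform(0.05, 0.95, size=(4, 200, 10))  # 4 groups, 200 tracks, 10 tags
tags = (rng.random((200, 10)) < 0.3).astype(float)
print(em_group_weights(probs, tags))  # learned weights sum to 1

Under this reading, a large weight on a group marks it as a strong contributor to the tagging decision, which is what makes the per-group weights directly interpretable.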

Andreas Patakis, Vassilis Lyberatos, Spyridon Kantarelis, Edmund Dervakos, Giorgos Stamou

Subjects: Computing Technology; Computer Technology

Andreas Patakis, Vassilis Lyberatos, Spyridon Kantarelis, Edmund Dervakos, Giorgos Stamou. Semantic-Aware Interpretable Multimodal Music Auto-Tagging [EB/OL]. (2025-05-22) [2025-07-16]. https://arxiv.org/abs/2505.17233.
