
Layer-wise Investigation of Large-Scale Self-Supervised Music Representation Models


Source: arXiv
English Abstract

Pre-trained models for music information retrieval (MIR) based on self-supervised learning (SSL) have recently become popular, showing success in a variety of downstream tasks. However, there is limited research on what information their representations encode and where that information is applicable. Exploring these aspects can help us better understand the models' capabilities and limitations, leading to more effective use in downstream tasks. In this study, we analyze the advanced music representation model MusicFM and the newly emerged SSL model MuQ. We focus on three main aspects: (i) validating the advantages of SSL models across multiple downstream tasks, (ii) exploring the layer-wise specialization of encoded information for different tasks, and (iii) comparing performance differences when specific layers are selected. Through this analysis, we reveal insights into the structure and potential applications of SSL models in music information retrieval.
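The layer-wise analysis described above is commonly carried out by training a lightweight linear probe on the frozen hidden states of each encoder layer and comparing probe accuracies across layers. The following is a minimal, self-contained sketch of that idea using synthetic features (NumPy only): the per-layer feature matrices, the signal strengths, and the least-squares probe are illustrative assumptions for the demo, not the paper's actual models or tasks — in practice the hidden states would come from a frozen MusicFM or MuQ encoder.

```python
import numpy as np

# Sketch of layer-wise linear probing. The "encoder" here is synthetic:
# we fabricate per-layer feature matrices and assume (for the demo only)
# that one middle layer carries most of the task-relevant information.
# A real SSL model such as MusicFM or MuQ would supply these hidden states.
rng = np.random.default_rng(0)
n_clips, dim, n_layers = 200, 16, 4
labels = rng.integers(0, 2, size=n_clips)        # toy binary downstream task
signal = labels[:, None] * 2.0 - 1.0             # +/-1 class indicator

# Illustrative per-layer signal strengths (assumed, not measured values):
# layer 2 is made the most informative one.
strengths = [0.1, 0.3, 2.0, 0.2]
direction = rng.normal(size=dim)                 # shared class direction
layers = [rng.normal(size=(n_clips, dim)) + s * signal * direction
          for s in strengths]

def probe_accuracy(X, y):
    """Fit a least-squares linear probe on frozen features; report accuracy."""
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append a bias column
    w, *_ = np.linalg.lstsq(Xb, y * 2.0 - 1.0, rcond=None)
    return ((Xb @ w > 0).astype(int) == y).mean()

accs = [probe_accuracy(X, labels) for X in layers]
for i, a in enumerate(accs):
    print(f"layer {i}: probe accuracy = {a:.2f}")
```

Comparing the per-layer probe accuracies indicates which layer's representation is best suited to the task; selecting that layer (rather than always using the last one) is exactly the kind of choice the study's third question examines.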

Yizhi Zhou, Haina Zhu, Hangting Chen

Subject: Computing Technology, Computer Technology

Yizhi Zhou, Haina Zhu, Hangting Chen. Layer-wise Investigation of Large-Scale Self-Supervised Music Representation Models [EB/OL]. (2025-05-22) [2025-07-16]. https://arxiv.org/abs/2505.16306.
