
Discovering Interpretable Concepts in Large Generative Music Models


Source: arXiv
Abstract

The fidelity with which neural networks can now generate content such as music presents a scientific opportunity: these systems appear to have learned implicit theories of the structure of such content through statistical learning alone. This could offer a novel lens on theories of human-generated media. Where these representations align with traditional constructs (e.g. chord progressions in music), they demonstrate how these can be inferred from statistical regularities. Where they diverge, they highlight potential limits in our theoretical frameworks -- patterns that we may have overlooked but that nonetheless hold significant explanatory power. In this paper, we focus on the specific case of music generators. We introduce a method to discover musical concepts using sparse autoencoders (SAEs), extracting interpretable features from the residual stream activations of a transformer model. We evaluate this approach by extracting a large set of features and producing an automatic labeling and evaluation pipeline for them. Our results reveal both familiar musical concepts and counterintuitive patterns that lack clear counterparts in existing theories or natural language altogether. Beyond improving model transparency, our work provides a new empirical tool that might help discover organizing principles in ways that have eluded traditional methods of analysis and synthesis.
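To make the extraction step concrete, below is a minimal sketch of the kind of sparse autoencoder (SAE) the abstract describes, trained on residual-stream activations. It is written in PyTorch; the class name, dimensions, hyperparameters, and the random placeholder activations are illustrative assumptions, not the authors' actual configuration.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder with non-negative codes.

    d_model: width of the transformer's residual stream (assumed).
    d_hidden: dictionary size, typically several times d_model (assumed).
    """

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        # ReLU yields non-negative feature activations; each hidden
        # unit is a candidate interpretable concept.
        f = torch.relu(self.encoder(x))
        x_hat = self.decoder(f)
        return x_hat, f

def sae_loss(x, x_hat, f, l1_coeff=1e-3):
    # Reconstruction error keeps the dictionary faithful to the
    # activations; the L1 term drives most features to zero so that
    # only a sparse subset fires on any given input.
    recon = (x - x_hat).pow(2).mean()
    sparsity = f.abs().mean()
    return recon + l1_coeff * sparsity

# Toy usage: `acts` stands in for residual-stream activations
# collected from a music transformer, shape [num_tokens, d_model].
d_model, d_hidden = 512, 4096  # illustrative sizes
sae = SparseAutoencoder(d_model, d_hidden)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
acts = torch.randn(1024, d_model)  # placeholder data, not real activations
for _ in range(10):
    x_hat, f = sae(acts)
    loss = sae_loss(acts, x_hat, f)
    opt.zero_grad()
    loss.backward()
    opt.step()

After training, rows of the decoder weight matrix can be read as dictionary directions in the residual stream, and inputs that maximally activate a given feature can be inspected (or, in the paper's pipeline, automatically labeled and evaluated).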

Nikhil Singh, Manuel Cherep, Pattie Maes

Subjects: research methods in the natural sciences; information science and information technology; computing and computer technology

Nikhil Singh, Manuel Cherep, Pattie Maes. Discovering Interpretable Concepts in Large Generative Music Models [EB/OL]. (2025-05-18) [2025-06-19]. https://arxiv.org/abs/2505.18186.
